Using Terraform to Create an AWS EKS Cluster
Terraform is an open-source tool created by HashiCorp that allows you to define and provision infrastructure using a high-level configuration language. It uses configuration files written in HashiCorp Configuration Language (HCL) or JSON to describe the desired state of your infrastructure. Terraform then manages the creation, modification, and versioning of your infrastructure resources across various providers like AWS, Azure, Google Cloud, and others.
Some key features of Terraform include:
- Infrastructure as Code (IaC): Manage your infrastructure using code, which allows for versioning, automation, and easier collaboration.
- Declarative Configuration: Describe what you want your infrastructure to look like, and Terraform figures out how to achieve that state.
- Plan and Apply: Terraform generates an execution plan showing what changes will be made and then applies those changes to your infrastructure.
- State Management: Terraform maintains a state file that keeps track of the resources it manages, helping to ensure that your infrastructure is always in the desired state.
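The declarative workflow above can be illustrated with a minimal configuration sketch (the bucket name here is a hypothetical example, not part of this guide's setup):

```hcl
# A minimal Terraform configuration: describe the desired state, and
# `terraform plan` / `terraform apply` reconcile real infrastructure to it.
provider "aws" {
  region = "ap-south-1"
}

# Hypothetical example resource for illustration only.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-terraform-bucket"

  tags = {
    ManagedBy = "terraform"
  }
}
```

Running terraform plan against this file shows the bucket that would be created; terraform apply creates it and records it in the state file.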
Prerequisites
Before you begin, ensure you have the following prerequisites in place:
Terraform:
- Install Terraform, the infrastructure as code tool.
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
AWS CLI:
- Install and configure the AWS CLI. Ensure it’s configured with the correct AWS credentials and region.
aws configure
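To confirm the CLI is picking up valid credentials before running Terraform, you can query the configured identity (a standard AWS CLI call):

```shell
# Prints the account ID, user ID, and ARN of the currently configured identity
aws sts get-caller-identity
```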
kubectl:
- Install kubectl, the Kubernetes command-line tool.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Helm:
- Install Helm, the package manager for Kubernetes.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
IAM Role:
- Ensure you have an IAM role with sufficient permissions to create and manage EKS clusters, VPCs, and related resources.
Step-by-Step Guide to Create AWS EKS with Fargate, Load Balancer, and Ingress Controller Using Variables
GitHub Project Link:
1. Set Up Your Project Directory
Create a directory for your Terraform project and navigate into it.
mkdir terraform-eks-fargate
cd terraform-eks-fargate
2. Create the main.tf File
This file will contain the main Terraform configuration.
provider "aws" {
  region = var.aws_region
}
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.9.0"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = var.availability_zones
  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets

  enable_dns_hostnames = true
  enable_nat_gateway   = true
  single_nat_gateway   = true

  tags = {
    Name = var.vpc_name
  }
}
output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc.vpc_id
}

output "private_subnets" {
  description = "List of IDs of private subnets"
  value       = module.vpc.private_subnets
}
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  # In v20 of the EKS module, fargate_profiles is a map of profile
  # objects keyed by name, not a list.
  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        {
          namespace = var.fargate_namespace
        }
      ]
    }
  }

  tags = {
    Name = var.cluster_name
  }
}
output "cluster_endpoint" {
  description = "The endpoint of the EKS cluster"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "The security group ID of the EKS cluster"
  value       = module.eks.cluster_security_group_id
}
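Optionally, you can pin the Terraform and AWS provider versions so that terraform init resolves the same versions on every machine. A sketch (the file name versions.tf is a convention, and the exact constraints shown are assumptions you should adjust to your environment):

```hcl
# versions.tf -- optional version pinning for reproducible runs
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```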
3. Create the variables.tf File
This file will define the variables used in the Terraform configuration.
variable "aws_region" {
  description = "The AWS region to create resources in"
  type        = string
  default     = "ap-south-1"
}

variable "vpc_name" {
  description = "The name of the VPC"
  type        = string
  default     = "eks-vpc"
}

variable "vpc_cidr" {
  description = "The CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "availability_zones" {
  description = "The availability zones to use for the subnets"
  type        = list(string)
  default     = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
}

variable "public_subnets" {
  description = "The CIDR blocks for the public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "private_subnets" {
  description = "The CIDR blocks for the private subnets"
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}

variable "cluster_name" {
  description = "The name of the EKS cluster"
  type        = string
  default     = "my-cluster"
}

variable "cluster_version" {
  description = "The Kubernetes version for the EKS cluster"
  type        = string
  default     = "1.30"
}

variable "fargate_namespace" {
  description = "The namespace to use for Fargate profiles"
  type        = string
  default     = "default"
}
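If you want Terraform to reject bad inputs early, a variable can carry a validation block. As an illustrative alternative declaration of vpc_cidr (a sketch, not required by this guide), you could check that the value parses as a CIDR block:

```hcl
variable "vpc_cidr" {
  description = "The CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"

  validation {
    # cidrhost() fails on malformed CIDR notation, so can() turns
    # "does this parse?" into a boolean.
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "vpc_cidr must be a valid IPv4 CIDR block."
  }
}
```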
4. Create the terraform.tfvars File
This file will set the values for the variables.
aws_region = "ap-south-1"
vpc_name = "eks-vpc"
vpc_cidr = "10.0.0.0/16"
availability_zones = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
public_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
private_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
cluster_name = "my-cluster"
cluster_version = "1.30"
fargate_namespace = "default"
5. Initialize Terraform
Initialize your Terraform project to download the necessary providers and modules.
terraform init
6. Apply the Configuration
Review the execution plan, then apply the Terraform configuration to create the EKS cluster and associated resources.
terraform plan
terraform apply
7. Configure kubectl
After the cluster is created, configure kubectl to interact with your EKS cluster.
aws eks --region ap-south-1 update-kubeconfig --name my-cluster
Replace ap-south-1 with your region and my-cluster with your cluster name.
Verify that kubectl is configured correctly by running:
kubectl get nodes
You should see the nodes of your EKS cluster listed. Note that on a Fargate-only cluster, Fargate nodes only appear after pods have been scheduled onto them, so the list may initially be empty.
8. Deploy the NGINX Ingress Controller
First, add the NGINX Ingress Controller Helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Next, install the NGINX Ingress Controller into its own ingress-nginx namespace using Helm (step 10 below looks for the controller's service in this namespace):
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
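Before moving on, it is worth confirming that the controller came up. Assuming it was installed into the ingress-nginx namespace, these standard kubectl commands show the controller pod and its LoadBalancer service:

```shell
# Controller pod should reach Running status
kubectl get pods --namespace ingress-nginx

# The controller's Service of type LoadBalancer fronts all Ingress traffic
kubectl get svc --namespace ingress-nginx
```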
9. Deploy a Sample Application with an Ingress Resource
Create a Kubernetes manifest file for a sample application with an Ingress resource.
Create sample-app.yaml File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
Apply the manifest to your EKS cluster.
kubectl apply -f sample-app.yaml
10. Verify the Ingress
Get the external address (on AWS, a load balancer hostname) of the Ingress Controller:
kubectl get services -o wide -w --namespace ingress-nginx
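Once the EXTERNAL-IP column is populated with a load balancer hostname, you can exercise the Ingress rule directly. The placeholder below stands in for the hostname reported by the command above:

```shell
# Replace <LB_HOSTNAME> with the load balancer hostname from the command above.
# The Host header must match the host rule in the Ingress (example.com).
curl -H "Host: example.com" http://<LB_HOSTNAME>/
```

A successful response returns the default NGINX welcome page served by the sample Deployment.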
Sample Project Structure
terraform-eks-fargate/
├── main.tf
├── variables.tf
├── terraform.tfvars
└── sample-app.yaml
This detailed guide should help you set up an AWS EKS cluster with Fargate, a Load Balancer, and an NGINX Ingress Controller using Terraform and variables. Modify the configurations as needed for your specific requirements.