Building a Local Kubernetes Cluster with KinD: A Step-by-Step Guide to Networking, Ingress, Monitoring, and Observability
As Kubernetes becomes the standard for container orchestration, running it locally with KinD (Kubernetes in Docker) offers a convenient and efficient way to manage clusters for development and testing. KinD spins up Kubernetes nodes inside Docker containers, giving you a complete cluster environment on your local machine.
This guide will walk you through creating a Kubernetes cluster with KinD, deploying a sample application, configuring networking with Calico CNI, exposing services via Ingress, and setting up Observability using Prometheus and Grafana to monitor your cluster’s health and performance.
Table of Contents:
- Setting up Docker, kubectl, and KinD.
- Creating a KinD cluster with 1 master and 2 worker nodes.
- Installing Calico CNI for networking.
- Configuring NodePort to expose applications.
- Deploying an Nginx application.
- Configuring Ingress for domain-based access.
- Monitoring and Observability with Prometheus and Grafana.
- Installing and using the Metrics Server for real-time cluster utilization.
Prerequisites:
Ensure you have the following installed:
- Docker: To run Kubernetes nodes as containers.
- kubectl: The Kubernetes command-line tool for interacting with the cluster.
- KinD: A tool that provisions Kubernetes clusters in Docker.
Step 1: Install Docker, kubectl, and KinD
Install Docker
Docker is a prerequisite for KinD, as KinD runs Kubernetes nodes inside Docker containers.
For Ubuntu, install Docker with:
sudo apt-get update
sudo apt-get install -y docker.io
For macOS and Windows, download Docker Desktop from the Docker website.
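Optionally, on Linux you can start Docker at boot, allow your user to run Docker without sudo (the group change takes effect after you log out and back in), and confirm the installation with a test container:
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
docker run hello-world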
Install kubectl
kubectl is the tool for interacting with your Kubernetes cluster. Install it using the following commands:
curl -LO "https://dl.k8s.io/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
Verify your installation:
kubectl version --client
Install KinD
KinD will help us create and manage Kubernetes clusters in Docker.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Verify KinD installation:
kind version
Step 2: Create a KinD Cluster with 1 Master and 2 Worker Nodes
KinD allows you to customize cluster configurations. Let’s create a Kubernetes cluster with one master and two worker nodes.
Cluster Configuration File
Create a file called kind-cluster-cni.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "192.168.0.0/16"
This configuration defines:
- 1 control-plane (master) node and 2 worker nodes.
- The default CNI is disabled, since we will install Calico instead.
- A pod subnet of 192.168.0.0/16, which matches Calico's default.
Create the Cluster
Run the following to create the Kubernetes cluster:
kind create cluster --config kind-cluster-cni.yaml
Verify the cluster and nodes:
kubectl get nodes
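If you created the cluster with the default name, KinD also sets your kubectl context; you can confirm it with:
kubectl cluster-info --context kind-kind
Because the default CNI is disabled, the nodes will report NotReady until Calico is installed in the next step.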
Step 3: Install Calico CNI for Networking
Kubernetes requires a CNI plugin to manage networking between pods. We’ll use Calico, a robust CNI for Kubernetes.
Install Calico
To install the Calico CNI plugin, run:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify that Calico pods are up and running:
kubectl get pods -n kube-system
You should see calico-node pods (one per node) and a calico-kube-controllers pod; once they are Running, pod networking is configured.
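Rather than polling by hand, you can wait on the k8s-app=calico-node label that the Calico manifest applies to its node pods:
kubectl wait --namespace kube-system --for=condition=Ready pods -l k8s-app=calico-node --timeout=180s
Once Calico is ready, kubectl get nodes should show all three nodes as Ready.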
Step 4: Set Up NodePort for Application Access
NodePort is a Kubernetes Service type that exposes your application on a static port on every node. The default range for NodePort services is 30000–32767; if your host firewall filters traffic to the node IPs, make sure it allows this range.
Open NodePort Range on Firewall (Linux)
sudo ufw allow 30000:32767/tcp
Now your application can be accessed via a port within this range on your control-plane or worker nodes.
Step 5: Deploy a Sample Nginx Application
Now that the networking setup is complete, let’s deploy a simple Nginx web server as an example application.
Nginx Deployment YAML
Create a file called nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
Deploy Nginx
Apply the YAML file to deploy the Nginx service:
kubectl apply -f nginx-deployment.yaml
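You can wait for the rollout to finish and confirm that both replicas are running:
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx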
You can access the Nginx application by visiting http://<NodeIP>:30080. Use kubectl get nodes -o wide to find the node's internal IP.
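If the node IPs are not directly reachable from your host (for example, with Docker Desktop on macOS or Windows, where the node containers sit on an internal network), a simple alternative is to port-forward the service and browse to http://localhost:8080:
kubectl port-forward service/nginx-service 8080:80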
Step 6: Configure Ingress for Domain-Based Access
To allow external traffic to your application, you can set up an Ingress controller that handles routing traffic to the correct services based on a domain name.
Install Nginx Ingress Controller
Install the Nginx Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Wait until the Ingress controller pods are running:
kubectl get pods -n ingress-nginx --watch
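Note: the kind-specific ingress-nginx manifest uses host ports 80/443 and schedules the controller onto a node labeled ingress-ready=true. If the controller pod stays Pending, recreate the cluster with a config that adds the label and port mappings to the control-plane node, merged with the networking settings from Step 2. A sketch based on the KinD documentation:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "192.168.0.0/16"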
Set Up Domain Name Mapping
For local development, you can map a domain to your KinD control-plane IP using the /etc/hosts file.
Get the control-plane IP:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane
Update your /etc/hosts file to map the domain:
sudo nano /etc/hosts
Add the following:
<Control-Plane-IP> myapp.local
Create an Ingress Resource
Next, we’ll create an Ingress rule to map our domain to the Nginx service.
Create a file called nginx-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
Apply the Ingress rule:
kubectl apply -f nginx-ingress.yaml
Now, access Nginx using the domain http://myapp.local.
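You can also verify from the command line (this assumes the /etc/hosts entry added above):
curl http://myapp.local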
Step 7: Monitoring and Observability with Prometheus and Grafana
In a production-like environment, observability is key to ensuring your system is running optimally. We will set up Prometheus for monitoring and Grafana for visualizing the data.
Installing Prometheus
To install Prometheus, apply the following:
kubectl create -f https://github.com/prometheus-operator/prometheus-operator/releases/latest/download/bundle.yaml
(Use kubectl create or kubectl apply --server-side here; plain client-side apply fails because the bundled CRDs exceed the size limit of the last-applied-configuration annotation.)
This installs the Prometheus Operator and its custom resource definitions. The operator does not scrape anything by itself; it manages Prometheus instances that you declare as custom resources, which in turn collect metrics from the targets you select.
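A minimal, illustrative Prometheus custom resource might look like the following sketch. It assumes a ServiceAccount named prometheus with RBAC permissions to list and watch pods, services, and endpoints already exists (the operator's getting-started guide walks through creating those), and it selects every ServiceMonitor in its namespace:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus   # assumed to exist with suitable RBAC
  serviceMonitorSelector: {}       # empty selector: match all ServiceMonitors in this namespace
  resources:
    requests:
      memory: 400Mi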
Installing Grafana
Grafana is an open-source tool for visualizing metrics. Install Grafana using Helm:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana
Expose Grafana as a NodePort service:
kubectl expose deployment grafana --type=NodePort --name=grafana-service
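Find the port that was assigned to the service:
kubectl get service grafana-service -o jsonpath='{.spec.ports[0].nodePort}'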
Access Grafana at http://<NodeIP>:<NodePort>. The username is admin; the Helm chart generates a random admin password and stores it in a Secret, which you can retrieve with:
kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
Creating Dashboards in Grafana
Grafana allows you to import pre-built dashboards. To visualize Kubernetes metrics, navigate to Dashboards > Import, and use the Kubernetes Prometheus dashboard ID (e.g., 315) from the Grafana Dashboard Repository.
Now you can monitor your cluster’s health, node performance, and resource usage in real-time.
Step 8: Install Metrics Server for Resource Utilization Monitoring
To keep track of CPU and memory usage, install the Metrics Server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
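On KinD, the kubelets serve self-signed certificates, so the Metrics Server will typically fail to scrape them unless it runs with the --kubelet-insecure-tls flag. For a local development cluster you can patch the deployment to add it (do not do this in production):
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'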
After installation, use the following commands to check utilization:
- Node utilization:
kubectl top nodes
- Pod utilization:
kubectl top pods
Conclusion
In this guide, you’ve learned how to:
- Set up a local Kubernetes cluster using KinD.
- Install Calico CNI for networking.
- Deploy a NodePort service to expose applications.
- Configure Ingress to route traffic using a domain name.
- Implement Monitoring and Observability with Prometheus and Grafana.
- Use the Metrics Server for resource monitoring.
This setup serves as a solid foundation for building more complex Kubernetes clusters and services in your local environment, with monitoring in place to ensure stability and performance.