I came across the term Cluster API while I was contributing to Flatcar Linux, but I didn’t know much about it back then. Recently I have been tinkering with Kubernetes and started learning what Cluster API is and what it does. Cluster API, or CAPI, is a tool from a Kubernetes Special Interest Group (SIG) that uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators.
In general terms, it is the project that helps you manage your Kubernetes clusters no matter where they run, including on various cloud providers. That matters because a Kubernetes cluster is made of many components: hardware, software, services, networking, storage, and so on.
Motivation
I wrote this blog with the motivation of setting it up locally and contributing to this project. Recently I have been revisiting core Computer Science subjects like Computer Networking and Database Management Systems, and I was really amazed to see how they interconnect with distributed systems.
I am still very new to operating on the various cloud providers, but in the near future I am willing to learn those things and apply Kubernetes there.
I also want to participate in GSoC, work on this particular project, and improve CAPG by adding more features and GKE support.
Setting up CAPI locally with Docker
Requirements – You need the following installed on your system before starting: Docker, kind, and kubectl (clusterctl itself is installed in Step 2 below).
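Before going further, a quick sanity check that the prerequisites are on your PATH (the exact versions don’t matter much for this walkthrough, as long as each command resolves):
# Confirm each prerequisite is installed and on the PATH
docker version
kind version
kubectl version --client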
Step 1 –
Infrastructure Provider – A provider that supplies the compute and resources needed to spin up a cluster. We are going to use Docker as our infrastructure here.
- Create a kind config file that allows the Docker provider to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
- Then I create a kind cluster using this config file –
kind create cluster --config kind-cluster-with-extramounts.yaml
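If the cluster came up, it should be reachable through the context kind creates (kind-&lt;cluster-name&gt;, so kind-kind here since we kept the default name):
# Verify the kind cluster is up and the API server is reachable
kubectl cluster-info --context kind-kind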
Step 2 –
Now install the clusterctl tool, which manages the lifecycle of a CAPI management cluster –
- Installation on Linux – (for other operating systems – ref)
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/clusterctl-linux-amd64 -o clusterctl
$ chmod +x ./clusterctl
$ sudo mv ./clusterctl /usr/local/bin/clusterctl
$ clusterctl version
Step 3 –
Now it’s time to use clusterctl to transform the kind cluster into a management cluster via the clusterctl init command. The command accepts a list of providers.
Management Cluster – A Management cluster is a Kubernetes cluster that manages the lifecycle of Workload Clusters. A Management Cluster is also where one or more Infrastructure Providers run, and where resources such as Machines are stored.
- I am using Docker as my infrastructure, so I will use the command below –
clusterctl init --infrastructure docker
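The init command installs the core Cluster API components plus the Docker provider into the kind cluster. A simple way to check on the rollout (on my setup clusterctl created provider namespaces named like capi-system and capd-system, though this can vary between releases):
# List all pods; the newly installed providers appear in their own namespaces
kubectl get pods -A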
Step 4 –
Now it’s time to create a workload cluster.
Workload Cluster – A workload cluster is a cluster created by a Cluster API controller, which is not a bootstrap cluster, and is meant to be used by end users.
- Now we use the clusterctl generate cluster command to generate a YAML file for creating a workload cluster.
clusterctl generate cluster test-workload-cluster --flavor development \
--kubernetes-version v1.21.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> test-workload-cluster.yaml
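Before applying it, it is worth skimming the generated file. With the Docker flavor it should define objects such as Cluster, DockerCluster, KubeadmControlPlane, and a MachineDeployment, though the exact set can vary by clusterctl version; a quick way to list them:
# List the kinds of all objects defined in the generated manifest
grep '^kind:' test-workload-cluster.yaml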
- Now apply the file to create the workload cluster –
kubectl apply -f test-workload-cluster.yaml
Step 5 –
Now we verify our workload cluster and access it.
- Get the status of the cluster
kubectl get cluster
- View the cluster and its resources
clusterctl describe cluster test-workload-cluster
- Check the status of the control plane
kubectl get kubeadmcontrolplane
Note – The control plane won’t be ready until the next step, when I install the CNI (Container Network Interface).
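You can also watch the individual machines being provisioned while you wait; this is optional, and the output columns vary between CAPI versions:
# List the Machine objects backing the control plane and worker nodes
kubectl get machines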
Step 6 –
Now it’s time to set up the CNI solution.
- First get the workload cluster kubeconfig
clusterctl get kubeconfig test-workload-cluster > test-workload-cluster.kubeconfig
- We will use Calico as an example.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml
- After some time the nodes should be up and running.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig get nodes
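Instead of polling by hand, you can block until all nodes report Ready; a small sketch using kubectl wait (the 5-minute timeout is an arbitrary choice):
# Wait until every node in the workload cluster reports Ready
kubectl --kubeconfig=./test-workload-cluster.kubeconfig wait --for=condition=Ready node --all --timeout=300s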
Step 7 –
Now it’s the last phase, deleting the resources –
- Delete the workload cluster
kubectl delete cluster test-workload-cluster
- Delete the management cluster
kind delete cluster
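As a final sanity check, kind should no longer list any clusters on this host:
# Confirm the management cluster is gone
kind get clusters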