How I got selected for the LFX Mentorship Program

LFX Mentorship (previously known as Community Bridge) is a platform developed by the Linux Foundation, which promotes and accelerates the adoption, innovation, and sustainability of open-source software.

LFX Mentorship is actively used by the Cloud Native Computing Foundation (CNCF) as a mentorship platform across CNCF projects.

Program Schedule

2022 — Fall Term — September 1st – November 30th

2022 — Summer Term — June 1st – August 31st (My Term)

2022 — Spring Term — March 1st – May 31st

How to Apply

You have to write a cover letter covering why you are interested in the project, any previous work you have done (or not), what you expect from the project, and so on.

Tip: Start contributing early, talk to the maintainers about your interest in the program, and start discussing the issue/feature you plan to work on.

My Project

My project is Cluster API Provider for GCP (CAPG). It is a CNCF project that helps manage Kubernetes clusters on the Google Cloud Platform. Currently, other providers such as Cluster API Provider for AWS (CAPA) and Cluster API Provider for Azure (CAPZ) support taking advantage of GPUs in their clusters, but CAPG doesn't, so my co-mentee Subhasmita and I will work on adding GPU support to CAPG.

My Mentors

My Co-Mentee

Well, my journey would be a little monotonous if I didn't have a co-mentee. It makes the work more interesting, because whenever we are both stuck on something we hop on a call and discuss it. We also divide the weekly work between us and teach each other what we have learned.

How It All Started

I didn't plan on LFX from the beginning. I started my journey with CAPG for GSoC '22: I applied for the same project and the same feature in GSoC, but that didn't work out because the project wasn't selected for GSoC, so all the applications to it were rejected as well. I then talked to the maintainer Richard and asked whether I could still work on the GPU feature, as I was very interested in it. He told me there was still hope with the LFX Mentorship, opened an application there, and I applied. And then I got selected for the LFX Mentorship 🎉

How It Is Going

I was a little worried about how I would work on a big project like this, with thousands of lines of code, when the largest project I had written was about 500 lines. But I am amazed at how easy the maintainers made my journey: they onboarded me with an introduction to the project over a couple of weeks and gave me small tasks to try things out, telling me to ask questions whenever I got stuck.

Next Steps:

I will start the GPU work next week with Subhasmita and keep contributing to the project in the future.

Create a managed cluster using Cluster API Provider for Google Cloud Platform (CAPG)

In the previous blog, I explained how to create and manage Kubernetes clusters with Cluster API locally, using Docker as the infrastructure.

In this blog, I will explain how to create and manage Kubernetes clusters with Cluster API on Google Cloud.

Note – Throughout the blog I will use Kubernetes version 1.22.9. It is recommended to use the Kubernetes version that your OS image was built with by the image builder; you can check it in kubernetes.json and use that.
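
If you want to double-check that version, one possible way to read it from the image builder's defaults (assuming the usual repository layout, where the defaults live in images/capi/packer/config/kubernetes.json under a kubernetes_semver key; both the path and the key name may differ between image-builder releases) is:

# Hypothetical check; run from the root of the image-builder repository.
jq -r '.kubernetes_semver' images/capi/packer/config/kubernetes.json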

Step 1 –

  • Create the kind cluster –
kind create cluster --image kindest/node:v1.22.9 --wait 5m

Step 2 –

Follow the image-builder steps for GCP and build an image.
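
A minimal sketch of that flow, assuming the kubernetes-sigs/image-builder repository and its GCE Makefile targets (the target name build-gce-ubuntu-2004 is an example and may change between releases, so check the image-builder book for the current list):

# Clone image-builder and move into the CAPI images directory.
git clone https://github.com/kubernetes-sigs/image-builder.git
cd image-builder/images/capi

# Install the GCE build dependencies (Packer, Ansible, and friends).
make deps-gce

# Build an Ubuntu 20.04 image in your project; this reuses the GCP_PROJECT_ID and
# GOOGLE_APPLICATION_CREDENTIALS values exported in Step 3.
GCP_PROJECT_ID=<YOUR PROJECT ID> \
GOOGLE_APPLICATION_CREDENTIALS=<PATH TO GCP CREDENTIALS> \
make build-gce-ubuntu-2004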

Step 3 –

  • Export the following env variables – (reference)
export GCP_PROJECT_ID=<YOUR PROJECT ID>
export GOOGLE_APPLICATION_CREDENTIALS=<PATH TO GCP CREDENTIALS>
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )

export CLUSTER_TOPOLOGY=true
export GCP_REGION="us-east4"
export GCP_PROJECT="<YOUR GCP PROJECT NAME>"
export KUBERNETES_VERSION=1.22.9
export IMAGE_ID=projects/$GCP_PROJECT/global/images/<IMAGE ID>
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=default
export CLUSTER_NAME=test

Step 4 –

Set up the network. In this example we are using the default network, so we will create a router and a NAT gateway for our workload cluster to have internet access.

gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"

gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips

Step 5 –

  • Initialize the infrastructure
clusterctl init --infrastructure gcp
  • Generate the workload cluster config and apply it
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.9 > workload-test.yaml

kubectl apply -f workload-test.yaml
  • View the cluster and its resources
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON                 SINCE  MESSAGE
/test                                                              False  Info      WaitingForKubeadmInit  5s
├─ClusterInfrastructure - GCPCluster/test
└─ControlPlane - KubeadmControlPlane/test-control-plane            False  Info      WaitingForKubeadmInit  5s
  └─Machine/test-control-plane-x57zs                               True                                    31s
    └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
  • Check the status of the control plane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
test-control-plane   test                                           1                  1         1             2m9s   v1.22.9

Note – The control plane won't be ready until the next step, when I install the CNI (Container Network Interface).

Step 6 –

  • Get the kubeconfig for the workload cluster
$ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
  • Apply the CNI
kubectl --kubeconfig=./workload-test.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
  • Wait a bit and you should see this when getting the kubeadmcontrolplane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
test-control-plane   test      true          true                   1          1       1         0             6m33s   v1.22.9


$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.9

Step 7 –

  • Edit the MachineDeployment in workload-test.yaml: it has 0 replicas by default, so set replicas to the number of worker nodes you want; in this case we used 2 (a sketch of the relevant section appears at the end of this step). Then apply workload-test.yaml –
$ kubectl apply -f workload-test.yaml
  • After a few minutes, you should see something like this –
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
/test                                                              True                     15m
├─ClusterInfrastructure - GCPCluster/test
├─ControlPlane - KubeadmControlPlane/test-control-plane            True                     15m
│ └─Machine/test-control-plane-x57zs                               True                     19m
│   └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
└─Workers
  └─MachineDeployment/test-md-0                                    True                     10m
    └─2 Machines...                                                True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6

$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.9
test-md-0-b7766            Ready    <none>                 17m   v1.22.9
test-md-0-wsgpj            Ready    <none>                 17m   v1.22.9
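
For reference, the MachineDeployment section edited at the start of this step looks roughly like the sketch below. The exact apiVersion and the remaining fields come from the manifest clusterctl generated, so treat this as an illustration rather than a complete resource:

apiVersion: cluster.x-k8s.io/v1beta1   # may differ depending on your clusterctl release
kind: MachineDeployment
metadata:
  name: test-md-0
spec:
  clusterName: test
  replicas: 2   # was 0 in the generated manifest; set to the number of worker nodes you want
  # the template section and other generated fields stay exactly as clusterctl produced them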

Yaaa! Now we have a Kubernetes cluster in GCP with 1 control plane node and 2 worker nodes.

Step 8 –

Delete what you have created –

$ kubectl delete cluster $CLUSTER_NAME

$ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
    --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"

$ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
    --region="${GCP_REGION}"

$ kind delete cluster

What Is Kubernetes Cluster API and How to Set Up Cluster API Locally Using Docker

I came across the term Cluster API while I was contributing to Flatcar Linux, but I didn't know much about it then. Recently I have been tinkering with Kubernetes and started learning what Cluster API is and what it does. Cluster API, or CAPI, is a project from a Kubernetes Special Interest Group (SIG) that uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators.
In general terms, it is a project that helps manage your Kubernetes clusters no matter where they run, including on various cloud providers, because a Kubernetes cluster involves many components: hardware, software, services, networking, storage, and so on.

Motivation

I wrote this blog with the motivation of setting Cluster API up locally and contributing to the project. Recently I have come across core computer science subjects like computer networking and database management systems, and I was amazed to see how they interconnect with distributed systems.
I am still very new to operating on the various cloud providers, but in the near future I want to learn those things and run Kubernetes there.
I also want to participate in GSoC, work on this particular project, and improve CAPG by adding more features and GKE support.

Setting up CAPI locally with Docker

Requirements: You need Docker, kind, and kubectl installed on your system before starting (clusterctl itself is installed in Step 2).
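
A quick, optional way to confirm these are in place before you start (any reasonably recent versions should work):

docker version            # Docker engine and CLI
kind version              # kind, used to create the management cluster
kubectl version --client  # kubectl, used to talk to both clusters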

Step 1 –

Infrastructure Provider – a provider that supplies the compute and resources needed to spin up a cluster. We are going to use Docker as our infrastructure here.

  • Create a kind config file for allowing the Docker provider to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
  • Then create a kind cluster using the config file –
kind create cluster --config kind-cluster-with-extramounts.yaml

Step 2 –

Now install the clusterctl tool, which is used to manage the lifecycle of a CAPI management cluster –

  • Installation on Linux – (For other OSes – ref)
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/clusterctl-linux-amd64 -o clusterctl
$ chmod +x ./clusterctl
$ sudo mv ./clusterctl /usr/local/bin/clusterctl
$ clusterctl version

Step 3 –

Now it's time to use clusterctl to transform the kind cluster into a management cluster with the clusterctl init command. The command accepts a list of providers.

Management Cluster – A Management cluster is a Kubernetes cluster that manages the lifecycle of Workload Clusters. A Management Cluster is also where one or more Infrastructure Providers run, and where resources such as Machines are stored.

  • I am using Docker as my infrastructure, so I will use the command below –
clusterctl init --infrastructure docker

Step 4 –

Now it’s time for creating a workload cluster.

Workload Cluster – A workload cluster is a cluster created by a ClusterAPI controller, which is not a bootstrap cluster, and is meant to be used by end-users.

  • Now we use clusterctl generate cluster to generate a YAML file to create a workload cluster.
clusterctl generate cluster test-workload-cluster --flavor development \
--kubernetes-version v1.21.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> test-workload-cluster.yaml
  • Now apply the file to create the workload cluster –
kubectl apply -f test-workload-cluster.yaml

Step 5 –

Now we verify our workload cluster and access it.

  • Get the status of the cluster
kubectl get cluster
  • View the cluster and its resources
clusterctl describe cluster test-workload-cluster
  • Check the status of the control plane
kubectl get kubeadmcontrolplane

Note – The control plane won't be ready until the next step, when I install the CNI (Container Network Interface).

Step 6 –

Now it's time to set up the CNI solution.

  • First get the workload cluster kubeconfig
clusterctl get kubeconfig test-workload-cluster > test-workload-cluster.kubeconfig
  • We will use Calico as an example.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml
  • After some time the nodes should be up and running.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig get nodes

Step 7 –

Now it's the last phase: delete the resources –

  • Delete the workload cluster
kubectl delete cluster test-workload-cluster
  • Delete the management cluster
kind delete cluster