Learning: Kubernetes – Service, Scalability, Rolling Updates


There are multiple challenges with pods. Suppose we have two pods: one front-end and one back-end. Now we have a couple of questions –

  • How does the front-end app get exposed to the outside world?
  • How does the front-end app talk to the back-end app?
  • When a pod dies, a new pod gets created and is assigned a new IP address. How do we handle these pod IP changes?

So, services are a way of grouping pods in a cluster. We can have as many services as we want in a cluster. There are mainly three types of services in k8s –

  1. ClusterIP – It deals with the pod IP change problem. The service gets a static IP address, so even when a pod dies and is replaced, the service stays in place and its IP doesn’t change. It exposes the Service on an internal IP in the cluster, so the service is only reachable from within the cluster.
  2. NodePort – Makes a Service accessible from outside the cluster.
  3. LoadBalancer – Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service.
# Get the running services
$ kubectl get services

# Expose service to outside of the world
$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

# Delete a specific service
$ kubectl delete service -l app=kubernetes-bootcamp


When we want to scale our app, we create multiple replicas of the pods on the nodes, and to balance the requests across the pods we use a load balancer service.

Rolling Update

K8s allows us to do rolling updates. Let’s see how it works –

  • First it creates new pods with the updated config.
  • Then it replaces the old pods with the new ones one by one, which changes the pod IP addresses.

This allows the app to be updated with zero downtime.

# Update image of the application
$ kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

# Get Rollout update status
$ kubectl rollout status deployments/<deployment_name>

# Get the service info
$ kubectl describe services/<service_name>

# Roll Back to the deployment to your last working version
$ kubectl rollout undo deployments/<deployment_name>

Don’t Be Afraid Of Being A Beginner

In this blog I am going to share my journey: where I started, how I started, and where I am now.

Starting College Life

Well, I got into a Diploma program in 2017 in the Electronics and Telecommunication stream. Like other students, I just took engineering without any knowledge of which branch I should take or where I wanted to see myself in the future. The 1st semester passed with just academic studies related to the curriculum.

How It Began

In the 2nd semester, I came to college one day and got to know that a cyber security seminar had taken place. As usual I was pretty late, so I missed the seminar, but one of my friends came into the classroom and was discussing what he had heard there. I asked him what had happened, and he told me the speaker had been talking about the web and particularly Python, which is a very easy language to learn and has very good demand in the job market. He also gave me the speaker’s phone number. I was pretty curious about Python for the whole day. I went home and pinged the person: “Hi, I want to learn Python. How can I start?”. He replied, “You can start with Python itself.”.

So I opened YouTube, searched for a Python tutorial, and got this playlist. I started learning from it. While going through the playlist, I started researching what I could do with Python and which areas I could work in after learning the basics. That’s how I got to know a lot of new terms, including machine learning, web development, and back-end development. Then I made a small calculator with the help of a blog from the internet; I couldn’t understand the code at that time because I was pretty new, so I copied a lot of it. I also got to know about frameworks like Django and Flask and wasn’t sure which one to pick. I finally finished the playlist, picked Django, and started learning it from YouTube. Around that time I also got to know about Git & GitHub. I opened my GitHub account and started pushing whatever I was learning from YouTube and blogs.

How I started Open Source Contribution

Then I got to know about Open Source and contribution, where I could get experience writing industry-standard code. So again I started searching the web, found a few answers to “How to contribute to open source?”, and got one answer from Sayan on Quora. I gathered a lot of information from that post and searched through the code bases of those projects, but somehow nothing was working for me. So I pinged Sayan on Instagram. He guided me to pick one project from Mozilla, which is Pontoon. I joined their IRC channel and asked the maintainer if he could pick an issue for me to work on. He gave me one, and after discussing it with the maintainer I eventually solved the issue and opened my first PR. It was quite hard the first time, figuring out where to start and what to do, but the feeling after solving an issue gives you a whole lot of joy and confidence. I then solved quite a few other issues in the Pontoon project.

Doing DGPLUG Summer Training

After that I got to know about a summer training that Sayan and a few other folks from different open source projects conduct in order to bring people into open source. So I joined the #dgplug channel on Freenode. I joined the summer training in 2019, and in one word it was awesome, because I got a lot of insights into various technologies: Git, SSH, how asymmetric and symmetric encryption work, etc. They also encouraged me to start writing blogs. The main thing was that I got to know a lot of new people from the community, like Kuntal, Pradvan, Priyanka, Pravar, and many more.

Attending Pycon India

While going to college I was talking to Sayan, and he told me to attend PyCon India. I came home, talked to my parents, and asked if I could go. They said yes, and I went to Chennai. It was a unique experience in itself. I wrote a blog post on my PyCon India experience.

2020 Pandemic Start 😦

It was one of the worst years of my life, completely sitting at home for a year, and I regret not doing or learning anything new that year. I just skipped everything, from studies to programming.

2021 BTech started

Yes, I am in the middle of the pandemic batch. I started my BTech, and from here a lot changed in my life: I shifted my stream from ETCE in Diploma to CSE in BTech. The 3rd (lateral-entry) semester was very short due to the pandemic; we got just 1.5 months to complete all the subjects and then the semester exam. After that, one day I was talking to Sayan about his company getting acquired by Microsoft, and he told me about the importance of learning the core subjects of Computer Science besides coding all the time. It was a life-saving conversation for me, because I was about to skip subjects like Automata and Compilers. Let me tell you what I have studied so far –

  • Computer Organization & Architecture – This is one of the most important subjects in the curriculum. I studied it mostly by watching various YouTube channels like Gate Smashers.
  • Discrete Math – From the beginning I have not been that good at math, and to be honest I also didn’t pay much attention to this subject.
  • Automata – Yep! I studied this subject from various YouTube channels, and I used to solve questions on NFAs, DFAs, Context-Free Grammars, Turing Machines, etc.
  • Compiler – I started this subject with a lot of motivation, but it gradually decreased as I had a lot of trouble understanding the subject from the Dragon Book. I still have plans to improve in this subject and make my own compiler in the future.
  • Operating Systems – This is the subject I love the most in my entire curriculum. First I completed Galvin’s book on OS. Then I watched several YouTube videos on different OS topics. The most interesting part of the OS for me was semaphores.
  • Object-Oriented Programming – This is again one of the most important subjects if you want to write complex code and design large-scale systems. I already knew the basic OOP concepts, so I started with this book. There I got to know a lot about design patterns, like the Singleton pattern and the Factory pattern.
  • DBMS – This is again one of my favorite subjects. I knew the basics of DBMS, as I used to develop applications using Django, but I was lacking in the inner concepts, like how it handles concurrency and how it does recovery. I also learned a lot about complex join queries and how they work. I watched the playlist by Knowledge Gate, where Sanchit Jain sir explains each and every topic very clearly.
  • Computer Networks – This is the subject I am currently studying, and the most amazing thing is that whatever I study in this subject I use day to day in my life. I started learning networking with Ravindrababu Ravula’s channel, and after finishing it I started the Knowledge Gate playlist. Side by side, Sayan, Nabarun, a few other folks, and I meet regularly and study CN from the book CompTIA Network+ Certification All-in-One Exam Guide, Eighth Edition (Exam N10-008). We are also planning to start doing practicals using whatever we have studied in CN.

Learning Golang and Contributing to Flatcar Linux and Kubernetes

When Sayan’s company got acquired by Microsoft, he told me that their entire product is open source and related to operating systems. I was pretty excited about it because I always wanted to contribute to an OS project. I joined their Matrix channel and introduced myself. After getting into the project, I talked to the maintainers and picked one issue. The catch was that I had to learn Golang first, so I started learning it. After learning the basics of Golang I started tinkering with the issue, and I got to work on something that interested me very much in OS: semaphores. After several discussions with the maintainer I solved the issue, and here is the PR. I also gave a talk in their monthly community call.

After that I kept contributing to the OS and fixed several issues. The OS has an immutable file system, so the organization has to deliver the required software to the community and is responsible for updating and adding new applications. I added and upgraded quite a few applications from the Gentoo ebuild repository as well.

Later, the most interesting and challenging thing I worked on in Flatcar Linux was integrating the FleetLock protocol into their Locksmith project. The problem was that Locksmith was tightly bound to etcd, so users who wanted cluster reboot coordination had to use etcd. The idea was to implement a FleetLock client inside Locksmith: FleetLock sits in the middle, and whichever application supports the FleetLock protocol can interact with Locksmith and coordinate reboots across the cluster. I gave a talk about this too in their community call.

I was still lacking a lot of knowledge in Golang, but thanks to Nabarun, who helped me understand a lot of design patterns for structuring a Go application, helped me understand interfaces, and guided me in building a key-value storage. I also wrote a blog about interfaces. Currently I am contributing to the CAPG project and looking forward to contributing more to this and other Kubernetes projects in the future. I am also learning Kubernetes and System Design and giving a lot of interviews these days for internships.

I am still improving myself every day by learning various CS concepts, interconnecting them, and applying them in projects whenever possible. If I look back at myself, I can clearly see my improvement over the past years. As Steve Jobs once said, “You can’t connect the dots looking forward; you can only connect them looking backward”.

Learning: Kubernetes – Pods and ReplicaSet Simplified


A Pod is the smallest execution unit of a Kubernetes application. Each Pod represents a part of a workload that is running on your cluster.

We usually have one pod per application. Inside that pod we can have multiple containers.

  • A Pod is a Kubernetes abstraction that represents a group of one or more application containers and some shared resources.
    • It has shared volumes.
    • A cluster-internal IP (every pod has a unique IP, even on the same node).
    • Info about how to run the containers.
  • We don’t deal with containers directly; instead we work with pods.
  • If a container dies inside a pod, it will be automatically restarted.
  • Each pod is tied to one node until termination.
  • Pods running inside k8s are only visible to other pods and services inside the k8s cluster.
  • We have to explicitly expose the app outside of k8s.

Multi-Container Pods – Pods are designed to support multiple closely related containers. The containers in a pod are automatically scheduled on the same VM or physical machine in the cluster.

The containers can communicate to each other and share resources.

Pods Networking –

  • Each pod is assigned a unique IP address.
  • The containers in a pod share the network namespace, including the IP address and port space.
  • The containers inside a pod can communicate with each other over localhost.
  • The containers inside a pod can also communicate using Inter-Process Communication.

Life Cycle of a Pod

  • A pod is said to be ephemeral.
  • A pod is never rescheduled to a different node; instead, the pod is replaced by a new one.
  • If a node fails, the pods assigned to it fail as well.

Generally a pod has 5 phases –

  1. Pending – The pod has been accepted by the cluster, but one or more containers haven’t been set up yet.
  2. Running – The pod has been bound to a node and its containers have started.
  3. Succeeded – All containers in the pod have terminated successfully.
  4. Failed – At least one container has terminated in failure.
  5. Unknown – The state of the pod could not be obtained for some reason.
# Create a deployment
$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# Get deployment info
$ kubectl get deployments

# Get the list of pods running
$ kubectl get pods

# See which containers are running inside a pod
$ kubectl describe pods

# Run a command inside a container
$ kubectl exec $POD_NAME -- env

# Open bash inside a container
$ kubectl exec -it $POD_NAME -- bash


We don’t create pods directly. The reason: suppose our deployment always needs 4 pods; if we created the pods directly and one of them went down, we would have to recreate it manually.

That’s why we use a ReplicaSet. It is a management mechanism that ensures we have the desired set of pods in the k8s cluster. The controller compares the current state with the desired state and checks whether the current pod count matches the ReplicaSet count. If not, it creates or deletes pods.

# Get the replica set
$ kubectl get rs

# Scale up the app and change replicaset
$ kubectl scale deployments/kubernetes-bootcamp --replicas=4

# Scale down the app
$ kubectl scale deployments/kubernetes-bootcamp --replicas=2

# To see that a pod is managed by a ReplicaSet
$ kubectl get pods <pod_name> -o yaml

# Delete the ReplicaSet
$ kubectl delete rs <replica_name>

# Delete the replica set but keep the pods
$ kubectl delete rs <replica_name> --cascade=false

Learning: Kubernetes – Cluster, Control Plane, Nodes


A cluster is a set of machines connected and working together to run as a single unit. The idea is to deploy containerized applications without tying them to a specific machine.

The main components of any k8s cluster are –

  • Master Node or Control Plane
  • Worker Nodes
  • Virtual Network

Control Plane

In general, it is the control panel of the cluster: it manages the entire cluster.

  1. It runs the API server, which works as the entry point for the Kubernetes cluster.
  2. It runs the controller manager, which keeps an overview of the cluster and maintains the application’s desired state.
  3. It runs the scheduler, which is responsible for placing pods and containers on different nodes based on workload and the available server resources on each node. After deciding which node to use for pod or container creation, it sends the request to that node’s kubelet process, and the kubelet does the actual creation of the pod and its containers.
  4. Another important thing that runs is etcd, a key-value store which holds the current state of the cluster.


Worker Node

It is a physical computer or VM that serves as a worker machine in a k8s cluster.

  1. Each worker node has Docker containers of different applications deployed on it.
  2. The kubelet manages the node and talks to the control plane.
  3. The node uses the Kubernetes API to communicate with the control plane.
  4. Two nodes can’t have the same name, as the name identifies the node.
# Start the cluster with minikube
$ minikube start

# Get the cluster info
$ kubectl cluster-info

# Get node information
$ kubectl get nodes

Daily Learning: Computer Networks – Securing TCP/IP

There are 5 terms that need to be mentioned when talking about security in TCP/IP –

  • Encryption – Scrambling data in such a manner that it can’t be read in transit. At the receiving end, the data must be descrambled.
  • Integrity – The guarantee that the data received is the same as what was originally sent.
  • Nonrepudiation – A person cannot deny that they took a specific action.
  • Authentication – Whoever is accessing the data is the person you want to access the data. A username & password provide authentication.
  • Authorization – What an authenticated person is allowed to do with the data.


A packet of data on the Internet often comes with a port number encapsulated in the segment or datagram, for example, so a bad guy quickly knows what type of data he’s reading. All data starts as cleartext, which roughly means the data hasn’t been encrypted yet.

Here comes the encryption and we use cipher. A cipher is a general term for a way to encrypt data. An algorithm is the mathematical formula that underlies the cipher. When you run plaintext through a cipher algorithm using a key, you get the encrypted ciphertext.

Types of encryption –

  • Symmetric Encryption – An encryption algorithm that uses a single key to both encrypt and decrypt the data, so the key must be shared between the sender and receiver. There are two major kinds of symmetric ciphers –
    • Block cipher – It divides the data into blocks and encrypts each block, usually 128-bit blocks.
    • Stream cipher – The algorithm encrypts each bit coming from a stream of bits.
  • Asymmetric Encryption – A major drawback of symmetric encryption is that if the key is compromised, the communication is vulnerable. For that we use asymmetric encryption. Suppose there are two people, Alice & David, and David wants to send data to Alice. Alice creates a key pair of a public and a private key; the public key is used for encryption and the private key for decryption. Alice gives her public key to David, David encrypts the data with it and sends it to Alice, and Alice decrypts the data with the help of her private key.
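The symmetric case above can be sketched with Go's standard library. This is a minimal example, not a complete protocol: it uses AES-256 in GCM mode, and the key, message, and helper names (`encrypt`, `decrypt`) are just for illustration.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encrypt seals msg with AES-256-GCM under key and returns nonce||ciphertext.
func encrypt(key, msg []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the receiver can split it off again.
	return gcm.Seal(nonce, nonce, msg, nil), nil
}

// decrypt opens nonce||ciphertext using the same shared key.
func decrypt(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	// The 256-bit key is the shared secret between sender and receiver.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	ct, err := encrypt(key, []byte("hello over TCP/IP"))
	if err != nil {
		panic(err)
	}
	pt, err := decrypt(key, ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt)) // hello over TCP/IP
}
```

The same key both seals and opens the data, which is exactly why it has to be shared safely between the two parties.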


Hashing

  • It’s a mathematical function that we run on a string to get a fixed-length string (a checksum or message digest).
  • The message digest will always be the same length regardless of the length of the input string.
  • A hash is a one-way function: we can’t get the message back from the hash.

Uses –

  • When we download a file from the internet, the download provider also provides a message digest of the file.
  • We download the checksum along with the file.
  • Then we compute the checksum of the downloaded file and compare it with the provided checksum.
  • If the checksums don’t match, the data has been tampered with in the middle.
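The verification steps above can be sketched with SHA-256 from Go's standard library. The `checksum` helper and the sample "file contents" are hypothetical stand-ins for a real download and its published digest.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksum returns the SHA-256 message digest of data as a hex string.
func checksum(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	// What the download provider publishes alongside the file.
	published := checksum([]byte("file contents"))

	// What we actually received over the network.
	downloaded := []byte("file contents")

	// Compare our computed digest with the published one.
	if checksum(downloaded) == published {
		fmt.Println("checksums match: file is intact")
	} else {
		fmt.Println("checksum mismatch: file was tampered with")
	}
}
```

Flipping even a single byte of the downloaded data would produce a completely different digest, which is what makes the comparison useful.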

Digital Signature

It is proof that someone took some action in a network, and they can’t deny it.

  • First the sender hashes the message and encrypts the hash with the sender’s private key.
  • Then the sender sends the data, along with this signature, to the receiver.
  • The receiver decrypts the signature with the help of the sender’s public key and hashes the received message.
  • If the hashes match, it’s proof that the sender sent the message.


The invention of SSH was heavily tied to the telnet protocol, because telnet was completely unsecured: everything was transferred in plain text. Then Tatu Ylönen, a student at Helsinki University of Technology, created a new protocol called SSH after his network was breached because of telnet.

Working Principle

  • When a client wants to connect to the server for the first time, the server sends its public key to the client.
  • The client creates a session ID, encrypts it using the public key, and sends it back to the server.
  • The server decrypts the session ID using its private key, and it is used in all data communication going forward.
  • The client and server then decide the type of encryption to be used for the session (generally AES).

SSH can use public keys for authentication, and we can turn off password-based authentication.

Use Public/Private Key for authentication –

  • The client first generates a key pair using ssh-keygen.
  • The public key is sent to the server, and the private key is kept safe on the client machine.
  • When you connect to the server, the client creates a signature using its private key and sends it to the server.
  • The server checks the signature using the client’s public key, and if everything matches, you are authenticated to the server.

Create a Key-Value Storage using Golang – Part 1

A few days back I gave an interview at a company for a Golang developer intern position. They asked me a lot of questions; some I cracked and some I couldn’t. The things I failed at miserably were Go interfaces and Go concurrency (channels & the atomic package). I took it as motivation, learned Go interfaces again, and wrote a blog on that. Now it was time for me to build a project so my knowledge would become more concrete. Nabarun gave me the idea to make a key-value storage using Go that supports multiple storage types.

So I started building the project. First, create a directory api/models from the root of your project. Then create a file called records.go and write the following code –

type InMemory struct {
	Data map[string]string `json:"data"`
}

This struct will be responsible for the in-memory storage of the key-value store. Now let’s define some interfaces that will help us when we enhance our app to support more storage systems, like a file-based one. Add the following code in records.go –

type StorageSetter interface {
	Store(string, string)
}

type StorageGetter interface {
	List() map[string]string
	Get(string) (string, error)
}

type StorageDestroyer interface {
	Delete(string) error
}

type Storage interface {
	StorageSetter
	StorageGetter
	StorageDestroyer
}
Now, why have I written multiple small interfaces like Setter and Getter? Because it is always best practice to keep interfaces small. It actually helps increase abstraction: callers can depend on only the behavior they need.
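To see why the small interfaces pay off, here is a standalone sketch. The `countRecords` function is hypothetical and not part of the project: it only reads data, so it accepts just the read-only interface, and it will keep working with any storage backend we add later.

```go
package main

import (
	"errors"
	"fmt"
)

// StorageGetter mirrors the read-only interface from records.go.
type StorageGetter interface {
	List() map[string]string
	Get(string) (string, error)
}

// InMemory mirrors the in-memory backend from records.go.
type InMemory struct{ Data map[string]string }

func (r *InMemory) List() map[string]string { return r.Data }

func (r *InMemory) Get(key string) (string, error) {
	val, ok := r.Data[key]
	if !ok {
		return "", errors.New("key not found")
	}
	return val, nil
}

// countRecords needs only read access, so it asks for the
// smallest interface that satisfies it, not the full Storage.
func countRecords(s StorageGetter) int {
	return len(s.List())
}

func main() {
	cache := &InMemory{Data: map[string]string{"a": "1", "b": "2"}}
	fmt.Println(countRecords(cache)) // 2
}
```

A future disk-backed type only has to implement `List` and `Get` for `countRecords` to accept it; that is the abstraction the small interfaces buy us.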

Now let’s define a helper function in records.go that returns an InMemory struct.

// NewCache returns an In-Memory storage struct
func NewCache() *InMemory {
	return &InMemory{Data: make(map[string]string, 2)}
}

Let’s now create the methods in records.go which will do operations on the struct –

func (r *InMemory) Store(key, val string) {
	r.Data[key] = val
}

func (r *InMemory) List() map[string]string {
	return r.Data
}

func (r *InMemory) Get(key string) (string, error) {
	val, ok := r.Data[key]
	if !ok {
		return "", errors.New("key not found")
	}
	return val, nil
}

func (r *InMemory) Delete(key string) error {
	_, ok := r.Data[key]
	if !ok {
		return errors.New("key not found")
	}
	delete(r.Data, key)
	return nil
}

Now let’s define our server. Create a directory api/controllers, create a base.go file inside it, and write this code –

type Server struct {
	Router *http.ServeMux
	Cache  models.Storage
}

This Server struct holds the dependencies of the server, which are typically the router and the storage interface.

Now create a server.go inside the api directory and write the code –

package api

import (
	"flag"

	"github.com/aniruddha2000/goEtcd/api/controllers"
)

var server controllers.Server

// Initialize and run the server
func Run() {
	var storageType string

	flag.StringVar(&storageType, "storage-type", "in-memory",
		"Define the storage type that will be used in the server. By default the value is in-memory.")
	flag.Parse()

	server.Initialize(storageType)
	server.Run("8888")
}

Here you can see it takes the flag from the command line, passes it to the Initialize method, and calls the Run method to run the server on port 8888. Now let’s define these two methods, Initialize & Run, in the base.go file –

func (s *Server) Initialize(storageType string) {
	s.Router = http.NewServeMux()

	switch storageType {
	case "in-memory":
		s.Cache = models.NewCache()
	case "disk":
		s.Cache = models.NewDisk()
	default:
		log.Fatal("Use flags `in-memory` or `disk`")
	}

	log.Printf("Starting server with %v storage", storageType)
	s.initializeRoutes()
}


// Run the server on the desired port and log the status
func (s *Server) Run(addr string) {
	cert, err := tls.LoadX509KeyPair("localhost.crt", "localhost.key")
	if err != nil {
		log.Fatalf("Couldn't load the certificate: %v", err)
	}

	server := &http.Server{
		Addr:    ":" + addr,
		Handler: s.Router,
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
		},
	}

	fmt.Println("Listening on port", addr)
	log.Fatal(server.ListenAndServeTLS("", ""))
}

Here you can see the Initialize method sets the router for the server, picks the storage backend based on the flag, and at the end initializes the routes.

In the Run method, it loads the TLS certificate, sets up the server, and runs it at the end.

Now let’s define the initializeRoutes function that we saw at the end of the Initialize method. Create a routes.go file inside api/controllers –

func (s *Server) initializeRoutes() {
	s.Router.HandleFunc("/record", s.Create)
	s.Router.HandleFunc("/records", s.List)
	s.Router.HandleFunc("/get/record", s.Get)
	s.Router.HandleFunc("/del/record", s.Delete)
}

Now we will see the implementation of the route controllers. Create a cache.go file inside api/controllers and paste the code below –

package controllers

import (
	"log"
	"net/http"

	j "github.com/aniruddha2000/goEtcd/api/json"
)

func (s *Server) Create(w http.ResponseWriter, r *http.Request) {
	if r.Method == "POST" {
		r.ParseForm()
		key := r.Form["key"]
		val := r.Form["val"]

		for i := 0; i < len(key); i++ {
			s.Cache.Store(key[i], val[i])
		}

		j.JSON(w, r, http.StatusCreated, "Record created")
	} else {
		j.JSON(w, r, http.StatusBadRequest, "POST Request accepted")
	}
}

func (s *Server) List(w http.ResponseWriter, r *http.Request) {
	if r.Method == "GET" {
		records := s.Cache.List()
		j.JSON(w, r, http.StatusOK, records)
	} else {
		j.JSON(w, r, http.StatusBadRequest, "GET Request accepted")
	}
}

func (s *Server) Get(w http.ResponseWriter, r *http.Request) {
	if r.Method == "GET" {
		keys, ok := r.URL.Query()["key"]
		if !ok || len(keys[0]) < 1 {
			log.Println("Url Param 'key' is missing")
			return
		}
		key := keys[0]

		val, err := s.Cache.Get(key)
		if err != nil {
			j.JSON(w, r, http.StatusNotFound, err.Error())
			return
		}
		j.JSON(w, r, http.StatusOK, map[string]string{key: val})
	} else {
		j.JSON(w, r, http.StatusBadRequest, "GET Request accepted")
	}
}

func (s *Server) Delete(w http.ResponseWriter, r *http.Request) {
	if r.Method == "DELETE" {
		keys, ok := r.URL.Query()["key"]
		if !ok || len(keys[0]) < 1 {
			log.Println("Url Param 'key' is missing")
			return
		}
		key := keys[0]

		err := s.Cache.Delete(key)
		if err != nil {
			j.JSON(w, r, http.StatusNotFound, err.Error())
			return
		}
		j.JSON(w, r, http.StatusNoContent, map[string]string{"data": "delete"})
	} else {
		j.JSON(w, r, http.StatusBadRequest, "DELETE Request accepted")
	}
}
Here you can see the controllers that handle the different route traffic, call the record methods, and do the operations.

I have created a helper JSON method to reduce redundant code while writing the route controllers. Create an api/json directory and create a json.go file inside it. Paste the code below –

func JSON(w http.ResponseWriter, r *http.Request, statusCode int, data interface{}) {
	w.Header().Set("Location", fmt.Sprintf("%s%s", r.Host, r.RequestURI))
	w.WriteHeader(statusCode)
	json.NewEncoder(w).Encode(data)
}

In the next part I will walk you through how to extend this application to support disk-based storage alongside the in-memory storage.

What is Kubernetes Cluster API and Setup a Local Cluster API using Docker

I came across the term Cluster API while I was contributing to Flatcar Linux, but I didn’t know much about it then. In recent days I have been tinkering with Kubernetes and started learning what Cluster API is and what it does. Cluster API, or CAPI, is a tool from a Kubernetes Special Interest Group (SIG) that uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators.
In general terms, it is a project that helps manage your k8s clusters no matter where they are, including on various cloud providers, because a k8s cluster includes a lot of components: hardware, software, services, networking, storage, and so on.


I wrote this blog with the motivation of setting it up locally and contributing to the project. In recent days I have come across a lot of core Computer Science subjects, like Computer Networking and Database Management Systems, and I am really amazed to see their interconnection with distributed systems.
I am still very new to operating on various cloud providers, but in the near future I am willing to learn those things and run Kubernetes there.
I also want to participate in GSoC, work on this particular project, and improve CAPG by adding more features and GKE support.

Setting up CAPI locally with Docker

Requirements: You need to have the following packages installed on your system before starting –

Step 1 –

Infrastructure Provider – A provider that supplies compute & resources in order to spin up a cluster. We are going to use Docker as our infrastructure here.

  • Create a kind config file for allowing the Docker provider to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
  • Then create a kind cluster using the config file –
kind create cluster --config kind-cluster-with-extramounts.yaml

Step 2 –

Now install the clusterctl tool to manage the lifecycle of a CAPI management cluster –

  • Installation in linux OS – (For other OS – ref)
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/clusterctl-linux-amd64 -o clusterctl
$ chmod +x ./clusterctl
$ sudo mv ./clusterctl /usr/local/bin/clusterctl
$ clusterctl version

Step 3 –

Now it’s time to use clusterctl to transform the kind cluster into a management cluster with the clusterctl init command. The command accepts a list of providers.

Management Cluster – A Management cluster is a Kubernetes cluster that manages the lifecycle of Workload Clusters. A Management Cluster is also where one or more Infrastructure Providers run, and where resources such as Machines are stored.

  • I am using Docker as my infrastructure, so I will use the command below –
clusterctl init --infrastructure docker

Step 4 –

Now it’s time to create a workload cluster.

Workload Cluster – A workload cluster is a cluster created by a ClusterAPI controller, which is not a bootstrap cluster, and is meant to be used by end-users.

  • Now we use clusterctl generate cluster to generate a YAML file to create a workload cluster.
clusterctl generate cluster test-workload-cluster --flavor development \
--kubernetes-version v1.21.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> test-workload-cluster.yaml
  • Now apply the file to create the workload cluster –
kubectl apply -f test-workload-cluster.yaml

Step 5 –

Now we verify our workload cluster and access it.

  • Get the status of the cluster
kubectl get cluster
  • View the cluster and its resources
clusterctl describe cluster test-workload-cluster
  • Check the status of the control plane
kubectl get kubeadmcontrolplane

Note – The control plane won’t be ready until the next step, when I install the CNI (Container Network Interface).

Step 6 –

Now it’s time to set up the CNI solution

  • First get the workload cluster kubeconfig
clusterctl get kubeconfig test-workload-cluster > test-workload-cluster.kubeconfig
  • We will use Calico as an example.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml
  • After some time the nodes should be up and running.
kubectl --kubeconfig=./test-workload-cluster.kubeconfig get nodes

Step 7 –

Now comes the last phase, deleting the resources –

  • Delete the workload cluster
kubectl delete cluster test-workload-cluster
  • Delete the management cluster
kind delete cluster

Daily Learning: Computer Networks – Access Control Methods – CSMA (Carrier Sense Multiple Access)/CD (Collision Detection)


  1. There can be multiple stations in a network.
  2. A station senses whether the transmission line is busy; if it is not busy, it transmits the data.
  3. It also keeps sensing for collisions while sending the data.

Persistence Methods

  1. 1-Persistent – The station senses the medium continuously; as soon as the medium is free it sends the packet immediately (with probability 1).
  2. Non-Persistent – The station senses the medium; if it is busy, it waits a random amount of time and then senses again.
  3. P-Persistent – The station senses the medium continuously; when the medium is free, it generates a random number and transmits only if that number is less than the probability p, otherwise it waits for the next slot.


Vulnerable Time – It is the total propagation time Tp. Once the first bit of the packet reaches the end of the medium, every station has heard the transmission and no one else will transmit.


The Carrier Sense Multiple Access/Collision Detection protocol detects collisions at the media access control (MAC) layer. Once a collision is detected, CSMA/CD immediately stops the transmission by sending a jam signal, so the sender does not waste time sending the rest of the data packet. After the jam signal, each colliding station waits for a random back-off time before transmitting another data packet. If the channel is then found free, the station sends its data immediately.

It is used in wired media, e.g., by Ethernet.

Minimum Transmission Time (to detect a collision) – Tt ≥ 2 * Tp


CSMA/CA stands for Carrier Sense Multiple Access with Collision Avoidance. It is a network protocol that tries to avoid collisions rather than allowing them to occur, and it does not deal with recovering packets after a collision. Like CSMA/CD, it operates in the media access control layer. In CSMA/CA, before a station sends a data frame on a channel, it checks whether the channel is in use. If the shared channel is busy, the station waits until the channel becomes idle. Hence it reduces the chances of collisions and makes better use of the medium to send data packets more efficiently.

It is used in wireless networks, e.g., Wi-Fi.

  • Interframe Space (IFS) – Collisions are avoided by deferring transmission even if the channel is found idle. When an idle channel is found, the station does not send immediately; it waits for a period of time called the interframe space.
  • Contention Window – Time is divided into slots; after a collision a station waits a random number of slots chosen from a window of size 2^n [ n = number of collisions ].

Minimum amount of data – L ≥ 2 * Tp * B [ B = bandwidth ]

Efficiency(η) = Tt / (C * 2 * Tp + Tt + Tp) [ C = number of collisions ]

Back-off Algorithm – Wt = K * Tslot [ K = 0 to 2^n − 1 ] [ n = collision number ]

Daily Learning: Computer Networks – Access Control Methods – TDM, Polling, Token Passing, Aloha

Types of Communication Links

  • Point to Point Link
  • Broadcast Link – The connection is shared between all the stations.

Need Of Access Control

On a broadcast link, if all stations send data simultaneously there will be collisions; that is why we implement access control.

Types Of Access Control Method

1. TDM(Time Division Multiplexing) –

Divide the time into slots and assign each slot to one station.

Efficiency(η) = 1 / (1 + a) [ a = Tp / Tt ]

2. Polling –

A station is given the chance to transmit only when it is polled and has data to send.

Efficiency(η) = Tt / (Tpoll + Tt + Tp) [ Tt = time taken for transmission, Tp = time taken for propagation, Tpoll = polling time ]

3. Token passing-

Token – A token is a small message composed of a special bit pattern.

Ring Latency – It is time taken by a bit to cover the entire ring and come back to the same point.

RL = d / v + N * b

[ d = length of the ring, v = velocity of data in ring, N = no. of stations in ring, b = time taken by each station to hold the bit before transmitting it (bit delay)]

Cycle Time – The time taken by the token to complete one revolution of the ring is known as cycle time.

CT = d / v + N * THT

[ d = length of the ring, v = velocity of data in ring, N = no. of stations in ring, THT = Token Holding Time ]

Strategies –

Delayed Token Reinsertion (DTR) –

The station keeps holding the token until the last bit of the data packet it transmitted completes a full revolution of the ring and comes back to it.

Working –

After a station acquires the token,

  • It transmits its data packet.
  • It holds the token until the data packet reaches back to it.
  • After the data packet comes back to it, the station discards the packet as its journey is complete.
  • It releases the token.

Token Holding Time (THT) = Transmission delay + Ring Latency = Tt + Tp [ Tt = Transmission time, Tp = Propagation time ]

Ring Latency = Tp + N x bit delay = Tp [ assuming bit delay = 0 ]

Early Token Reinsertion (ETR) –

Station releases the token immediately after putting its data packet to be transmitted on the ring.

Token Holding Time (THT) = Transmission delay of data packet = Tt

4. Aloha

Rules –

  1. Any station can transmit data to a channel at any time.
  2. No carrier sensing.
  3. There is no collision detection.
  4. It re-transmits the data after some time (if an acknowledgement does not arrive).

There are mainly two type of aloha –

  • Pure Aloha –
  1. The total vulnerable time = 2 * Tfr [ Tfr = Average time required to send a packet ]
  2. Maximum throughput occurs when G = 1/2 and is about 18.4%.
  3. The probability of successfully transmitting a data frame is S = G * e^(−2G).
  • Slotted Aloha –

We divide time into slots, and a host can send packets only at the beginning of a slot. If it misses the beginning of a slot, it has to wait for the next one.

  1. Maximum throughput occurs in slotted Aloha when G = 1 and is about 36.8% (≈ 37%).
  2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(−G).
  3. The total vulnerable time in slotted Aloha is Tfr.


Golang Interface Simplified

What is Interface?

An interface is used for abstraction. It contains one or more method signatures. Below is an example of how we define an interface.

type Human interface {
	speak()
}

Why we use interface?

In simple terms, an interface is a contract for the methods that different structure types implement. We use interfaces to improve code readability and maintainability. Let’s say there is a Person datatype in my application, and the method mentioned above is implemented for the Person data type.

type Person struct {
	First string
	Last  string
}

Now let’s say the method mentioned in the interface actually implement the Person struct

func (p Person) speak() {
	fmt.Println("I am Person ", p.First)
}

Now the interesting part our software got a new requirement of adding another data type called SecretAgent.

type SecretAgent struct {
	Person Person
	Org    string
}

Now we define another method speak() for the SecretAgent data type.

func (s SecretAgent) speak() {
	fmt.Println("I am secret agent ", s.Person.First)
}

Now we can take the help of interface and the power of abstraction. We define a function that will take the interface and call the speak method.

func Earthling(h Human) {
	fmt.Println("Hey there I am from planet Earth")
	h.speak()
}

Understand what happened above? The Earthling function takes the Human interface and calls the speak method, and we don’t have to specify which data type’s speak will run; Go’s interface dispatch resolves that. So it removes a lot of hard coding, and our design is ready to accept more data types in the future.

Let’s see the main function.

func main() {
	sa1 := SecretAgent{
		Person: Person{First: "James", Last: "Bond"},
		Org:    "MI6",
	}
	sa2 := SecretAgent{
		Person: Person{First: "Ajit", Last: "Doval"},
		Org:    "RAW",
	}
	p1 := Person{First: "Dr.", Last: "Strange"}
	Earthling(sa1)
	Earthling(sa2)
	Earthling(p1)
}