How to Use Multiple Git Configs on One Computer

If you are like me and make open source contributions from an office laptop while your company uses a different Git service (and a different Git identity), then this blog is for you.

Using separate directories for repos

Let's create a separate directory for each type of work:

  • Work
  • Personal

Create a global git config file .gitconfig

You should have a global gitconfig from which you will include your work- and personal-specific gitconfigs.

Create two specific gitconfigs, one for each purpose

  • .gitconfig-work
  • .gitconfig-personal

Map the two gitconfigs to their directories in the global gitconfig

git config --global includeIf."gitdir:~/work/".path ~/.gitconfig-work

git config --global includeIf."gitdir:~/personal/".path ~/.gitconfig-personal

Note the trailing slash in the gitdir pattern: Git appends ** to patterns that end with /, so every repository under that directory is matched.
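After running these commands, the global ~/.gitconfig contains include sections along these lines:

[includeIf "gitdir:~/work/"]
	path = ~/.gitconfig-work
[includeIf "gitdir:~/personal/"]
	path = ~/.gitconfig-personal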

Specify information in the gitconfigs

# ~/.gitconfig-work
[user]
 name = work_user
 email = work_email

# ~/.gitconfig-personal
[user]
 name = personal_user
 email = personal_email

Go to the directory and see your config list

$ cd ~/work
$ mkdir work-test-repo
$ cd work-test-repo
$ git init
Initialized empty Git repository in /Users/aniruddha/work/work-test-repo/.git/
$ git config -l
credential.helper=osxkeychain
includeif.gitdir:~/personal/.path=~/.gitconfig-personal
includeif.gitdir:~/work/.path=~/.gitconfig-work
user.name=working_me
user.email=work@work.com
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
core.ignorecase=true
core.precomposeunicode=true
$ cd ~/personal
$ mkdir personal-test-repo
$ git init
Initialized empty Git repository in /Users/aniruddha/personal/.git/
$ git config -l
credential.helper=osxkeychain
includeif.gitdir:~/personal/.path=~/.gitconfig-personal
user.name=me_personal
user.email=personal@personal.com
includeif.gitdir:~/work/.path=~/.gitconfig-work
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
core.ignorecase=true
core.precomposeunicode=true

Now you can see that Git picks up a different user config depending on the directory you are in.

My LFX mentorship experience contributing to Cluster API GCP

In my previous blog post, I shared how I got selected for the LFX mentorship. In this post, I am going to write about my experience contributing to Cluster API GCP.

Mentorship Project Description

The mentorship was about adding GPU support to CAPG. As of now, Google Cloud Platform supports NVIDIA GPUs. We first planned a roadmap of the steps required to add GPU support. The first thing we decided to do was create a GPU-driver-enabled OS image that can take advantage of the GPUs in the VM. For that, we created this PR, where we mostly added Packer config files so that the image builder produces an OS image with the NVIDIA GPU drivers.

The next thing we did was change the CAPG API so that we could declare the fields required to create VMs with GPUs in GCP. After that, we added validations and webhooks for the new API changes so that incoming requests are validated properly. Finally, we added unit tests and end-to-end tests so that the feature is fully tested on the main branch. Here is the PR we created in the CAPG repo that contains all the changes mentioned above.

After all our hard work, we successfully created GPU-enabled VMs with Cluster API on GCP.

My experience overall

A few months back I never thought I would do LFX or contribute to such big projects. The things that kept me motivated and contributing were the awesome community and the projects. In the beginning, to get familiar with the project, my mentors gave me the task of spinning up a Kubernetes cluster on GCP using Cluster API and reading the documentation. Throughout the mentorship, all my mentors, Dims, Richard, and Carlos, helped me overcome all kinds of challenges, and they also gave me the motivation and enthusiasm to push my boundaries and learn new things every day. This mentorship not only helped me become a better developer in cloud native technologies but also a better thinker when it comes to solving real-world engineering problems. In one word, my overall experience with the LFX mentorship was fabulous and wonderful. And last but not least, all of the above would have been incomplete if I didn't have my co-mentee Subhasmita.

Future Scope

After this project, I started picking up other open source issues in CAPG and began contributing to CAPI as well. I will keep contributing to CNCF projects in the future and hopefully work on more big and significant features.

Becoming Kubernetes & Kubernetes SIG member

Another great thing that happened to me is that I recently became a member of the Kubernetes and Kubernetes-SIGs organizations. Thanks to Carlos, Nabarun, Richard, and Dims for giving me a +1.

Also, if you have any queries regarding Cluster API GCP or Cluster API, feel free to join the Kubernetes Slack using the link https://slack.k8s.io/ and then join the #cluster-api-gcp and #cluster-api channels. And feel free to ping me (@aniruddha) on Slack if you have any questions.

Check for Kubernetes deployment with client-go library

For the past couple of days, I have been tinkering with the client-go library. It provides the interfaces and methods by which you can manipulate Kubernetes cluster resources from your Go code. After exploring for a while, I started working on a side project that runs a check over deployments: if a deployment doesn't have a certain environment variable, it deletes the deployment; otherwise it keeps it as it is.

Setup

In this blog, I am not going to cover how to set up a Go project.

First, create a directory named app and create another directory inside it called service. Now create a file named init.go inside the service directory.

package service

import (
	"log"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// Initializes the kube config clientset
func Init() *kubernetes.Clientset {
	config, err := rest.InClusterConfig()
	if err != nil {
		// Fall back to the kubeconfig in the user's home directory.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		if envvar := os.Getenv("KUBECONFIG"); len(envvar) > 0 {
			kubeconfig = envvar
		}

		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatalf("kubeconfig can't be loaded: %v\n", err)
		}
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("error getting config client: %v\n", err)
	}

	return clientset
}

In the code above, we first call InClusterConfig, which returns a config object containing the common attributes that can be passed to a Kubernetes client on initialization. If that fails (for example, because the program is not running inside a cluster), we fall back to the kubeconfig in its default location under the home directory, or to the path given by the KUBECONFIG environment variable.

After we have the config, it's time to initialize a client. We do that with the NewForConfig function, which returns a clientset containing a typed client for each API group. For example, pods can be accessed through the CoreV1 group of the clientset.
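As a quick illustration (a sketch, not part of the project code; it reuses the Init helper from above), listing pods through the CoreV1 group looks like this:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/aniruddha2000/yosemite/app/service"
)

func main() {
	clientset := service.Init()

	// Pods live in the core/v1 API group, exposed as CoreV1() on the clientset.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("error listing pods: %v", err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}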

Check for deployments

Create another directory under the app dir named client.

package client

import (
	"fmt"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

const (
	ENVNAME = "TEST_ENV_NAME"
)

// Check for Deployment and start a go routine if new deployment added
func (c *Client) CheckDeploymentEnv(ns string) {
	informerFactory := informers.NewSharedInformerFactory(c.C, 30*time.Second)

	deploymentInformer := informerFactory.Apps().V1().Deployments()
	deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			log.Println("Deployment added. Let's start checking!")

			ch := make(chan error, 1)
			done := make(chan bool)

			go c.check(ns, ch, done)

		loop:
			for {
				select {
				case err := <-ch:
					log.Fatalf("error checking envvar: %v", err)
				case <-done:
					break loop
				}
			}
		},
	})

	informerFactory.Start(wait.NeverStop)
	informerFactory.WaitForCacheSync(wait.NeverStop)
}

In the CheckDeploymentEnv method, we first create a shared informer factory with NewSharedInformerFactory. It gives us back an interface that retrieves resources from a local, periodically resynced cache of the cluster instead of hitting the API server on every read. We can then register handlers for events like add, update, and delete in the cluster and take action accordingly.

Then we add another function in the same file as above.

func (c *Client) check(namespace string, ch chan error, done chan bool) {
	deployments, err := ListDeploymentWithNamespace(namespace, c.C)
	if err != nil {
		ch <- fmt.Errorf("list deployment: %s", err.Error())
		// Return early: deployments is nil when the list call fails.
		return
	}

	for _, deployment := range deployments.Items {
		var envSet bool
		for _, cntr := range deployment.Spec.Template.Spec.Containers {
			for _, env := range cntr.Env {
				if env.Name == ENVNAME {
					log.Printf("Deployment name: %s has envvar. All set to go!", deployment.Name)
					envSet = true
				}
			}
		}
		if !envSet {
			log.Printf("No envvar name %s - Deleting deployment with name %s\n", ENVNAME, deployment.Name)
			err = DeleteDeploymentWithNamespce(namespace, deployment.Name, c.C)
			if err != nil {
				ch <- err
			}
		}
	}
	done <- true
}

Here we list the deployments (the helper is covered next) and, for every deployment, check its containers for the environment variable, deleting the deployment if the variable is missing. We send true on the done channel if everything is successful; otherwise we pass the error to the error channel.

Deployment Handler

Create another file named deployment.go in the client directory.

package client

import (
	"fmt"
	"log"

	v1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// List deployment resource with the given namespace
func ListDeploymentWithNamespace(ns string, clientset *kubernetes.Clientset) (*v1.DeploymentList, error) {
	deployment, err := clientset.AppsV1().Deployments(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	return deployment, nil
}

// Delete deployment resource with the given namespace
func DeleteDeploymentWithNamespce(ns, name string, clientset *kubernetes.Clientset) error {
	err := clientset.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{})
	if err != nil {
		if errors.IsNotFound(err) {
			log.Printf("Deployment don't exists with name %s\n", name)
			return nil
		} else {
			return fmt.Errorf("delete Deployment: %v", err)
		}
	}
	log.Printf("Deployment deleted with name: %v\n", name)

	return nil
}

Here we have two functions, one for listing deployments and another for deleting them. In this case we go through the clientset directly, which means we query the Kubernetes API server, unlike the informer earlier, which reads from a local in-memory cache.

Now create another file named client.go in the client directory and use the code below.

package client

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

var (
	ctx = context.TODO()
)

type Client struct {
	C *kubernetes.Clientset
}

// Return a new Client
func NewClient() *Client {
	return &Client{}
}

main.go

package main

import (
	"flag"
	"log"

	"github.com/aniruddha2000/yosemite/app/client"
	"github.com/aniruddha2000/yosemite/app/service"
)

func main() {
	var nameSpace string

	flag.StringVar(&nameSpace, "ns", "test-ns",
		"namespace name on which the checking is going to take place")
	flag.Parse()

	log.Printf("Checking Pods for namespace %s\n", nameSpace)
	c := client.NewClient()
	c.C = service.Init()

	c.CheckDeploymentEnv(nameSpace)
}

Here we just read the namespace from a flag and call the functions described throughout this article.

Run the app in the Kubernetes cluster

In order to run the app in the cluster, we have to set up a ClusterRole & ClusterRoleBinding for the default service account of the pod.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-namespace-clusterrole
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-namespace-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-namespace-clusterrole
  apiGroup: rbac.authorization.k8s.io

Then build the project and make a Docker image out of it using the docker build, docker tag & docker push commands. Then create the Deployment YAML template shown below and apply it; a rough sketch of the corresponding commands follows the template.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: client
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: client
    spec:
      containers:
      - image: <YOUR DOCKER IMAGE>
        name: client-app
        resources: {}
status: {}
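A rough sketch of the build-and-deploy commands; the image name and the YAML file names below are just placeholders:

$ docker build -t <YOUR DOCKER IMAGE> .
$ docker push <YOUR DOCKER IMAGE>
$ kubectl apply -f rbac.yaml          # the ClusterRole/ClusterRoleBinding above
$ kubectl apply -f deployment.yaml    # the Deployment template above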

Here is my GitHub URL for the project – https://github.com/aniruddha2000/yosemite/

You can find how to run the project in the README of the mentioned GitHub URL above.

What is RBAC in Kubernetes?

RBAC stands for Role Based Access Control. It allows us to define user privileges in the Kubernetes cluster and restricts users from performing unwanted operations. We describe access rights such as who is allowed to create, update, and delete resources.

Why do we need it?

  • To make the cluster more secure.
  • To let multiple development teams share a cluster while avoiding conflicts between them.

Objects

The RBAC API has 4 main types of objects –

  • Role – It's used to grant access to resources within a single namespace.
  • RoleBinding – It maps a Role to users or service accounts.
  • ClusterRole – It's used to grant access to cluster-wide resources.
  • ClusterRoleBinding – It maps a ClusterRole to users or service accounts.

Example

Now we are going to create the objects mentioned above and see how these all work.

ClusterRole & ClusterRoleBinding

First, we are going to create a service account

kubectl create serviceaccount bob

Now write the below two YAML files for the ClusterRole & ClusterRoleBinding-

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bob
rules:
  - apiGroups:
      - ''
      - apps
    resources:
      - pods
      - pods/status
      - namespaces
      - deployments
    verbs:
      - get
      - list
      - watch
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bob-binding
subjects:
  - kind: ServiceAccount
    name: bob
    namespace: default
roleRef:
  kind: ClusterRole
  name: bob
  apiGroup: rbac.authorization.k8s.io

Here we first create a service account, and then we define a ClusterRole that is able to get, list, watch, create, and update pods, namespaces, and deployments. Note that deployments live in the apps API group, so both the core ('') and apps groups are listed.

Later we create a cluster role binding that will map the cluster role to the service account.
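You can check the result with kubectl auth can-i, impersonating the service account; it answers yes or no:

$ kubectl auth can-i list deployments --as=system:serviceaccount:default:bob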

Role & RoleBinding

Let’s create a namespace first.

apiVersion: v1
kind: Namespace
metadata:
  name: application
  labels:
    name: alice

Then define the below Role and RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: application
  name: alice
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-binding
  namespace: application
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: alice
  apiGroup: rbac.authorization.k8s.io

Here we create a namespace and then define a Role that allows get, watch, and list operations on pods in the application namespace for the user alice.

Later we bind that Role to the user with a RoleBinding in the same namespace.
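Similarly, you can impersonate the user to check the namespaced permissions:

$ kubectl auth can-i list pods --namespace application --as alice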

How I got selected for the LFX Mentorship Program

LFX Mentorship (previously known as Community Bridge) is a platform developed by the Linux Foundation, which promotes and accelerates the adoption, innovation, and sustainability of open-source software.

LFX Mentorship is actively used by the Cloud Native Computing Foundation(CNCF) as a mentorship platform across the CNCF projects

Program Schedule

2022 — Fall Term — September 1st – Nov 30th

2022 — Summer Term — June 1st – August 31st (My Term)

2022 — Spring Term — March 1st – May 31st

How to Apply

You have to write a cover letter covering why you are interested in the project, any previous work you have done, what you expect from the project, and so on.

Tip: Start contributing early and talk to the maintainers about your interests in the program and start to discuss the issue/feature you are going to work on.

My Project

My project is Cluster API Provider for GCP (CAPG). It is a CNCF project that helps manage Kubernetes clusters on the Google Cloud Platform. Currently, other providers such as Cluster API Provider for AWS (CAPA) and Cluster API Provider for Azure (CAPZ) support taking advantage of GPUs in their clusters, but CAPG doesn't, so my co-mentee Subhasmita and I will work on adding GPU support to CAPG.

My Mentors

My Co-Mentee

Well, my journey would have been a little monotonous if I didn't have a co-mentee. She makes the work more interesting, because whenever we are both stuck on something we hop on a call and discuss it. We also divide the weekly work between us and teach each other what we have learned.

How It All Started

I didn't have any plan to do LFX at the beginning. I started my journey with CAPG for GSoC '22. I applied for the same project and the same feature in GSoC, but that didn't happen because the project wasn't selected for GSoC, and eventually all the applications to the project were rejected as well. So I talked to the maintainer, Richard, and asked whether I could still work on the GPU feature, as I was very interested in it. He told me that there was still hope in the LFX Mentorship; he opened an application there and I applied. And then I got selected for the LFX Mentorship 🎉

How It Is Going

I was a little worried about how I would work on a big project like this, with thousands of lines of code, when I had only ever written a project of at most 500 lines. But I am amazed at how the maintainers made my journey easy: they onboarded me with an introduction to the project over a couple of weeks and gave me small tasks to try things out, telling me to ask questions whenever I got stuck.

Next Steps:

I will start the GPU work the next week with Subhasmita and keep contributing to the project in the future.

Create a managed cluster using Cluster API Provider for Google Cloud Platform (CAPG)

In the previous blog, I explained how to create and manage Kubernetes with cluster API locally with the help of docker infrastructure.

In this blog, I will explain how to create and manage the k8s with Cluster API in the google cloud.

Note – Throughout the blog I will use Kubernetes version 1.22.9, and it is recommended to use the Kubernetes version supported by the OS image created by the image builder. You can check kubernetes.json and use that version.

Step 1 –

  • Create the kind cluster –
kind create cluster --image kindest/node:v1.22.9 --wait 5m

Step 2 –

Follow image builder for GCP steps and build an image.

Step 3 –

  • Export the following env variables – (reference)
export GCP_PROJECT_ID=<YOUR PROJECT ID>
export GOOGLE_APPLICATION_CREDENTIALS=<PATH TO GCP CREDENTIALS>
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )

export CLUSTER_TOPOLOGY=true
export GCP_REGION="us-east4"
export GCP_PROJECT="<YOUR GCP PROJECT NAME>"
export KUBERNETES_VERSION=1.22.9
export IMAGE_ID=projects/$GCP_PROJECT/global/images/<IMAGE ID>
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=default
export CLUSTER_NAME=test

Step 4 –

Set up the network. In this example we are using the default network, so we will create a Cloud Router and NAT so that our workload cluster has internet access.

gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"

gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips

Step 5 –

  • Initialize the infrastructure
clusterctl init --infrastructure gcp
  • Generate the workload cluster config and apply it
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.9 > workload-test.yaml

kubectl apply -f workload-test.yaml
  • View the cluster and its resources
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON                 SINCE  MESSAGE
/test                                                              False  Info      WaitingForKubeadmInit  5s
├─ClusterInfrastructure - GCPCluster/test
└─ControlPlane - KubeadmControlPlane/test-control-plane            False  Info      WaitingForKubeadmInit  5s
  └─Machine/test-control-plane-x57zs                               True                                    31s
    └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
  • Check the status of the control plane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
test-control-plane   test                                           1                  1         1             2m9s   v1.22.9

Note – The control plane won't be ready until the next step, when I install the CNI (Container Network Interface).

Step 6 –

  • Get the kubeconfig for the workload cluster
$ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
  • Apply the cni
kubectl --kubeconfig=./workload-test.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
  • Wait a bit and you should see this when getting the kubeadmcontrolplane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
test-control-plane   test      true          true                   1          1       1         0             6m33s   v1.22.9


$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.9

Step 7 –

  • Edit the MachineDeployment in workload-test.yaml: it has 0 replicas by default, so set the number of replicas you want for your worker nodes; in this case we used 2 (a rough sketch of the relevant fragment follows the apply command). Then apply workload-test.yaml.
$ kubectl apply -f workload-test.yaml
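The fragment of the generated manifest to edit looks roughly like this (a sketch; the exact API version and names depend on your clusterctl release and cluster name):

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
spec:
  clusterName: test
  replicas: 2   # was 0 in the generated file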
  • After a few minutes, you should see something like this –
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
/test                                                              True                     15m
├─ClusterInfrastructure - GCPCluster/test
├─ControlPlane - KubeadmControlPlane/test-control-plane            True                     15m
│ └─Machine/test-control-plane-x57zs                               True                     19m
│   └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
└─Workers
  └─MachineDeployment/test-md-0                                    True                     10m
    └─2 Machines...                                                True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6

$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.9
test-md-0-b7766            Ready    <none>                 17m   v1.22.9
test-md-0-wsgpj            Ready    <none>                 17m   v1.22.9

Yay! Now we have a Kubernetes cluster in GCP with 1 control plane node and 2 worker nodes.

Step 8 –

Delete what you have created –

$ kubectl delete cluster $CLUSTER_NAME

$ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
    --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"

$ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
    --region="${GCP_REGION}"

$ kind delete cluster

Advantages of Golang sync.RWMutex over sync.Mutex

First of all, let's understand what a mutex is and why we use it. A mutex is a locking mechanism that protects shared data in a multi-threaded program where multiple threads access the data concurrently. If we don't use a mutex, race conditions might happen, which leads to inconsistent data throughout the program.

There are two types of Mutex in Golang –

  • sync.Mutex
    It protects the shared data for both reads & writes. This means that while one goroutine is reading or writing, no other goroutine can read or write the data. And if multiple goroutines want to read, the reads happen one by one, each in turn.
package main

import (
	"fmt"
	"sync"
	"time"
)

type SyncData struct {
	lock sync.Mutex
	wg   sync.WaitGroup
}

func main() {
	// m := map[int]int{}

	var sc SyncData

	sc.wg.Add(7)

	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)

	sc.wg.Wait()
}

func writeLoop(sc *SyncData) {
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Write lock")
	fmt.Println("Write unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

func readLoop(sc *SyncData) {
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Read lock")
	fmt.Println("Read unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

Playground

Here you can see that a write blocks both reads and writes, and a read blocks writes as well as other reads. [E.g. – you can see the delay between the read print statements]

  • sync.RWMutex
    If the data is shared and the system is read-heavy, it is fine to allow multiple goroutines to read the data at the same time, since concurrent reads don't conflict. So we use RWMutex instead, where the idea is that any number of readers can hold the read lock at the same time, but only one writer can hold the write lock at a time.
package main

import (
	"fmt"
	"sync"
	"time"
)

type SyncData struct {
	lock sync.RWMutex
	wg   sync.WaitGroup
}

func main() {
	// m := map[int]int{}

	var sc SyncData

	sc.wg.Add(7)

	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)

	sc.wg.Wait()
}

func writeLoop(sc *SyncData) {
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Write lock")
	fmt.Println("Write unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

func readLoop(sc *SyncData) {
	sc.lock.RLock()
	time.Sleep(1 * time.Second)
	fmt.Println("Read lock")
	fmt.Println("Read unlock")
	sc.lock.RUnlock()
	sc.wg.Done()
}

Playground

Here you can see that a write blocks reads & writes, but a read does not block other reads; multiple goroutines are able to read at the same time. [E.g. – you can see the delay around the write print statements, but you won't see any delay between the read print statements]

Learning: Kubernetes – Deployments & StatefulSet

Deployments

Deployments are the way we manage pods in k8s. We specify the desired state of the pods, such as which image version they should run and how many replicas of the pod there will be.

  • Properties
    • The spec.selector specifies which pods the Deployment needs to manage.
    • When we update a Deployment, it first creates a new pod, then deletes an old one, and (with the default rolling-update settings) makes sure that at most 125% of the desired number of pods exist and at least 75% of them are available at any time.
  • Rollback to a Previous Version – To roll back to the previous revision we use kubectl rollout undo deployment/nginx-deployment, and to roll back to a specific earlier revision we use kubectl rollout undo deployment/nginx-deployment --to-revision=2. (A minimal Deployment manifest is sketched after this list.)
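For reference, a minimal Deployment manifest matching the commands above could look like this (a sketch; the image and replica count are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # spec.selector must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21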

StatefulSet

Just like we manage stateless applications with Deployments, we manage stateful applications with StatefulSets.

  • Properties
    • The pods of a StatefulSet are not created or deleted at the same time; they are created and removed one by one, in order.
    • The pods can't be treated as interchangeable; each pod has a stable identity and is addressed individually.
    • The replicas here are not identical; each maintains its own state.
    • Each pod gets a unique identifier in increasing order, and these identifiers are kept when pods are rescheduled.
    • Each pod has its own physical storage.
    • In a typical replicated-database setup on a StatefulSet, only the master pod is allowed to change data.
    • All the slave pods sync with the master pod in order to achieve data consistency.
    • When a new pod joins the replica set, it first clones all the data from one of the slave pods and after that starts to sync.
  • StatefulSets are valuable for applications that require one or more of the following.
    • Stable, unique network identifiers.
    • Stable, persistent storage.
    • Ordered, graceful deployment and scaling.
    • Ordered, automated rolling updates.
  • Data Persistence – If a pod dies, all of its local data is lost. To counter this, we attach a persistent volume to every pod (see the sketch after this list).
    • The storage holds the data that is kept in sync with the pod's state.
    • When a pod gets replaced, the persistent volume is reattached to the new pod and the pod's state is resumed.
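A minimal StatefulSet sketch with a per-pod volume via volumeClaimTemplates (the names and sizes are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx            # headless service that gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # one PersistentVolumeClaim is created per pod
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi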

What is System Call

System Call – It is the interface through which a userspace program requests resources and services from the kernel.

Now, why do we need system calls?

  • Reading and writing from files demand system calls.
  • If a file system wants to create or delete files, system calls are required.
  • System calls are used for the creation and management of new processes.
  • Network connections need system calls for sending and receiving packets.
  • Access to hardware devices like scanners and printers needs system calls.

Here are the five types of System Calls in OS:

  • Process Control – This class of system calls deals with process creation and termination, wait & signal events, and allocating and freeing memory.
  • File Management – This deals with file manipulation: creating, deleting, reading, and writing files and changing file attributes.
  • Device Management – This deals with device buffers, reading from and writing to devices, as well as adding and removing logical devices.
  • Information Maintenance – It handles the transfer of information between the user program and the OS kernel (for example, getting the time or system data).
  • Communications – This is used for inter-process communication: creating and deleting communication connections, sending and receiving messages, and so on.
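As a small illustration of the first two categories, here is a minimal Go sketch: creating and writing a file ultimately goes through the open(2)/openat(2), write(2), and close(2) system calls (the file path is just an example):

package main

import (
	"fmt"
	"os"
)

func main() {
	// os.Create asks the kernel to create/open the file via the open(2)/openat(2) system call.
	f, err := os.Create("/tmp/syscall-demo.txt")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	// f.Close issues the close(2) system call when we are done.
	defer f.Close()

	// f.Write hands the bytes to the kernel through the write(2) system call.
	if _, err := f.Write([]byte("hello from userspace\n")); err != nil {
		fmt.Println("write failed:", err)
	}
}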

Learning: Kubernetes – Persistent Volume & Persistent Volume Claim

Volume – A volume in Kubernetes can be thought of as a directory that can be accessed by the containers in a pod. A volume helps persist data even if the pod restarts.

  • PV
    • A PersistentVolume (PV) is a piece of storage in the cluster.
    • It is a cluster-level resource, like a node, and doesn't belong to any namespace.
    • It is either manually provisioned by an administrator or dynamically provisioned by Kubernetes using a StorageClass.
  • PVC
    • A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV.
    • PersistentVolumes and PersistentVolumeClaims are independent of Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods (see the example after this list).
  • Access Modes
    • ReadWriteOnce – It is used when we allow only one node to read & write on the volume. Multiple pods running on the same node can access the volume.
    • ReadOnlyMany – It is used when we allow read access to many pods.
    • ReadWriteMany – It is used when we allow read & write access to many nodes.
    • ReadWriteOncePod – It is used when we allow only a single pod across the whole cluster to read & write the volume.
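A minimal example of a PV and a matching PVC (a sketch with placeholder names, using a hostPath volume for simplicity):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/demo        # node-local path, fine for demos only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi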