How to Use Multiple Git Configs on One Computer

If you, like me, contribute to open source from an office laptop while your company uses a different git service, this blog is for you.

Using separate directories for repos

Let's create one directory for each type of work:

  • Work
  • Personal

Create a global git config file .gitconfig

You should have a global gitconfig from which your purpose-specific gitconfigs are included.

That file is the usual ~/.gitconfig.

Create two specific gitconfigs, one for each purpose

  • .gitconfig-work
  • .gitconfig-personal

Map the two gitconfigs to their directories in the global gitconfig

git config --global --add includeIf.gitdir:/path/to/work/directory/.path ~/.gitconfig-work

git config --global --add includeIf.gitdir:/path/to/personal/directory/.path ~/.gitconfig-personal
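
After running these two commands, your ~/.gitconfig should contain something like the following (the directory paths are placeholders):

[includeIf "gitdir:/path/to/work/directory/"]
	path = ~/.gitconfig-work
[includeIf "gitdir:/path/to/personal/directory/"]
	path = ~/.gitconfig-personal

Note the trailing slash: a gitdir pattern ending in / matches that directory and every repo underneath it.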

Specify information in the gitconfigs

# ~/.gitconfig-work
[user]
 name = work_user
 email = work_email

# ~/.gitconfig-personal
[user]
 name = personal_user
 email = personal_email

Go to the directory and see your config list

$ cd ~/work
$ mkdir work-test-repo
$ cd work-test-repo
$ git init
		Initialized empty Git repository in /Users/aniruddha/work/work-test-repo/.git/
$ git config -l
		credential.helper=osxkeychain
		includeif.gitdir:~/personal/.path=~/.gitconfig-personal
		includeif.gitdir:~/work/.path=~/.gitconfig-work
		user.name=working_me
		user.email=work@work.com
		core.repositoryformatversion=0
		core.filemode=true
		core.bare=false
		core.logallrefupdates=true
		core.ignorecase=true
		core.precomposeunicode=true
$ cd ~/personal
$ mkdir personal-test-repo
$ cd personal-test-repo
$ git init
	Initialized empty Git repository in /Users/aniruddha/personal/personal-test-repo/.git/
$ git config -l
	credential.helper=osxkeychain
	includeif.gitdir:~/personal/.path=~/.gitconfig-personal
	user.name=me_personal
	user.email=personal@personal.com
	includeif.gitdir:~/work/.path=~/.gitconfig-work
	core.repositoryformatversion=0
	core.filemode=true
	core.bare=false
	core.logallrefupdates=true
	core.ignorecase=true
	core.precomposeunicode=true

As you can see, Git now picks a different user config depending on which directory the repo lives in.

What is RBAC in Kubernetes?

RBAC stands for Role-Based Access Control. It lets us define user privileges in a Kubernetes cluster and restrict users from performing unwanted operations. We describe access rights such as who is allowed to create, update, and delete resources.

Why do we need it?

  • To make the cluster more secure.
  • To scale the cluster across multiple development teams and avoid conflicts between them.

Objects

In the RBAC API there are four main types of objects –

  • Role – Grants permissions on resources within a single namespace.
  • RoleBinding – Maps a Role to users, groups, or service accounts.
  • ClusterRole – Grants permissions on cluster-wide resources.
  • ClusterRoleBinding – Maps a ClusterRole to users, groups, or service accounts.

Example

Now we are going to create the objects mentioned above and see how they all work together.

ClusterRole & ClusterRoleBinding

First, we are going to create a service account

kubectl create serviceaccount bob

Now write the below YAML for the ClusterRole & ClusterRoleBinding (two documents, separated by ---) –

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bob
rules:
  - apiGroups:
      - ''
    resources:
      - pods
      - pods/status
      - namespaces
    verbs:
      - get
      - list
      - watch
      - create
      - update
  - apiGroups:
      - apps
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bob-binding
subjects:
  - kind: ServiceAccount
    name: bob
    namespace: default
roleRef:
  kind: ClusterRole
  name: bob
  apiGroup: rbac.authorization.k8s.io

Here we first create a service account, then define a ClusterRole that can get, list, watch, create, and update pods, namespaces, and deployments. Note that deployments live in the apps API group, so they get their own rule.

Later we create a ClusterRoleBinding that maps the ClusterRole to the service account.
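
Assuming the manifests above have been applied (e.g. with kubectl apply -f), you can sanity-check the binding by impersonating the service account with kubectl auth can-i:

kubectl auth can-i list pods --as=system:serviceaccount:default:bob     # yes
kubectl auth can-i delete pods --as=system:serviceaccount:default:bob   # no – delete isn’t in the verbs list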

Role & RoleBinding

Let’s create a namespace first.

apiVersion: v1
kind: Namespace
metadata:
  name: application
  labels:
    name: alice

Then define the below Role and RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: application
  name: alice
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-binding
  namespace: application
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: alice
  apiGroup: rbac.authorization.k8s.io

Here we create a namespace and then define a Role that allows get, watch, and list operations on pods in the application namespace.

Later we bind that Role to the user alice with a RoleBinding.
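
The same check works for the user binding (alice is just a name here; authenticating the user itself happens outside the cluster):

kubectl auth can-i list pods --namespace application --as alice     # yes
kubectl auth can-i create pods --namespace application --as alice   # no – the Role only grants read verbs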

Learning: Kubernetes – Deployments & StatefulSet

Deployments

Deployments are the way we manage pods in k8s. We specify all the important information about the pods, like which image version to run and how many replicas to keep.

  • Properties
    • The spec.selector field specifies which pods the Deployment manages.
    • When we update a Deployment, it first creates a new pod, then deletes an old one, making sure that at most 125% of the desired number of pods exists at any time (the default 25% max surge). A minimal manifest is sketched after this list.
  • Rollout to a Previous Version – To roll back to the previous version, use kubectl rollout undo deployment/nginx-deployment. To roll back to a specific earlier revision, use kubectl rollout undo deployment/nginx-deployment --to-revision=2.
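
Here is a minimal sketch of a Deployment manifest (the nginx name and image are illustrative); note that spec.selector must match the labels in the pod template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                   # desired number of pods
  selector:
    matchLabels:
      app: nginx                # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21     # the image version the pods will run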

StatefulSet

Just as we manage stateless applications with Deployments, we manage stateful applications with StatefulSets.

  • Properties
    • Pods in a StatefulSet are not created or deleted all at the same time; they are brought up and torn down one by one, in order.
    • They can’t be addressed interchangeably; each pod keeps a stable, unique identity.
    • The replicas are not identical; each pod has its own state.
    • Each pod gets a unique identifier in increasing order (e.g. web-0, web-1, …), and these are preserved across rescheduling.
    • Each pod has its own physical store.
    • There is a master pod that is the only one allowed to change data.
    • All the slave pods sync with the master pod in order to stay consistent.
    • When a new pod joins the replica set, it first clones all the data from one of the slave pods and then starts to sync.
  • StatefulSets are valuable for applications that require one or more of the following.
    • Stable, unique network identifiers.
    • Stable, persistent storage.
    • Ordered, graceful deployment and scaling.
    • Ordered, automated rolling updates.
  • Data Persistence – If a pod dies, its local data would be lost, so to counter this we attach a persistent volume to every pod. See the StatefulSet sketch below.
    • The storage holds the synchronized data along with the pod’s state.
    • When a pod gets replaced, its persistent volume is reattached and the pod resumes from that state.
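
A minimal StatefulSet sketch with per-pod storage via volumeClaimTemplates (names, image, and sizes are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web              # headless service that gives each pod a stable DNS name
  replicas: 3                   # creates web-0, web-1, web-2 in order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # each pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi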

What is System Call

System Call – It is the interface between a userspace program and the kernel, through which the program requests resources and services.

Now, why do we need system calls?

  • Reading from and writing to files demands system calls.
  • Creating or deleting files requires system calls.
  • System calls are used for the creation and management of new processes.
  • Network connections need system calls for sending and receiving packets.
  • Access to hardware devices like scanners and printers needs system calls. (You can watch this happen with strace, as shown below.)
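
On Linux you can watch these calls happen with strace (assuming strace is installed; the exact syscall names vary by platform and libc):

$ strace -e trace=openat,read,write,close cat /etc/hostname

This prints every openat, read, write, and close call that cat makes while copying the file to stdout.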

Here are the five types of System Calls in OS:

  • Process Control – Deals with process creation and termination, wait & signal events, and allocating and freeing memory.
  • File Management – Deals with file manipulation: creating, deleting, reading, writing, and changing file attributes.
  • Device Management – Deals with device buffers: reading and writing, as well as adding and removing logical devices.
  • Information Maintenance – Handles data transfer between the user program and the OS kernel, such as getting the time or process attributes.
  • Communications – Used for inter-process communication: creating and deleting connections, sending and receiving messages, etc.

Learning: Kubernetes – Persistent Volume & Persistent Volume Claim

Volume – A volume in Kubernetes can be thought of as a directory that the containers in a pod can access. A volume helps persist data across container restarts inside the pod.

  • PV
    • A Persistent Volume (PV) is a piece of storage in the cluster.
    • It is a cluster-level resource and doesn’t belong to any namespace (unlike pods, which are namespaced).
    • It is either manually provisioned by an administrator or dynamically provisioned by Kubernetes using a StorageClass.
  • PVC
    • A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV.
    • PersistentVolumes and PersistentVolumeClaims are independent of pod lifecycles and preserve data through restarting, rescheduling, and even deleting pods. (A minimal PV/PVC sketch follows this list.)
  • Access Modes
    • ReadWriteOnce – The volume can be mounted read-write by a single node. Multiple pods running on that same node can still access it.
    • ReadOnlyMany – The volume can be mounted read-only by many nodes.
    • ReadWriteMany – The volume can be mounted read-write by many nodes.
    • ReadWriteOncePod – The volume can be mounted read-write by only a single pod across the whole cluster.
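
A minimal sketch of a PV and a matching PVC (hostPath backing is only suitable for single-node testing; names and sizes are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/demo-pv          # backs the volume with a directory on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # satisfied by any available PV offering at least 1Gi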

Learning: Kubernetes – Container Runtime Interface & Garbage Collection

Container Runtime Interface

The Container Runtime Interface (CRI) is the primary protocol for communication between the kubelet and the container runtime.

Container Runtime – It is the software that runs & manages containers on a host operating system. There are a number of container runtimes on the market: Docker, containerd, runC, etc.

So, to create an abstraction over all the container runtimes supported by Kubernetes, the community introduced the Container Runtime Interface (CRI), a protocol for talking to the container runtime.

The kubelet talks to the container runtime over the CRI using gRPC, where the kubelet is the client and the container runtime is the server.

Garbage Collection

Garbage collection is the term for the mechanisms Kubernetes uses to clean up cluster resources.

  • Owners & Dependents – In k8s some objects own others (for example, a ReplicaSet owns its pods). Kubernetes uses these owner references to clean up the dependent objects when the owner is deleted.
  • Cascading Deletion – k8s deletes objects that no longer have owner references, like the pods left behind after deleting a ReplicaSet. There are two modes, and you can pick one on the command line as shown below.
    • Foreground Cascading Deletion –
      • The object being deleted first goes into an in-progress (terminating) state.
      • The Kubernetes API server sets the object’s metadata.deletionTimestamp field to the time the object was marked for deletion.
      • The API server also adds foregroundDeletion to the metadata.finalizers field.
      • Once the object is in this state, the controller deletes all the dependents and then removes the owner object.
    • Background Cascading Deletion –
      • Here k8s deletes the owner object immediately.
      • Then the controller cleans up the dependent objects in the background.
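
You can choose the strategy from the command line with kubectl's --cascade flag, which accepts foreground, background (the default), or orphan (the ReplicaSet name here is hypothetical):

kubectl delete replicaset my-rs --cascade=foreground   # delete the dependent pods first, then the ReplicaSet
kubectl delete replicaset my-rs --cascade=orphan       # delete the ReplicaSet but leave its pods behind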

Learning: Kubernetes – Why We Need Pod Abstraction Above Containers

I have discussed pods in a previous blog. In short, a container is a standard unit of software that packages up code and all its dependencies in an isolated environment with its own file system.

As nodes are VMs or physical machines, we could have run containers directly on them without the pod abstraction. But that would create a major problem for managing the cluster: networking.

As we all know, a containerized application listens on a specific port, and no two processes can occupy the same port. So if you need two containers of the same application running on one node, one of them has to run on a different port, and wiring up the connections between them becomes very messy.

And that is why Kubernetes solves the problem with the pod abstraction. Each pod has a unique network namespace, meaning each pod gets its own virtual ethernet interface and IP; it's as if the pod were a small VM inside the node. Now every pod can run its application container on the same port with no conflict, because each pod is a self-contained, isolated environment.

Now suppose a pod has more than one container (a main container and a helper container); the containers inside the pod communicate with each other over localhost, as the sketch below shows.
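
A minimal sketch of such a pod (the images and probe loop are illustrative). Because both containers share the pod's network namespace, the helper reaches the main container at localhost:80:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
    - name: main
      image: nginx:1.21                 # serves on port 80
    - name: helper
      image: curlimages/curl:8.5.0      # hypothetical helper image
      command: ["sh", "-c", "while true; do curl -s localhost:80 > /dev/null; sleep 10; done"]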

Learning: Kubernetes – Controller & Cloud Controller

Controller

A k8s controller is like a thermostat in a room: the thermostat checks the current temperature and maintains the target by switching the heating on and off. In the same way, a k8s controller checks whether the current state matches the desired state; if it doesn't, the controller makes the necessary changes to bring the cluster to the desired state.

Types of controllers –

  • ReplicaSet – It is responsible for maintaining the desired number of pods.
  • Deployment – Deployment is the most common way to get your app on Kubernetes. It maintains a ReplicaSet with the desired configuration.
  • StatefulSet – A StatefulSet is used to manage stateful applications with persistent storage.
  • Job – A Job creates one or more short-lived Pods and expects them to successfully terminate.
  • CronJob – A CronJob creates Jobs on a schedule (a minimal sketch follows this list).
  • DaemonSet – A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.
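
As a concrete example, here is a minimal CronJob sketch that spawns a Job every minute (name, image, and schedule are illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"       # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox:1.36
              command: ["echo", "hello from the cron job"]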

Cloud Controller

The cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.

  • Different functions
    • Node Controller –
      • It updates node objects when new servers are created in the cloud.
      • It annotates and labels node objects with cloud-specific information.
      • It obtains the node’s hostname & network address.
      • It checks node health. If the node has been deleted from the cloud, it removes the node object from the k8s cluster.
    • Route Controller – It configures the routes properly so the nodes on the k8s cluster can communicate with each other.
    • Service Controller – It interacts with the cloud provider’s API to set up a load balancer and other infrastructure components.

Learning: Kubernetes – ConfigMaps & Secrets

Every application has configuration data like API keys, a DB URL, a DB user, a DB password, etc. Yes, you can hardcode these values in your application, but after some time that becomes unmanageable. You need a dynamic solution where you define all this data once and every component of your cluster can access it.

Suppose our application's DB URL changes; we would need to change every place the URL is used, which means editing the code and rebuilding the application. To avoid this situation we use ConfigMaps and Secrets to store the application's configuration data.

ConfigMaps – A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

Ex – a ConfigMap can hold the database URL, user, and other non-credential data.

Secrets – A Secret is similar to a ConfigMap, except that it is used for sensitive information such as credentials. One of the main differences is that you have to explicitly tell kubectl to show you the contents of a Secret.

  • Secret values are base64-encoded, not encrypted, so the encoding by itself is not a security measure.
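
A minimal sketch of both objects, plus a pod consuming them as environment variables (all names and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_URL: postgres://db.example.com:5432/app
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:                     # stringData accepts plain text; k8s stores it base64-encoded
  DB_PASSWORD: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo $DB_URL; sleep 3600"]
      env:
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DB_URL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD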

Learning: Kubernetes – Labels & Selectors, Finalizers

Labels & Selectors

Labels – Labels are key-value pairs attached to objects such as pods, ReplicaSets & Services. They are used to identify and group objects. Labels can be set at creation time or added and modified at runtime.

Some properties –

  • must be 63 characters or less (can be empty),
  • unless empty, must begin and end with an alphanumeric character ([a-z0-9A-Z]),
  • may contain dashes (-), underscores (_), dots (.), and alphanumerics in between.

Selectors – The Kubernetes API currently supports two types of selectors –

  • Equality-based selectors – Use =, ==, and != as operators:
environment = production
tier != frontend
  • Set-based selectors – Allow filtering objects against a set of values. Three kinds of operators are allowed: in, notin, and exists (the last is written as a bare key, optionally negated with !):
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
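
In practice you pass these selectors to kubectl with the -l flag (assuming pods with these labels exist):

kubectl get pods -l 'environment=production,tier!=frontend'
kubectl get pods -l 'environment in (production, qa)'
kubectl get pods -l partition       # pods that have a partition label, whatever the value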

Finalizers

Deleting an object isn't as simple as it looks, because it can involve conditional checks on resources still in use. Finalizers are keys that tell Kubernetes to wait until certain conditions are met before it fully deletes an object.

When you run kubectl delete namespace/example, k8s checks the finalizers defined on that object and proceeds as follows:

  1. Issue a deletion command. – Kubernetes marks the object as pending deletion. This leaves the resource in the read-only “Terminating” state.
  2. Run each of the actions associated with the object’s finalizers. – Each time a finalizer action completes, that finalizer is detached from the object, so it no longer appears in the metadata.finalizers field.
  3. Kubernetes keeps monitoring the finalizers attached to the object. – The object is deleted once the metadata.finalizers field is empty, i.e. once all finalizers have been removed by the completion of their actions.
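
You can inspect the finalizers on an object directly (the example namespace is hypothetical):

kubectl get namespace example -o jsonpath='{.metadata.finalizers}'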