How I got selected for the LFX Mentorship Program

LFX Mentorship (previously known as Community Bridge) is a platform developed by the Linux Foundation, which promotes and accelerates the adoption, innovation, and sustainability of open-source software.

LFX Mentorship is actively used by the Cloud Native Computing Foundation (CNCF) as a mentorship platform across CNCF projects.

Program Schedule

2022 — Fall Term — September 1st – Nov 30th

2022 — Summer Term — June 1st – August 31st (My Term)

2022 — Spring Term — March 1st – May 31st

How to Apply

You have to write a cover letter covering why you are interested in the project, any previous work you have done (or not), what you expect from the project, and so on.

Tip: Start contributing early, talk to the maintainers about your interest in the program, and start discussing the issue/feature you plan to work on.

My Project

My project is Cluster API Provider for GCP (CAPG). It is a CNCF project that helps manage Kubernetes clusters on the Google Cloud Platform. Other providers, Cluster API Provider for AWS (CAPA) and Cluster API Provider for Azure (CAPZ), already support taking advantage of GPUs in their clusters, but CAPG does not, so my co-mentee Subhasmita and I will work on adding GPU support to CAPG.

My Mentors

My Co-Mentee

Well, my journey would be a little monotonous if I didn't have a co-mentee. It makes the work more interesting: when we are both stuck on something we hop on a call and discuss it. We also divide the weekly work between us and teach each other what we have learned.

How It All Started

I didn't plan to do LFX from the beginning. I started my journey with CAPG for GSoC '22. I applied for the same project and the same feature in GSoC, but that didn't happen: the project didn't get selected for GSoC, so eventually all the applications to it were rejected as well. I talked to the maintainer Richard and asked whether I could still work on the GPU feature, as I was very interested in it. He told me there was still hope through the LFX Mentorship, opened an application there, and I applied. And then I got selected for the LFX Mentorship 🎉

How It Is Going

I was a little worried about how I would work on a big project like this, with thousands of lines of code, when the largest project I had written was about 500 lines. But I am amazed at how easy the maintainers made my journey: they onboarded me with an introduction to the project over a couple of weeks, gave me small tasks to try things out, and encouraged me to ask questions whenever I got stuck.

Next Steps:

I will start the GPU work next week with Subhasmita and keep contributing to the project in the future.

Create a managed cluster using Cluster API Provider for Google Cloud Platform (CAPG)

In the previous blog, I explained how to create and manage Kubernetes clusters with Cluster API locally, using the Docker infrastructure provider.

In this blog, I will explain how to create and manage a Kubernetes cluster with Cluster API on Google Cloud.

Note – Throughout the blog I will use Kubernetes version 1.22.9. It is recommended to use the Kubernetes version your OS image was built with by the image builder; you can check it in kubernetes.json and use that.

Step 1 –

  • Create the kind cluster –
kind create cluster --image kindest/node:v1.22.9 --wait 5m

Step 2 –

Follow the image builder for GCP steps and build an image.

Step 3 –

  • Export the following env variables – (reference)
export GCP_PROJECT_ID=<YOUR PROJECT ID>
export GOOGLE_APPLICATION_CREDENTIALS=<PATH TO GCP CREDENTIALS>
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )

export CLUSTER_TOPOLOGY=true
export GCP_REGION="us-east4"
export GCP_PROJECT="<YOUR GCP PROJECT NAME>"
export KUBERNETES_VERSION=1.22.9
export IMAGE_ID=projects/$GCP_PROJECT/global/images/<IMAGE ID>
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=default
export CLUSTER_NAME=test

Step 4 –

Set up the network. In this example we are using the default network, so we will create a Cloud Router and a Cloud NAT gateway so that our workload cluster has internet access.

gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"

gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips

Step 5 –

  • Initialize the infrastructure
clusterctl init --infrastructure gcp
  • Generate the workload cluster config and apply it
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.9 > workload-test.yaml

kubectl apply -f workload-test.yaml
  • View the cluster and its resources
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON                 SINCE  MESSAGE
/test                                                              False  Info      WaitingForKubeadmInit  5s
├─ClusterInfrastructure - GCPCluster/test
└─ControlPlane - KubeadmControlPlane/test-control-plane            False  Info      WaitingForKubeadmInit  5s
  └─Machine/test-control-plane-x57zs                               True                                    31s
    └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
  • Check the status of the control plane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
test-control-plane   test                                           1                  1         1             2m9s   v1.22.9

Note – The control plane won't be ready until the next step, when I install the CNI (Container Network Interface).

Step 6 –

  • Get the kubeconfig for the workload cluster
$ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
  • Apply the CNI
kubectl --kubeconfig=./workload-test.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
  • Wait a bit and you should see this when getting the kubeadmcontrolplane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
test-control-plane   test      true          true                   1          1       1         0             6m33s   v1.22.9


$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.9

Step 7 –

  • Edit the MachineDeployment in workload-test.yaml: it has 0 replicas by default, so set replicas to the number of worker nodes you want (2 in this case), as in the fragment below, then apply workload-test.yaml again.
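
For reference, the relevant part of the generated manifest looks roughly like this (the exact names in your workload-test.yaml may differ; only the replicas value needs to change):

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
spec:
  clusterName: test
  replicas: 2   # was 0 by default
  ...
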
$ kubectl apply -f workload-test.yaml
  • After a few minutes, you should see something like this –
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
/test                                                              True                     15m
├─ClusterInfrastructure - GCPCluster/test
├─ControlPlane - KubeadmControlPlane/test-control-plane            True                     15m
│ └─Machine/test-control-plane-x57zs                               True                     19m
│   └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
└─Workers
  └─MachineDeployment/test-md-0                                    True                     10m
    └─2 Machines...                                                True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6

$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.9
test-md-0-b7766            Ready    <none>                 17m   v1.22.9
test-md-0-wsgpj            Ready    <none>                 17m   v1.22.9

Yay! Now we have a Kubernetes cluster in GCP with 1 control plane node and 2 worker nodes.

Step 8 –

Delete what you have created –

$ kubectl delete cluster $CLUSTER_NAME

$ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
    --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"

$ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
    --region="${GCP_REGION}"

$ kind delete cluster

Advantages of Golang sync.RWMutex over sync.Mutex

First of all, let's understand what a mutex is and why we use it. A mutex is a locking mechanism that protects shared data in a multi-threaded program where multiple threads access the data concurrently. If we don't use a mutex, race conditions can occur and lead to inconsistent data throughout the program.

There are two mutex types in Go's sync package –

  • sync.Mutex
    It protects the shared data for both reading and writing. This means that while one goroutine is reading or writing, no other goroutine can read or write the data. Even when there are multiple readers, the reads happen one at a time.
package main

import (
	"fmt"
	"sync"
	"time"
)

type SyncData struct {
	lock sync.Mutex
	wg   sync.WaitGroup
}

func main() {
	// m := map[int]int{}

	var sc SyncData

	sc.wg.Add(7)

	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)

	sc.wg.Wait()
}

func writeLoop(sc *SyncData) {
	// With sync.Mutex, Lock() blocks every other reader and writer.
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Write lock")
	fmt.Println("Write unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

func readLoop(sc *SyncData) {
	// Reads also take the same exclusive lock, so they run one at a time.
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Read lock")
	fmt.Println("Read unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

Playground

Here you can see that a write blocks both reads and writes, and a read also blocks both reads and writes. [e.g. you can see the delay between the read print statements]

  • sync.RWMutex
    Now, if the workload is read-heavy, it is fine to allow multiple goroutines to read the data at the same time, since reads alone cannot conflict. So we use RWMutex instead: any number of readers can hold the read lock at the same time, but only one writer can hold the write lock, and only while no readers hold the read lock.
package main

import (
	"fmt"
	"sync"
	"time"
)

type SyncData struct {
	lock sync.RWMutex
	wg   sync.WaitGroup
}

func main() {
	// m := map[int]int{}

	var sc SyncData

	sc.wg.Add(7)

	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go readLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)
	go writeLoop(&sc)

	sc.wg.Wait()
}

func writeLoop(sc *SyncData) {
	// Lock() takes the exclusive write lock: it waits for all readers and
	// writers to finish and blocks new ones until Unlock().
	sc.lock.Lock()
	time.Sleep(1 * time.Second)
	fmt.Println("Write lock")
	fmt.Println("Write unlock")
	sc.lock.Unlock()
	sc.wg.Done()
}

func readLoop(sc *SyncData) {
	// RLock() takes a shared read lock: any number of readers can hold it
	// at the same time, so these reads run concurrently.
	sc.lock.RLock()
	time.Sleep(1 * time.Second)
	fmt.Println("Read lock")
	fmt.Println("Read unlock")
	sc.lock.RUnlock()
	sc.wg.Done()
}

Playground

Here you can see that a write blocks both reads and writes, but a read does not block other reads; multiple goroutines are able to read at the same time. [e.g. you can see the delay around the write print statements, but no delay between the read print statements]

Learning: Kubernetes – Deployments & StatefulSet

Deployments

Deployments are the way we manage pods in k8s. We specify everything about the pods, like which image version they will run and how many replicas there will be.

  • Properties
    • The spec.selector specifies which pods the Deployment manages; it must match the labels in the pod template.
    • When we update a Deployment, it creates new pods and deletes old ones gradually, keeping at most 125% of the desired number of pods running and at least 75% of them available at any time (the default 25% maxSurge / 25% maxUnavailable rolling-update settings).
  • Rollback to a Previous Version To roll back to the previous revision we use – kubectl rollout undo deployment/nginx-deployment To roll back to a specific earlier revision we use – kubectl rollout undo deployment/nginx-deployment --to-revision=2 (a minimal Deployment manifest is sketched after this list).
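
A minimal Deployment manifest sketch (names and image are placeholders) that ties the pieces above together – the replica count, the selector, and the pod template it must match:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: nginx              # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21     # the image version the Deployment rolls out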

StatefulSet

Just as we manage stateless applications with Deployments, we manage stateful applications with StatefulSets.

  • Properties
    • The pods of a StatefulSet are not created or deleted all at the same time; they are brought up and taken down one by one, in order.
    • The pods can't be addressed interchangeably; each one has a stable, sticky identity.
    • The replicas are not identical; each pod keeps its own identity and its own state.
    • Each pod gets a unique ordinal identifier in increasing order (pod-0, pod-1, …), and these identities are kept when pods are rescheduled.
    • Each pod has its own persistent storage.
    • In a typical replicated database running on a StatefulSet, one pod acts as the master and is the only one allowed to change data.
    • All the slave pods sync with the master pod in order to achieve data consistency.
    • When a new pod joins the replica set, it first clones all the data from one of the slave pods and after that starts to sync.
  • StatefulSets are valuable for applications that require one or more of the following.
    • Stable, unique network identifiers.
    • Stable, persistent storage.
    • Ordered, graceful deployment and scaling.
    • Ordered, automated rolling updates.
  • Data Persistence If a pod dies, all of its local data would be lost. To counter this, we attach a persistent volume to every pod via volumeClaimTemplates (see the manifest sketched after this list).
    • The storage holds all the synchronized data together with the pod's state data.
    • When a pod gets replaced, the persistent volume is reattached to the new pod and the pod's state is resumed.
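
A minimal StatefulSet manifest sketch (names and image are placeholders) showing the stable identity pieces – the headless Service it is tied to and the per-pod storage requested via volumeClaimTemplates:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web             # headless Service that gives each pod a stable DNS name
  replicas: 3                  # pods are created in order as web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # each pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi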

What is System Call

System Call – It is the interface through which a user-space program requests services and resources from the kernel.

Now, why do we need system calls?

  • Reading and writing from files demand system calls.
  • If a file system wants to create or delete files, system calls are required.
  • System calls are used for the creation and management of new processes.
  • Network connections need system calls for sending and receiving packets.
  • Access to hardware devices like scanners and printers needs system calls.

Here are the five types of system calls in an OS (a small Go example follows the list):

  • Process Control – This type deals with process creation and termination, waiting for and signalling events, and allocating and freeing memory.
  • File Management – This deals with file manipulation: creating, updating, deleting, reading, writing, and adding attributes to a file.
  • Device Management – This deals with device buffers, reading and writing, as well as adding and removing logical devices.
  • Information Maintenance – It handles the transfer of information between the user program and the OS kernel (for example time, system data, and process attributes).
  • Communications – This is used for inter-process communication: creating and deleting communication connections, sending and receiving messages, etc.
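
As a small illustration, here is a minimal Go sketch (Linux-specific) that issues a write system call directly through the syscall package – the same call that fmt.Println ends up making under the hood:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// write(2) on file descriptor 1 (stdout) – a file management /
	// device I/O system call issued directly from user space.
	n, err := syscall.Write(1, []byte("hello from a raw system call\n"))
	if err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("bytes written:", n)
}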

Learning: Kubernetes – Persistent Volume & Persistent Volume Claim

Volume – A volume in Kubernetes can be thought of as a directory that can be accessed by the containers in a pod. A volume helps persist data even if the pod restarts.

  • PV
    • A Persistent Volume (PV) is a piece of storage in the cluster.
    • It is a cluster-level resource and does not belong to any namespace (unlike a Pod).
    • It is either manually provisioned by an administrator or dynamically provisioned by Kubernetes using a StorageClass.
  • PVC
    • A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV.
    • Persistent Volumes and PersistentVolumeClaim are independent of Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
  • Access Modes (a minimal PV/PVC example is sketched after this list)
    • ReadWriteOnce – The volume can be mounted read-write by a single node. Multiple pods running on that same node can still access the volume.
    • ReadOnlyMany – The volume can be mounted read-only by many nodes.
    • ReadWriteMany – The volume can be mounted read-write by many nodes.
    • ReadWriteOncePod – The volume can be mounted read-write by only a single pod across the whole cluster.
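
A minimal PV/PVC sketch (the hostPath and sizes are placeholders for local testing) just to show the shape of the two objects and where the access mode goes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/pv-demo         # local-testing storage backend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce            # must be compatible with the PV above
  resources:
    requests:
      storage: 1Gi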

Learning: Kubernetes – Container Runtime Interface & Garbage Collection

Container Runtime Interface

The Container Runtime Interface (CRI) is the primary protocol for the communication between the kubelet and Container Runtime.

Container Runtime – It is the software that runs and manages containers on a host operating system. There are a number of container runtimes on the market, such as Docker, runC, containerd, etc.

So, in order to make an abstraction over all the container runtimes supported by Kubernetes, the community introduced the CRI (Container Runtime Interface), through which the kubelet talks to the container runtime.

The kubelet talks to the container runtime over the CRI using a gRPC framework, where the kubelet is the client and the container runtime is the server.

Garbage Collection

Garbage collection is the term Kubernetes uses for the mechanisms that clean up cluster resources.

  • Owner & Dependents In k8s there are some objects that are dependent on others. So k8s clean up the related object before deleting the object.
  • Cascading Deletion k8s deletes an object that no longer has owner references. Like the pods left after deleting the ReplicaSet.
    • Foreground Cascading Deletion –
      • The object we are trying to delete first goes into a "deletion in progress" state.
      • The Kubernetes API server sets the object's metadata.deletionTimestamp field to the time the object was marked for deletion.
      • The Kubernetes API server also adds foregroundDeletion to the object's metadata.finalizers field.
      • Once the object is in this in-progress state, the controller deletes all the dependents first and only then removes the owner object.
    • Background Cascading Deletion –
      • Here k8s deletes the owner object immediately.
      • Then the controller cleans up the dependent objects in the background (see the kubectl example after this list).
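
For example, recent versions of kubectl let you pick the cascading-deletion mode explicitly with the --cascade flag (background is the default):

kubectl delete deployment nginx-deployment --cascade=foreground
kubectl delete deployment nginx-deployment --cascade=background
kubectl delete deployment nginx-deployment --cascade=orphan   # keep the dependent ReplicaSets/Pods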

Learning: Kubernetes – Why We Need Pod Abstraction Above Containers

I have discussed pods in a previous blog. In short, a container is a standard unit of software that packages up code and all its dependencies in a virtualized environment that has its own file system.

Since nodes are VMs or physical machines, we could have run containers directly on them without the pod abstraction. But that would create a major problem for managing the cluster: networking.

As we know, a containerized application listens on a specific port, and two processes can't occupy the same port. So if you need two containers of the same application running on one node, one of them has to run on a different port, and wiring the connections between them becomes very messy.

That is why Kubernetes solves the problem with the pod abstraction. Each pod has its own network namespace, which means each pod gets its own virtual ethernet interface and IP address; it's as if the pod were a small VM inside the node. Now every pod can run its application container on the same port, and there is no conflict, because each pod runs in its own self-contained, isolated network environment.

Now suppose a pod has more than one container (a main container and a helper container); then the containers inside the pod communicate with each other over localhost, as in the sketch below.
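
A minimal sketch of such a pod (names and images are placeholders): the main container serves on port 80, and because both containers share the pod's network namespace, the helper can reach it on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: main
    image: nginx:1.21            # listens on port 80 inside the pod
  - name: helper
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]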

Create a GraphQL API using Golang – Part 1

For the past couple of days, I have been tinkering with GraphQL. I followed an awesome blog post and created my own GraphQL API. But the blog post only covers creating links and getting links, so once I understood all the components I started extending the application to support Update & Delete as well.

You should complete that tutorial first and then follow this blog to extend the app. Let's start –

Get A Single Link

At first, add a query in the GraphQL Schema – graph/schema.graphqls

type Query {
  links: [Link!]!
  link(id: ID!): Link!
}

Then run $ go run github.com/99designs/gqlgen generate

You will see a resolver being created in schema.resolvers.go with the below function signature –

func (r *queryResolver) Link(ctx context.Context, id string) (*model.Link, error) {

Now go to internal/links/links.go and add a Get method that fetches the Link with the given id from the database.

func Get(id string) Links {
	var link Links
	stmt, err := database.Db.Prepare("SELECT ID, Title, Address FROM Links WHERE ID=?")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	err = stmt.QueryRow(id).Scan(&link.ID, &link.Title, &link.Address)
	if err != nil {
		log.Fatal(err)
	}
	return link
}

Now it's time for the resolver to come into the picture –

func (r *queryResolver) Link(ctx context.Context, id string) (*model.Link, error) {
	link := links.Get(id)
	return &model.Link{
		ID:      link.ID,
		Title:   link.Title,
		Address: link.Address,
	}, nil
}
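
You can now try it out in the GraphQL playground. Assuming the Link type from the original tutorial exposes id, title, and address fields, a query like this should return the link with the given ID:

query {
  link(id: "1") {
    id
    title
    address
  }
}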

Update A Link

Now it's time to update an existing link. This time, since we are writing to the database, we have to use a mutation.

type Mutation {
   updateLink(id: ID!, input: NewLink!): Link!
}

Now let’s generate the same using $ go run github.com/99designs/gqlgen generate

And add the Update method in the links.go –

func (link Links) Update(id string) int64 {
	stmt, err := database.Db.Prepare("UPDATE Links SET Title=? , Address=? WHERE ID=?")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	res, err := stmt.Exec(link.Title, link.Address, id)
	if err != nil {
		log.Fatal(err)
	}

	rowsAffected, err := res.RowsAffected()
	if err != nil {
		log.Fatal(err)
	}
	return rowsAffected
}

Here we take the id as input, run an UPDATE on that row, and return the number of affected rows.

Now it’s time for the resolvers –

func (r *mutationResolver) UpdateLink(ctx context.Context, id string, input model.NewLink) (*model.Link, error) {
	link := links.Links{
		Title:   input.Title,
		Address: input.Address,
	}
	rowsAffected := link.Update(id)
	if rowsAffected == 0 {
		return nil, errors.New("zero rows affected")
	}
	return &model.Link{
		ID:      id,
		Title:   link.Title,
		Address: link.Address,
	}, nil
}

Here we take the GraphQL input, run the update with it, and return the updated value, with a check in between that the number of affected rows is not 0.
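
An example mutation to try in the playground (assuming, as in the original tutorial, that NewLink has title and address fields):

mutation {
  updateLink(id: "1", input: { title: "updated title", address: "https://example.com" }) {
    id
    title
    address
  }
}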

Delete A Link

Now as usual add the mutation first in the schema –

type Mutation {
    deleteLink(id: ID!): String!
}

Now let’s generate the same using $ go run github.com/99designs/gqlgen generate

Now add the code in the links.go to perform the delete operation –

func Delete(id string) int64 {
	stmt, err := database.Db.Prepare("DELETE FROM Links WHERE ID=?")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	res, err := stmt.Exec(id)
	if err != nil {
		log.Fatal(err)
	}

	rowsAffected, err := res.RowsAffected()
	if err != nil {
		log.Fatal(err)
	}
	return rowsAffected
}

Here we run the DELETE query, get the affected rows count, and return it.

Now it’s time to resolve it –

func (r *mutationResolver) DeleteLink(ctx context.Context, id string) (string, error) {
	rowsAffected := links.Delete(id)
	if rowsAffected == 0 {
		return "", errors.New("zero rows affected")
	}
	return fmt.Sprintf("%v rows affected", rowsAffected), nil
}

Here we call Delete with the desired id string as input, and if the link was deleted successfully we return a string of the form "<number> rows affected".
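
And an example delete mutation for the playground:

mutation {
  deleteLink(id: "1")
}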

Create a key Value storage using Golang – Part 2

In the previous blog I discussed how to build a key-value store with an in-memory backend. Now I am going to discuss how you can extend it to use a file-system-based backend.

Let’s first define our structure which is going to hold the information about the storage –

type DiskFS struct {
	FS             filesystem.Fs
	RootFolderName string
}

Now, for this project we are going to create our own filesystem abstraction. We could use afero, but I decided to write my own implementation for more learning.

First, create a directory in the root folder called filesystem, create a file called fs.go inside it, and write the following code –

package filesystem

import (
	"io"
	"os"
)

type FileSystem struct {
	Fs
}

type File interface {
	io.Closer
	io.Reader
	io.ReaderAt
	io.Seeker
	io.Writer
	io.WriterAt

	Name() string
	Readdir(count int) ([]os.FileInfo, error)
	Stat() (os.FileInfo, error)
	Sync() error
	WriteString(s string) (ret int, err error)
}

type Fs interface {
	Create(name string) (File, error)
	Mkdir(name string, perm os.FileMode) error
	Open(name string) (File, error)
	OpenFile(name string, flag int, perm os.FileMode) (File, error)
	Stat(name string) (os.FileInfo, error)
	Remove(name string) error
}

Now, as we are going to use the os package to implement the filesystem, create a file called osfs.go and write the code below –

package filesystem

import (
	"os"
)

type OsFs struct {
	Fs
}

// Return a File System for OS
func NewOsFs() Fs {
	return &OsFs{}
}

func (OsFs) Create(name string) (File, error) {
	file, err := os.Create(name)
	if err != nil {
		return nil, err
	}
	return file, nil
}

func (OsFs) Open(name string) (File, error) {
	file, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	return file, err
}

func (OsFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	file, err := os.OpenFile(name, flag, perm)
	if err != nil {
		return nil, err
	}
	return file, nil
}

func (OsFs) Mkdir(name string, perm os.FileMode) error {
	return os.Mkdir(name, perm)
}

func (OsFs) Stat(name string) (os.FileInfo, error) {
	return os.Stat(name)
}

func (OsFs) Remove(name string) error {
	return os.Remove(name)
}

Now let's create utility functions to handle some common situations. Create a utils.go file and write the following code –

package filesystem

import (
	"io"
	"os"
)

func DirExists(fs Fs, name string) (bool, error) {
	file, err := fs.Stat(name)
	if err == nil && file.IsDir() {
		return true, nil
	}
	if os.IsNotExist(err) {
		return false, nil
	}
	return false, err
}

func Exists(fs Fs, name string) (bool, error) {
	_, err := fs.Stat(name)
	if err == nil {
		return true, nil
	}
	if os.IsNotExist(err) {
		return false, nil
	}
	return false, err
}

func ReadDir(fs Fs, dirName string) ([]os.FileInfo, error) {
	dir, err := fs.Open(dirName)
	if err != nil {
		return nil, err
	}
	defer dir.Close()
	list, err := dir.Readdir(-1)
	if err != nil {
		return nil, err
	}
	return list, nil
}

func ReadFile(fs Fs, name string) ([]byte, error) {
	file, err := fs.Open(name)
	if err != nil {
		return nil, err
	}
	defer file.Close()
	// Read through the File interface instead of going back to the os
	// package directly, so this works with any Fs implementation.
	data, err := io.ReadAll(file)
	if err != nil {
		return nil, err
	}
	return data, nil
}

OK, so our filesystem is done. Now it's time to write the file-system-based storage implementation. Write the code below inside the records.go file –

// Return the Disk structure file system
func NewDisk(rootFolder string) *DiskFS {
	diskFs := filesystem.NewOsFs()
	ok, err := filesystem.DirExists(diskFs, rootFolder)
	if err != nil {
		log.Fatalf("Dir exists: %v", err)
	}
	if !ok {
		err := diskFs.Mkdir(rootFolder, os.ModePerm)
		if err != nil {
			log.Fatalf("Create dir: %v", err)
		}
	}
	return &DiskFS{FS: diskFs, RootFolderName: rootFolder}
}

// Store key, value in the file system
func (d *DiskFS) Store(key, val string) {
	file, err := d.FS.Create(d.RootFolderName + "/" + key)
	if err != nil {
		log.Fatalf("Create file: %v", err)
	}
	defer file.Close()

	_, err = file.Write([]byte(val))
	if err != nil {
		log.Fatalf("Writing file: %v", err)
	}
}

func (d *DiskFS) List() map[string]string {
	m := make(map[string]string, 2)
	dir, err := filesystem.ReadDir(d.FS, d.RootFolderName)
	if err != nil {
		log.Fatalf("Error reading the directory: %v", err)
	}

	for _, fileName := range dir {
		content, err := filesystem.ReadFile(d.FS, d.RootFolderName+"/"+fileName.Name())
		if err != nil {
			log.Fatalf("Error reading the file: %v", err)
		}
		m[fileName.Name()] = string(content)
	}
	return m
}

func (d *DiskFS) Get(key string) (string, error) {
	ok, err := filesystem.Exists(d.FS, d.RootFolderName+"/"+key)
	if err != nil {
		log.Fatalf("File exist: %v", err)
	}

	if ok {
		file, err := filesystem.ReadFile(d.FS, d.RootFolderName+"/"+key)
		if err != nil {
			log.Fatalf("Error reading the file: %v", err)
		}
		return string(file), nil
	}
	return "", errors.New("key not found")
}

func (d *DiskFS) Delete(key string) error {
	ok, err := filesystem.Exists(d.FS, d.RootFolderName+"/"+key)
	if err != nil {
		log.Fatalf("File exist: %v", err)
	}
	if ok {
		err = d.FS.Remove(d.RootFolderName + "/" + key)
		if err != nil {
			log.Fatalf("Delete file err: %v", err)
		}
		return nil
	}
	return errors.New("key not found")
}

Now, if you run the app with the -storage-type=disk flag, you can use all the file-system-based operations: a directory called storage gets created, a file is created inside it for each key, and the file's content is the value.
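
As a quick sanity check, here is a minimal usage sketch of the DiskFS API. This is not the app's actual main from part 1; it just shows the calls, assuming it is compiled in the same package as records.go (with fmt and log imported):

func main() {
	d := NewDisk("storage")

	// Creates the file storage/greeting with "hello world" as its content.
	d.Store("greeting", "hello world")

	val, err := d.Get("greeting")
	if err != nil {
		log.Fatalf("Get: %v", err)
	}
	fmt.Println(val)      // hello world
	fmt.Println(d.List()) // map[greeting:hello world]

	if err := d.Delete("greeting"); err != nil {
		log.Fatalf("Delete: %v", err)
	}
}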

In the next part I am going to write the tests for the application.