Check for Kubernetes deployment with client-go library

For the past couple of days, I have been tinkering with the client-go library. It provides the interfaces and methods you need to manipulate Kubernetes cluster resources from your Go code. After exploring for a while, I started working on a side project that runs a check over deployments: if a deployment doesn't have a certain environment variable, it deletes the deployment; otherwise it leaves it as it is.

Setup

In this blog, I am not going to cover how to set up a Go project.

First, create a directory named app and create another directory inside it called service. Now create a file named init.go inside the service directory.

package service

import (
	"log"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// Init initializes a Kubernetes clientset, using the in-cluster config if
// available and falling back to the local kubeconfig otherwise
func Init() *kubernetes.Clientset {
	config, err := rest.InClusterConfig()
	if err != nil {
		// Fall back to the kubeconfig in the user's home directory
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		if envvar := os.Getenv("KUBECONFIG"); len(envvar) > 0 {
			kubeconfig = envvar
		}

		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatalf("kubeconfig can't be loaded: %v\n", err)
		}
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("error getting config client: %v\n", err)
	}

	return clientset
}

In the code example above, we first call InClusterConfig, which returns a config object holding the common attributes a Kubernetes client needs at initialization. If we can't get the in-cluster config (i.e., we are not running inside a cluster), we fall back to the kubeconfig file at its default location on most Linux systems, or to the path set in the KUBECONFIG environment variable.

Once we have the config, it's time to initialize a client. We do this with the NewForConfig function. It returns a clientset that contains a client for each API group; for example, Pods can be accessed via the CoreV1 group in the clientset struct.
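As an illustration, here is a minimal, hypothetical helper that lists Pods through the CoreV1 group, analogous to how we will use the AppsV1 group for Deployments later. It assumes the context, log, kubernetes, and metav1 ("k8s.io/apimachinery/pkg/apis/meta/v1") imports:

// listPods shows the clientset group pattern: Pods live under CoreV1,
// just as Deployments live under AppsV1.
func listPods(ns string, clientset *kubernetes.Clientset) error {
	pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		log.Printf("Pod: %s\n", pod.Name)
	}
	return nil
}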

Check for deployments

Create another directory named client under the app directory.
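For orientation, the layout we end up with looks roughly like this (the post never names the file that holds the informer code below, so informer.go is just a placeholder):

yosemite/
├── main.go
└── app/
    ├── service/
    │   └── init.go
    └── client/
        ├── client.go
        ├── deployment.go
        └── informer.go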

package client

import (
	"fmt"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

const (
	ENVNAME = "TEST_ENV_NAME"
)

// Check for Deployments and start a goroutine if a new deployment is added
func (c *Client) CheckDeploymentEnv(ns string) {
	informerFactory := informers.NewSharedInformerFactory(c.C, 30*time.Second)

	deploymentInformer := informerFactory.Apps().V1().Deployments()
	deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			log.Println("Deployment added. Let's start checking!")

			ch := make(chan error, 1)
			done := make(chan bool)

			go c.check(ns, ch, done)

		loop:
			for {
				select {
				case err := <-ch:
					log.Fatalf("error checking envvar: %v", err)
				case <-done:
					break loop
				}
			}
		},
	})

	informerFactory.Start(wait.NeverStop)
	informerFactory.WaitForCacheSync(wait.NeverStop)
}

Now in the CheckDeploymentEnv method, we first create a NewSharedInformerFactory, which gives us back an interface we can use to retrieve various resources from a local cache of the cluster state. Through it we can handle events such as add, update, and delete in the cluster and take action accordingly; a sketch of the remaining handlers follows below.
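The handler above only registers AddFunc, but cache.ResourceEventHandlerFuncs accepts the other two callbacks as well. A minimal sketch, with purely illustrative log messages:

deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		log.Println("Deployment added")
	},
	UpdateFunc: func(oldObj, newObj interface{}) {
		log.Println("Deployment updated")
	},
	DeleteFunc: func(obj interface{}) {
		log.Println("Deployment deleted")
	},
})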

Then we add another function in the same file as above.

func (c *Client) check(namespace string, ch chan error, done chan bool) {
	deployments, err := ListDeploymentWithNamespace(namespace, c.C)
	if err != nil {
		ch <- fmt.Errorf("list deployment: %s", err.Error())
		return
	}

	for _, deployment := range deployments.Items {
		var envSet bool
		for _, cntr := range deployment.Spec.Template.Spec.Containers {
			for _, env := range cntr.Env {
				if env.Name == ENVNAME {
					log.Printf("Deployment name: %s has envvar. All set to go!", deployment.Name)
					envSet = true
				}
			}
		}
		if !envSet {
			log.Printf("No envvar name %s - Deleting deployment with name %s\n", ENVNAME, deployment.Name)
			err = DeleteDeploymentWithNamespce(namespace, deployment.Name, c.C)
			if err != nil {
				ch <- err
			}
		}
	}
	done <- true
}

Here we list the deployments (covered next), and for every deployment we check its containers' environment variables, deleting the deployment if the variable is missing. If everything succeeds we send true on the done channel; otherwise we send the error on the error channel. Note the return after the listing error: without it, we would go on to iterate over a nil deployment list.

Deployment Handler

Create another file named deployment.go in the client directory.

package client

import (
	"fmt"
	"log"

	v1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// List deployment resource with the given namespace
func ListDeploymentWithNamespace(ns string, clientset *kubernetes.Clientset) (*v1.DeploymentList, error) {
	deployment, err := clientset.AppsV1().Deployments(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	return deployment, nil
}

// Delete deployment resource with the given namespace
func DeleteDeploymentWithNamespace(ns, name string, clientset *kubernetes.Clientset) error {
	err := clientset.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{})
	if err != nil {
		if errors.IsNotFound(err) {
			log.Printf("Deployment doesn't exist with name %s\n", name)
			return nil
		}
		return fmt.Errorf("delete Deployment: %v", err)
	}
	log.Printf("Deployment deleted with name: %v\n", name)

	return nil
}

Here we have two functions: one for listing deployments and one for deleting them. These go through the clientset directly, which means we query the Kubernetes API server itself, unlike earlier, where the informer read from a local in-memory cache.

Now create another file named client.go in the client directory and add the code below.

package client

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

var (
	ctx = context.TODO()
)

type Client struct {
	C *kubernetes.Clientset
}

// NewClient returns a new Client
func NewClient() *Client {
	return &Client{}
}

main.go

package main

import (
	"flag"
	"log"

	"github.com/aniruddha2000/yosemite/app/client"
	"github.com/aniruddha2000/yosemite/app/service"
)

func main() {
	var nameSpace string

	flag.StringVar(&nameSpace, "ns", "test-ns",
		"namespace name on which the checking is going to take place")

	log.Printf("Checking Pods for namespace %s\n", nameSpace)
	c := client.NewClient()
	c.C = service.Init()

	c.CheckDeploymentEnv(nameSpace)
}

Here we just take the namespace from a flag (remembering to call flag.Parse, or the flag will never be read) and call all the functions covered in this article.
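Before deploying, you can try it locally against whatever cluster your kubeconfig points at; something like this should work, where test-ns is just the flag's default value:

go run main.go -ns test-ns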

Run the app in the Kubernetes cluster

In order to run the app in the cluster, we have to set up a ClusterRole & ClusterRoleBinding for the pod's default service account. Note that the informer needs the watch verb in addition to list.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-namespace-clusterrole
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-namespace-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-namespace-clusterrole
  apiGroup: rbac.authorization.k8s.io

Then you have to build the project and make a Docker image out of it using the docker build, docker tag & docker push commands. Then create a Deployment YAML template as shown below and apply it along with the RBAC manifest above; a rough sketch of these commands follows after the manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: client
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: client
    spec:
      containers:
      - image: <YOUR DOCKER IMAGE>
        name: client-app
        resources: {}
status: {}
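Put together, the build-and-deploy steps might look roughly like this; the image name and the manifest file names (rbac.yaml, deployment.yaml) are placeholders for whatever you use:

docker build -t <YOUR DOCKER IMAGE> .
docker push <YOUR DOCKER IMAGE>
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml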

Here is my GitHub URL for the project – https://github.com/aniruddha2000/yosemite/

You can find how to run the project in the README of the mentioned GitHub URL above.
