Set Up Kubernetes CI/CD for Backend Apps

Harsh Singhal
8 min read · Dec 16, 2020

There are several things we need to set up to build a pipeline for our backend codebase: environment variables, an SSL certificate, a Kubernetes Service, an Ingress, and the main thing, the Deployment.

Before getting started, if you are not familiar with Kubernetes terminology, I would like you to go through this blog as a refresher.

Install CLI tools

If you haven’t set up the CLI tools required for deployment, use the commands below to install them.

First, install gcloud from https://cloud.google.com/sdk/

To set up gcloud in a macOS environment, run

echo "source ~/.bashrc" >> ~/.bash_profile
source ~/.bash_profile
which gcloud

Then install kubectl using the command

gcloud components install kubectl

To install the kube_secrets_encode gem, run

sudo gem install kube_secrets_encode

To update your gcloud CLI tool, run

gcloud components update && gcloud components install alpha beta

Create Cluster

This is the main thing, in which all the resources reside. Generally, create a separate cluster for every deployment environment. I will create a cluster for the dev environment inside GCP (Google Cloud).

First, set the project ID in your local gcloud CLI using the command below, replacing PROJECT_ID with your own.

gcloud config set project PROJECT_ID

Set the default compute zone according to your customer base with this command

gcloud config set compute/zone us-central1-f

Create the Kubernetes cluster

Now run this command to create the cluster, replacing example-dev with your own CLUSTER_NAME

gcloud container clusters create example-dev --enable-autoscaling --min-nodes 1 --max-nodes 10 --enable-autoupgrade

You can customise the minimum and maximum nodes according to your application and environment.

Get Credentials for the Cluster

Run

gcloud container clusters get-credentials example-dev

This will produce output like the following

Fetching cluster endpoint and auth data.
kubeconfig entry generated for example-dev.

Create your deployment

Now it’s time to create your deployment, but before that you need to set your environment variables.

There are two options for that: a ConfigMap or a Secret. A Secret is not all that secure, since it uses base64 encoding rather than encryption, but as long as no one else has access to our Google project, we are safe.
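To make that claim concrete, here is a small shell sketch (using an example value in the shape of the Secret created below) showing that base64 is a reversible encoding, not encryption:

```shell
# base64 is an encoding, not encryption: anyone who can read the
# manifest can recover the plaintext with a single command.
plain="SECRET_1='SECRET_1_VALUE'"
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```

This is why access control on the project and cluster matters far more than the encoding itself.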

Creating Secrets

To create secrets, add a YAML file like the one below as app-secrets.yaml

apiVersion: v1
data:
  .env: |-
    SECRET_1='SECRET_1_VALUE'
    SECRET_2='SECRET_2_VALUE'
    SECRET_3='SECRET_3_VALUE'
    SECRET_4='SECRET_4_VALUE'
kind: Secret
metadata:
  name: custom-secret-name
  namespace: default
  selfLink: /api/v1/namespaces/default/secrets/custom-secret-name

Here .env is my key, and all the secrets sit in one long formatted string.

Secret values need to be base64-encoded before being pushed to Kubernetes. Run this command to convert the secrets file into a base64-encoded one

kube_secrets --file=app-secrets.yaml > app-secrets-base64.yaml

Now you have base64 encoded file which will look like below

apiVersion: v1
data:
  '.env': U0VDUkVUXzE9J1NFQ1JFVF8xX1ZBTFVFJwpTRUNSRVRfMj0nU0VDUkVUXzJfVkFMVUUnClNFQ1JFVF8zPSdTRUNSRVRfM19WQUxVRScKU0VDUkVUXzQ9J1NFQ1JFVF80X1ZBTFVFJw==
kind: Secret
metadata:
  name: custom-secret-name
  namespace: default
  selfLink: '/api/v1/namespaces/default/secrets/custom-secret-name'

Now the value for my .env key has been encoded in base64. You can add as many keys as you want.

I am adding .env so that a file is created inside my container holding the plain secret values shown above; my code can then import its configs from that file.

Environment variables can, of course, also be set from the Deployment itself.
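To illustrate how the mounted file might be consumed, here is a hedged sketch of a container entrypoint sourcing a .env file of the same shape. The /tmp path and the file contents are stand-ins created only for this demo; in the real container the file sits at the volumeMount path used in the Deployment below.

```shell
# Create a stand-in for the mounted Secret file (demo only; in the
# container the Secret volume would provide this file).
mkdir -p /tmp/var
printf "SECRET_1='SECRET_1_VALUE'\n" > /tmp/var/.env

# Source it with auto-export so child processes inherit the values.
set -a
. /tmp/var/.env
set +a
echo "$SECRET_1"
```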

Secrets can also be used to store SSL certificates, which an Ingress can use to support HTTPS.

Now that we have the encoded Secret, let’s push it to the Kubernetes cluster using

kubectl create -f app-secrets-base64.yaml

To delete the Secret, use the command below

kubectl delete secret custom-secret-name

To reapply the Secret when a value changes, use the command below

kubectl apply -f app-secrets-base64.yaml
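If you would rather not depend on the kube_secrets_encode gem, the same encoding step can be sketched in plain shell. The file names match the ones above; the heredoc contents are example values only:

```shell
# Write the plain key/value pairs (example values).
cat > plain.env <<'EOF'
SECRET_1='SECRET_1_VALUE'
SECRET_2='SECRET_2_VALUE'
EOF

# base64-encode the whole file and emit a Secret manifest around it.
encoded=$(base64 < plain.env | tr -d '\n')
cat > app-secrets-base64.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: custom-secret-name
  namespace: default
data:
  .env: $encoded
EOF
cat app-secrets-base64.yaml
```

The resulting manifest can then be pushed with kubectl apply exactly as above.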

Apply the Deployment

The Deployment resource type sits above ReplicaSets and can manipulate them. In other words, Deployments provide updates for a Pod’s ReplicaSets.

Now, to add the deployment, first create a file app.yaml with this content

apiVersion: apps/v1
kind: Deployment
metadata:
  labels: # custom labels can be used for filtering in big deployments
    app: appName
    service: api
    type: core
  name: api
  namespace: default
spec:
  selector:
    matchLabels: # selects Pods by these labels
      app: appName
      service: api
      type: core
  template:
    metadata:
      labels:
        app: appName
        service: api
        type: core
    spec:
      containers:
        - env:
            - name: APP_PORT
              value: '8080'
            - name: NODE_ENV
              value: 'production' # environment variables can be passed from here as well
          image: gcr.io/my-project-123/repoName-dev:2.0 # built through cloudbuild.yaml in the project root
          name: appName
          ports:
            - containerPort: 8080 # port exposed from the container (Dockerfile)
              protocol: TCP
          resources:
            limits:
              memory: 512Mi # maximum RAM that can be allocated to a container
              cpu: 400m # maximum CPU that can be allocated to a container
            requests:
              memory: 256Mi # RAM requested for a container by default
              cpu: 300m # CPU requested for a container by default
          volumeMounts:
            - mountPath: /usr/app/dist/var # path where the Secret is mounted as a file
              # this path can be used in code to load the environment file
              name: secret-volume
              readOnly: true
      volumes:
        - name: secret-volume
          secret:
            secretName: custom-secret-name
  strategy:
    rollingUpdate:
      maxSurge: 1 # how many additional new containers can be created while a new image rolls out
      maxUnavailable: 0 # how many containers can be unavailable at a time, at most
    type: RollingUpdate

Now run the command below to apply the deployment

kubectl apply -f app.yaml
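The rolling-update numbers are worth a second look. This small sketch works through the arithmetic; the replica count is an assumed example, while maxSurge and maxUnavailable come from the manifest above:

```shell
replicas=3          # assumed current replica count (example)
max_surge=1         # from the manifest: extra Pods allowed during a rollout
max_unavailable=0   # from the manifest: Pods that may be down at once

# During a rollout Kubernetes keeps the Pod count within these bounds:
echo "peak pods:      $((replicas + max_surge))"
echo "min ready pods: $((replicas - max_unavailable))"
```

With maxUnavailable set to 0, every old Pod stays up until its replacement is ready, so serving capacity never dips during a deploy.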

Before this deployment can succeed, we need our Docker image in Google Container Registry (GCR).

As you can see, I have specified the image value as gcr.io/my-project-123/repoName-dev:2.0. Replace the project, repo, branch, and tag with your own.

This image must be present in GCR for the deployment to be successful.

For that, we could push a Docker image to GCR by hand, but a better way is to set up a Build Trigger in Cloud Build that fires as soon as a new commit is pushed to the Git repository and builds the Docker image using the Dockerfile inside the codebase.

This supports GitHub, Bitbucket, and Cloud Source Repositories, Google’s Git interface.

If you are using GitHub or Bitbucket, you can use the same repository directly after authenticating and setting up your build trigger; otherwise you can use a Cloud Source Repository, either as origin or as another remote such as google.

As I am using GitLab as my main Git service, I will use a Cloud Source Repository. I will create one repo for my app, named app_backend, and add it as another remote called google using the commands below

gcloud init && git config --global credential.https://source.developers.google.com.helper gcloud.sh

git remote add google https://source.developers.google.com/p/$PROJECT_ID/r/$REPO_NAME

and to push all branches’ code

git push --all google

Now that the Source Repository is set up, I can set up my build trigger by going to Build Triggers inside Cloud Build in the Google Cloud Platform console.

Here, add the branch name, dev or master, depending on which environment you are creating it for. Whenever there is a push on this branch, the trigger will build the Docker image inside GCR.

Hit Create Trigger to save your trigger.

There is a more customisable way, though, to build your Docker image and also roll that image out to the latest deployment: a cloudbuild.yaml file. You either specify the location of this file, or you can keep it in the root of the codebase.

The file looks like this

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      [
        'build',
        '-t',
        'gcr.io/$_KUBE_PROJECT/$REPO_NAME-$BRANCH_NAME:$SHORT_SHA',
        '.',
      ]
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$_KUBE_PROJECT/$REPO_NAME-$BRANCH_NAME:$SHORT_SHA']
  - name: 'gcr.io/cloud-builders/kubectl'
    args:
      - 'set'
      - 'image'
      - 'deployment/$_APP_NAME'
      - '$_APP_NAME=gcr.io/$_KUBE_PROJECT/$REPO_NAME-$BRANCH_NAME:$SHORT_SHA'
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_KUBE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_KUBE_CLUSTER}'
      - 'CLOUDSDK_CORE_PROJECT=${_KUBE_PROJECT}'
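To make the substitutions concrete, this is roughly how the image reference in those steps expands. The values are illustrative: the project ID from the Deployment above, the app_backend repo, the dev branch, and a made-up short SHA.

```shell
_KUBE_PROJECT=my-project-123  # user-defined substitution set on the trigger
REPO_NAME=app_backend         # built-in substitution: repository name
BRANCH_NAME=dev               # built-in substitution: branch name
SHORT_SHA=ab12cd3             # built-in substitution: short commit SHA (made up)

image="gcr.io/$_KUBE_PROJECT/$REPO_NAME-$BRANCH_NAME:$SHORT_SHA"
echo "$image"   # gcr.io/my-project-123/app_backend-dev:ab12cd3
```

Because the tag is the commit SHA, every push produces a distinct, traceable image.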

For this file to roll out the new deployment automatically, we need to grant the Cloud Build service account access to the cluster (the Kubernetes Engine Developer role) from the IAM policy tab.

As you can see, there are three steps: docker build, docker push, and kubectl set image. This is the more customisable way, as we can add more steps if required and configure things as needed.

For this, you should have your build trigger set up like this

It may give an error the first time, so I suggest building the first image using the Dockerfile only, then exporting LATEST_TAG (and the other variables you use) to your shell and reapplying app.yaml with kubectl apply -f app.yaml. Later you can switch to the cloudbuild.yaml file.

Now that your deployment is done, it’s time to expose it using a Service and an Ingress.

Creating an Autoscaler so you can stop worrying about scaling

You can optionally create a HorizontalPodAutoscaler that scales your Deployment automatically.

To do this, create an autoscaler.yaml file with the content below.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: appName-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api # name of the Deployment
  minReplicas: 2 # minimum replicas present at any time
  maxReplicas: 5 # maximum replicas it can reach (5 is only for dev; increase as needed for prod)
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 70 # a new Pod is created when average CPU utilisation exceeds 70%
    - type: Resource
      resource:
        name: memory
        targetAverageValue: 150Mi # a new Pod is created when average memory usage exceeds 150 MiB

Now run the command below to apply the autoscaler.

kubectl apply -f autoscaler.yaml
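The scaling rule behind those targets follows the formula from the HPA documentation, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A quick sketch with the CPU target from the manifest; the observed utilisation is an assumed example:

```shell
current=2       # current replica count (minReplicas above)
cpu_now=105     # assumed observed average CPU utilisation, in percent
cpu_target=70   # targetAverageUtilization from the manifest

# Integer arithmetic for ceil(current * cpu_now / cpu_target):
desired=$(( (current * cpu_now + cpu_target - 1) / cpu_target ))
echo "desired replicas: $desired"   # ceil(2 * 105 / 70) = 3
```

With multiple metrics configured, the HPA computes a desired count per metric and scales to the largest of them.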

Creating Service and Ingress

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them — sometimes called a micro-service.

Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere.

An Ingress is a collection of rules that allow inbound connections to reach the cluster services.

It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.

Creating the Service

Create an app-service.yaml file with content like the below

apiVersion: v1
kind: Service
metadata:
  name: appServiceName # e.g. app-service
  labels:
    app: appName
    service: api
spec:
  type: NodePort
  selector:
    app: appName
    service: api
  ports:
    - protocol: TCP
      port: 80 # port exposed outside
      targetPort: 8080 # port at which the container is serving content

Now run the command below to apply the service

kubectl apply -f app-service.yaml

Creating a Google-managed certificate to run the service over HTTPS

Create a cert.yaml file with content like the below

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: devCertificateName
spec:
  domains:
    - api.dev.example.com

It might take 10–15 minutes for the certificate to be created.

Create a static IP to use for the dev resources; it should be mapped to *.dev.example.com.

Run the command below

gcloud compute addresses create appDevIPName --global

Use this IP address in the A record of your DNS entry for *.dev.example.com.

To describe the IP address (and see its value), use the command below

gcloud compute addresses describe appDevIPName --global

Creating the Ingress

Create an app-ingress.yaml file with content like the below

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: appIngressName
  annotations:
    kubernetes.io/ingress.global-static-ip-name: appDevIPName
    kubernetes.io/ingress.allow-http: 'false' # block all HTTP traffic
    networking.gke.io/managed-certificates: devCertificateName
  labels:
    app: appName
spec:
  backend:
    serviceName: appServiceName
    servicePort: 80

Now run the command below to apply the ingress

kubectl apply -f app-ingress.yaml

Your app is now set up behind a public IP, which you can get from the Ingress in the Google Cloud Platform console and map in your DNS.

KUDOS!

Some Useful Commands

To list all contexts/clusters: kubectl config get-contexts

To check the current context: kubectl config current-context

To switch context: kubectl config use-context CLUSTER_NAME
