What are Kubernetes Deployments?
In this post, we will explore Kubernetes workflows and learn how to configure and optimize a delivery pipeline that builds a Docker image of your application and runs it on a K8s cluster.
Benefits of Kubernetes
Kubernetes can be described as a container orchestration platform that runs and scales your application in the cloud or on a remote machine. To make it a tad easier, think of it as a container manager which automatically handles operations you would otherwise have to do manually.
Here are a few benefits of using Kubernetes:
- Self-healing – through an automated scheduler, Kubernetes can swap out failing containers for fresh ones in case of errors or timeouts.
- Rollouts and rollbacks – in addition to self-healing, Kubernetes (K8s) performs rolling updates on new deployments, similar to blue-green deployments, greatly reducing the chance of downtime.
- Load distribution and auto-discovery – decoupled applications running on Kubernetes communicate over a local cluster network, reducing the effort needed to expose application addresses. In addition, Kubernetes offers multiple load-distribution points: you can distribute load at the ingress layer as well as from the service layer to pods.
- Horizontal and vertical scaling – Kubernetes lets us scale both horizontally and vertically, depending on the scenario. You could run more than 500 containers of the same application and still fine-tune the resources allocated to each container almost effortlessly. This shows the resilience K8s has to offer for your applications!
- Delivery velocity – the pace at which you release your application is business-critical to every team today. Back in the day, releases required a number of team members working during scheduled maintenance windows, with plenty of interruptions and downtime.
Even without Continuous Deployment, Kubernetes is able to facilitate and manage releases of various sizes with almost no downtime.
Kubernetes itself is made up of several components. We will not cover all of them in this article; instead, we will focus mostly on containers, using Docker.
Containers on Kubernetes run in groups known as pods. Containers in a pod share the same network, storage and address. This means accessing a pod's address would, in reality, mean accessing one of the containers in the pod:
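As a minimal sketch (all names are illustrative), here is a pod grouping an application container with a sidecar; because both share the pod's network namespace, the sidecar can reach the app on localhost:

```yaml
# Hypothetical example: an app container plus a health-check sidecar in one pod.
# Both containers share the pod's IP address, so "localhost" works between them.
apiVersion: v1
kind: Pod
metadata:
  name: mango-api-pod
spec:
  containers:
    - name: api
      image: yourdockerhubuser/mango-api:latest
      ports:
        - containerPort: 3000
    - name: health-sidecar
      image: busybox
      # Polls the app over the shared network namespace every 30 seconds
      command: ["sh", "-c", "while true; do wget -qO- localhost:3000; sleep 30; done"]
```

Requests addressed to the pod's IP land on whichever container listens on the target port.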
Bear in mind that while you don't strictly need a pipeline to get an application running on the cloud (vendor SDKs and CLIs let you deploy from your machine), at a larger scale teams will find it highly inefficient to rely on local deployments.
How does Kubernetes deployment work?
A pipeline can be considered as a way to move a service or application from point A to B. In terms of CI/CD, we can divide them into three types:
- Continuous Integration – tests and versions the code through version control platforms such as GitHub. That's where Buddy comes in, offering an easier and more efficient way of configuring pipelines.
- Continuous Delivery – facilitates deployment of applications from version control platforms to the cloud or other vendor-specific services. Delivery pipelines require approval for deployments to specific environments, such as production or client-facing environments.
- Continuous Deployment – facilitates the deployment to the cloud without human interference, approval or input.
The microservice pattern introduced a new way of implementing software. Think of it as a modular pattern: several moving parts unified to render a single application.
An example scenario is pictured below. It involves three backends developed by different teams under different repositories, possibly causing some friction between the teams if not handled properly:
With or without DevOps personnel, your team shouldn't have to worry about ops-related issues, like figuring out delivery of the three application components. The most important thing is maintaining the focus on the product.
Kubernetes automation pitfalls:
Back in the day, the deployment stack was built mostly on shell scripts, which often proved complex for team members without previous experience with the stack. Today, almost every platform offers YAML. Being declarative and more transparent, YAML has a gentler learning curve. However, some platforms unfortunately still require shell workarounds inside YAML.
Buddy tackles those issues thanks to its intuitive GUI and declarative YAML configuration of your pipelines.
Security is a critical component of any pipeline, and one of the key issues is handling keys and secrets. In most cases, keys and secrets are added to platforms as environment variables after manual encryption, then decrypted during the build. While defining these jobs, it becomes very easy to leak the details, either by printing keys or by versioning them into public Docker images. It is also advisable to avoid unrestricted API keys on third-party services.
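For instance, instead of baking a key into an image, you can keep it in a Kubernetes Secret and inject it as an environment variable at runtime; a minimal sketch with illustrative names:

```yaml
# Hypothetical secret; never commit real key values to version control.
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
type: Opaque
stringData:
  API_KEY: "replace-me"
---
# In the container spec of your deployment, reference the secret instead of
# hardcoding the value:
# env:
#   - name: API_KEY
#     valueFrom:
#       secretKeyRef:
#         name: api-credentials
#         key: API_KEY
```

This way the key never appears in the Dockerfile, the image layers, or the build logs.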
How Buddy handles security:
- Auto-encryption and manual encryption at the push of a button
- Action-variable suggestions and default environment variables common to repositories
Ambiguous platform and tool correlation
Platform correlation has to be one of the greatest challenges. Various teams handle this differently: from out-of-the-box platform-specific YAML modules to scripted connections. It is recommended to take a module approach instead of scripted pipelines, which frequently involve several steps: fetching SDKs, authorization, and the actual deployment. Scripted pipelines often end up fairly complex, error-prone, and bulky.
Buddy offers a variety of integrations with major providers and a schema-rich buddy.yml script with declarative pipeline actions.
How does Kubernetes deployment work? Example pipeline
In this example, we will be deploying a Mangoes API backend from a GitHub repository into a Kubernetes cluster running on the Google Kubernetes Engine (GKE). Mangoes are fun!
With all that in mind, and without further theory, let's have a look at workflows. We will demonstrate a simple yet efficient pipeline that syncs a Kubernetes cluster. Here is what we will need to get this application running:
- A repository for our backend
- A pipeline to build and deploy our backend
- A Kubernetes cluster
- An authentication mechanism that lets Buddy ship our backend to GKE
Begin with forking the Mangoes project on GitHub. Next, log in to your Buddy account, add a new project, and select the repository from your dropdown list. Once the project is synchronized, Buddy will detect the stack in your repository (this particular backend runs on Express.js):
Our current goal is to deploy a backend to a cluster. To make this goal more specific, let's break it down further into steps:
- Build a Docker Image
- Version the Docker Image to Docker Hub
- Deploy backend to Kubernetes
- We are versioning the image on Docker Hub. Kubernetes will fetch the backend's image from Docker Hub when provisioning containers.
- Avoid versioning on public repositories to prevent exposing your code. Cloud vendors such as Google Cloud, as well as Docker, provide private repositories.
- It is important to label your pipelines according to their purpose, for easy navigation.
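In buddy.yml terms, the steps above could be sketched roughly like this (the action names and fields are simplified for illustration; refer to Buddy's YAML schema for the exact syntax):

```yaml
# Simplified sketch of a buddy.yml pipeline covering build, push, and deploy.
- pipeline: "Build & Deploy Mangoes API"
  trigger_mode: "ON_EVERY_PUSH"
  ref_name: "master"
  actions:
    - action: "Build Docker image"
      type: "DOCKERFILE"
      dockerfile_path: "Dockerfile"
    - action: "Push image to Docker Hub"
      type: "DOCKER_PUSH"
      docker_image_tag: "latest"
    - action: "Apply K8s Deployment"
      type: "KUBERNETES_APPLY"
      config_path: "mango-deployment.yml"
```

The same pipeline can be assembled entirely in the GUI, as we do below.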
Go ahead and click Add a new pipeline. This particular pipeline builds and deploys on every push to the master branch, which qualifies it as a Continuous Deployment pipeline. Make sure to apply the settings below:
With the pipeline created, we can get down to the actions. Our first step is to build a Docker image. Go ahead and select Build Image:
Take note that Buddy already identifies resources in the pipeline repository and puts forward suggestions!
The next step will be providing a name and path to the Dockerfile. A quick glance at the repository will show the details as below:
Leave the name as it is (Dockerfile) and set the path as the repository root:
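The Mangoes backend runs on Express.js, so its Dockerfile will look something like the sketch below (a generic Node.js setup, not necessarily the exact file from the repository; the entry point name is an assumption):

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy manifests first so dependency installation is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Keeping the Dockerfile in the repository root lets the build action pick it up without extra configuration.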
Under the Options tab, add your registry details. If it's your first time working with Kubernetes, I recommend Docker Hub. Otherwise, select the one matching your vendor, e.g. Google Container Registry.
Never bake keys or credentials into your Docker images – especially public ones!
The Docker image for the Mangoes app is also publicly available here.
Our pipeline can now identify the first step towards deploying our application! Let's add in the second step by clicking the + icon right below the Build Docker Image action:
This new action is going to deploy our backend to a Kubernetes cluster on Google Cloud. Select Apply Deployment in the Kubernetes section of the action or enter the name in the search input:
On the Setup tab, add in your cluster details. Buddy natively integrates Google Kubernetes Engine, Amazon EKS, and Azure AKS. It also supports private clusters. For the purposes of this guide, we will run our image on Google:
Authentication & config
In order for Buddy to deploy to GKE, API calls need to be authenticated. We will use Service Accounts as the authentication method in this example. The service account here is very basic, with access to Kubernetes only through the Kubernetes Engine Admin role.
To learn more about gcloud's roles and permissions, refer here.
Make sure to create and download a key for your service account from Google Cloud and paste it in the Service Account Key field on Buddy (the service will automatically encrypt your key). The last step is selecting the YAML file with the deployment configuration:
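The configuration file is a standard Kubernetes Deployment manifest. A minimal sketch for the Mangoes backend could look like this (the image name, port, and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mango-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mango-server
  template:
    metadata:
      labels:
        app: mango-server
    spec:
      containers:
        - name: mango-api
          # Image pushed to Docker Hub by the previous pipeline action
          image: yourdockerhubuser/mango-api:latest
          ports:
            - containerPort: 3000
```

The selector labels must match the pod template labels, or the deployment will not manage its pods.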
Kubectl & options
Now, switch to the Options tab and select the kubectl version. Feel free to use the version supported by your cluster; otherwise, leave it set to latest. Set the rest of the options as shown below, including the Grace period of 30 and the Timeout value.
In a similar fashion, create another K8s deploy action for mango-server-svc – a service that will expose our application (make sure to use a different YAML file for that). The whole pipeline will now look like this:
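For reference, a minimal Service manifest exposing the deployment could look like the sketch below; the LoadBalancer type asks GKE to provision an external IP, and the selector must match the deployment's pod labels (names and ports are assumptions carried over from the deployment sketch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mango-server-svc
spec:
  type: LoadBalancer
  selector:
    app: mango-server
  ports:
    - port: 80          # external port on the load balancer
      targetPort: 3000  # container port of the backend
```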
Testing Kubernetes pipeline
Let's see if everything works as expected. Click the Run Pipeline button in the top-right corner of the page to start the build:
A pop-up will appear with an option to deploy a specific commit. Proceed without changes or select a commit of your choice:
After triggering the execution, you will be directed to the progress page with all your actions listed as either queued, completed or working (in progress). If an error occurs on any of your actions, you can check the logs for details on what happened:
If everything goes through correctly, Buddy will mark the job as successful in a blissful green color:
Here's how the deployment looks running on GKE, exposed by the service that we deployed in the final step:
K8s deployment optimization
Kubernetes is a container-based platform for deploying, scaling and running applications. Buddy lets you automate your Kubernetes delivery workflows with a series of dedicated K8s actions.
Each time you make changes to your application code or Kubernetes configuration, you have two options to update your cluster: kubectl apply or kubectl set image.
In that case, your workflow most often looks like this:
- Edit code or YAML configuration
- Push it to your Git repository
- Build a new Docker image
- Push the Docker image
- Log in to your K8s cluster
- Run kubectl set image
With Buddy you can avoid most of these steps by doing a simple push to Git! :)
Actions used in this guide:
How to automate Kubernetes releases on every push
We now know the basics of Kubernetes deployment strategy. Let's go through some of the cases that will help you optimize your delivery even better.
If you often use kubectl apply or kubectl set image, this is for you!
Configuring delivery pipeline
Add a new pipeline, set the trigger mode to On every push, and select the branch that will trigger the pipeline.
Add the Build Docker image action. Switch to the Options tab and select Docker Hub from the dropdown under Docker registry. Choose the Dockerfile path, Docker repository, and the name of the image that you want to push.
Depending on your scenario, add the Set K8s Image or Apply K8s Deployment action.
You can use the revision number as the image tag with environment variables.
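For example, Buddy exposes the current commit revision in a default environment variable (BUDDY_EXECUTION_REVISION; check your workspace's variable list to confirm), which you can use to build a unique image tag. A quick shell sketch with a fallback for local runs:

```shell
#!/bin/sh
# Use the commit revision as the image tag so every build is uniquely addressable.
# BUDDY_EXECUTION_REVISION is set by Buddy during pipeline runs; fall back locally.
REVISION="${BUDDY_EXECUTION_REVISION:-local-dev}"
IMAGE="yourdockerhubuser/mango-api:${REVISION}"
echo "Image tag: ${IMAGE}"
```

Tagging by revision makes rollbacks trivial: you can point the cluster back at any previous commit's image.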
Example K8s deployment scenarios
Scenario 1: If you use kubectl set image go with Set K8s Image action:
Select which container should be replaced and which image you want to use. Make sure to enter the name and tag of the image from step #2 above.
Buddy will roll out new pods with the updated image version, replacing the old ones.
If you are using a tag which is constant for every execution (e.g. branchName) but different from latest, make sure to set the Pull Policy to Always. Find out more about updating images.
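In the container spec, that setting looks like this (image name and tag are illustrative):

```yaml
containers:
  - name: mango-api
    image: yourdockerhubuser/mango-api:master  # constant tag, not 'latest'
    imagePullPolicy: Always                    # re-pull the tag on every rollout
```

Without Always, Kubernetes may reuse a cached image for a tag it has already pulled, so your new build would never reach the pods.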
Scenario 2: If you use kubectl apply go with Apply K8s Deployment action:
With every change in the YAML config or in your app code, Buddy will apply the deployment and Kubernetes will start transforming the containers to the desired state.
The action will wait for the status of deployment, checking its rollout status. If any errors occur, the pipeline will stop as 'failed'.
How to automate running Kubernetes pods or jobs
If you often run tasks in a container, such as:
- a DB migration during the deployment of a new version
- batch jobs, e.g. creating a directory structure for a new version of your app
you can either use pods or jobs. The first type launches a single pod with the task; the second one launches a series of pods until a specified number of them ends with a successful status.
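The difference is visible in the Job spec, which lets you declare how many successful completions are required; a minimal sketch (names and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-dirs
spec:
  completions: 1   # run until one pod finishes successfully
  backoffLimit: 3  # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox
          command: ["sh", "-c", "mkdir -p /data/v2 && echo done"]
```

A plain pod, by contrast, runs once and is not retried by any controller if it fails.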
Pipeline configuration for running Kubernetes pods or jobs
Let's say you have an application on a K8s cluster and the repository contains the following:
- source code of your application
- a Dockerfile with instructions on creating an image of your app
- DB migration scripts
- a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)
In this case, you can configure a pipeline that will:
A. Build the application and migration images (first action)
B. Push them to the Docker Hub (second action)
C. Trigger the DB migration using the previously built image (third action). You can define the image, commands and deployment using a YAML file:
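Such a YAML file could define a Job that runs the migration image built in step A; a minimal sketch, where the image name, command, and secret name are assumptions for illustration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          # Migration runner image pushed to Docker Hub in step B
          image: yourdockerhubuser/mango-db-migrator:latest
          command: ["npm", "run", "migrate"]
          envFrom:
            - secretRef:
                name: db-credentials  # DB connection details kept out of the image
```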
Once you make a push, the pipeline will automatically build and push the images to the repository and run your migration scripts. How cool is that?
The job action will wait until the command has finished executing. If the exit status is different than 0, the action will be labeled as 'failed'.
D. The last action is using either Apply K8s Deployment or Set K8s Image to update the image in your K8s application. Once you add the action, the whole pipeline will look like this:
When everything is in place, make a push once again and watch Buddy automatically perform the whole workflow.
If you're not sure if you can apply our solution to your workflow, please reach out on the live-chat or drop a line to email@example.com and we'll do it for you.