Kubernetes Deployments: The Ultimate Guide
What are Kubernetes Deployments?
In this post, we will explore Kubernetes workflows and learn how to configure and optimize a delivery pipeline that builds a Docker image of your application and runs it on a K8s cluster.
*(screenshot)*
Benefits of Kubernetes
Kubernetes is a container orchestration platform that runs and scales your applications in the cloud or on a remote machine. To put it simply, think of it as a container manager that automatically handles operations you would otherwise have to perform manually.
Here are a few benefits of using Kubernetes:
- Self-healing abilities – through an automated scheduler, Kubernetes can swap out failing containers for fresh ones in case of errors or timeouts.
- Rollouts and rollbacks – in addition to self-healing, Kubernetes (K8s) performs rollouts on new deployments, similar to blue-green deployments, greatly reducing the chance of downtime.
- Load distribution and auto-discovery – decoupled applications running on Kubernetes can communicate over a local cluster network, reducing the effort needed to expose application addresses. In addition, Kubernetes offers multiple load distribution points: you can distribute load at the ingress layer as well as from the service layer to pods.
- Horizontal and vertical scaling – Kubernetes lets you scale both horizontally and vertically, depending on the scenario. You could run more than 500 containers of the same application and still fine-tune the resources allocated to each container almost effortlessly. This demonstrates the resilience K8s has to offer for your applications!
- Delivery velocity – the pace at which you release your application is business-critical to every team today. Releases used to require several team members working during scheduled maintenance windows, with plenty of interruptions and downtime. Kubernetes, paired with an automated pipeline, lets you ship continuously instead.
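To give the scaling point above a concrete shape, horizontal scaling is typically driven by a HorizontalPodAutoscaler. The sketch below is illustrative; the target deployment name and thresholds are assumptions, not part of this guide's project:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend              # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 500
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```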
Kubernetes Structure
Kubernetes itself is made up of several components. We will not cover all of them in this article; instead, we will focus mostly on containers. We will also be using Docker.
Containers on Kubernetes run in groups known as pods. Containers in a pod share the same network, storage, and address, which means that accessing a pod's address effectively means accessing one of the containers inside it:
*(screenshot)*
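To make the shared-network idea concrete, here is a minimal sketch of a pod with two containers, where the sidecar can reach the main container over localhost (names and images are illustrative, not from this guide's project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # example name
spec:
  containers:
    - name: app                # main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar            # shares the network: can reach "app" at http://localhost:80
      image: busybox:1.36
      command: ["sleep", "3600"]
```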
How does Kubernetes deployment work?
A pipeline can be considered a way to move a service or application from point A to point B. In terms of CI/CD, we can divide pipelines into three types:
- Continuous Integration – tests and versions the code through version control platforms such as GitHub. This is where Buddy comes in, offering an easier and more efficient way to configure pipelines.
- Continuous Delivery – facilitates deployment of applications from version control platforms to the cloud or other vendor-specific services. Delivery pipelines require approval for deployments to specific environments, such as production or client-facing environments.
- Continuous Deployment – facilitates the deployment to the cloud without human interference, approval or input.
An example scenario is pictured below. It involves three backends developed by different teams under different repositories, which can cause friction between the teams if not handled properly:
*(screenshot)*
Kubernetes automation pitfalls
Technology Stack
Back in the day, the deployment stack was built mostly on shell scripts, which often proved complex for team members without prior experience with the stack. Nowadays, almost every platform offers YAML. Being declarative and more transparent, YAML has a fairly gentle learning curve. However, some platforms unfortunately still require shell workarounds on top of YAML.
Security
Security is a critical component of any pipeline, and one of the key issues is handling keys and secrets. In most cases, keys and secrets are added to platforms as environment variables after manual encryption, then translated and decrypted during the build. While defining these jobs, it is very easy to leak such details, either by printing keys or by versioning them into public Docker images. It is also advisable to avoid unrestricted API keys on third-party services.
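On the Kubernetes side, the safer pattern is to keep keys in a Secret and inject them as environment variables instead of hard-coding them. A minimal sketch, where all names and values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials        # hypothetical secret name
type: Opaque
stringData:
  API_KEY: "replace-me"        # never commit real values to version control
---
# Reference the secret from a container instead of baking the key into the image:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - name: app
      image: my-backend:1.0    # illustrative image
      envFrom:
        - secretRef:
            name: api-credentials
```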
How Buddy handles security
- Auto-encryption, plus manual encryption at the push of a button.
- Action-variable suggestions and default environment variables common to repositories.
Ambiguous platform and tool correlation
Platform correlation has to be one of the greatest challenges, and teams handle it differently: from out-of-the-box platform-specific YAML modules to scripted connections. We recommend the module approach over scripted pipelines, which frequently involve several steps (fetching SDKs, authorization, the actual deployment) and therefore tend to become complex, error-prone, and bulky.
Buddy replaces such scripts with declarative pipeline actions defined in a `buddy.yml` file.
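For illustration, a declarative Buddy pipeline might be sketched in `buddy.yml` roughly like this. The exact keys depend on the actions you use, so treat this as an approximation rather than a verbatim configuration:

```yaml
- pipeline: "Build & Deploy backend"
  trigger_mode: "ON_EVERY_PUSH"      # run on each push...
  ref_name: "master"                 # ...to the master branch
  actions:
    - action: "Build Docker image"
      type: "DOCKERFILE"
      dockerfile_path: "Dockerfile"
    - action: "Apply K8s deployment"
      type: "KUBERNETES_APPLY"
```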
How does Kubernetes deployment work? Example pipeline
With all that in mind, and without further theory, let's have a look at workflows. We will demonstrate a simple yet efficient Kubernetes pipeline that syncs a cluster. Here is what we will need to get this application running:
- A repository for our backend
- A pipeline to build and deploy our backend
- A Kubernetes cluster
- An authentication mechanism that allows shipping our backend from Buddy to GKE
Pipeline configuration
Begin by forking the Mangoes project on GitHub. Next, log in to your Buddy account, add a new project, and select the repository from the dropdown list. Once the project is synchronized, Buddy will detect the stack in your repository (this particular backend runs on Express.js):
*(screenshot)*
Our current goal is to deploy a backend to a cluster. To make this goal more specific, let's break it down further into steps:
- Build a Docker Image
- Version the Docker Image to Docker Hub
- Deploy backend to Kubernetes
A few things to keep in mind:
- We are versioning the image on Docker Hub; Kubernetes will fetch the backend's image from Docker Hub when provisioning containers.
- Avoid versioning in public repositories to prevent exposing your code. Cloud vendors such as Google Cloud, as well as Docker, provide private registries.
- Label your pipelines according to their purpose for easy navigation.
Go ahead and click Add a new pipeline. This particular pipeline builds and deploys on every push to the master branch, which qualifies it as a continuous deployment pipeline. Make sure to apply the settings below:
*(screenshot)*
With the pipeline created, we can get down to the actions. Our first step is to build a Docker image. Go ahead and select Build Image:
*(screenshot)*
The next step is providing the name and path to the Dockerfile. A quick glance at the repository shows the details below:
*(screenshot)*
Leave the name as it is (`Dockerfile`) and set the path to the repository root:
*(screenshot)*
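For reference, a typical Dockerfile for an Express.js backend looks roughly like this; the actual file in the Mangoes repository may differ, and the entry point here is an assumption:

```dockerfile
# Illustrative Dockerfile for an Express.js backend
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev          # install production dependencies only
COPY . .
EXPOSE 3000                    # Express's conventional default port
CMD ["node", "server.js"]      # entry point is an assumption
```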
Under the Options tab, add your registry details. If it's your first time working with Kubernetes, I recommend Docker Hub. Otherwise, select the registry matching your vendor, e.g. Google Container Registry.
*(screenshot)*
Our pipeline can now perform the first step towards deploying our application! Let's add the second step by clicking the + icon right below the Build Docker image action:
*(screenshot)*
This new action is going to deploy our backend to a Kubernetes cluster on Google Cloud. Select Apply Deployment in the Kubernetes section of the action list, or type the name into the search input:
*(screenshot)*
On the Setup tab, add your cluster details. Buddy natively integrates with Google Kubernetes Engine, Amazon EKS, and Azure AKS, and also supports private clusters. For the purposes of this guide, we will run our image on Google:
*(screenshot)*
Authentication & config
For Buddy to deploy to GKE, its API calls need to be authenticated. We will use a Service Account as the authentication method in this example. The service account here is very basic, with access to Kubernetes only through the Kubernetes Engine Admin role.
Make sure to create and download a key for your service account from Google Cloud and paste it into the Service Account Key field in Buddy (the service will automatically encrypt your key). The last step is selecting the YAML file with the deployment configuration below:
*(screenshot)*
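If you are writing that configuration from scratch, a minimal deployment manifest might look like this. The name, image, and port are assumptions based on the guide's naming, not the repository's actual file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mango-server                  # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mango-server
  template:
    metadata:
      labels:
        app: mango-server
    spec:
      containers:
        - name: mango-server
          image: yourdockerhubuser/mango-server:latest   # the image pushed in the build step
          ports:
            - containerPort: 3000     # assumed container port
```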
Kubectl & options
Now, switch to the Options tab and select the kubectl version. Feel free to use the version supported by your cluster; otherwise, leave it set to `latest`. Set the rest of the options as shown below, with the Grace period set to `30` and the Timeout to `0`:
*(screenshot)*
Services deployment
In a similar fashion, create another K8s deploy action for `mango-server-svc`, a service that will expose our application (make sure to use a different YAML file for it). The whole pipeline will now look like this:
*(screenshot)*
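As with the deployment, the service manifest can be sketched roughly like this; the ports are assumptions, and the selector must match the deployment's pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mango-server-svc
spec:
  type: LoadBalancer          # provisions an external IP on GKE
  selector:
    app: mango-server         # must match the deployment's pod labels
  ports:
    - port: 80                # port exposed by the service
      targetPort: 3000        # assumed container port
```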
Testing Kubernetes pipeline
Let's see if everything works as expected. Click the Run Pipeline button in the top-right corner of the page to start the build:
*(screenshot)*
A pop-up will appear with an option to deploy a specific commit. Proceed without changes or select a commit of your choice:
*(screenshot)*
After triggering the execution, you will be redirected to the progress page with all your actions listed as queued, completed, or in progress. If an error occurs in any of your actions, you can check the logs for details on what happened:
*(screenshot)*
If everything goes through correctly, Buddy will mark the job as successful in a blissful green color:
*(screenshot)*
Here's how the deployment looks running on GKE, exposed by the service that we deployed in the final step:
*(screenshots)*
K8s deployment optimization
Kubernetes is a container-based platform for deploying, scaling and running applications. Buddy lets you automate your Kubernetes delivery workflows with a series of dedicated K8s actions.
*(screenshot)*
Each time you make changes to your application code or Kubernetes configuration, you have two options to update your cluster: `kubectl apply` or `kubectl set image`.
In that case, your workflow most often looks like this:
- Edit the code or the configuration .yml file
- Push it to your Git repository
- Build a new Docker image
- Push the Docker image
- Log in to your K8s cluster
- Run `kubectl apply` or `kubectl set image`
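Performed by hand, that last step boils down to one of these two commands; the deployment, container, and image names below are just this guide's examples:

```shell
# Option 1: re-apply the full manifest after editing it
kubectl apply -f deployment.yml

# Option 2: replace only the image of a running deployment
kubectl set image deployment/mango-server mango-server=yourdockerhubuser/mango-server:v2
```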
With Buddy you can avoid most of these steps by doing a simple push to Git! :)
How to automate Kubernetes releases on every push
We now know the basics of the Kubernetes deployment strategy. Let's go through some cases that will help you optimize your delivery even better. If you still update your cluster by manually running `kubectl apply` or `kubectl set image`, this is for you!
Configuring delivery pipeline
Add a new pipeline, set the trigger mode to On every push, and select the branch that will trigger the pipeline:
*(screenshot)*
Add the Build Docker image action, switch to the Options tab, and select Docker Hub from the Docker registry dropdown. Choose the Dockerfile path, the Docker repository, and the name of the image that you want to push.
*(screenshot)*
Depending on your scenario, add the Set K8s Image or the Apply K8s Deployment action.
Example K8s deployment scenarios
Scenario 1: If you use `kubectl set image`, go with the Set K8s Image action:
Select which container should be replaced and which image you want to use. Make sure to enter the name and tag of the image from the Build Docker image action above.
Under the hood, Kubernetes will terminate the running containers and start them again with the new image version.
If you tag the image with a constant tag (e.g. `branchName`) other than `latest`, make sure to set the Pull Policy to 'Always'. Find out more about updating images.
*(screenshot)*
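In the deployment manifest, that pull policy corresponds to a fragment like this (image name and tag are illustrative). With a reused, non-`latest` tag, `Always` forces the node to re-pull the image on every rollout instead of using its cached copy:

```yaml
# Fragment of a deployment's pod spec
containers:
  - name: mango-server
    image: yourdockerhubuser/mango-server:master   # constant, non-latest tag
    imagePullPolicy: Always
```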
Scenario 2: If you use `kubectl apply`, go with the Apply K8s Deployment action:
With every change in the YAML config or in your app code, Buddy will apply the deployment and Kubernetes will start transitioning the containers to the desired state.
*(screenshot)*
How to automate running Kubernetes pods or jobs
If you often run tasks in a container, such as:
- DB migrations during the deployment of a new version
- backups
- batch jobs, e.g. creating a directory structure for a new version of your app
you can use either pods or jobs. The first type launches a single pod with the task; the second launches a series of pods until a specified number of them end with a successful status.
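A migration job, for instance, could be sketched like this; the job name, image, and command are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration            # hypothetical job name
spec:
  completions: 1                # run until one pod finishes successfully
  backoffLimit: 3               # retry failed pods up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: yourdockerhubuser/db-migration-runner:latest   # the "DB migration runner" image
          command: ["npm", "run", "migrate"]                    # illustrative command
```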
Pipeline configuration for running Kubernetes pods or jobs
Let's say you have an application on a K8s cluster and the repository contains the following:
- source code of your application
- a Dockerfile with instructions on creating an image of your app
- DB migration scripts
- a Dockerfile with instructions on creating an image that will run the migration during the deployment (a DB migration runner)
In this case, you can configure a pipeline that will:
A. Build the application and migration images (first action)
B. Push them to Docker Hub (second action)
*(screenshot)*
C. Trigger the DB migration using the previously built image (third action). You can define the image, commands, and deployment using a YAML file:
*(screenshot)*
Once you make a push, the pipeline will automatically build and push the images to the repository and run your migration scripts. How cool is that?
D. The last action uses either Apply K8s Deployment or Set K8s Image to update the image in your K8s application. Once you add the action, the whole pipeline will look like this:
*(screenshot)*
When everything is in place, make a push once again and watch Buddy automatically perform the whole workflow.
Jarek Dylewski
Customer Support
A journalist and an SEO specialist trying to find himself in the unforgiving world of coders. Gamer, a non-fiction literature fan and obsessive carnivore. Jarek uses his talents to convert the programming lingo into a cohesive and approachable narration.