Docker and Kubernetes workflows on Buddy
In this post, we'll explore Kubernetes workflows and learn how to configure a delivery pipeline that builds a Docker image of your application and runs it on a K8s cluster.
Benefits of Kubernetes
Kubernetes is a container orchestration platform that runs and scales your application in the cloud or on a remote machine. To make it a tad easier, think of it as a container manager that automatically handles operations you would otherwise have to perform manually.
Here are a few benefits of using Kubernetes:
- Self-healing – through its automated scheduler, Kubernetes can swap out failing containers with fresh ones in case of errors or timeouts.
- Rollouts and rollbacks – in addition to self-healing, Kubernetes (K8s) performs rollouts on new deployments, similar to blue-green deployments, greatly reducing the chance of downtime.
- Load distribution and service discovery – decoupled applications running on Kubernetes can communicate over a local cluster network, reducing the effort needed to expose application addresses. In addition, Kubernetes has multiple load-distribution points: you can distribute load at the ingress layer as well as from the service layer to pods.
- Horizontal and vertical scaling – Kubernetes lets us scale both horizontally and vertically, depending on the scenario. You could run more than 500 containers of the same application and still manage the resources allocated to each container almost effortlessly. This shows the resilience K8s has to offer for your applications!
- Delivery velocity – the pace at which you release your application is business-critical to every team today. Back in the day, releases required a number of team members working during scheduled maintenance windows, with plenty of interruptions and downtime. Even without Continuous Deployment, Kubernetes can facilitate and manage releases of all sizes with almost no downtime.
Kubernetes itself is made up of several components. We will not cover all of them in this article; we'll focus mostly on containers. We will also be using Docker.
Containers on Kubernetes run in groups known as pods. Containers in a pod share the same network, storage and address. This means accessing a pod's address would, in reality, mean accessing one of the containers in the pod:
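As an illustration, a minimal pod with a single container can be declared like this (the names here are hypothetical, not taken from the example project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mango-pod            # hypothetical pod name
spec:
  containers:
    - name: mango-api        # containers in this pod share network and storage
      image: nginx:latest    # any container image works here
      ports:
        - containerPort: 80  # reachable via the pod's shared IP address
```

In practice you rarely create bare pods; deployments (shown later in this guide) manage pods for you.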
Bear in mind that while you don't strictly need a pipeline to get an application running in the cloud – the vendor SDKs let you deploy from your machine – at a larger scale, teams will find it highly inefficient to rely on local deployments.
Example K8s deployment scenario
A pipeline can be considered as a way to move a service or application from point A to B. In terms of CI/CD, we can divide them into three types:
- Continuous Integration – tests and versions the code through version control platforms such as GitHub. That's where Buddy comes in, offering an easier and more efficient way to configure pipelines.
- Continuous Delivery – facilitates deployment of applications from version control platforms to the cloud or other vendor-specific services. Delivery pipelines require approval for deployments to specific environments, such as production or client-facing environments.
- Continuous Deployment – facilitates the deployment to the cloud without human interference, approval or input.
The microservice pattern introduced a new way of implementing software. Consider it a modular pattern: several moving parts, all unified to render a single application.
An example scenario is pictured below. It involves 3 backends developed by different teams, under different repositories, possibly causing some friction between the teams if not handled properly:
With or without DevOps personnel, your team shouldn't have to worry about ops-related issues, like figuring out delivery of the three application components. The most important thing is maintaining the focus on the product.
Issues teams face when automating:
Back in the day, the deployment stack was built mostly on shell scripts. This often proved complex for team members without previous experience with the stack. Nowadays, almost every platform offers YAML. As a declarative and more transparent format, YAML has a fairly gentle learning curve. However, some platforms unfortunately still require shell workarounds inside YAML.
Buddy tackles those issues thanks to its intuitive GUI and declarative YAML configuration of your pipelines.
Security is a critical component of any pipeline. One of the key security issues is the handling of keys and secrets. In most cases, keys and secrets are added to platforms as environment variables after manual encryption, then decrypted during the build. While defining these jobs, it becomes very easy to leak these details, either by printing keys or by versioning them into public Docker images. It is also advisable to avoid unrestricted API keys on third-party services.
How Buddy handles security:
- Auto-encryption and manual encryption at the push of a button
- Action-variable suggestions and default environment variables common to repositories
Ambiguous platform and tool correlation
Platform correlation has to be one of the greatest challenges. Various teams handle it differently: from out-of-the-box platform-specific YAML modules to scripted connections. It is recommended to take the module approach in lieu of scripted pipelines, which frequently involve several steps: fetching SDKs, authorization, and the actual deployment. Scripting often leads to fairly complex, error-prone, and bulky pipelines.
Buddy offers a variety of integrations with major providers and a schema-rich buddy.yml file with declarative pipeline actions.
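To give a feel for the declarative style, a buddy.yml describing a build-and-deploy pipeline might be sketched like this (action and field names are simplified assumptions for illustration, not a verbatim copy of Buddy's schema):

```yaml
- pipeline: "Build & deploy backend"
  trigger_mode: "ON_EVERY_PUSH"   # run on each push to the tracked branch
  ref_name: "master"
  actions:
    - action: "Build Docker image"
      type: "DOCKERFILE"
      dockerfile_path: "Dockerfile"     # path inside the repository
    - action: "Deploy to cluster"
      type: "KUBERNETES_APPLY"
      config_path: "deployment.yml"     # Kubernetes manifest to apply
```

The point is that each step is a named module with a handful of fields, rather than a hand-rolled shell script.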
In this example, we will be deploying a Mangoes API backend from a GitHub repository into a Kubernetes cluster running on the Google Kubernetes Engine (GKE). Mangoes are fun!
With all that in mind, and without further theory, let's have a look at the workflow. We will demonstrate a simple yet efficient pipeline that syncs a Kubernetes cluster. Here is what we need to get this application running:
- A repository for our backend
- A pipeline to build and deploy our backend
- A Kubernetes cluster
- An authentication mechanism that allows shipping our backend from Buddy to GKE
Begin by forking the Mangoes project on GitHub. Next, log in to your Buddy account, add a new project, and select the repository from the dropdown list. Once the project is synchronized, Buddy will detect the stack in your repository (this particular backend runs on Express.js):
Our current goal is to deploy a backend to a cluster. To make this goal more specific, let's break it down further into steps:
- Build a Docker Image
- Version the Docker Image to Docker Hub
- Deploy backend to Kubernetes
- We are versioning the image on Docker Hub. Kubernetes will fetch the backend's image from Docker Hub when provisioning containers.
- Avoid versioning on public repositories to prevent exposing your code. Cloud vendors such as Google Cloud, as well as Docker, provide private registries.
- It is important to label your pipelines according to their purpose, for easy navigation.
Go ahead and click Add a new pipeline. This particular pipeline builds and deploys on every push to the master branch, which qualifies it as a Continuous Deployment pipeline. Make sure to apply the settings below:
With the pipeline created, we can get down to the actions. Our first step is to build a Docker image. Go ahead and select Build Image:
Take note that Buddy already identifies resources in the pipeline's repository and puts forward suggestions!
The next step will be providing a name and path to the Dockerfile. A quick glance at the repository will show the details as below:
Leave the name as it is (Dockerfile) and set the path as the repository root:
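For reference, a typical Dockerfile for an Express.js backend like this one could look as follows (a sketch under common conventions; the entry file and port are assumptions, and the file in the repository may differ):

```dockerfile
# Small Node.js base image
FROM node:lts-alpine
WORKDIR /app

# Install dependencies first to make better use of layer caching
COPY package*.json ./
RUN npm install --production

# Copy the application source
COPY . .

# Port the Express server listens on (assumed)
EXPOSE 3000
CMD ["node", "index.js"]
```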
Under the Options tab, add your registry details. If it's your first time with Kubernetes, I recommend Docker Hub. Otherwise, select the registry matching your vendor, e.g. Google Container Registry.
Never bake keys or credentials into your Docker images – especially public!
The Docker image for the Mangoes app is also publicly available here.
Our pipeline can now identify the first step towards deploying our application! Let's add in the second step by clicking the + icon right below the Build Docker Image action:
This new action is going to deploy our backend to a Kubernetes cluster on Google Cloud. Select Apply Deployment in the Kubernetes section of the action or enter the name in the search input:
On the Setup tab, add your cluster details. Buddy natively integrates with Google Kubernetes Engine, Amazon EKS, and Azure AKS, and also supports private clusters. For the purposes of this guide, we will run our image on Google:
Authentication & config
In order for Buddy to deploy to GKE, its API calls need to be authenticated. We will use Service Accounts as the authentication method in this example. In this case, the service account is very basic, with access to Kubernetes only through the Kubernetes Engine Admin role.
To learn more about gcloud's roles and permissions, refer here.
Make sure to create and download a key for your service account from Google Cloud and paste it in the Service Account Key field on Buddy (the service will automatically encrypt your key). The last step is selecting the YAML file with the configuration file below:
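The configuration file referenced above is a standard Kubernetes Deployment manifest. A minimal sketch is shown below – the names, label values, replica count, image reference, and port are assumptions for illustration, not the exact contents of the example repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mango-server              # hypothetical deployment name
spec:
  replicas: 2                     # run two pods for redundancy
  selector:
    matchLabels:
      app: mango-server           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: mango-server
    spec:
      containers:
        - name: mango-api
          image: docker.io/youruser/mangoes:latest  # the image pushed in the build step
          ports:
            - containerPort: 3000 # port the backend listens on (assumed)
```

Applying this manifest tells the cluster to keep two pods of the backend running at all times.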
Kubectl & options
Now, switch to the Options tab and select the kubectl version. Feel free to use the version supported by your cluster; otherwise, leave it set to latest. Set the rest of the options as shown below, with the Grace period set to 30 and the Timeout to the value shown:
In a similar fashion, create another K8s deploy action for mango-server-svc – a service that will expose our application (make sure to use a different YAML file for that). The whole pipeline will now look like this:
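The service mentioned above can be sketched as a standard Kubernetes Service manifest – the selector label, service type, and ports here are assumptions and must match whatever labels and container port your deployment actually uses:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mango-server-svc
spec:
  type: LoadBalancer        # on GKE, provisions an external IP for the app
  selector:
    app: mango-server       # must match the pod labels from your deployment
  ports:
    - port: 80              # port exposed by the service
      targetPort: 3000      # container port of the backend (assumed)
```

The service routes external traffic on port 80 to the backend pods selected by the label.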
Testing the pipeline
Let's see if everything works as expected. Click the Run Pipeline button in the top-right corner of the page to start the build:
A pop-up will appear with an option to deploy to a specific commit. Proceed without change or select a commit of your choice:
After triggering the execution, you will be directed to the progress page with all your actions listed as either queued, completed or working (in progress). If an error occurs on any of your actions, you can check the logs for details on what happened:
If everything goes through correctly, Buddy will mark the job as successful in a blissful green color:
Here's how the deployment looks running on GKE, exposed by the service that we deployed in the final step:
In this guide, we explored the recommended approach to deploying Dockerized applications on Kubernetes. We also saw how to effectively leverage Buddy's GUI and how easy it is to plug in Google Cloud. Consider this a starting point for greater workloads!