July 23, 2019

Your First Kubernetes Cluster Deployment

Now that we have an understanding of what a container can be used for and the important parts that make up the ever-popular container orchestrator, Kubernetes, it's time for us to spin up a small cluster. Kubernetes as a project moves very quickly, and only recently has the release cadence slowed.

Because of the popularity and velocity of the project, there are certainly a lot of choices when looking at Kubernetes vendors. Vendors try to differentiate themselves from the stigma of “Just Another Kubernetes Vendor / Platform” with a host of prescriptions around operations and additional value-adds.

Competition is fierce, from public cloud vendors offering services such as Google’s Kubernetes Engine and Amazon’s Elastic Kubernetes Service to Platform-as-a-Service vendors such as Red Hat’s OpenShift and Pivotal’s PKS, all vying for mindshare and your hard-earned dollars.

Feel free to go through the blog post or video to see a strategy for creating a Kubernetes Cluster of your very own! We will skip some of the operational items for now and focus on your first K8s deployment rather than on administering the platform.

Watch and Learn

A short video on spinning up Minikube on your Mac using Homebrew and running through the commands in this tutorial.

Minikube - Local Kubernetes

The quickest way to interact with Kubernetes on your laptop, in my opinion, is Minikube. Minikube has been around since the summer of 2016 and tries to match Kubernetes’ minor release versions.
Installing Minikube on your Mac is super simple using a package manager like Homebrew. You will also need the virtualization piece, which is VirtualBox. If your installation method does not come with VirtualBox, it is a quick install.

You can follow these steps to install Minikube and VirtualBox from Homebrew; the commands are collected into a single terminal session just after the list.

  • Install Homebrew from the Terminal.
  • Join the Cask Channel for more software with the terminal command “brew tap caskroom/cask”.
  • Install VirtualBox with “brew cask install virtualbox”.
  • Install Minikube/KubeCTL with “brew cask install minikube”.
  • By default, 4GB of memory is allocated. You can give Minikube more resources, e.g. 8GB with “minikube config set memory 8192”, after Minikube has been installed.
  • Start Minikube with “minikube start”.
  • Fire up your K8s dashboard with “minikube dashboard”.
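For reference, here is the same flow as a single terminal session. This is a sketch based on the steps above; it assumes Homebrew is already installed, and the cask names reflect Homebrew at the time of writing.

    # Tap the cask repository, then install VirtualBox and Minikube (kubectl comes along)
    brew tap caskroom/cask
    brew cask install virtualbox
    brew cask install minikube

    # Optional: give the Minikube VM more memory (8GB), then start it
    minikube config set memory 8192
    minikube start

    # Open the Kubernetes dashboard in your browser
    minikube dashboard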

With those steps you are ready to head to the KubeCTL Command Line Interface and go!

Chicken or the Egg - KubeCTL First or Last?

Back in part two, we talked about mechanisms to interact with our Kubernetes Clusters. KubeCTL, the command line interface for Kubernetes, is one of the primary mechanisms.

If this is your first install of Minikube, the latest version of KubeCTL will match the latest version of Kubernetes inside Minikube, because KubeCTL is installed by the Minikube Brew installer. The versions of KubeCTL and Kubernetes need to match, or you might get some weird API warnings down the road.
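A quick way to sanity-check that the client and cluster versions line up is kubectl's version command (the --short flag is used here for brevity; the exact version numbers below are illustrative and will depend on your install):

    # Show the kubectl client version and the Kubernetes server version inside Minikube
    kubectl version --short
    # Client Version: v1.15.x
    # Server Version: v1.15.x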

I tend to install KubeCTL first, then wire my Kubernetes Cluster to KubeCTL. The good news is you can have more than one KubeCTL Context, which means you can have more than one cluster. For this example, though, letting the Minikube installer take care of that is perfectly fine, and you can focus on the harder stuff like YAML.
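If you are curious which contexts you have wired up, kubectl can list and switch between them. The context name “minikube” below is what a default Minikube install typically creates; your own cluster contexts will have their own names.

    # List all configured contexts and show which one is active
    kubectl config get-contexts
    kubectl config current-context

    # Switch to the Minikube context (if it is not already active)
    kubectl config use-context minikube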

A little YAML never hurt anyone

YAML, which officially stands for “YAML Ain’t Markup Language” (originally “Yet Another Markup Language”), is the primary descriptor language used in Kubernetes. One of the biggest benefits of Kubernetes is that Kubernetes is declarative; you declare what you want and K8s figures out the rest. We can describe an application, let’s say our Kung-Fu-Canary application endpoint(s), in a kung-fu-canary.yaml.

If you have not dealt with YAML before, be forewarned that YAML is whitespace-sensitive: indentation with spaces (not tabs) defines the data structure. Gasp, the tabs vs. spaces war rages on. For the command-line folks, I would encourage getting a linter (code validator) of some sort.

Getting your hands around the YAML syntax is pretty straightforward. Kubernetes uses YAML maps and YAML lists to describe state / a deployment, as in the small example below. Wikipedia has a pretty solid explanation of YAML syntax. Here at Harness, we will provide you with an exemplar Kung-Fu-Canary to get your learn on.
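As a quick, generic illustration (these keys are just for show, not the full kung-fu-canary manifest), a YAML map is a set of key/value pairs, and a YAML list is a dash-prefixed sequence of items:

    # A map: keys with values, nesting expressed purely by indentation
    metadata:
      name: kung-fu-canary
      labels:
        app: kung-fu-canary

    # A list: each item starts with a dash; here, a list of container maps
    containers:
      - name: kung-fu-canary
        image: nginx
        ports:
          - containerPort: 80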

Kung-Fu-Canary, Your First Deployment!

We can use this simple-ish deployment as an example. In Kubernetes speak, a Deployment is the declarative/desired state of a Pod and Replica Set. The benefit of a Deployment is the ability to manage the Deployment as a whole; you can update, revert, remove, etc.

I posted the kung-fu-canary Deployment on GitHub for your copy-and-paste pleasure. Feel free to download it for the exercise.

Kung-Fu-Canary - Kubernetes Deployment


Let’s break down a few pieces of the glorious kung-fu-canary.yaml. We declare that this particular resource/YAML is a Deployment. In the Deployment Specification, we define the minimum number of Replicas we need, what to do in case there is an update/upgrade, and a strategy for how the Rolling Update is supposed to perform.

Kubernetes only takes action on what is declared and what Kubernetes can interpret. Readiness and Liveness Checks are important. Without them, Kubernetes will only restart/replenish your containers if there was an event such as a SIGTERM totally killing a container. The Readiness and Liveness Checks tell Kubernetes more about the running application and what to do if the application is unhealthy.
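To make that concrete, here is a rough sketch of the kind of manifest being described. It is not the exact kung-fu-canary.yaml from GitHub; the image, probe paths, and timings are illustrative placeholders.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kung-fu-canary
      labels:
        app: kung-fu-canary
    spec:
      replicas: 3                    # minimum number of Replicas we want running
      strategy:
        type: RollingUpdate          # how an update/upgrade should roll through the Pods
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
      selector:
        matchLabels:
          app: kung-fu-canary
      template:
        metadata:
          labels:
            app: kung-fu-canary
        spec:
          containers:
            - name: kung-fu-canary
              image: nginx
              ports:
                - containerPort: 80
              readinessProbe:        # is this container ready to receive traffic?
                httpGet:
                  path: /
                  port: 80
                initialDelaySeconds: 5
                periodSeconds: 10
              livenessProbe:         # is this container still healthy? restart it if not
                httpGet:
                  path: /
                  port: 80
                initialDelaySeconds: 15
                periodSeconds: 20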
With that kung-fu-canary.yaml, onwards to the deployment!

A Handful of KubeCTL Commands

Remembering a handful of commands will make you dangerous in the K8s world. Let’s go through KubeCTL apply, get, describe, and scale. The Kubernetes documentation does have a lengthy cheat sheet but we can focus on the basics here.

One of the first commands to use would be “kubectl apply”. With the Apply command, you are executing your deployable YAML. I am more of a stickler than most: the operations I need to perform in Kubernetes, I keep in YAML for versioning, etc. We can deploy our kung-fu-canary deployment with “kubectl apply -f kung-fu-canary.yaml”. Now your first Deployment is being deployed!
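Assuming kung-fu-canary.yaml is in your current directory, the apply step looks roughly like this (the confirmation line may vary slightly by Kubernetes version):

    $ kubectl apply -f kung-fu-canary.yaml
    deployment.apps/kung-fu-canary created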

You can navigate to the WebUI [run “minikube dashboard” in a terminal in case you closed it] and see your Deployment!

WebUI - Kubernetes


Now that you have your first Deployment, let’s take a look at what's going on under the covers. With the Get command, we can get a list of running Pods (remember from part two the mighty Pod) or other specified resources with “kubectl get pods.”

Now that we know our Pods, we can take a deeper look at them. With the Describe command, you can see the details of resources, in this case our kung-fu-canary Pods. Taking a Pod listed by “kubectl get pods”, let's run “kubectl describe pod pod_id”. Look at that lovely state information.
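Put together, the two commands look something like the sketch below; the generated Pod name suffixes are placeholders and will differ on your cluster:

    $ kubectl get pods
    NAME                              READY   STATUS    RESTARTS   AGE
    kung-fu-canary-xxxxxxxxxx-aaaaa   1/1     Running   0          2m
    kung-fu-canary-xxxxxxxxxx-bbbbb   1/1     Running   0          2m
    kung-fu-canary-xxxxxxxxxx-ccccc   1/1     Running   0          2m

    # Describe one of the Pods listed above to see its events, probes, and state
    $ kubectl describe pod kung-fu-canary-xxxxxxxxxx-aaaaa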

What if we need some more or less firepower from our Replica Sets? Our example kung-fu-canary Deployment leverages three Replicas. What if we want to increase that number to four? The Scale command can set a new size for a number of resource types. Simply run “kubectl scale --current-replicas=3 --replicas=4 deployment/kung-fu-canary”, which will scale from 3 to 4 Replicas.
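A quick sketch of the scale step and a verification (the exact confirmation wording can differ between kubectl versions):

    $ kubectl scale --current-replicas=3 --replicas=4 deployment/kung-fu-canary
    deployment.apps/kung-fu-canary scaled

    # Confirm four Pods are now running
    $ kubectl get pods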

Bonus Command: Expose

Without having access to our application, our application would be very lonely. The Expose command will create a Service to expose your application. To expose our kung-fu-canary deployment in the simplest sense, we can run “kubectl expose deployment kung-fu-canary --port=80 --type=LoadBalancer”. You can find out the public IP address of your cluster, which in Minikube is where the Master is running, by running “kubectl cluster-info”.

You can inspect your new running Service with “kubectl describe service kung-fu-canary”. Take note of the NodePort. To access your nginx instance, you can cheat and use the Minikube command “minikube service kung-fu-canary”, which will bring up the public_ip:NodePort for you auto-magically.
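The whole expose-and-inspect flow, collected into one rough terminal sketch:

    $ kubectl expose deployment kung-fu-canary --port=80 --type=LoadBalancer
    service/kung-fu-canary exposed

    # Where is the cluster / Master running?
    $ kubectl cluster-info

    # Inspect the new Service and note the NodePort
    $ kubectl describe service kung-fu-canary

    # Minikube shortcut: open public_ip:NodePort in your browser
    $ minikube service kung-fu-canary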

Expose Command

Cleanup

Once we are all done with this example deployment, like good citizens, we can clean up what we did.

With the Delete command, we can remove the Deployment that we created. We wrapped our deployable into a Deployment, so we can simply run “kubectl delete deploy/kung-fu-canary”. You can validate your Pods are gone with “kubectl get all” (no kung-fu Pods should show up) or with our trusted friend “kubectl get pods”, which should return no resources.

If you chose to run the bonus command, you can delete the Service that exposed kung-fu-canary (conveniently also named kung-fu-canary) with “kubectl delete service kung-fu-canary”. You can validate the Service is gone with “kubectl get services”; kung-fu-canary should not be there.

Lastly, we can shut down our old faithful friend Minikube by running “minikube stop”. If you want to start from scratch or make more changes, you can run “minikube delete”.
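The cleanup steps, collected into one short terminal sketch:

    $ kubectl delete deploy/kung-fu-canary
    deployment.apps "kung-fu-canary" deleted

    # Only needed if you ran the bonus Expose command
    $ kubectl delete service kung-fu-canary

    # Validate everything is gone, then shut down (or wipe) Minikube
    $ kubectl get pods
    $ minikube stop        # or "minikube delete" to start completely fresh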

What About the Actual Kubernetes Cluster?

In our next part, part four, we will start talking about the Kubernetes Cluster itself. It is important to keep in the back of your head that your application is separate from the actual Kubernetes platform. Your application and your Kubernetes Cluster have different scaling mechanisms.

As a tl;dr, most Kubernetes operations boil down to adding/removing nodes, replacing pluggable pieces, and upgrading the platform. No mystery there; it is like any other piece of software.

Harness, Supercharging your Kubernetes Deployments

As we peel away the Kubernetes onion, the “magic” that Kubernetes performs boils down to a declaratively driven system. We describe the end state, and Kubernetes, with its scheduler and resource manager capabilities, strives to provide what we describe. Pulling off more complicated tasks, though, such as a canary release or release validation, requires several well-orchestrated steps and underlying descriptions of those steps.

The Harness Platform makes those items incredibly easy. Now, with your Kubernetes knowledge, you can install the Harness Delegates into a Kubernetes environment, even Minikube! Stay tuned for part four of the blog series, operationalization.
-Ravi
