Kubernetes in Docker on Ubuntu and Mac OS X

March 8, 2020

Kubernetes and the ecosystem around it have developed a lot during the last few years, making Kubernetes much more accessible.
In this article, whose purpose is mostly to scribble down some notes for myself, I am going to describe how to run Kubernetes in Docker containers using Kind (Kubernetes in Docker) on Ubuntu and Mac OS X.

Update 2020-03-29:

After having used Kind some time I have encountered two issues that have caused me to continue my search for a development Kubernetes cluster:
  • A Kind cluster cannot be restarted.
  • Kind does not support services of type LoadBalancer.

For an alternative to Kind, please refer to my article on running K3S in virtual machines.

A word of caution:

The creator of Kind recommends it only for testing Kubernetes or testing Kubernetes applications, and states that it is not suitable for production use.

Prerequisites

Since Kubernetes is to be run in Docker, Docker is an obvious prerequisite. If Docker isn’t installed already, install it before continuing.
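On Ubuntu, one minimal way to install Docker is from the standard package repositories, as sketched below; the official Docker documentation describes other installation methods, and on Mac OS X Docker Desktop is installed separately:

sudo apt-get update
sudo apt-get install -y docker.io
docker --version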

Installing Kind

Installing Kind is accomplished in the same way for Ubuntu and Mac OS X:

  • Check which version to install by opening the following URL in a browser:
    https://github.com/kubernetes-sigs/kind/releases/
    At the time of writing, the latest version is 0.7.0 and this is the version I will be installing.
  • Open a terminal window.
  • Download the Kind binary:
    Replace the “v0.7.0” in the URL below with the version from the previous step!
    curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64"
  • Make the Kind binary executable:
    chmod +x kind
  • Move the Kind binary to an appropriate location:
    sudo mv ./kind /usr/local/bin/kind
  • Verify the Kind installation:
    kind --version

With Kind successfully installed, you should see something like the following in the terminal as a result of the last command:

kind version 0.7.0
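On Mac OS X, Kind can alternatively be installed with Homebrew, assuming Homebrew is already installed; note that the version provided by Homebrew may differ from the one downloaded above:

brew install kind
kind --version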

Installing kubectl

The kubectl tool is a command-line tool for interacting with Kubernetes clusters.

I will install the latest version of kubectl:

  • Ubuntu: Download the latest version of kubectl for Ubuntu:
    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  • Mac OS X: Download the latest version of kubectl for Mac OS X:
    curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
  • Make the kubectl binary executable:
    chmod +x kubectl
  • Move the kubectl binary to an appropriate location:
    sudo mv ./kubectl /usr/local/bin/kubectl
  • Verify the kubectl installation:
    kubectl version --client

If kubectl has been successfully installed, the result of the last command should generate output similar to the following:

Client Version: version.Info{Major:"1", Minor:"15",
GitVersion:"v1.15.5",
GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea",
GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z",
GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}

Install kubectl Autocompletion for Ubuntu

The following commands install kubectl autocompletion in the bash shell; the first enables it for the current shell and the second enables it for all new bash shell sessions:

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Install kubectl Autocompletion for Mac OS X

To install kubectl autocompletion for Mac OS X, install and activate the bash-completion package like this:

  • Install the bash-completion package:
    brew install bash-completion
  • Add the following lines to the .bash_profile file in the home directory of your user:
    if [ -f $(brew --prefix)/etc/bash_completion ]; then
    . $(brew --prefix)/etc/bash_completion
    fi
  • Activate bash completion by reloading the bash shell:
    source ~/.bash_profile
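Regardless of operating system, kubectl completion can also be attached to a shell alias. The alias k below is only an example and not part of the steps above; add the lines to ~/.bashrc on Ubuntu or ~/.bash_profile on Mac OS X:

# k is an example alias; __start_kubectl is defined by the kubectl completion script
alias k=kubectl
complete -o default -F __start_kubectl k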

Starting a Cluster

With Kind and kubectl in place, a Kubernetes cluster can now be created and started using Kind:

  • kind create cluster

When Kind has successfully started the Kubernetes cluster, the console should contain output similar to that in the following picture:

Starting a Kubernetes cluster with Kind.

If the kubectl command suggested in Kind’s output is issued like this:

sudo kubectl cluster-info --context kind-kind

The following output can be seen in the terminal window:

Kubernetes master is running at https://127.0.0.1:32768
KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
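Kind can also list the clusters it manages and print the kubeconfig it generated, which is handy when running more than one cluster. The commands below are a sketch based on the Kind 0.7.0 CLI; kind --help lists the available subcommands:

sudo kind get clusters
sudo kind get kubeconfig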

Stopping a Cluster

On occasion you may also want to stop a cluster and remove all Docker containers associated with it:

  • In the terminal, delete the cluster using:
    sudo kind delete cluster
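To verify that the cluster’s containers are indeed gone, any remaining Kind containers can be listed; the name filter below assumes the default cluster name kind:

docker ps --filter name=kind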

Restarting a Cluster

I spent some time working with Kind and setting up a cluster, and then realized that it was getting a bit late in the evening, so I shut down my computer and went to bed. In the morning, I tried to continue where I had left off the day before, only to find that, at the time of writing, Kind clusters do not survive a host restart and it is not possible to restart a Kind cluster.

So for the time being, I will have to delete my cluster and (re)create it:

  • In a terminal window, issue the following commands:
    sudo kind delete cluster
    sudo kind create cluster
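Since the recreated cluster starts out empty, any Kubernetes resources must be created again, for example by re-applying manifests kept under version control. The file name manifests.yaml below is hypothetical:

sudo kubectl apply -f manifests.yaml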

Adding Nodes to a Cluster

Adding nodes to a Kubernetes cluster running in Docker containers on a single computer will of course not improve the performance or availability of the services running in the cluster, but it can be useful from an educational point of view.
When looking at different alternatives, I tried to run Kind in Ubuntu VMs using Multipass. It worked well with a single-node cluster, but as soon as I tried adding one or more nodes, Kind failed to create the cluster and reported an error. I have not investigated this further.

Running a multi-node cluster with Kind in Ubuntu desktop on a laptop with 8GB RAM posed no problems.

To extend a Kubernetes cluster created with Kind with a worker node running in another container, follow these steps:

  • Create a file named “config.yaml” with the contents below.
    Nodes of the types control-plane or worker may be added as desired; a larger example is sketched after this list.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
- role: control-plane
- role: worker
  • If there is a cluster created with Kind running, stop and remove it.
    sudo kind delete cluster
  • Start a new cluster using the configuration file created earlier:
    sudo kind create cluster --config=config.yaml
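As an aside, the configuration file may declare more nodes if desired. The following is a sketch of a three-node variant, written to a file with the hypothetical name config-three-nodes.yaml; the rest of this article continues with the two-node config.yaml above:

cat > config-three-nodes.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
- role: control-plane
- role: worker
- role: worker
EOF
sudo kind create cluster --config=config-three-nodes.yaml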

When Kind has successfully started the Kubernetes cluster, the console should contain output similar to that in the following picture:

Starting a multi-node Kubernetes cluster with Kind.

Note that there are two small boxes after the “Preparing nodes” message this time.
We can also verify the number of nodes in the cluster using kubectl:

kubectl get nodes

With the above configuration, the result is:

NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   22m   v1.17.0
kind-worker          Ready    <none>   21m   v1.17.0

If the docker ps command is run, the following Kind-related containers can be seen:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                    PORTS                                            NAMES
ace81e33b019        kindest/node:v1.17.0   "/usr/local/bin/entr…"   28 minutes ago      Up 27 minutes             127.0.0.1:32768->6443/tcp                        kind-control-plane
0ead1dda6f66        kindest/node:v1.17.0   "/usr/local/bin/entr…"   28 minutes ago      Up 27 minutes                                                              kind-worker
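As a quick smoke test of the cluster, a deployment can be created and its pod inspected to see which node it was scheduled on. The nginx image and deployment name below are just examples:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide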

There is much more to Kubernetes and Kind but this will have to conclude these notes. If you have followed the instructions here, the result is an empty Kubernetes cluster in which you can experiment and/or educate yourself to your heart’s content.

Happy coding!
