Sailing the 7 seas with Kubernetes Helm

Helm is THE Kubernetes package manager. Think of it as a way to yum install your applications – but in Kubernetes. You might ask, “Heck, a deployment will give me most of what I need. Why don’t I just create a pod spec?” The thing is that Helm gives you a way to template those specs, and a workflow for managing those templates, in what it calls “charts”. Today we’ll step through the process of installing Helm, installing a pre-built pod given an existing chart, and then we’ll make a chart of our own and deploy it – and we’ll change some template values so that it runs in a couple different ways. Ready? Anchors aweigh…

These instructions are based on the official quickstart guide. We’ll then extrapolate from there, as there are a few considerations to handle ourselves:

  1. We need to configure RBAC, which isn’t officially covered yet. The official suggestion is to just turn off RBAC – nope, not going to do that.
  2. We’re also going to make our own Helm charts, which aren’t covered in the quick start guide, so we’ll expand from there.

Requirements

So, this assumes you’ve already got a Kubernetes cluster up and running. As usual, these articles assume CentOS 7.3. It might not strictly require CentOS 7.3 this time, but just know that’s my reference, and I’m using Kubernetes 1.6.

If you don’t have a Kubernetes cluster up, may I recommend using my kube-ansible playbooks – and I’ve got an article detailing how to use those playbooks.

Optionally – you can create persistent volumes. You can skip this step if you want, as the example chart we’ll install supports volume persistence; we’ll run it with persistence turned off, but it’s “more realistic” if-you-will with it on. If you don’t have persistent volumes set up, you might want to try my method for using GlusterFS to back persistent volumes, as detailed in this blog post.

About Helm

Helm is really two parts: a client and a server. The client is helm and the server is tiller – all the boat references! And the definitions of your applications are called “charts”.

So if you’re at the helm of a ship, and you steer (according to your charts), you’d move your tiller. See? All the ships!

These charts are essentially templates for how to deploy your pods. Without Helm, you’d just create specs – yaml files which define how a pod is to be run. Using Helm, we can make charts, which make for more flexible specs. That way we can run the same application with differing parameters, in the same cluster or a different one.

Why not template them with Ansible, then? You could do that, too. But using Helm gives us a more direct workflow for defining charts and deploying them. It frees up our playbooks for lower-level infrastructure creation, keeps our applications abstracted from that layer, and lets us leverage what Kubernetes has to offer without overly complicating our playbooks for applications – which will likely require more frequent reconfiguration than the underlying pieces. For the record, in my opinion, using Ansible isn’t the wrong way. It’s just another way.

Download Helm

Let’s pick out a version from the github releases of helm and download the binary onto our Kubernetes master server.

[centos@kube-master ~]$ curl -sL https://storage.googleapis.com/kubernetes-helm/helm-v2.4.1-linux-amd64.tar.gz > helm.tar.gz
[centos@kube-master ~]$ tar -xzvf helm.tar.gz 
[centos@kube-master ~]$ chmod +x linux-amd64/helm 
[centos@kube-master ~]$ sudo cp linux-amd64/helm /usr/local/bin

Now, let’s check its version.

[centos@kube-master ~]$ helm version
Client: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}

It will take a second to complete, then time out and probably complain that it can’t connect to tiller. That’s fine for now – tiller is coming up soon.

Run helm init

[centos@kube-master ~]$ helm init

That should start tiller for us – you’ll have to watch for it to come up, so go ahead and watch -n1 kubectl get pods --all-namespaces

And we’ll have to create an RBAC for it, too. I used this gist as a reference.

[centos@kube-master ~]$ kubectl --namespace kube-system create sa tiller
serviceaccount "tiller" created
[centos@kube-master ~]$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding "tiller" created
[centos@kube-master ~]$ kubectl --namespace kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
deployment "tiller-deploy" patched
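If you prefer declarative manifests, the same service account and binding can be expressed in yaml – a sketch equivalent to the kubectl commands above (the filename is hypothetical; apply it with kubectl apply -f tiller-rbac.yaml):

```yaml
# tiller-rbac.yaml (hypothetical filename) -- same effect as the
# `create sa` and `create clusterrolebinding` commands above
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

You’d still patch the tiller-deploy deployment to use the service account, as shown in the transcript.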

Go ahead and watch your pods, cause it’s going to restart the tiller pod, so do something like watch -n1 kubectl get pods --all-namespaces until it comes back.

Let’s run an example app

Let’s go ahead and update our repo.

[centos@kube-master ~]$ helm repo update

And then we can install say… MongoDB (I’m wearing a MongoDB t-shirt today, so why not that one). If you’d like to install something else checkout the official “stable” repo and see what’s available.

[centos@kube-master ~]$ helm install --set persistence.enabled=false stable/mongodb

Note that we’re already doing something that sets Helm apart from “just using a spec file”. If you’re familiar with my other tutorials, you may have seen me create pods from specs before – where I create a yaml file and then tell Kubernetes to create it with something like kubectl create -f mongodb.yaml.

So running that helm install is going to give you some output like this (I clipped out some of the output)…

[centos@kube-master ~]$ helm install --set persistence.enabled=false stable/mongodb
[...snip...]
NOTES:
MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
silly-ladybird-mongodb.default.svc.cluster.local

To connect to your database run the following command:

   kubectl run silly-ladybird-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo --host silly-ladybird-mongodb

So let’s go ahead and use mongo for fun. Note, this command is going to take a while because Kube is going to pull a new image for you.

[centos@kube-master ~]$ kubectl run silly-ladybird-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo --host silly-ladybird-mongodb

You might have to hit enter, and it lets you know to do that too.

Let’s do something with it while we’re here.

> use kitchen;
switched to db kitchen
> db.kitchen.insert({"beer": {"heady topper": 4,"sip of sunshine": "awww yeah"}})
> db.kitchen.find().pretty()
{
  "_id" : ObjectId("591cb31956ed4d11bd5b82c0"),
  "beer" : {
    "heady topper" : 4,
    "sip of sunshine" : "awww yeah"
  }
}

Ok, cool, it works!

So what about some visibility of what charts we have deployed? Run helm list to check it out for yourself. This is a list of what are referred to as “releases”.

[centos@kube-master ~]$ helm list
NAME        REVISION  UPDATED                   STATUS    CHART           NAMESPACE
aged-uakari 1         Wed May 17 20:26:18 2017  DEPLOYED  mongodb-0.4.10  default  

Then you can go ahead and remove this sample one.

[centos@kube-master ~]$ helm delete aged-uakari
release "aged-uakari" deleted

Let’s create our own chart

I got a little help for creating a first chart from this blog post. Let’s go ahead and create our own.

We’re going to try to create an nginx instance that serves a photograph of a pickle. Because, that is absurd enough for me.

Scaffolding your chart, and basic commands

The first thing you’ll do is scaffold your chart.

[centos@kube-master ~]$ helm create pickle-chart

That will create a directory ./pickle-chart with the contents you need to create a chart. The contents look about like so:

[centos@kube-master ~]$ find pickle-chart/
pickle-chart/
pickle-chart/Chart.yaml
pickle-chart/templates
pickle-chart/templates/ingress.yaml
pickle-chart/templates/deployment.yaml
pickle-chart/templates/service.yaml
pickle-chart/templates/NOTES.txt
pickle-chart/templates/_helpers.tpl
pickle-chart/charts
pickle-chart/values.yaml
pickle-chart/.helmignore

You can check if the syntax is ok with a helm lint like so:

[centos@kube-master ~]$ helm lint pickle-chart
==> Linting pickle-chart
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures

And you can wrap it all up with helm package, which will make a tarball for you.

[centos@kube-master ~]$ helm package pickle-chart
[centos@kube-master ~]$ ls -lh pickle-chart-0.1.0.tgz 
-rw-rw-r--. 1 centos centos 2.2K May 18 17:54 pickle-chart-0.1.0.tgz

Let’s edit the charts to make them our own

Change your directory into the newly created ./pickle-chart dir. First, let’s look at Chart.yaml in this directory – this is a bunch of metadata for our chart. I edited mine to look like:

apiVersion: v1
description: An nginx instance that serves a pickle photo
name: pickle-chart
version: 0.0.1

Now, move into the ./templates/ directory and you’re going to see a few things here – yaml files, but they’re templates. Specifically, they’re Go templates, using sprig template functions.
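For a taste of the syntax: template lines interpolate values with double curly braces. For example, a snippet along these lines in a deployment template pulls the image name and tag out of values.yaml:

```yaml
# In templates/deployment.yaml -- the .Values object comes from values.yaml
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
```

So when we change values.yaml (or override values at install time), the rendered spec changes with it.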

If you’ve created pod specs before, these won’t seem too weird, at least in name – especially deployment.yaml and service.yaml. As you can imagine, these define a deployment and a service. Feel free to surf around these and explore for yourself to get an idea of what you could customize, or better yet, add to.

Let’s modify values.yaml – this is where the values for the majority of the template’s parameters come from.

That includes the docker image we’re going to use, which is dougbtv/pickle-nginx – should you care to build the image yourself, I posted the Dockerfile and context as a gist.

We’re going to leave the majority of values.yaml at the defaults. I changed the image section and also added a pickletype value.

IMPORTANT: GitHub Pages didn’t like the embedded templates here in the markdown for my blog – it would fail to build them. So you’ll have to pick up these two files from this gist. Copy out both the values.yaml and deployment.yaml, and use them here.

Now, modify the ./templates/deployment.yaml. Again, most of it is default, but, you’ll see that I added an env section. This is used by the image to do something, more than “just statically deploy” – we’ll get to that in a moment.
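To give you the general shape of the changes without the exact files (the names and values here are hypothetical sketches): a pickletype key in values.yaml, and an env entry in the deployment template that reads it.

```yaml
# values.yaml (excerpt, hypothetical values)
image:
  repository: dougbtv/pickle-nginx
  tag: latest
  pullPolicy: IfNotPresent
pickletype: pickle

# templates/deployment.yaml (excerpt, inside the container spec;
# the env var name is a hypothetical example)
#   env:
#     - name: PICKLE_TYPE
#       value: {{ .Values.pickletype }}
```

The env var is how the image decides which photo to serve – that’s the “does something” part.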

Cool, that’s all set for now.

Let’s run our brand spankin’ new Helm charts!

Alright, so, now make sure you’re up a directory from the ./pickle-chart directory, and let’s fire it off.

Install the chart like so:

[centos@kube-master ~]$ helm install ./pickle-chart

Now, wait until it’s fully deployed, I do this by watching like this:

[centos@kube-master ~]$ watch -n1 kubectl get pods --show-all

And wait until it’s showing as running.

Now – it creates a service for us, so let’s check out what that service is with kubectl get svc.

Here’s the IP it’s listening on:

[centos@kube-master ~]$ kubectl get svc | grep -i pickle | awk '{print $2}'

We’ll save that as a variable and curl it.

[centos@kube-master ~]$ pickle_ip=$(kubectl get svc | grep -i pickle | awk '{print $2}')
[centos@kube-master ~]$ curl -s $pickle_ip | grep -i img
    <img src="pickle.png" />
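The grep/awk pipeline there just plucks the CLUSTER-IP column out of the kubectl get svc listing. As a quick standalone illustration (with made-up service names and addresses):

```shell
# Simulated `kubectl get svc` output (hypothetical names/addresses)
svc_output='NAME                               CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
interesting-buffalo-pickle-chart   10.104.19.7   <none>        80/TCP    1m
kubernetes                         10.96.0.1     <none>        443/TCP   1d'

# Same extraction as above: find the pickle line, print the second column
pickle_ip=$(echo "$svc_output" | grep -i pickle | awk '{print $2}')
echo "$pickle_ip"   # -> 10.104.19.7
```

Nothing fancy – but it’s handy for scripting against a release whose generated name you don’t know ahead of time.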

Great, now note that the img src is pickle.png. This – we have made configurable, so let’s deploy our chart differently.

First, I’ll go and delete the release. So, list the releases and delete it, a la:

[centos@kube-master ~]$ helm list
NAME                REVISION  UPDATED                   STATUS    CHART               NAMESPACE
interesting-buffalo 1         Thu May 18 19:51:51 2017  DEPLOYED  pickle-chart-0.0.1  default  
[centos@kube-master ~]$ helm delete interesting-buffalo
release "interesting-buffalo" deleted

Now – we’re going to run this differently by changing a default value in our template.

[centos@kube-master ~]$ helm install --set pickletype=pickle-man ./pickle-chart

This sets the pickletype, which will change something in our application.
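For what it’s worth, --set is shorthand for a values override; you could get the same effect by passing a values file with -f (the filename here is hypothetical):

```yaml
# pickle-man-values.yaml -- equivalent to --set pickletype=pickle-man
pickletype: pickle-man
```

Then helm install -f pickle-man-values.yaml ./pickle-chart would deploy the same configuration – handy once you have more than a couple of overrides.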

Now, go ahead and pick up the IP from the service again, and we’ll curl it…

[centos@kube-master ~]$ pickle_ip=$(kubectl get svc | grep -i pickle | awk '{print $2}')
[centos@kube-master ~]$ curl -s $pickle_ip | grep -i img
    <img src="pickle-man.png" />

We can now see that we’re serving a different photo – this time a pickle cartoon that is a “pickle man” as opposed to… Just a pickle.

Oh yeah – and you can deploy from a tarball…

[centos@kube-master ~]$ rm pickle-chart-0.1.0.tgz 
[centos@kube-master ~]$ helm package pickle-chart/
[centos@kube-master ~]$ helm install pickle-chart-0.0.1.tgz 

Or you can install from an absolute URL containing the tarball, too.

And there you have it – you’ve gone ahead and…

  • Installed Helm
  • Installed a sample application (mongodb)
  • Created your own helm chart
  • Deployed a release
  • Changed the templated values to create a new release with different parameters

Good luck sailing the 7 seas!

Gleaming the Kube - Building Kubernetes from source

Time to bust out your kicktail skateboards, Christian Slater fans – we’re about to gleam the kube, or… really, fire up our terminals and build Kubernetes from source. Then we’ll manually install what we built, configure it, and get it running. Most of the time you can just yum install or docker run the things you need, but sometimes that’s just not enough – you’re going to need some finer-grained control. Today we’re going to look at exactly how to build Kubernetes from source, and then how to deploy a working Kubernetes given the pieces therein. I base this tutorial on the official build instructions from the Kube docs to start. But the reality is, as easy as it is to say the full instructions are git clone and make release – that just won’t cut the mustard. We’ll need to do a couple ollies and pop shove-its to really get it to go. Ready? Let’s roll…

The goals here today are to:

  • Build binaries from Kubernetes source code
  • Install the binaries on a system
  • Configure the Kubernetes system (for a single node)
  • Run a pod to verify that it’s all working.

Requirements

This walk-through assumes a CentOS 7.3 machine, for one. I use a VM to isolate this from my usual workstation environment. In this tutorial I wind up spinning up two boxes – one to build Kubernetes, and one to deploy it on. You can do it all on one host if you choose.

I often set up my VMs using this great article for spinning up CentOS cloud images, but it relies on the default disk size for those images – 8 gigs – which isn’t enough.

You’ll definitely need a lot of memory, too – the build bombed out for me at 8 gigs, so I bumped up to 16 gigs.

Then – you need a lot more disk than I had. I’ve been using VMs spun up with virsh for this environment, so I’ll have to add some storage, which is outlined below. You need AT LEAST 20 gigs free. I’m not sure what it’s doing to need that much disk, but it needs it. I go with 24 gigs below, and outline how to add an extra disk for the /var/lib/docker/ directory.

Spinning up a VM and adding a disk

If you’ve got another machine to use, that’s fine – and you can skip this section. Make sure you have at least 24 gig of free disk space available. You can even potentially use your workstation, if you’d like.

Go ahead and spin up a VM, in my case I’m spinning up CentOS cloud images. So let’s attach a spare disk, and we’ll use it for /var/lib/docker/.

Spin up a VM however you like with virsh and let’s create a spare disk image. I’m going to create one that’s 24 gigs.

Looking at your virtual machine host, here’s my running instance, called buildkube:

$ virsh list | grep -Pi "( Id|buildkube|\-\-\-)"
 Id    Name                           State
----------------------------------------------------
 5     buildkube                      running

Create a spare disk:

$ dd if=/dev/zero of=/var/lib/libvirt/images/buildkube-docker.img bs=1M count=24576

Then we can attach it.

$ virsh attach-disk "buildkube" /var/lib/libvirt/images/buildkube-docker.img vdb --cache none
Disk attached successfully

Now, let’s ssh to the guest virtual machine and we’ll format and mount the disk.

From there we’ll use fdisk to make a new partition

$ fdisk /dev/vdb

Choose:

  • n: New partition
  • p: Primary partition
  • Accept the defaults for partition number, first and last sector.
  • w: Write the changes (this also exits fdisk).

Now that you’ve got that set, let’s format it and give it a label.

$ mkfs -t ext3 /dev/vdb1
$ e2label /dev/vdb1 /docker

Add it to the fstab, and mount it. Create an entry in /etc/fstab like so:

/dev/vdb1   /var/lib/docker ext3    defaults    0   0

Now, create a directory to mount it and mount it.

$ mkdir -p /var/lib/docker
$ mount /var/lib/docker/

I then like to create and edit a file on the new mount to make sure everything is OK.

Alright – you should be good to go.

Readying for install

Alright, first things first – go ahead and do a yum update -y and install any utils you want (your editor, etc) – and give ‘er a reboot.

Let’s install some utilities we’re going to use; git & tmux.

$ sudo yum install -y tmux git

Install docker-ce per the docs

$ sudo yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo docker -v

Now that we’ve got docker ready, we can start onto the builds (they use Docker, fwiw.)

Kick off the build

Ok, get everything ready: clone a specific tag in this instance (with a shallow depth), move into that dir, and fire up tmux (cause it’s gonna take a while):

$ git clone -b v1.6.3 --depth 1 https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ tmux new -s build

Now let’s choose a release style. If you want, bring up the Makefile in an editor and surf around. The official docs recommend make release, but in order to quicken this up (it still takes a while – about 20 minutes on my machine), we’ll do a quick-release (which doesn’t cross-compile or run the full test suite):

$ make quick-release

You can now detach from the tmux session while you do other things (or… get yourself a coffee, that’s what I’m going to do) – do so with ctrl+b then d. (And you can return to it with tmux a – assuming it’s the only session – or tmux a -t build if there are multiple sessions.) Should this fail, read through the output and see what’s going on. Mostly, I recommend starting with a tagged release (master isn’t always going to build, I’ve found) – that way you know it should, in theory, have clean enough code to build. Assuming that’s OK, check your disk and memory situation, as the requirements there are rather stringent, I’ve found.

Alright, now… Back with your coffee and it’s finished? Awesome, let’s peek around and see what we created. It creates some tarballs and some binaries, so we can see where they are.

$ find . | grep gz
$ find . | grep -iP "(client|server).bin"
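That second find just pattern-matches any path containing client/bin or server/bin (the “.” in the regex matches the path separator). To illustrate the pattern against a miniature mock of the output layout (hypothetical, trimmed paths – the real tree is much deeper):

```shell
# Recreate a tiny slice of the build output tree (hypothetical mock paths)
mkdir -p mock/_output/client/bin mock/_output/server/bin
touch mock/_output/client/bin/kubectl mock/_output/server/bin/kubelet

# Same pattern as above: matches the bin directories and their contents
find mock | grep -iP "(client|server).bin"
```

That should print the two bin directories plus the two binaries inside them.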

For now we’re just concerned with those binaries, and we’ll move ‘em into the right place.

Let’s deploy it

I spun up another host with standard disk (8gig in my case) and I’m going to put the pieces together there.

Surprise! We need another yum update -y and a docker install. So let’s perform that on this “deploy host”

$ sudo yum-config-manager --add-repo     https://download.docker.com/linux/centos/docker-ce.repo
$ yum install -y docker-ce
$ systemctl enable docker --now
$ yum update -y 
$ reboot

A place for everything, and everything in its place

Now, from your build host, scp some things… Let’s try to make it as close to the rpm install as possible.

$ pwd
/home/centos/kubernetes
$ scp -i ~/.ssh/id_vms ./_output/release-stage/client/linux-amd64/kubernetes/client/bin/kubectl centos@192.168.122.147:~       
$ scp -i ~/.ssh/id_vms ./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kubeadm centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kubelet centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./build/debs/kubelet.service centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./build/debs/kubeadm-10.conf centos@192.168.122.147:~

Alright, that’s everything but CNI, basically.

Now, back on the deploy host, let’s place the binaries and configs in the right places…


# Move binaries
[root@deploykube centos]# mv kubeadm /usr/bin/
[root@deploykube centos]# mv kubectl /usr/bin/
[root@deploykube centos]# mv kubelet /usr/bin/

# Move systemd unit and make directories
[root@deploykube centos]# mv kubelet.service /etc/systemd/system/kubelet.service
[root@deploykube centos]# mkdir -p /etc/kubernetes/manifests
[root@deploykube centos]# mkdir -p /etc/systemd/system/kubelet.service.d/
[root@deploykube centos]# mv kubeadm-10.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Edit kubeadm config, with two lines from below
[root@deploykube centos]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

When you edit that 10-kubeadm.conf add these two lines above the ExecStart= line:

Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Install CNI binaries

Now, we gotta setup CNI. First, clone and build it (you need git and golang).

[root@deploykube centos]# yum install -y git golang
[root@deploykube centos]# git clone -b v0.5.2 --depth 1 https://github.com/containernetworking/cni.git
[root@deploykube centos]# cd cni/
[root@deploykube cni]# ./build.sh 

With that complete, we can now copy out the binary plugins.

[root@deploykube cni]# mkdir -p /opt/cni/bin
[root@deploykube cni]# cp bin/* /opt/cni/bin/

Ok, looking like… we’re potentially close. Reload units and start & enable kubelet.

[root@deploykube cni]# systemctl daemon-reload
[root@deploykube cni]# systemctl enable kubelet --now

Now, let’s try a kubeadm. We’re going to aim to use flannel.

[root@deploykube cni]# kubeadm init --pod-network-cidr 10.244.0.0/16

Amazingly…. that completed for me on the first go, guess I did my homework!

From that output you’ll also get your join command, should you want to expand the cluster beyond this one node. You don’t need to run it if you’re just running a single node like me in this tutorial. The command will look like:

  kubeadm join --token 49cb93.48ac0d64e3f6ccf6 192.168.122.147:6443

Follow the steps to use the cluster with kubectl – this must be done as a regular non-root user, so we’ll use the centos user in this case.

[centos@deploykube ~]$   sudo cp /etc/kubernetes/admin.conf $HOME/
[centos@deploykube ~]$   sudo chown $(id -u):$(id -g) $HOME/admin.conf
[centos@deploykube ~]$   export KUBECONFIG=$HOME/admin.conf
[centos@deploykube ~]$ kubectl get nodes
NAME                       STATUS     AGE       VERSION
deploykube.example.local   NotReady   52s       v1.6.3

And let’s watch that for a while… Get yourself a coffee here for a moment.

[centos@deploykube ~]$ watch -n1 kubectl get nodes

Install pod networking, flannel here.

Tricked ya! Hope you didn’t get coffee already. The node won’t show as ready until we have a pod network, so let’s add that.

I put the yamls in a gist, so we can curl ‘em like so:

$ curl https://gist.githubusercontent.com/dougbtv/a6065c316019642ecc1706d6e785a037/raw/16554948e306359090c3d52c3c7b0bcffea2e450/flannel-rbac.yaml > flannel-rbac.yaml
$ curl https://gist.githubusercontent.com/dougbtv/a6065c316019642ecc1706d6e785a037/raw/16554948e306359090c3d52c3c7b0bcffea2e450/flannel.yaml > flannel.yaml

And apply those…

[centos@deploykube ~]$ kubectl apply -f flannel-rbac.yaml 
clusterrole "flannel" created
clusterrolebinding "flannel" created
[centos@deploykube ~]$ kubectl apply -f flannel.yaml 
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

Now…. you should have a ready node.

[centos@deploykube ~]$ kubectl get nodes
NAME                       STATUS    AGE       VERSION
deploykube.example.local   Ready     21m       v1.6.3

Run a pod to verify we can actually use Kubernetes

Alright, so… that’s good. Now, can we run a pod? Let’s use my favorite example: 2 nginx pods via a replication controller.

[centos@deploykube ~]$ cat nginx.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Create it with kubectl create -f nginx.yaml, then watch until the pods come up… You should have a couple:

[centos@deploykube ~]$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-5hmxs   1/1       Running   0          41s
nginx-m42cz   1/1       Running   0          41s

And let’s curl something from them…

[centos@deploykube ~]$ kubectl describe pod nginx-5hmxs | grep -P "^IP"
IP:     10.244.0.3
[centos@deploykube ~]$ curl -s 10.244.0.3 | grep -i thank
<p><em>Thank you for using nginx.</em></p>

Word! That’s a running Kubernetes, built from source.


Some further inspection of the built pieces

What follows are some notes of mine from getting all the pieces together to make the build work. If you’re interested in some of the details, it might be worthwhile reading; however, these are somewhat raw notes – I didn’t overly groom them before posting.

The kubernetes.tar.gz tarball

Let’s take a look at the kubernetes.tar.gz, that sounds interesting. It kind of is. It looks like a lot of setup goods.

$ cd _output/release-tars/
$ cp kubernetes.tar.gz /tmp/

Then I went into ./cluster/centos and ran ./build.sh – it failed. So I started reading about it, and it points at this old doc for a CentOS cluster, which then says “oh yeah, that’s deprecated” and points to the contemporary kubeadm install method (which I’ve been using in kube-ansible).

So, how are we going to install it so we can use it?

Kubernetes isn’t terribly bad to deal with regarding deps, why? Golang. Makes it simple for us.

So let’s see how an rpm install looks after all; we’ll use that as our basis for installing and configuring kube.

Usually in kube-ansible, I install these rpms

- kubelet
- kubeadm
- kubectl
- kubernetes-cni

Here’s what results from these.

[root@kubecni-master centos]# rpm -qa | grep -i kub
kubernetes-cni-0.5.1-0.x86_64
kubectl-1.6.2-0.x86_64
kubeadm-1.6.2-0.x86_64
kubelet-1.6.2-0.x86_64

[root@kubecni-master centos]# rpm -qa --list kubernetes-cni
/opt/cni
/opt/cni/bin
/opt/cni/bin/bridge
/opt/cni/bin/cnitool
/opt/cni/bin/dhcp
/opt/cni/bin/flannel
/opt/cni/bin/host-local
/opt/cni/bin/ipvlan
/opt/cni/bin/loopback
/opt/cni/bin/macvlan
/opt/cni/bin/noop
/opt/cni/bin/ptp
/opt/cni/bin/tuning

[root@kubecni-master centos]# rpm -qa --list kubectl
/usr/bin/kubectl

[root@kubecni-master centos]# rpm -qa --list kubeadm
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/usr/bin/kubeadm

[root@kubecni-master centos]# rpm -qa --list kubelet
/etc/kubernetes/manifests
/etc/systemd/system/kubelet.service
/usr/bin/kubelet

So the binaries, that makes sense, but where do the etc pieces come from?

So it appears that in the clone:

  • ./build/debs/kubelet.service == /etc/systemd/system/kubelet.service
  • ./build/debs/kubeadm-10.conf ~= /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

The kubeadm-10.conf is… similar, but not exact. I modified it to add a cgroup driver, and there are also two additional lines from the RPM:

Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

I didn’t find that in the git clone with grep -Prin "cgroup-driver"

I also realized that in my playbooks I do this…

- name: Add custom kubadm.conf ExecStart
  lineinfile:
    dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    regexp: 'systemd$'
    line: 'ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS --cgroup-driver=systemd'

So I accounted for that as well.

Let's run Homer on Kubernetes!

I have to say that Homer is a favorite of mine. Homer is VoIP analysis & monitoring – on steroids. Not only has it saved my keister a number of times when troubleshooting VoIP platforms, but it has an awesome (and helpful) open source community. In my opinion – it should be an integral part of your devops plan if you’re deploying VoIP apps (really important to have visibility of your… Ops!). Leif and I are using Homer as part of our (still WIP) vnf-asterisk demo VNF (virtualized network function). We want to get it all running in OpenShift & Kubernetes. Our goal for this walk-through is to get Homer up and running on Kubernetes, and generate some traffic using HEPgen.js, and then view it on the Homer Web UI. So – why postpone joy? Let’s use homer-docker to go ahead and get Homer up and running on Kubernetes.

Do you just want to play? You can skip down to the “requirements” section and put some hands on the keyboard.

Some background

First off, I really enjoy working upstream with the sipcapture crew – they’re really nice, and have created quite a fine community around the software that comprises Homer. They’re a friendly bunch, always looking to make Homer better – I’m a regular contributor, as evidenced by the badge showing membership in the sipcapture org on my GitHub profile!

This hasn’t landed in the official upstream homer-docker repo yet. It will eventually – maybe even by the time you’re reading this – so look for a ./k8s directory in the official homer-docker repository. There are a couple of things I need to change in order to get it in there; in part, I need a build pipeline to get the images into a registry. Because a registry is required, and frankly it’s easy to “just use Dockerhub”. If you’ve got a registry – use your own! You can trust it. Later on in the article, I’ll encourage you to use your own registry or Dockerhub images if you please, but I’ll also give you the option of using images I’ve already built – so you can just get it working.

I also need to document it – that’s partially why I’m writing this article, since I can generate some good docs for the repo proper! And there are at least a few rough edges to sand off (secret management, usage of cron jobs).

That being said, currently – I have the pieces for this in a fork of the homer-docker repo, in the ‘k8s’ branch on dougbtv/homer-docker

Eventually, I’d like to make these compatible with OpenShift – which isn’t a long stretch. I’m a fan of running OpenShift; it encourages a lot of good practices, and I think as an organization it can help lower your bus number. It’s also easier to manage and maintain, but… it is a little more strict, so I like to mock up a deployment in vanilla Kubernetes first.

Requirements

The steepest of requirements being that you need Kubernetes running – actually getting Homer up and going afterwards is just a couple commands! Here’s my short list:

  • Kubernetes, 1.6.0 or greater (1.5 might work, I haven’t tested it)
  • Master node has git installed (and maybe your favorite text editor)

That’s a short list, right? Well… installing Kubernetes isn’t that hard (and I’ve got the Ansible playbooks to do it). But we also need to have some persistent storage to use.

Why’s that? Well… containers are, for the most part, ephemeral in nature. They come and they go, and they don’t leave much of anything around – and we love them for that! They’re very reproducible. But with that, we lose our data every time they die. There’s certain stuff we want to keep around with Homer – especially all of our database data. So we create persistent storage in order to keep it around. There are many plugins for persistent volumes you can use with Kubernetes, such as NFS, iSCSI, CephFS, Flocker (and proprietary stuff, too), etc.

I highly recommend you follow my guide for installing Kubernetes with persistent volume storage backed by GlusterFS. If you can follow through my guide successfully – that will get you to exactly the place you need to be to follow the rest of this guide. It also puts you in a place that’s feasible for actually running in production. The other option is to use “host path volumes” – the Kubernetes docs have a nice tutorial on how to use host path volumes – however, we lose a lot of the great value of Kubernetes when we use them: they’re not portable across hosts, so they’re effectively only good for a simple development use-case. If you do follow my guide – I recommend that you stop before you actually create the MariaDB stuff. That way you don’t have to clean it up (but you can leave it there and it won’t cause any harm, tbh).
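For reference, if you do go the host-path route for development, a persistent volume spec for it looks roughly like this. This is only a sketch – the path, name, and capacity here are assumptions to illustrate the shape, not values from my playbooks:

```yaml
# Development-only hostPath persistent volume (hypothetical name/path/size).
# Not portable across hosts - pods must land on the node that has this path.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-volume-1
spec:
  storageClassName: storage
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /tmp/hostpath-volume-1
```

You’d create a handful of these (varying the name and path) to stand in for the gluster-volume-* volumes referenced later.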

My guide on Kubernetes with GlusterFS-backed persistent volumes also builds upon another of my guides, for installing Kubernetes on CentOS – which may also be helpful.

Both of these guides use a CentOS 7.3 host, which we run Ansible playbooks against, to then run 4 virtual machines which comprise our Kubernetes 1.6.1 (at the time of writing) cluster.

If you’re looking for something smaller (e.g. not a cluster), maybe you want to try minikube.

So, got your Kubernetes up and running?

Alright, if you’ve read this far, let’s make sure we’ve got a working Kubernetes. Change to whatever namespace you like – or, if you’re like me, just mock this up in the default namespace.

So, SSH into the master and just go ahead and check some basics….

[centos@kube-master ~]$ kubectl get nodes
[centos@kube-master ~]$ kubectl get pods

Everything looking to your liking? E.g. no errors and no pods running that you don’t want running? Great, you’re ready to rumble.

Clone my fork, using the k8s branch.

Ok, next up, we’re going to clone my fork of homer-docker, so go ahead and get that going…

[centos@kube-master ~]$ git clone -b k8s https://github.com/dougbtv/
[centos@kube-master ~]$ cd homer-docker/

Alright, now that we’re there, let’s take a small peek around.

First off, in the root directory there’s a k8s-docker-compose.yml. It’s a Docker Compose file that’s really only there for a single purpose – to build images from. The docker-compose.yml file that’s there is for a standard deployment with just docker/docker-compose.

Optional: Build your Docker images

If you want to build your own docker images and push them to a registry (say Dockerhub) – now’s the time to do that. It’s completely optional – if you don’t, you’ll just wind up pulling my images from Dockerhub, they’re in the dougbtv/* namespace. Go ahead and skip ahead to the “persistent volumes” section if you don’t want to bother with building your own.

So, first off you need Docker compose…

[centos@kube-master homer-docker]$ sudo /bin/bash -c 'curl -L https://github.com/docker/compose/releases/download/1.12.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose'
[centos@kube-master homer-docker]$ sudo chmod +x /usr/local/bin/docker-compose
[centos@kube-master homer-docker]$ sudo /usr/local/bin/docker-compose -v
docker-compose version 1.12.0, build b31ff33

And add it to your path if you wish.

Now – go ahead and replace my namespace with your own. Replace YOURNAME with, well, your own name (which is the namespace used in your registry, for example):

[centos@kube-master homer-docker]$ find . -type f -print0 | xargs -0 sed -i 's/dougbtv/YOURNAME/g'

Now you can kick off a build.

[centos@kube-master homer-docker]$ sudo /usr/local/bin/docker-compose -f k8s-docker-compose.yml build

Now you’ll have a bunch of images in your docker images list, and you can docker login and docker push yourname/each-image as you like.
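If you’d rather script those pushes, something like this sketch works. The image list below is an assumption – check your docker images output for the real names. The leading echo makes it a dry run; remove it (and docker login first) when you’re ready to push for real:

```shell
#!/bin/sh
# Hedged sketch: push each freshly built image to your registry.
# Image names are assumptions - list yours with `docker images`.
# The leading `echo` makes this a dry run; remove it to actually push.
YOURNAME=yourname
for image in webapp kamailio cron bootstrap; do
  echo docker push "${YOURNAME}/${image}"
done
```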

Persistent volumes

Alright, now this is rather important, we’re going to need persistent volumes to store our data in. So let’s get those going.

I’m really hoping you followed my tutorial on using Kubernetes with GlusterFS, because you’ll have exactly the volumes we need. If you haven’t – I’m leaving it as an exercise for the reader to create host path volumes, say, if you’re using minikube or otherwise. If you do choose that adventure, think about modifying my glusterfs-volumes.yaml file.

During my tutorial where we created volumes, there’s a file in the centos user’s home @ /home/centos/glusterfs-volumes.yaml – and we ran kubectl create -f /home/centos/glusterfs-volumes.yaml.

Once we ran that, we have volumes available to use; you can check them out with:

[centos@kube-master homer-docker]$ kubectl get pv
NAME               CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
gluster-volume-1   600Mi      RWO           Delete          Available             storage                  3h
gluster-volume-2   300Mi      RWO           Delete          Available             storage                  3h
gluster-volume-3   300Mi      RWO           Delete          Available             storage                  3h
gluster-volume-4   100Mi      RWO           Delete          Available             storage                  3h
gluster-volume-5   100Mi      RWO           Delete          Available             storage                  3h

Noting that in the above command kubectl get pv – the pv means “persistent volumes”. Once you have these volumes in your install – you’re good to proceed to the next steps.

Drum roll please – start a deploy of Homer!

Alright, now… with that in place there’s just three steps we need to perform, and we’ll look at the results of those after we run them. Those steps are:

  • Make persistent volume claims (to stake a claim to the space in those volumes)
  • Create service endpoints for the Homer services
  • Start the pods to run the Homer containers

Alright, so go ahead and move yourself to the k8s directory in the clone you created earlier.

[centos@kube-master k8s]$ pwd
/home/centos/homer-docker/k8s

Now, there’s 3-4 files here that really matter to us, go ahead and check them out if you so please.

These are the ones I’m talking about:

[centos@kube-master k8s]$ ls -1 *yaml
deploy.yaml
hepgen.yaml
persistent.yaml
service.yaml

The purpose of each of these files is…

  • persistent.yaml: Defines our persistent volume claims.
  • deploy.yaml: Defines which pods we have, and also configurations for them.
  • service.yaml: Defines the exposed services from each pod.

Then there’s hepgen.yaml – but we’ll get to that later!

Alright – now that you’ve got the lay of the land, let’s run each one.

Changing some configuration options…

Should you need to change any options – they’re generally environment variables – they live in the ConfigMap section of deploy.yaml. Some of those environment variables are really secrets, and moving them into proper Kubernetes secrets is an improvement that could be made to this deployment.
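For example, the database password could move out of the ConfigMap into a Secret. Here’s a sketch of the idea – the secret name and key here are hypothetical, and deploy.yaml would need a matching secretKeyRef in the container env to actually use it:

```yaml
# Sketch: keeping the DB password in a Secret instead of the ConfigMap.
# Names are hypothetical; deploy.yaml would need a matching secretKeyRef.
apiVersion: v1
kind: Secret
metadata:
  name: homer-secrets
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: secret
```

Secrets are still only base64-encoded at rest in etcd by default at this version, but at least they’re no longer sitting in plain sight in the deploy spec.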

Create Homer Persistent volume claims

Alright, we’re going to need the persistent volume claims, so let’s create those.

[centos@kube-master k8s]$ kubectl create -f persistent.yaml 
persistentvolumeclaim "homer-data-dashboard" created
persistentvolumeclaim "homer-data-mysql" created
persistentvolumeclaim "homer-data-semaphore" created

Now we can check out what was created.

[centos@kube-master k8s]$ kubectl get pvc
NAME                   STATUS    VOLUME             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
homer-data-dashboard   Bound     gluster-volume-4   100Mi      RWO           storage        18s
homer-data-mysql       Bound     gluster-volume-2   300Mi      RWO           storage        18s
homer-data-semaphore   Bound     gluster-volume-5   100Mi      RWO           storage        17s

Great!

Create Homer Services

Ok, now we need to create services – which allows our containers to interact with one another, and us to interact with the services they create.

[centos@kube-master k8s]$ kubectl create -f service.yaml 
service "bootstrap" created
service "cron" created
service "kamailio" created
service "mysql" created
service "webapp" created

Now, let’s look at what’s there.

[centos@kube-master k8s]$ kubectl get svc
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
bootstrap           None             <none>        55555/TCP   6s
cron                None             <none>        55555/TCP   6s
glusterfs-cluster   10.107.123.112   <none>        1/TCP       23h
kamailio            10.105.142.140   <none>        9060/UDP    5s
kubernetes          10.96.0.1        <none>        443/TCP     1d
mysql               None             <none>        3306/TCP    5s
webapp              10.101.132.226   <none>        80/TCP      5s

You’ll notice some familiar faces if you’re used to deploying Homer with the homer-docker docker-compose file – there’s kamailio, mysql, the web app, etc.

Create Homer Pods

Ok, now – we can create the actual containers to get Homer running.

[centos@kube-master k8s]$ kubectl create -f deploy.yaml 
configmap "env-config" created
job "bootstrap" created
deployment "cron" created
deployment "kamailio" created
deployment "mysql" created
deployment "webapp" created

Now, go ahead and watch them come up – this could take a while during the phase where the images are pulled.

So go ahead and watch patiently…

[centos@kube-master k8s]$ watch -n1 kubectl get pods --show-all

Wait until the STATUS for all pods is either completed or running. Should one of the pods fail, you might want to get some more information about it. Let’s say the webapp isn’t running; you could describe the pod, and get logs from the container with:

[centos@kube-master k8s]$ kubectl describe pod webapp-3002220561-l20v4
[centos@kube-master k8s]$ kubectl logs webapp-3002220561-l20v4

Verify the backend – generate some traffic with HEPgen.js

Alright, now that we have all the pods up – we can create a hepgen job.

[centos@kube-master k8s]$ kubectl create -f hepgen.yaml 
job "hepgen" created
configmap "sample-hepgen-config" created

And go and watch that until the STATUS is Completed for the hepgen job.

[centos@kube-master k8s]$ watch -n1 kubectl get pods --show-all

Now that it has run, we can verify that the calls are in the database, let’s look at something simple-ish. Remember, the default password for the database is literally secret.

So go ahead and enter the command line for MySQL…

[centos@kube-master k8s]$ kubectl exec -it $(kubectl get pods | grep mysql | awk '{print $1}') -- mysql -u root -p
Enter password: 

And then you can check out the number of entries in today’s sip_capture_call_* table. There should, in theory, be 3 entries here.

mysql> use homer_data;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables like '%0420%';
+-----------------------------------+
| Tables_in_homer_data (%0420%)     |
+-----------------------------------+
| isup_capture_all_20170420         |
| rtcp_capture_all_20170420         |
| sip_capture_call_20170420         |
| sip_capture_registration_20170420 |
| sip_capture_rest_20170420         |
| webrtc_capture_all_20170420       |
+-----------------------------------+
6 rows in set (0.01 sec)

mysql> SELECT COUNT(*) FROM sip_capture_call_20170420;
+----------+
| COUNT(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

It’s all there! That means our backend is generally working. But… That’s the hard dirty work for Homer, we want to get into the good stuff – some visualization of our data, so let’s move on to the front-end.

Expose the front-end

Alright in theory now you can look at kubectl get svc and see the service for the webapp, and visit that URL.

But, following with my tutorial, if you’ve run these in VMs on a CentOS host, well… you have a little more to do to expose the front-end. This is also (at least somewhat) similar to how you’d expose an external IP address to access this in a more-like-production setup.

So, let’s go ahead and change the service for the webapp to match the external IP address of the master.

Note, you’ll see it has an internal IP address assigned by your CNI networking IPAM.

[centos@kube-master k8s]$ kubectl get svc | grep -Pi "^Name|webapp"
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
webapp              10.101.132.226   <none>        80/TCP      32m

Given that, right from the master (or likely anywhere on Kube nodes) you could just curl 10.101.132.226 and there’s your dashboard, but, man it’s hard to navigate the web app using curl ;)

So let’s figure out the IP address of our host. Mine is in the 192.168.122 range, and yours will be too if you’re using my VM method here.

[centos@kube-master k8s]$ kubectl delete svc webapp
service "webapp" deleted
[centos@kube-master k8s]$ ipaddr=$(ip a | grep 192 | awk '{print $2}' | perl -pe 's|/.+||')
[centos@kube-master k8s]$ kubectl expose deployment webapp --port=80 --target-port=80 --external-ip $ipaddr
service "webapp" exposed

Now you’ll see we have an external address, so anyone who can access 192.168.122.14:80 can see this.

[centos@kube-master k8s]$ kubectl get svc | grep -Pi "^Name|webapp"
NAME                CLUSTER-IP       EXTERNAL-IP      PORT(S)     AGE
webapp              10.103.140.63    192.168.122.14   80/TCP      53s

However if you’re using my setup, you might have to tunnel traffic from your desktop to the virtual machine host in order to do that. So, I did so with something like:

ssh root@virtual_machine_host -L 8080:192.168.122.14:80

Now – I can type in localhost:8080 in my browser, and… Voila! There is Homer!

Remember to login with username admin and password test123.

Change your date to “today” and hit search, and you’ll see all the information we captured from running HEPgen.

And there… You have it! Now you can wrangle in what’s going on with your Kubernetes VoIP platform :)

How-to use GlusterFS to back persistent volumes in Kubernetes

A mountain I keep walking around instead of climbing in my Kubernetes lab is storing persistent data – I kept avoiding it. Sure, in a lab I can just throw it all out most of the time. But what about when we really need it? I decided I’d use GlusterFS to back my persistent volumes, and I’ve got to say… my experience with GlusterFS was great. I really enjoyed using it, and it seems rather resilient – and best of all? It was pretty easy to get going and to operate. Today we’ll spin up a Kubernetes cluster using my kube-ansible playbooks, and use some newly included plays that also set up a GlusterFS cluster. With that in hand, our goal will be to set up the persistent volumes and claims to those volumes, and we’ll spin up a MariaDB pod that stores data in a persistent volume – important data that we want to keep – so we’ll make some data about Vermont beer, as it’s very, very important.

Update: Hey! Check it out – I have a new article about GlusterFS for kube. Worth a gander as well.

Requirements

First up, this article will use my spin-up Kubernetes on CentOS article as a basis. So if there’s any detail you feel is missing from here – make sure to double check that article, as it goes further in depth on the moving parts that make up the kube-ansible playbooks. Particularly, there’s more detail in that article on how to modify the inventories and what’s going on there (and where your SSH keys are, which you’ll need too).

Now, what you’ll need…

  • A CentOS 7.3 host capable of spinning up a few virtual machines
  • A machine where you can run Ansible (which can also be the same host if you like), and has git to clone our playbooks.

That’s what we’re going to base it on. If you’d rather not use virtual machines, that’s OK! But, if you choose to spin this up on bare metal, you’ll have to do all the OS install yourself (as you guessed – or maybe you’re all cool and using Bifrost or Spacewalk or something, and that’s great too). To make it interesting, I’d recommend at least 3 hosts (a master and two minions), and… there’s one more important part you’re going to have to do if you’re using bare metal – make sure there are a couple of empty partitions available. Read ahead first and see what it looks like here with the VMs. Those partitions you’ll have to format for GlusterFS. In fact, that’s THE only hard part of this whole process – you’ve got to have some empty partitions across a few hosts that you can modify.

Let’s get our Kubernetes cluster running.

Ok, step zero – you need a clone of my playbooks, so make a clone and move into its directory…

git clone --branch v0.0.6 https://github.com/redhat-nfvpe/kube-ansible.git

Now that we’ve got that, we’re going to run the virt-host-setup.yml playbook, which sets up our CentOS host so that it can create a few virtual machines. The defaults spin up 4 machines, and you can modify some of these preferences by going into vars/all.yml if you please. Also, you’ll need to modify the inventory/virthost.inventory file to suit your environment.

ansible-playbook -i inventory/virthost.inventory virt-host-setup.yml

Once that is complete, you should see some machines running on your virtual machine host if you were to run:

virsh list --all

The virt-host-setup playbook will complete with a set of IP addresses, so go ahead and use those in the inventory/vms.inventory file, and then we can start our Kubernetes installation.

ansible-playbook -i inventory/vms.inventory kube-install.yml

You can check that Kubernetes is running successfully now. SSH into your master (you’ll need to do other work there soon, too):

kubectl get nodes

And you should see the master and 3 minions, by default. Alright, Kubernetes is up.

Let’s get GlusterFS running!

So we’re going to use a few playbooks here to get this all setup for you. Before we do that, let me speak to what’s happening in the background, and we’ll take a little peek for ourselves with our setup up and running.

First of all, most of my work to automate this with Ansible was based on this article on installing a GlusterFS cluster on CentOS, which I think comes from the Storage SIG (maybe). I also referenced this blog article from Gluster about GlusterFS with Kubernetes. Last but not least, there are example implementations of GlusterFS from the Kubernetes GitHub repo.

Next, a little consideration for your own architectural needs: we’re going to use the Kubernetes nodes as GlusterFS nodes themselves, which means we’re running GlusterFS processes on the hosts directly. So this wouldn’t work for an all Atomic Host setup. Which is unfortunate, and I admit I’m not entirely happy with it, but avoiding that might be a premature optimization of sorts right now – this is more-or-less a proof-of-concept. If you’re so inclined, you might be able to modify what’s here to adapt it to a fully containerized deployment (it’d be a lot, A LOT, swankier). You might want to organize this otherwise, but it was convenient to make it this way, and it could be easily broken out with a different inventory scheme if you so wished.

Attach some disks

The first playbook we’re going to run is the vm-attach-disk playbook. This is based on this publicly available help from access.redhat.com. The gist is that we create some qcow images and attach them to our running guests on the virtual machine host.

Let’s first look at the devices available on our kube-master for instance, so list the block devices…

[centos@kube-master ~]$ lsblk | grep -v docker

You’ll note that there’s just a vda mounted on /.

Let’s run that playbook now and take a peek at it again after.

ansible-playbook -i inventory/virthost.inventory vm-attach-disk.yml

Now go ahead and look on the master again, and list those block devices.

[centos@kube-master ~]$ lsblk | grep -v docker

You should have a vdb that’s 4 gigs. Great – that’s what the playbook does, across all 4 guests. You’ll note it’s not mounted, and it’s not formatted.

Configure those disks, and install & configure Gluster!

Now that our disks are attached, we can go ahead and configure Gluster.

ansible-playbook -i inventory/vms.inventory gluster-install.yml

Here’s what’s been done:

  • Physical volumes and volume groups created on those disks.
  • Disks formatted as XFS.
  • Partitions mounted on /bricks/brick1 and /bricks/brick2.
  • Nodes attached to GlusterFS cluster.
  • Gluster volumes created across the cluster.
  • Some yaml for k8s added @ /home/centos/glusterfs*yaml

Cool, now that it’s setup let’s look at a few things. We’ll head right to the master to check this out.

[root@kube-master centos]# gluster peer status

You should see three peers with a connected state. Additionally, this should be the case on all the minions, too.

[root@kube-minion-2 centos]# gluster peer status
Number of Peers: 3

Hostname: 192.168.122.14
Uuid: 3edd6f2f-0055-4d97-ac81-4861e15f6e49
State: Peer in Cluster (Connected)

Hostname: 192.168.122.189
Uuid: 48e2d30b-8144-4ae8-9e00-90de4462e2bc
State: Peer in Cluster (Connected)

Hostname: 192.168.122.160
Uuid: d20b39ba-8543-427d-8228-eacabd293b68
State: Peer in Cluster (Connected)

Lookin’ good! How about volumes available?

[root@kube-master centos]# gluster volume status

Which will give you a lot of info about the volumes that have been created.

Let’s try using GlusterFS (optional)

So this part is entirely optional. But, do you want to see the filesystem in action? Let’s temporarily mount a volume, and we’ll write some data to it, and see it appear on other hosts.

[root@kube-master centos]# mkdir /mnt/gluster
[root@kube-master centos]# ipaddr=$(ifconfig | grep 192 | awk '{print $2}')
[root@kube-master centos]# mount -t glusterfs $ipaddr:/glustervol1 /mnt/gluster/

Ok, so now we have a gluster volume mounted at /mnt/gluster – let’s go ahead and put a file in there.

[root@kube-master centos]# echo "foo" >> /mnt/gluster/bar.txt

Now we should have a file, bar.txt with the contents “foo” on all the nodes in the /bricks/brick1/brick1 directory. Let’s verify that on a couple nodes.

[root@kube-master centos]# cat /bricks/brick1/brick1/bar.txt 
foo

And on kube-minion-2…

[root@kube-minion-2 centos]# cat /bricks/brick1/brick1/bar.txt
foo

Cool! Nifty right? Now let’s clean up.

[root@kube-master centos]# rm /mnt/gluster/bar.txt
[root@kube-master centos]# umount /mnt/gluster/

Add the persistent volumes to Kubernetes!

Alright, so you’re all good now, it works! (Or, I hope it works for you at this point, from following 1,001 blog articles for how-to documents, I know sometimes it can get frustrating, but… I’m hopeful for you).

With that all set and verified a bit, we can go ahead and configure Kubernetes to get it all looking good for us.

On the master, as the centos user, look in the /home/centos/ dir for these files…

[centos@kube-master ~]$ ls glusterfs-* -lah
-rw-rw-r--. 1 centos centos  781 Apr 19 19:08 glusterfs-endpoints.json
-rw-rw-r--. 1 centos centos  154 Apr 19 19:08 glusterfs-service.json
-rw-rw-r--. 1 centos centos 1.6K Apr 19 19:11 glusterfs-volumes.yaml

Go ahead and inspect them if you’d like. Let’s go ahead and implement them for us.

[centos@kube-master ~]$ kubectl create -f glusterfs-endpoints.json 
endpoints "glusterfs-cluster" created
[centos@kube-master ~]$ kubectl create -f glusterfs-service.json 
service "glusterfs-cluster" created
[centos@kube-master ~]$ kubectl create -f glusterfs-volumes.yaml 
persistentvolume "gluster-volume-1" created
persistentvolume "gluster-volume-2" created
persistentvolume "gluster-volume-3" created
persistentvolume "gluster-volume-4" created
persistentvolume "gluster-volume-5" created

Now we can ask kubectl to show us the persistent volumes pv.

[centos@kube-master ~]$ kubectl get pv
NAME               CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
gluster-volume-1   600Mi      RWO           Delete          Available             storage                  18s
gluster-volume-2   300Mi      RWO           Delete          Available             storage                  18s
gluster-volume-3   300Mi      RWO           Delete          Available             storage                  18s
gluster-volume-4   100Mi      RWO           Delete          Available             storage                  18s
gluster-volume-5   100Mi      RWO           Delete          Available             storage                  18s

Alright! That’s good now, we can go ahead and put these to use.

Let’s create our claims

First, we’re going to need a persistent volume claim. So let’s craft one here, and we’ll get that going. The persistent volume claim is like “staking a claim” of land. We’re going to say “Hey Kubernetes, we need a volume, and it’s going to be this big”. And it’ll allocate it smartly. And it’ll let you know for sure if there isn’t anything that it can claim.

So create a file like so…

[centos@kube-master ~]$ cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: mariadb-data
spec:
  storageClassName: storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
status: {}

And then use kubectl create to apply it.

[centos@kube-master ~]$ kubectl create -f pvc.yaml 
persistentvolumeclaim "mariadb-data" created

And now we can list the “persistent volume claims”, the pvc

[centos@kube-master ~]$ kubectl get pvc
NAME           STATUS    VOLUME             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mariadb-data   Bound     gluster-volume-2   300Mi      RWO           storage        20s

You’ll see that Kubernetes was smart about it, and of the volumes we created, it used juuuust the right one. We had a 600 meg volume, two 300 meg volumes, and a couple of 100 meg volumes – and for our 250Mi claim, it properly picked a 300 meg volume. Awesome!

Now, let’s put those volumes to use in a Maria DB pod.

Great, now we have some storage we can use across the cluster. Let’s go ahead and use it. We’re going to use Maria DB cause it’s a great example of a real-world way that we’d want to persist data – in a database.

So let’s create a YAML spec for this pod. Make yours like so:

[centos@kube-master ~]$ cat mariadb.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
spec:
  containers:
  - env:
      - name: MYSQL_ROOT_PASSWORD
        value: secret
    image: mariadb:10
    name: mariadb
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mariadb-data
  restartPolicy: Always
  volumes:
  - name: mariadb-data
    persistentVolumeClaim:
      claimName: mariadb-data

Cool, now create it…

[centos@kube-master ~]$ kubectl create -f mariadb.yaml 

Then, watch it come up…

[centos@kube-master ~]$ watch -n1 kubectl describe pod mariadb 

Let’s make some persistent data! Err, put beer in the fridge.

Once it comes up, let’s go ahead and create some data in there we can pull back up. (If you didn’t see it in the pod spec, you’ll want to know that the password is “secret” without quotes).

This data needs to be important right? Otherwise, we’d just throw it out. So we’re going to create some data regarding beer.

You’ll note I’m creating a database called kitchen with a table called fridge, and then I’m inserting some of the BEST beers in Vermont (and likely the world – I’m not biased! ;) ). Like Heady Topper from The Alchemist, Lawson’s Sip of Sunshine, and the best beer ever created – Hill Farmstead’s Edward.

[centos@kube-master ~]$ kubectl exec -it mariadb -- /bin/bash
root@mariadb:/# stty cols 150
root@mariadb:/# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.1.22-MariaDB-1~jessie mariadb.org binary distribution

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE kitchen;
Query OK, 1 row affected (0.02 sec)

MariaDB [(none)]> USE kitchen;
Database changed
MariaDB [kitchen]> 
MariaDB [kitchen]> 
MariaDB [kitchen]> CREATE TABLE fridge (id INT AUTO_INCREMENT, item VARCHAR(255), quantity INT, PRIMARY KEY (id));
Query OK, 0 rows affected (0.31 sec)

MariaDB [kitchen]> INSERT INTO fridge VALUES (NULL,'heady topper',6);
Query OK, 1 row affected (0.05 sec)

MariaDB [kitchen]> INSERT INTO fridge VALUES (NULL,'sip of sunshine',6);
Query OK, 1 row affected (0.04 sec)

MariaDB [kitchen]> INSERT INTO fridge VALUES (NULL,'hill farmstead edward',6); 
Query OK, 1 row affected (0.03 sec)

MariaDB [kitchen]> SELECT * FROM fridge;
+----+-----------------------+----------+
| id | item                  | quantity |
+----+-----------------------+----------+
|  1 | heady topper          |        6 |
|  2 | sip of sunshine       |        6 |
|  3 | hill farmstead edward |        6 |
+----+-----------------------+----------+
3 rows in set (0.00 sec)

Destroy the pod!

Cool – well that’s all well and good, we know there’s some beer in our kitchen.fridge table in MariaDB.

But, let’s destroy the pod. First – where is the pod running? Which minion? Let’s check that out. We’re going to restart it until it appears on a different node. (We could create an anti-affinity and all that good stuff, but we’ll just kinda jimmy it here for a quick demo.)

[centos@kube-master ~]$ kubectl describe pod mariadb | grep -P "^Node:"
Node:       kube-minion-2/192.168.122.43

Alright, you’ll see mine is running on kube-minion-2, let’s remove that pod and create it again.

[centos@kube-master ~]$ kubectl delete pod mariadb
pod "mariadb" deleted
[centos@kube-master ~]$ kubectl create -f mariadb.yaml 
[centos@kube-master ~]$ watch -n1 kubectl describe pod mariadb 

Watch it come up again, and if it comes up on the same node – delete it and create it again. I believe it happens round-robin-ish, so… It’ll probably come up somewhere else.
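If you’d rather not play restart roulette, the scheduling nudge I hand-waved at could look roughly like this in the pod spec. This is only a sketch against the (beta, as of 1.6) affinity field – the hostname value is whatever node you’re trying to avoid:

```yaml
# Sketch: node affinity to keep the pod OFF a specific node.
# The hostname below is an example - substitute the node you're avoiding.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - kube-minion-2
```

For a real multi-replica deployment you’d reach for pod anti-affinity instead, so replicas spread themselves without naming nodes.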

Now, once it’s up – let’s go and check out the data in it.

[centos@kube-master ~]$ kubectl exec -it mariadb -- mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.22-MariaDB-1~jessie mariadb.org binary distribution

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> USE kitchen;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [kitchen]> SELECT * FROM fridge;
+----+-----------------------+----------+
| id | item                  | quantity |
+----+-----------------------+----------+
|  1 | heady topper          |        6 |
|  2 | sip of sunshine       |        6 |
|  3 | hill farmstead edward |        6 |
+----+-----------------------+----------+
3 rows in set (0.00 sec)

Hurray! There’s all the beer still in the fridge. Phew!!! Precious, precious beer.

koko - Connect Containers together with virtual ethernet connections

Let’s dig into koko, created by Tomofumi Hayashi. koko (the project’s namesake comes from “COntainer COnnector”) is a utility written in Go that gives us a way to connect containers together with “veth” (virtual ethernet) devices – a feature available in the Linux kernel. This allows us to specify the interfaces that the containers use and link them together – all without using Linux bridges. koko has become a cornerstone of the zebra-pen project, an effort I’m involved in to analyze gaps in containerized NFV workloads – specifically, it routes traffic using Quagga, and we set up all the interfaces using koko. The project really took a turn for the better when Tomo came up with koko and we implemented it in zebra-pen. Ready to see koko in action? Let’s jump in the pool!

Quick update note: This article was written before “koko” was named… “koko”! It was previously named “vethcon”, as it dealt primarily with “veth connections for containers”, and that’s what we focus on in this article. Now, koko does more than just use veth interfaces, and hence it was renamed. It can also do some cool work with vxlan interfaces to do what we’ll do here – but also across hosts! This article still focuses on using veth interfaces. I did a wholesale find-and-replace of “vethcon” with “koko” and everything should “just work” – but, just so you can be forewarned.

We’ll talk about the back-story for what veth interfaces are, and talk a little bit about Linux network namespaces. Then we’ll dig into the koko source itself and briefly step through what it’s doing.

Last but not least – what fun would it be if we didn’t fire up koko and get it working? If you’re less interested in the back story, just scroll down to the “Ready, set, compile!” section. From there you can get your hands on the keyboard and dive into the fun stuff. Our goal will be to compile koko, connect two containers with one another, look at those interfaces and get a ping to come across them.

We’ll just connect a couple containers together, but, using koko you can also connect network namespaces to containers, and network namespaces to network namespaces, too.

Another note before we kick this off – koko’s life has really just begun; it’s useful and functional as it is, but Tomo has bigger and better ideas for it. There’s some potential in the future for creating vxlan interfaces (and given that the rename happened, those are in there at least as a prototype), and getting it working with CNI – but there’s still experimentation to be done there, and I don’t want to spoil it by saying too much. So, as I’ve heard said before: “That’s another story for another bourbon.”

Requirements

If you want to sing along – the way that I’m going through this is using a fresh install of CentOS 7. In my case I’m using the generic cloud image. Chances are this will be very similar with a RHEL or Fedora install. But if you want to play along the same exact way, spin yourself up a fresh CentOS 7 VM.

You’re also going to need a spankin’ fresh version of Docker. So we’ll add the official Docker RPM repos and install a really fresh one.

The back-story

koko leverages “veth” interfaces – as evidenced by its former name, vethcon. veth interfaces aren’t exactly new; veth devices were proposed way back in ‘07. The original authors describe veth as:

Veth stands for Virtual ETHernet. It is a simple tunnel driver that works at the link layer and looks like a pair of ethernet devices interconnected with each other.

veth interfaces come in pairs, and that’s what we’ll do in a bit – we’ll pair them up with two containers. If you’d like to see some diagrams of veth pairs in action, I’ll point you to this article from opencloudblog, which does a nice job illustrating it.

Another concept that’s important to the functioning of koko is “network namespaces”. Network namespaces are one kind of Linux namespace – in short, namespaces give us a view of resources that’s limited to that namespace. Linux namespaces are a fundamental part of how containers function under Linux; they provide the over-arching functionality that’s necessary to segregate processes, users, etc. This isn’t new either – apparently it began in 2002 with mount-type namespaces.

Without network namespaces, all of your Linux interfaces and routing tables are mashed together and available to one another. With network namespaces, you can isolate these from one another so they work independently, giving processes a specific view of these interfaces.

Let’s look at the koko go code.

So, what’s under the hood? In essence, koko uses a few modules and then provides some handling for us to pick out the container namespace and assign veth links to the containers. Its simplicity is its elegance, and quite a good idea.

It’s worth noting that koko may change after I write this article, so if you’re following along with a clone of koko, you might want to know which point in the git history this article refers to – so I’ll point you towards browsing the code at commit-ish 35c4c58 if you’d like.

Let’s first look at the modules, then, I’ll point you through the code just a touch, in case you wanted to get in there and look a little deeper.

The libraries

koko stands on the shoulders of a few libraries for the heavy lifting – notably netlink, which you’ll see called in the application flow below – and other things that are more utilitarian, such as package context, c-style getopts, and internal built-ins like os, fmt, net, etc.

Application flow

Note: some of this naming may have changed a bit with the koko upgrade

At its core, koko defines a data object called vEth, which gives us a structure to store some information about the connections that we’ll make.

It’s a struct, defined like so:

// --------------------------------------------------
// -- vEth data object
// --------------------------------------------------
// -- defines a data object to describe interfaces
// --------------------------------------------------

type vEth struct {
    // What's the network namespace?
    nsName string
    // And what will we call the link.
    linkName string
    // Is there an ip address?
    withIPAddr bool
    // What is that ip address.
    ipAddr net.IPNet
}

In some fairly terse diagramming using asciiflow, the general application flow goes as follows… (It’s high level, I’m missing a step or two, but, it’d help you dig through the code a bit if you were to step through it)

main()
  +
  |
  +------> parseDOption()  (parse -d options from cli)
  |
  +------> parseNOption()  (parse -n options from cli)
  |
  +------> makeVeth(veth1, veth2) with vEth data objects
               +
               |
               +------>  getVethPair(link names)
               |             +
               |             |
               |             +------>  makeVethPair(link)
               |                          +
               |                          |
               |                          +----> netlink.Veth()
               |
               +------>  setVethLink(link) for link 1 & 2

Ready, set, compile!

Ok, first let’s get ready and install the dependencies that we need. Go makes it really easy on us – it handles its own deps, and we basically just need golang, git and Docker.


# Enable the docker ce repo
[centos@koko ~]$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install the deps.
[centos@koko ~]$ sudo yum install -y golang git docker-ce

# Start and enable docker
[centos@koko ~]$ sudo systemctl start docker && sudo systemctl enable docker

# Check that docker is working
[centos@koko ~]$ sudo docker ps

Now let’s set the gopath and git clone the code.

# Set the go path
[centos@koko ~]$ rm -Rf gocode/
[centos@koko ~]$ mkdir -p gocode/src
[centos@koko ~]$ export GOPATH=/home/centos/gocode/

# Clone koko
[centos@koko ~]$ git clone https://github.com/redhat-nfvpe/koko.git /home/centos/gocode/src/koko

Finally, we’ll grab the dependencies and compile koko.

# Fetch the dependencies for koko
[centos@koko ~]$ cd gocode/
[centos@koko gocode]$ go get koko

# Now, let's compile it
[centos@koko gocode]$ go build koko

Now you can go ahead and run the help if you’d like.

[centos@koko gocode]$ ./koko -h

Usage:
./koko -d centos1:link1:192.168.1.1/24 -d centos2:link2:192.168.1.2/24 #with IP addr
./koko -d centos1:link1 -d centos2:link2  #without IP addr
./koko -n /var/run/netns/test1:link1:192.168.1.1/24 <other>  

Make a handy-dandy little Docker image

Let’s make ourselves a handy Docker image that we can use – we’ll base it on CentOS and just add a couple utilities for inspecting what’s going on.

Make a Dockerfile like so:

FROM centos:centos7
RUN yum install -y iproute tcpdump

I just hucked my Dockerfile into tmp and built from there.

[centos@koko gocode]$ cd /tmp/
[centos@koko tmp]$ vi Dockerfile
[centos@koko tmp]$ sudo docker build -t dougbtv/inspect-centos .

Run your containers

Now you can spin up a couple of containers based on that image…

Note that we’re going to run these with --network none as a demonstration.

Let’s do that now…

[centos@koko gocode]$ sudo docker run --network none -dt --name centos1 dougbtv/inspect-centos /bin/bash
[centos@koko gocode]$ sudo docker run --network none -dt --name centos2 dougbtv/inspect-centos /bin/bash

If you exec ip link on either of the containers, you’ll see they only have a local loopback interface.

[centos@koko gocode]$ sudo docker exec -it centos1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

That’s perfect for us for now.

Let’s give koko a run!

Ok, cool, so at this point you should have a couple containers named centos1 & centos2 running, and both of those with --network none so they only have the local loopback as mentioned.

[centos@koko gocode]$ sudo docker ps --format 'table {{.Names}}\t{{.Image}}'
NAMES               IMAGE
centos2    dougbtv/inspect-centos
centos1    dougbtv/inspect-centos

Cool – now let’s get some network between these two containers using koko… What we’re going to do is put the containers on a network; the /24 we’re going to choose is 10.200.0.0/24, and we’ll make network interfaces named net1 and net2.

You pass these into koko with colon-delimited fields, like -d {container-name}:{interface-name}:{ip-address/netmask}. As we mentioned earlier, veths come in pairs – so you pass -d {stuff} twice, once for each container.

Note that the container name can either be the name (as we gave it with --name in our docker run) or the container id [the big fat hash]. Each interface name must be unique – it can’t match another interface on your system, and the two names must differ from each other.

So that means we’re going to execute koko like this. (Psst, make sure you’re in the ~/gocode/ directory we created earlier, unless you moved the koko binary somewhere else that’s handy.)

Drum roll please…

[centos@koko gocode]$ sudo ./koko -d centos1:net1:10.200.0.1/24 -d centos2:net2:10.200.0.2/24
Create veth...done

Alright! Now we should have some interfaces called net1 and net2 in the centos1 & centos2 containers respectively. Let’s take a look by running ip addr in each container. (I took the liberty of grepping for some specifics.)

[centos@koko gocode]$ sudo docker exec -it centos1 ip addr | grep -P "^\d|inet "
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    inet 127.0.0.1/8 scope host lo
28: net1@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    inet 10.200.0.1/24 scope global net1

[centos@koko gocode]$ sudo docker exec -it centos2 ip addr | grep -P "^\d|inet "
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    inet 127.0.0.1/8 scope host lo
27: net2@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    inet 10.200.0.2/24 scope global net2

As you can see, there’s an interface called net1 in the centos1 container, assigned the address 10.200.0.1. Its companion, net2 in the centos2 container, is assigned 10.200.0.2.

With that in place, let’s exec a ping from centos1 to centos2 to prove that it’s in good shape.

Here we go!

[centos@koko gocode]$ sudo docker exec -it centos1 ping -c5 10.200.0.2
PING 10.200.0.2 (10.200.0.2) 56(84) bytes of data.
64 bytes from 10.200.0.2: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 10.200.0.2: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 10.200.0.2: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 10.200.0.2: icmp_seq=4 ttl=64 time=0.054 ms
64 bytes from 10.200.0.2: icmp_seq=5 ttl=64 time=0.052 ms

--- 10.200.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.052/0.058/0.068/0.009 ms

Alright, looking good with a ping. Just to double check, let’s also see that we can see it with a tcpdump on centos2. So, bring up 2 ssh sessions to this host (or, if it’s local to you, two terminals will do, or however you’d like to do this).

We’ll start a tcpdump on centos2, and then exec the same ping command as above from centos1.

And running that, we can see the pings going to-and-fro!

[centos@koko ~]$ sudo docker exec -it centos2 tcpdump -nn -i net2 'icmp'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on net2, link-type EN10MB (Ethernet), capture size 65535 bytes
12:21:39.426020 IP 10.200.0.1 > 10.200.0.2: ICMP echo request, id 43, seq 1, length 64
12:21:39.426050 IP 10.200.0.2 > 10.200.0.1: ICMP echo reply, id 43, seq 1, length 64
12:21:40.425953 IP 10.200.0.1 > 10.200.0.2: ICMP echo request, id 43, seq 2, length 64
12:21:40.425983 IP 10.200.0.2 > 10.200.0.1: ICMP echo reply, id 43, seq 2, length 64
12:21:41.425898 IP 10.200.0.1 > 10.200.0.2: ICMP echo request, id 43, seq 3, length 64
12:21:41.425925 IP 10.200.0.2 > 10.200.0.1: ICMP echo reply, id 43, seq 3, length 64
12:21:42.425922 IP 10.200.0.1 > 10.200.0.2: ICMP echo request, id 43, seq 4, length 64
12:21:42.425949 IP 10.200.0.2 > 10.200.0.1: ICMP echo reply, id 43, seq 4, length 64
12:21:43.425870 IP 10.200.0.1 > 10.200.0.2: ICMP echo request, id 43, seq 5, length 64
12:21:43.425891 IP 10.200.0.2 > 10.200.0.1: ICMP echo reply, id 43, seq 5, length 64

(BTW, hit ctrl+c when you’re done with that tcpdump.)

Cool!!! …Man, sometimes when you’re working on networking goodies, the satisfaction of a successful ping is like no other. Ahhhh, feels so good.

Thank you, Tomo!

A big thanks goes out to Tomo for coming up with this idea, and then implementing it quite nicely in Go. It’s a well-made utility built from an impressive idea. Really cool – I’ve enjoyed getting to utilize it, and I hope it comes in handy for others in the future, too.