Microservice Mesh? Yes, please. Let's sail with Istio.

Sometimes you wind up patching together your pieces in Kubernetes with a bunch of customized glue, and patching holes with a bunch of putty. It works, and it's fine, but… what if we want to try to standardize those bits and pieces? Istio is a microservice mesh that can answer a number of those questions for us. Istio is Greek for "sail", and is pronounced "IST-ee-oh" (thanks to the folks on the Istio Slack). Our goal today is to spin up Istio (using Helm) and then deploy their sample app, "bookinfo". But since we're not in the book industry – we're in the pickling industry – we're then going to deploy a custom app, my "pickle-nginx" application, and say "Hello, Istio!" in a pickle-ish fashion. Ready? …We can pickle that!

For some general info – Istio was just announced on May 24th, in a blog article by the Istio team. Kubernetes.io published this article on Istio as well.

I got a great head start from this up-and-running video on YouTube by Lachlan Evenson. He’s using Kube 1.6.4 and that’s what we have today using my kube-centos-ansible, which tracks beta / unstable. He’s put together these helm charts for Istio which are a boon for us, thanks Lachlan!

Requirements

TL;DR:

  • Kubernetes
  • Helm

You'll need Kubernetes, and feel free to use my labs. I also typically assume a CentOS container / kube host, so – while it might not be required, know that it colors what ancillary tools I may use.

We’re going to install Istio using Helm charts, so if you need a path to install / use helm (and a helm primer) check out my article on using Helm.

Also, the Istio requirements say that we need Kubernetes >= 1.5. And if you’re using my labs, they track unstable by default so as of the date of writing, that should be approximately 1.6.4 available on your system.

My other typically required things are 1. your favorite text editor, and 2. Git.

Installing istioctl

You're going to need to install istioctl – which is what we use to interact with Istio. I've gone ahead and referenced the docs for the steps here.

This uses an install script piped to bash. I'm not huuuuge on these (although they're popular), 'cause they ask you to either do some research into what the script is doing, or… to blindly trust it. Seeing as these are some lab VMs I'm using, I'm going to "sorta trust it" – feel free to be the skeptic you should be and investigate what it's doing. For now, I'm going to "just use it".
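
If you'd rather take a peek before you run it, a minimal alternative (same getIstio URL, just downloaded to a file first) looks something like:

  curl -L https://git.io/getIstio -o getIstio.sh
  less getIstio.sh     # read through what it's about to do
  sh getIstio.sh       # then run it once you're satisfied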

[centos@kube-master ~]$ curl -L https://git.io/getIstio | sh -
[centos@kube-master ~]$ chmod +x /home/centos/istio-0.1.5/bin/istioctl
[centos@kube-master ~]$ sudo cp /home/centos/istio-0.1.5/bin/istioctl /usr/local/bin/

I copy it into /usr/local/bin – feel free to add the istio-0.1.5/bin directory to your PATH instead, as their docs recommend, if that's what you like.
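
For example, something like this (assuming the default extraction path in your home directory) would do it:

  echo 'export PATH="$HOME/istio-0.1.5/bin:$PATH"' >> ~/.bash_profile
  source ~/.bash_profile
  which istioctl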

And you can test it out by running it with a version parameter like so:

[centos@kube-master ~]$ istioctl version

At the time of writing I had version 0.1.5.

NOTE: You might have some trouble using extended functionality of istioctl without a little jury-rigging. This is not required for this tutorial, but, for further use it may be applicable for you. You’ll need to specify the --managerAPIService using the name of the service as in kubectl get svc, it will look approximately like:

[centos@kube-master ~]$ istioctl --managerAPIService=zooming-jaguar-istio-manager:8081 get route-rules -o yaml

Clone the helm charts

Go ahead and clone up the charts, and let’s take a quick peek. Feel free to dig further to see what’s in there.

[centos@kube-master ~]$ git clone --depth 1 https://github.com/kubernetes/charts.git
[centos@kube-master ~]$ cd charts/incubator/istio/
[centos@kube-master istio]$ ls
Chart.yaml  README.md  templates  values.yaml

Now, perform the helm install.

[centos@kube-master istio]$ helm install .

The output will give you a few important bits of information, especially where some of the pieces are running, and the names of the deployments, services, etc.

And of course, watch while it comes up with watch -n1 kubectl get pods, as it's going to pull a number of images down, so… grab yourself a coffee. Unless you have a gigabit WAN connection to your lab – sorry, your punishment for being so awesome is that you DON'T get coffee. Actually I have no say, but, since I'm a person who had gigabit before moving somewhere rural, I'm just bitter and jealous.

Run the sample app

Now that we have it up and running (damn, Helm made that easy), let's open up their docs on the bookinfo sample.

Note that we’ve already got a copy of the samples when we installed istioctl, so let’s move into that directory.

[centos@kube-master ~]$ cd istio-0.1.5/

Fairly easy to kick it up with:

[centos@kube-master istio-0.1.5]$ kubectl apply -f <(istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml)

That's going to be spinning up some pods, so take a look with watch -n1 kubectl get pods. Need another coffee already? Yep, wait until those pods are up.

The istioctl kube-inject command, according to their docs, modifies the bookinfo YAML definition to use Envoy, and is documented here. Envoy is an L7 proxy.
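
If you're curious exactly what the injection changes, kube-inject just writes the modified YAML to stdout, so you can capture it and diff it against the original before applying anything, e.g.:

  istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml > /tmp/bookinfo-injected.yaml
  diff samples/apps/bookinfo/bookinfo.yaml /tmp/bookinfo-injected.yaml | less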

That kubectl apply is going to create a bunch of services, so go and check those out:

[centos@kube-master istio-0.1.5]$ kubectl get svc

Figuring out where your ingress is.

With that in hand, we can also check out the ingress that has been created. From the docs:

An Ingress is a collection of rules that allow inbound connections to reach the cluster services

Mine didn’t come up with an address. Like so:

[centos@kube-master istio-0.1.5]$ kubectl get ingress -o wide
NAME      HOSTS     ADDRESS   PORTS     AGE
gateway   *                   80        21m

According to the Istio bookinfo docs, they say:

If your deployment environment does not support external load balancers (e.g., minikube), the ADDRESS field will be empty. In this case you can use the service NodePort instead

In addition, if we look at kubectl get svc, in my case I see that the EXTERNAL-IP is pending for the *-istio-ingress service. You can describe it if you want, too, with:

[centos@kube-master istio]$ kubectl describe svc $(kubectl get svc | grep istio-ingress | awk '{print $1}')

We’re going to brew our own way to pick up the NodePort since we have cute names generated by helm.

Our nodeport:

[centos@kube-master istio]$ nodeport=$(kubectl get svc $(kubectl get svc | grep istio-ingress | awk '{print $1}') -o 'jsonpath={.spec.ports[0].nodePort}')
[centos@kube-master istio]$ echo $nodeport
30493

And the host IP where the ingress pod landed is:

[centos@kube-master istio]$ ingressip=$(kubectl get po -l istio=$(kubectl get deployment | grep istio-ingress | awk '{print $1}') -o 'jsonpath={.items[0].status.hostIP}')
[centos@kube-master istio]$ echo $ingressip
192.168.122.33

And let’s put that all together as:

[centos@kube-master istio]$ export GATEWAY_URL=$(kubectl get po -l istio=$(kubectl get deployment | grep istio-ingress | awk '{print $1}') -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc $(kubectl get svc | grep istio-ingress | awk '{print $1}') -o 'jsonpath={.spec.ports[0].nodePort}')
[centos@kube-master istio]$ echo $GATEWAY_URL
192.168.122.33:30493

Excellent. A few more steps than if it just had the ingress external IP, which we'll leave to put together another time, but this works with the current lab.

To make it interesting, let’s use that gateway URL from the virtual machine host, and curl from there.

[root@droctagon2 ~]# export GATEWAY_URL=192.168.122.33:30493
[root@droctagon2 ~]# curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
200

Hurray, it comes back with a 200 OK, most excellent! Congrats, you’ve got your first Istio service up and running.

Bring it up in a browser.

So I’m going to create a tunnel from my client workstation to my virthost, so I can get traffic to that IP/port. I did so like:

ssh  root@192.168.1.119 -L 8088:192.168.122.33:30493

Where 192.168.1.119 is my virtual machine host, and 192.168.122.33:30493 is the above GATEWAY_URL. Then point your browser @ http://localhost:8088/productpage

(It'll fail if you go to the root dir, so, yeah, be aware of that one – it had me surfing around for a bit.)
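
For instance, a quick check from the workstation (with the tunnel up) shows the difference; the root path will likely come back as a 404, while /productpage returns a 200:

  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/productpage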

Check out the included visualization tools.

Now, get yourself a few terminals up, one for:

  • master
  • virtual machine host
  • local client machine

There was a section in the helm install output earlier where we saw some info about the Grafana dashboard.

Verifying the Grafana dashboard

  export POD_NAME=$(kubectl get pods --namespace default -l "component=rousing-rat-istio-grafana" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward $POD_NAME 3000:3000
  echo http://127.0.0.1:3000/dashboard/db/istio-dashboard

Now you can start port-forwarding on the master…

[centos@kube-master istio]$ export POD_NAME=$(kubectl get pods --namespace default -l "component=rousing-rat-istio-grafana" -o jsonpath="{.items[0].metadata.name}")
[centos@kube-master istio]$ kubectl port-forward $POD_NAME 3000:3000

That will keep running until you ctrl-c it.
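
If you'd rather not dedicate a terminal to it, you could background it instead (a small convenience, not required):

  kubectl port-forward $POD_NAME 3000:3000 > /dev/null 2>&1 &
  # ...and later, when you're done with it:
  kill %1    # or whichever job number it got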

Start generating some traffic on the virtual machine host to the bookinfo app. We’ll generate traffic in a loop.

[root@droctagon2 ~]# export GATEWAY_URL=192.168.122.33:30493
[root@droctagon2 ~]# while [ true ]; do curl -s -o /dev/null http://${GATEWAY_URL}/productpage; sleep 1; done

Just like before checking the status, but, this time in a loop.

Now, from your client machine, we're going to tunnel through the jump host so we can open up Grafana.

[doug@workstation ~]$ ssh -L 3000:localhost:3000 -t root@192.168.1.119 ssh -t -i .ssh/id_vm_rsa -L 3000:localhost:3000 centos@192.168.122.151

Note that 192.168.1.119 is my virtual machine host, which has keys to access the master @ ~/.ssh/id_vm_rsa, and that 192.168.122.151 is my Kubernetes master.

Now… In your browser you should be able to go to http://localhost:3000 and bring up grafana.

In the upper left, you can check out the "Home" nav item, and there's an "istio dashboard" in there. So bring that up, and… see your requests comin' in!

If you surf through the output from the helm install, there's also a "dotviz" dashboard on port 8088 with some cool visualization. Take the same steps as above but with that pod (the pod name looks like istio-servicegraph). You might want to check that out too.
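
A rough sketch of that port-forward (assuming the servicegraph pod listens on 8088 and serves the graph at /dotviz; adjust for whatever your pod is actually named):

  export SG_POD=$(kubectl get pods | grep istio-servicegraph | head -n1 | awk '{print $1}')
  kubectl port-forward $SG_POD 8088:8088
  echo http://127.0.0.1:8088/dotviz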

Clean up book info.

Now, we’re done with bookinfo for now, let’s clean that bad boy up.

[centos@kube-master istio-0.1.5]$ samples/apps/bookinfo/cleanup.sh

And check the route rules.

[centos@kube-master istio-0.1.5]$ istioctl get route-rules

And check out kubectl get pods to make sure they’re all gone.

That’s great, what about my own service?

So turns out you don’t run bookinfo as a business huh? You’re more of a pickle connoisseur, and you serve pickle images over the web. Big business. Dill, bread & butter, heck pickled watermelon rinds. So, let’s run our own service instead.

There's some information in the istio.io doc on integrating services into the mesh which you can follow, and I have my own example brewed up here.

So let’s create our pickle SaaS resource definitions, a pickle.yaml if you will, based on the ones I used in my Helm article:

[centos@kube-master ~]$ cat pickle.yaml 
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pickle-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: pickle-nginx
    spec:
      containers:
      - name: pickle-nginx
        image: dougbtv/pickle-nginx
        imagePullPolicy: IfNotPresent
        env:
        - name: PICKLE_TYPE
          value: pickle
        ports:
        - containerPort: 80
        # livenessProbe:
        #   httpGet:
        #    path: /
        #    port: 80
        # readinessProbe:
        #   httpGet:
        #     path: /
        #     port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pickle-nginx
  labels:
    service: pickle-nginx
spec:
  ports:
  - port: 9080
    name: "http-9080"
    targetPort: 80
  selector:
    service: pickle-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pickle-nginx
          servicePort: 9080

Now, we can use the same method as we used for deploying bookinfo. Note: I was originally missing an important little piece – the port name needs to be http-9080; I had it named "9080" and that didn't work.

Let’s go ahead and run the same against pickle.yaml.

[centos@kube-master ~]$ kubectl apply -f <(istioctl kube-inject -f pickle.yaml)
deployment "pickle-nginx" created
service "pickle-nginx" created
ingress "gateway" created

And watch -n1 kubectl get pods until we have 2/2 ready on the pickle-nginx-* pod.

Now, we use the same method as above to figure out where the ingress IP:Port is and we’ll curl the index from our virt host.

[root@droctagon2 ~]# export GATEWAY_URL=192.168.122.33:30493
[root@droctagon2 ~]# curl -s $GATEWAY_URL | grep img
    <img src="pickle.png" />

Voila! We’re in the pickle business, now.

Great, that being said, we should in theory be able to see our traffic on Grafana now, too. So you can follow the same steps as above for putting that curl in a while loop, and bringing up Grafana.
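
For example, from the virt host again, it's the same loop as before, just pointed at the root path this time:

  while [ true ]; do curl -s -o /dev/null http://${GATEWAY_URL}/; sleep 1; done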

And since you… well, aren't actually in the pickle industry (and if you are, I hope you make billions on this application, give me a ride on your yacht when you do) – you might want to clean this up.

[centos@kube-master ~]$ kubectl delete -f <(istioctl kube-inject -f pickle.yaml)

So what’s next?

We’re going to… In the next article in the series… Do a canary release using these tools! Cross your fingers and get ready. Coming soon.

VNFs in Kubernetes? Sure thing, here's vnf-asterisk!

Want to run a virtual network function (VNF) on Kubernetes? You're in luck! This article comprises a small "do it yourself workshop" that I've put together for a talk that I'm giving at OPNFV Summit during the CNCF day co-located event. Today, we're going to use vnf-asterisk, which is an open source demo VNF we've created on the NFVPE devops squad to validate various infrastructure deployments and explore other topics such as container networking, scale, HA, and on and on. I've documented it end-to-end as much as possible so participants can go ahead and dissect it to see how I've componentized it, as well as how you might start to scale it. The requirements are thick, but are based on previous labs on this blog. Ready for (virtual) dialtone in Kube? Let's go!

vnf asterisk logo

I've also submitted a talk, along with Leif Madsen, about running this VNF for Astricon 2017, so we'll see if it makes it in there too. Edit: Updated to add, we are having a day-of-learning at Astricon 2017 – we're having 4 workshops. Please come and hang out and give it a try with us in person; we're more than happy to help you out, and there is much hacking to be had!

The main take-away for folks here should be A. some nice exposure to how you might take apart the pieces to containerize them, and how to knit them back together with Kubernetes (and some Kubernetes usage), but also B. something to use as a reference, and to decide what you do and do not like about it. It's OK to not like some of it! In fact, I hope it helps you form your own opinions. While I do have some opinions, I try to keep them loose as these technologies grow and gain maturity. There are already things here that I would change, and certain things that are done as a stopgap.

So enough blabbering, let’s fire up some terminals and get to the good stuff!

Requirements

TL;DR:

  • Kube cluster on CentOS
  • Persistent storage
  • Git (and your favorite editor) on the master
  • Ansible (if you’re using my lab playbooks) on “some convenient machine”
  • Approximately 5 gigs free for docker images

You’re going to need a Kubernetes lab environment that has some persistent storage available. Generally my articles also assume a CentOS environment, and while that may not be entirely applicable here, you should know that’s where I start from and might color some of the ancillary tools that I employ.

Additionally, you need git (and probably your favorite text editor) on the master node of your cluster.

But if that seems overwhelming? Don’t let it be! I’ve got you covered with these two labs that will get you up and running. All you really need is a machine to use as a virtual machine host, and Ansible installed.

Naturally, if you have another avenue to achieve the same, then go for it!

Browsing the components

If you’d like to explore the code behind this (and I highly recommend that you do), there’s generally two repositories you want to look at:

The controller is a full-stack-JavaScript web app that both exposes an API and also talks to Asterisk's ARI (Asterisk RESTful Interface) in order to specify behaviors during call flow, and uses sorcery to dynamically configure Asterisk. This is intended to make for a kind of clay infrastructure so that we can mold it to fit a number of scenarios for which we can use Asterisk. A lot of people hear Asterisk and think "Oh, IP-PBX". Sure, you could use it for that. But, that's not all. It could be an IVR (psst, IVR is NOT just an auto-attendant), maybe you could use it on your session border as a B2BUA to hide topology, maybe you'll make a feature server, maybe you'll front a cluster of all of the above with it, maybe you'll use it as a class-4 switch instead of the assumption of class-5 switching with a PBX. There's a lot you can do with it! Here, what we're trying to achieve is a flexible way to use the components herein.

While you’re surfing around in the vnf-asterisk repository, you might notice that there’s also other notes and possibly Ansible playbooks. There’s also exploration we’ve done here with starting with a legacy, automating that legacy, and then breaking apart the pieces.

If you’re looking for the Dockerfiles for all the pieces, you’re going to want to look in a few places, vnf-asterisk-controller but also in the docker-asterisk repo, and also the homer-docker repo.

Last but not least – this also includes Homer, a VoIP capture, troubleshooting & monitoring tool, which I enjoy contributing to (and using even more!). I designed the PoC method by which Homer is deployed in Kubernetes, and have maintained the Dockerfiles / docker-compose methodology for a few years.

Don't deny Homer here, and take its lesson for your own deployments – implement monitoring and logging like you mean it. Homer has saved my bacon a number of times, seriously.

Basic setup.

Generally speaking, we’ll do this work from the Kubernetes master server. If you have kubectl setup in another place, go ahead and use whatever can access your Kubernetes cluster.

Now that you have a Kubernetes cluster up with persistent volume storage (also, congrats!), you should first check that you can use kube DNS from the master. My lab playbooks don't currently account for this, so that's going to be the first thing we do. It's worth the effort to make the rest of the steps easier without having to poke around too much. Necessary evil, but we're onto the fun stuff in a moment.

DNS

We’ll use nslookup, so let’s make sure that’s around.

[centos@kube-master ~]$ sudo yum install -y bind-utils

And you should see if you can resolve kubernetes.default.svc.cluster.local (the address of the kube api) – If you can great! Skip the rest of the DNS setup. Otherwise, we’ll patch this up in a second.

[centos@kube-master ~]$ nslookup kubernetes.default.svc.cluster.local
Server:   192.168.122.1
Address:  192.168.122.1#53

** server can't find kubernetes.default.svc.cluster.local: NXDOMAIN

[centos@kube-master ~]$ echo $?
1

Great – as expected, it doesn't work. So we're just going to modify our /etc/resolv.conf. First, figure out which address is for kube DNS.

[centos@kube-master ~]$ kubectl get svc --all-namespaces | grep dns | awk '{print $3}'
10.96.0.10

Now, in your favorite editor (hopefully not emacs, heaven forbid) go ahead and alter resolv.conf to use this search domain, and add the above IP as a resolver.

It should now look something like:

[centos@kube-master ~]$ cat /etc/resolv.conf 
; generated by /usr/sbin/dhclient-script
search cluster.local
nameserver 10.96.0.10
nameserver 192.168.122.1

Note: That won’t be sticky through reboots. I’ll leave that as an exercise for my readers (or someone can make a PR on my playbooks!)

But wait! It can be sticky through reboots, due to help from guest star contributor @leifmadsen who writes:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
DNS1=10.96.0.10
DNS2=192.168.122.1

(Naturally, modify to fit the rest of your suite o’ settings.)

And just make sure that it works.

[centos@kube-master ~]$ nslookup kubernetes.default.svc.cluster.local
Server:   10.96.0.10
Address:  10.96.0.10#53

Non-authoritative answer:
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

[centos@kube-master ~]$ echo $?
0

Let’s run vnf-asterisk!

You're not too far away, you've just got to clone the repo. Let's go ahead and run it now. Note that the official repo is here on GitHub; we're not cloning that, as we'll use the branch this tutorial is based on in my fork – so it keeps working as the official repo changes.

[centos@kube-master ~]$ git clone https://github.com/dougbtv/vnf-asterisk.git
[centos@kube-master ~]$ cd vnf-asterisk/
[centos@kube-master vnf-asterisk]$ git checkout containers
[centos@kube-master vnf-asterisk]$ cd k8s/ansible/roles/load-podspec/templates/
[centos@kube-master templates]$ ls
homer-podspec.yml.j2  podspec.yml.j2

You'll see two resource definition files in there. They're .j2 Jinja2 files, but ignore that – there's no templating in them right now. You could also run the Ansible playbooks to template these onto the machine (really handy for development of the vnf-asterisk application), but it's enough steps that it's easier to just clone this.

Before we launch these, we’re going to make sure they can’t be run on our master node. So let’s do that.

[centos@kube-master templates]$ kubectl taint nodes kube-master key=value:NoSchedule
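
Side note: if you ever want to undo that later, removing the taint is the same command with a trailing minus:

  kubectl taint nodes kube-master key:NoSchedule-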

We’re going to go ahead and create everything given those, so run yourself kubectl with those two files.

[centos@kube-master templates]$ kubectl create -f podspec.yml.j2 -f homer-podspec.yml.j2 

Watch everything while it comes up – it's going to pull A LOT OF IMAGE FILES. Around 4 gigs. Yeah, that's less than ideal; some of these just had to be bigger, and maybe I can improve that later. It's a pain when pulling from a public registry (it could make this take quite a while), but from a local registry it's not so terribly bad.

I watch it come up with:

[centos@kube-master templates]$ watch -n1 kubectl get pods -o wide

I use that a lot, so I add an alias – which you can do too, if you want to do something cute – the beholder.

[centos@kube-master templates]$ alias beholder="watch -n1 kubectl get pods -o wide"

Troubleshooting that deploy

If everything is coming up in kubectl get pods with a status of Running – you’re good to go!

Also, generally, double check – is something still pulling an image? It could be, and that takes a long time. So, double check that.

Otherwise, you can do the usual where you do kubectl get pods and for a particular pod that’s not in a running state, do a kubectl describe pod somepod-1550262015-x6v8b.
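
It also doesn't hurt to tail the logs of a misbehaving pod; remember that for the asterisk pod you'll need to pick one of its two containers with -c (swap in your own pod names here):

  kubectl logs somepod-1550262015-x6v8b
  kubectl logs asterisk-2725520970-w5mnj -c asterisk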

This is more outdated, but I used to have some trouble with the PVCs (persistent volume claims) with my old lab instructions for GlusterFS persistent volumes, and I had previously written:

If you’ve been toying around with the persistent volumes from my lab, and you see pods failing you might need to recreate them, I had to do kubectl delete -f ~/glusterfs-volumes.yaml and then kubectl create -f ~/glusterfs-volumes.yaml.

Checking out the running pieces.

So, what is running? Let’s look at my get pods output.

[centos@kube-master templates]$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
asterisk-2725520970-w5mnj   2/2       Running   0          8m
controller                  1/1       Running   0          8m
cron-1550262015-x6v8b       1/1       Running   0          8m
etcd0                       1/1       Running   0          8m
etcd1                       1/1       Running   0          8m
etcd2                       1/1       Running   0          8m
kamailio-2669626650-tg855   1/1       Running   2          8m
mysql-1479608569-4tx26      1/1       Running   0          8m
vnfui                       1/1       Running   0          8m
webapp-3687268953-ml1t4     1/1       Running   0          8m

You’ll see there’s some interesting things:

  • An asterisk instance (one of them)
  • A controller (a REST-ish API that can control asterisk)
  • A vaguely named "webapp" – which is Homer's web UI
  • A “vnfui” which is the web UI for the vnf-asterisk-controller
  • etcd – a distributed key/value used for service discovery
  • cron - Cron jobs for Homer (to later become Kubernetes cron-type jobs)
  • MySQL - used for Homer’s storage
  • kamailio - a SIP proxy, here used by Homer to look at VoIP traffic

If you do kubectl get pods --show-all you’ll also see a bootstrap job which prepopulates the data structures used by Homer.

You'll also note the "asterisk" pod is the lone pod with 2/2 ready – as it has two containers. It has both Asterisk proper and a captagent to capture VoIP traffic, which it sniffs from the shared network interface in the pod's infra-container, which both containers share.
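
You can see those two container names for yourself with something like:

  kubectl get pod $(kubectl get pods | grep asterisk | head -n1 | awk '{print $1}') -o jsonpath='{.spec.containers[*].name}'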

And the services available:

[centos@kube-master templates]$ kubectl get svc
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
bootstrap           None             <none>        55555/TCP           9m
controller          None             <none>        8001/TCP            9m
cron                None             <none>        55555/TCP           9m
etcd-client         10.104.94.90     <none>        2379/TCP            9m
etcd0               10.107.105.188   <none>        2379/TCP,2380/TCP   9m
etcd1               10.111.0.145     <none>        2379/TCP,2380/TCP   9m
etcd2               10.101.9.115     <none>        2379/TCP,2380/TCP   9m
glusterfs-cluster   10.99.161.63     <none>        1/TCP               4d
kamailio            10.111.59.35     <none>        9060/UDP            9m
kubernetes          10.96.0.1        <none>        443/TCP             5d
mysql               None             <none>        3306/TCP            9m
vnfui               None             <none>        80/TCP              9m
webapp              10.102.94.75     <none>        8080/TCP            9m

At the command line we can validate that a few things are running.

First, we have a controller running, it’s an API that can control what our Asterisk machines are doing. Just bring up the /foo endpoint to see that it’s working at all.

[centos@kube-master templates]$ curl controller.default.svc.cluster.local:8001/foo && echo
[{"text":"this and that"},{"text":"the other thing"},{"text":"final"}]

Now, if that’s working well, that’s a good sign.

Here’s running an Asterisk command, we can see we have one instance of Asterisk.

[centos@kube-master templates]$ kubectl exec -it $(kubectl get pods | grep asterisk | tail -n1 | awk '{print $1}') -- asterisk -rx 'core show version'
Asterisk 14.3.0 built by root @ 1b0d6163fdc2 on a x86_64 running Linux on 2017-03-01 20:49:29 UTC

You can also bring up an interactive prompt too if you wish.

[centos@kube-master templates]$ kubectl exec -it $(kubectl get pods | grep asterisk | tail -n1 | awk '{print $1}') -- asterisk -rvvv
[... snip ...]
asterisk-2725520970-w5mnj*CLI> 

Choose your own Adventure: Bridged Network VMs or NAT'ed VMs

So – are your VMs NAT’ed or Bridged? If you’re using kube-centos-ansible, the default these days is to have bridged VMs, but, you can also choose NAT’d.

You’ll know by the IP address of your VMs, heck – if you got this far, there’s a good chance you know already, but, if you have 192.168.122.0/24 addresses, those are likely behind a NAT. If the VMs appear on your LAN IP addresses, then, those are bridged to your LAN.

Bridged VMs: Set up routing and DNS on your client machine

Skip down to the next section if you have NAT’ed VMs.

Alright, what we’re going to do here is expose a few of our services.

Firstly, pick up your IP address. You might need to modify this command if your master's IP is in a different address range – it's the address on the master where we'll expose the services.

[centos@kube-master k8s]$ ipaddr=$(ip a | grep 192 | awk '{print $2}' | perl -pe 's|/.+||')
[centos@kube-master templates]$ echo $ipaddr
192.168.1.6

Now, we can expose that service, so we’ll expose a new service based on what we have. Let’s do this for the controller first.

[centos@kube-master templates]$ kubectl expose svc controller --external-ip $ipaddr --name external-controller

Now, we can see in the list that we've created a new service based on the existing service, and it now has an EXTERNAL-IP.

[centos@kube-master templates]$ kubectl get svc | grep -P "NAME|controller"
NAME                                     CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
controller                               10.103.253.13    <none>        8001/TCP            1m
external-controller                      10.109.185.147   192.168.1.6   8001/TCP            32s

Now, from our client machine, we should be able to curl that service. So let’s try it from our laptops/desktops. If you are able to curl it from a Commodore 64 – I will personalize an animated GIF honoring you. But, for now, my Fedora workstation will have to do!

[doug@yoda vnf-asterisk]$ curl $ipaddr:8001/foo && echo
[{"text":"this and that"},{"text":"the other thing"},{"text":"final"}]

Hurray!

Ok, cool, let’s do that for a few more items.

$ kubectl expose svc webapp --external-ip $ipaddr --target-port=80 --name external-webapp
$ kubectl expose svc vnfui --external-ip $ipaddr --name external-vnfui

Why is the external-webapp svc different than the others? In this case, when basing it on the service, it doesn’t know how to choose a target port if it differs from the exposed port. So, we have to specify that this points to port 80 inside the container.
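
You can double check the mapping on the new service with something like this, which should show the service port alongside a targetPort of 80:

  kubectl get svc external-webapp -o jsonpath='{.spec.ports[0]}'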

Alright – this being the case, you can now bring this up in a browser!

Browse to http://$ipaddr, where $ipaddr is the one from the above output (if that was fitting for your setup).

If you hit the green button that says "Discovered Instances" – you should see a list of one item.

(Now, skip past the NAT'ed section, and head down to "Scale it UP!")

NAT’ed VMs: Bring it up in browser and Create the tunnels for the lab machines in VMs

If your lab is like mine (e.g. you've used my lab playbooks to create VMs on a virt host to run a Kubernetes cluster), the VMs running Kubernetes are walled off inside their own network. So you'll have to create some tunnels in. This is… less than convenient. Given this is a lab, it doesn't have great network facilities for ingress, so it's fairly manual – sorry about that. Personally I'm frustrated with this, so my apologies are sincere. Maybe there's another blog article coming in the future about making the networking scenario a bit more user-friendly for accessing these services from afar.

Ok, first, on the master let's collect the IP addresses that we'll need to forward. This bash command is a mouthful, but it'll give us the IPs we need, and we'll use those on our workstation next.

[centos@kube-master templates]$ podstring="controller vnfui webapp"; \
  for pod in $podstring; do \
    ip=$(kubectl get svc | grep $pod | awk '{print $2}'); \
    echo $pod=$ip; \
  done
controller=10.244.3.17
vnfui=10.244.1.16
webapp=10.244.1.18

Now that you have that, let’s paste those as variables into our workstation.

[doug@workstation ~]$ controller=10.244.3.17
[doug@workstation ~]$ vnfui=10.244.1.16
[doug@workstation ~]$ webapp=10.244.1.18

And dig up the IP addresses for both the virtual machine host and your master, and we'll set those as variables too. Again, if you're using my lab playbooks, those are in your vms.inventory.

Let's set those as variables now, too. In my case, my virt host is 192.168.1.119 and my master is 192.168.122.151:

[doug@workstation ~]$ jumphost=192.168.1.119
[doug@workstation ~]$ masterhost=192.168.122.151

Now you can setup all the jumphost tunneling like so:

[doug@workstation ~]$ ssh -L 8088:localhost:8088 -L 8001:localhost:8001 -L 8080:localhost:8080 -t root@$jumphost ssh -L 8088:$vnfui:80 -L 8001:$controller:8001 -L 8080:$webapp:8080 -t -i .ssh/id_vm_rsa centos@$masterhost

And from your workstation, you should be able to test out the controller:

$ curl localhost:8001/foo && echo

You can access the web UI for the controller @ http://localhost:8088

The web UI for Homer (VoIP analytics tool) is @ http://localhost:8080

Scale it UP!

So what we're about to do now is take this default setup we have and scale up a little bit. Once we scale up, we'll provision SIP trunks between the Asterisk instances, and then we'll make a call over them, and check out the analytics that we have set up.

You’ll note that we’re doing a bunch manually here. This could all theoretically be automated, including the API calls we’ll make to the customized controller I created. But, in the name of educating you about how it all works, we’re going to do this manually for now.

Scale up Asterisk instances

First thing we can do here is check out the deployment that was specified in our yaml resource definitions.

[centos@kube-master templates]$ kubectl get deployment asterisk
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
asterisk   1         1         1            1           8m

This shows us that our deployment requested a single instance, and 1 is up. So let’s scale that up to two instances.

[centos@kube-master ~]$ kubectl scale deployment asterisk --replicas 2
deployment "asterisk" scaled

Now check out our deployment again.

[centos@kube-master ~]$ kubectl describe deployment asterisk | grep -P "^Replicas"
Replicas:       2 desired | 2 updated | 2 total | 2 available | 0 unavailable

And we’ll see that there’s two pods available.

[centos@kube-master ~]$ kubectl get pods | grep -P "(NAME|asterisk)"
NAME                        READY     STATUS    RESTARTS   AGE
asterisk-2725520970-dwb93   1/1       Running   0          59s
asterisk-2725520970-tz31p   1/1       Running   0          1h

That’s good news, we’ve got two instances.

Provision trunks

Now that we have our two instances, we can create trunks between them. We'll work from the master, and we'll use the vnf-asterisk-controller to help us do this. If you're curious about what the vnf-asterisk-controller can do, check that out. There's also an API blueprint on Apiary.io describing all the API functionality if you're interested.

These instances have an entrypoint script which announces their presence to etcd for service discovery, and the controller can discover these endpoints. Once the endpoints are discovered, we can then instruct the controller to create a SIP trunk between the two.
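
If you're curious, you can peek at those registrations in etcd directly. Here's a rough sketch, assuming the etcd v2 keys API and that the entries live under an /asterisk key (check the entrypoint script if your paths differ):

  curl -s http://etcd-client.default.svc.cluster.local:2379/v2/keys/asterisk?recursive=true | python -m json.tool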

So, let’s go ahead and call the controller’s /discover endpoint.

[centos@kube-master ~]$ curl -s controller.default.svc.cluster.local:8001/discover | python -m json.tool
[
    {
        "ip": "10.244.3.20",
        "nickname": "suspicious_shaw",
        "trunks": [],
        "uuid": "f7feaa73-e823-4d47-b4f4-3310aa548bcb"
    },
    {
        "ip": "10.244.1.23",
        "nickname": "lonely_meitner",
        "trunks": [],
        "uuid": "b0e7990a-7009-4b00-9614-e0973da8ee68"
    }
]

You can see that there’s two Asterisk machines discovered by the controller, using etcd.

Additionally – if you bring up the vnfui, you can see these endpoints there in the Web UI.

Here’s what it looks like in the web UI:

vnf asterisk web ui

You’ll note there’s two nickname items there. This is just a shortcut that I built in that allows us to call them something other than the uuid for fun. I used a script (as a service) inspired by the Docker container naming scheme there to do this. These nicknames are random, so, yours will (almost certainly) differ.

But, we're going to use the UUIDs for now. Here's how you can pick up those UUIDs:

[centos@kube-master ~]$ uuida=$(curl -s controller.default.svc.cluster.local:8001/discover | python -m json.tool | grep uuid | awk '{print $2}' | sed -s 's/[^a-z0-9\-]//g' | tail -n1)
[centos@kube-master ~]$ uuidb=$(curl -s controller.default.svc.cluster.local:8001/discover | python -m json.tool | grep uuid | awk '{print $2}' | sed -s 's/[^a-z0-9\-]//g' | head -n1)

[centos@kube-master ~]$ echo $uuida
b0e7990a-7009-4b00-9614-e0973da8ee68
[centos@kube-master ~]$ echo $uuidb
f7feaa73-e823-4d47-b4f4-3310aa548bcb

Now that we have those, we can use the connect API endpoint of the controller.

[centos@kube-master ~]$ curl -s controller.default.svc.cluster.local:8001/connect/$uuida/$uuidb/inbound 

You’ll get some JSON back about the trunks created. But, we can also pick that up from the discover endpoint, it should look like:

[centos@kube-master ~]$ curl -s controller.default.svc.cluster.local:8001/discover | python -m json.tool
[
    {
        "ip": "10.244.3.20",
        "nickname": "suspicious_shaw",
        "trunks": [
            "/asterisk/f7feaa73-e823-4d47-b4f4-3310aa548bcb/trunks/lonely_meitner"
        ],
        "uuid": "f7feaa73-e823-4d47-b4f4-3310aa548bcb"
    },
    {
        "ip": "10.244.1.23",
        "nickname": "lonely_meitner",
        "trunks": [
            "/asterisk/b0e7990a-7009-4b00-9614-e0973da8ee68/trunks/suspicious_shaw"
        ],
        "uuid": "b0e7990a-7009-4b00-9614-e0973da8ee68"
    }
]

You’ll see in the trunks list there’s a path to the trunks, and it will have the nickname of the partner at the other end of the SIP trunk.

Inspecting the results in Asterisk.

So – which is which? This is part of the reason that we have these nicknames. Let’s figure out who’s who. Let’s pull up the pod name for the first instance – we’re going to fish it out of the logs.

[centos@kube-master ~]$ kubectl logs $(kubectl get pods | grep asterisk | head -n1 | awk '{print $1}') -c asterisk | grep "Announcing nick"
Announcing nickname to etcd: lonely_meitner
+ echo 'Announcing nickname to etcd: lonely_meitner'

So we can see the first instance is lonely_meitner. Cool.

Now, with that in hand, let’s also check out the trunks that have been built in Asterisk.

[centos@kube-master ~]$ kubectl exec -it $(kubectl get pods | grep asterisk | head -n1 | awk '{print $1}') -c asterisk -- asterisk -rx 'pjsip show endpoints' 

 Endpoint:  <Endpoint/CID.....................................>  <State.....>  <Channels.>
    I/OAuth:  <AuthId/UserName...........................................................>
        Aor:  <Aor............................................>  <MaxContact>
      Contact:  <Aor/ContactUri..........................> <Hash....> <Status> <RTT(ms)..>
  Transport:  <TransportId........>  <Type>  <cos>  <tos>  <BindAddress..................>
   Identify:  <Identify/Endpoint.........................................................>
        Match:  <ip/cidr.........................>
    Channel:  <ChannelId......................................>  <State.....>  <Time.....>
        Exten: <DialedExten...........>  CLCID: <ConnectedLineCID.......>
==========================================================================================

 Endpoint:  suspicious_shaw                                      Not in use    0 of inf
        Aor:  suspicious_shaw                                    0
      Contact:  suspicious_shaw/sip:anyuser@10.244.3.20:50 05ea73df04 Unknown         nan
  Transport:  transport-udp             udp      0      0  0.0.0.0:5060
   Identify:  suspicious_shaw/suspicious_shaw
        Match: 10.244.3.20/32

Cool, we can see that lonely_meitner is connected to suspicious_shaw. You might also want to check out pjsip show aors.

Make a call

Now, let’s make a call between these boxes. Instead of trying to guess which one comes up first in yours, I’m going to let you copy and paste your own trunk name, and insert it into the command here. So substitute the nickname of the other host here in this command.

In fact, mine were backwards by the time I tried this. So go ahead and bring up the Asterisk command line on one of them, and then do pjsip show aors to see the trunk name.

[centos@kube-master ~]$ kubectl exec -it $(kubectl get pods | grep asterisk | tail -n1 | awk '{print $1}') -- asterisk -rvvv

asterisk-2725520970-tz31p*CLI> pjsip show aors

      Aor:  <Aor..............................................>  <MaxContact>
    Contact:  <Aor/ContactUri............................> <Hash....> <Status> <RTT(ms)..>
==========================================================================================

      Aor:  lonely_meitner                                       0
    Contact:  lonely_meitner/sip:anyuser@10.244.1.23:5060  ee623310fc Unknown         nan

Now go ahead and originate the call, substituting your trunk name for lonely_meitner:

asterisk-2725520970-tz31p*CLI> channel originate PJSIP/333@lonely_meitner application wait 5
    -- Called 333@lonely_meitner

    -- PJSIP/lonely_meitner-00000004 answered

Now, we’ve had a call happen! Let’s go ahead and checkout some call detail records (CDRs).

[centos@kube-master ~]$ kubectl exec -it $(kubectl get pods | grep asterisk | tail -n1 | awk '{print $1}') -c asterisk -- cat /var/log/asterisk/./cdr-csv/Master.csv
"","anonymous","333","inbound","""Anonymous"" <anonymous>","PJSIP/desperate_poitras-00000000","","Hangup","","2017-05-31 19:49:18","2017-05-31 19:49:18","2017-05-31 19:49:18",0,0,"ANSWERED","DOCUMENTATION","1496260158.0",""

And you can see that it logged the call! Hurray.

Check it out in Homer

Now, let's check out what's going on with Homer. Homer is a VoIP analytics and monitoring tool. For one, it can show you what's up with your SIP traffic, and it does some pretty sweet stuff, like drawing ladder diagrams of your calls.

First, let's peek at the database. You'll note here I'm figuring out the name of the MySQL pod, then I'm going to exec a MySQL CLI from that pod. You'll also note everything is insecure about this MySQL instance and how it's called – like, it's using root, the password is "secret", and I pass the password on the command line. "Do as I say, not as I do," as it has been said. Yeah… just don't do any of that stuff; this is a demo after all.

[centos@kube-master ~]$ kubectl get pods | grep mysql | awk '{print $1}'
mysql-1479608569-7bgnw
[centos@kube-master ~]$ kubectl exec -it mysql-1479608569-7bgnw -- mysql -u root -p'secret'

[... snip ...]

mysql> # What day is today? We'll use this to get our table name
mysql> SELECT DATE(NOW());                                      
+-------------+
| DATE(NOW()) |
+-------------+
| 2017-05-31  |
+-------------+
1 row in set (0.00 sec)

mysql> # Now, use that date in the table name and select from it.
mysql> SELECT id,`date`,method,ruri,ruri_user,user_agent FROM homer_data.sip_capture_call_20170531 LIMIT 1\G
*************************** 1. row ***************************
        id: 1
      date: 2017-05-31 19:49:18
    method: INVITE
      ruri: sip:333@10.244.3.24:5060
 ruri_user: 333
user_agent: Asterisk PBX 14.3.0
1 row in set (0.00 sec)

There it is!

And if you are able to, we can also bring that up in the UI.

If you have my lab setup, bring up http://localhost:8080 (or specific IP if you use NAT) and then use username “admin” password “test123”.

Click the "clock icon" in the upper right hand corner, and select, say, "Last 24 hours", then hit the search button (the window pane under the nav towards the left).

Now if you hit the call ID in the results there, it should bring up a "ladder diagram" (which I recall from even the ISDN days! But it's the standard way to visualize SIP call flows).

Here’s what mine looks like:

homer ladder diagram

In review…

Hurray! And there…. You have it.

The bottom line is – a lot of what's here for configuring the service once it's up, especially with regard to interacting with the controller & scaling, is rather manual. That's intentional, to demonstrate how this approach works and let you touch some of the parts.

You could however, automate those portions, and use some of Kubernetes autoscaling features to make this a lot more automatic & dynamic. Something to think about as you try this out, or as you design your own.

Sailing the 7 seas with Kubernetes Helm

Helm is THE Kubernetes Package Manager. Think of it like a way to yum install your applications, but in Kubernetes. You might ask, "Heck, a deployment will give me most of what I need. Why don't I just create a pod spec?" The thing is that Helm will give you a way to template those specs, and give you a workflow for managing those templates by what it calls "charts". Today we'll go ahead and step through the process of installing Helm, installing a pre-built pod given an existing chart, and then we'll make a chart of our own and deploy it – and we'll change some template values so that it runs in a couple different ways. Ready? Anchors aweigh…

The beginning of these instructions is based on the official quickstart guide. We'll then extrapolate from there, as there are a few considerations that we have now.

  1. We need to configure RBAC, which isn’t officially covered yet. The official way they say to do it is to turn off RBAC – nope, not going to do that.
  2. We’re also going to make our own Helm charts, which aren’t covered in the quick start guide, so we’ll expand from there.

Requirements

So, this assumes you've already got a Kubernetes cluster up and running, and usually… these articles assume CentOS 7.3. It might not exactly require CentOS 7.3 this time, but just know that's my reference, and I'm using Kubernetes 1.6.

If you don’t have a Kubernetes cluster up, may I recommend using my kube-centos-ansible playbooks – and I’ve got an article detailing how to use those playbooks.

Optionally – you can create persistent volumes. You can skip this step if you want, but the example charts that we will install require some volume persistence. We'll run it with persistence turned off, but with persistence it's "more realistic", if you will. And if you don't have persistent volumes set up, you might want to try my method for using GlusterFS to back persistent volumes, as detailed in this blog post.

About Helm

Helm is really two parts: a client and a server. The client is helm and the server is tiller – all the boat references! 'Cause even the definitions of your applications are called charts.

So if you’re at the helm of a ship, and you steer (according to your charts), you’d move your tiller. See? All the ships!

These charts are essentially templates for how to deploy your pods. Without helm, you’d just create specs which are yaml files which define how the pod is to be run. But, using helm – we can make charts which make for more flexible specs. That way we can run the same application with differing parameters in the same or a different cluster.

Why not template them with Ansible, then? You could, too. But using Helm gives us a more direct workflow for defining the charts and deploying them. It should free up our playbooks for lower-level infrastructure creation, let our applications be abstracted from that, and let us leverage what Kubernetes has to offer without overly complicating our playbooks for applications – which will likely require more frequent reconfiguration than the underlying pieces. For the record, in my opinion, using Ansible isn't the wrong way. It's just another way.

Download Helm

Let’s pick out a version from the github releases of helm and download the binary onto our Kubernetes master server.

[centos@kube-master ~]$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.4.1-linux-amd64.tar.gz > helm.tar.gz
[centos@kube-master ~]$ tar -xzvf helm.tar.gz 
[centos@kube-master ~]$ chmod +x linux-amd64/helm 
[centos@kube-master ~]$ sudo cp linux-amd64/helm /usr/local/bin

Now, let’s check its version.

[centos@kube-master ~]$ helm version
Client: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}

It will also take a second to complete, and then time out and probably complain that it can't connect to tiller – which is fine for now. That's coming up soon.

Run helm init

[centos@kube-master ~]$ helm init

That should start tiller for us – you'll have to watch for it to come up, so go ahead and run watch -n1 kubectl get pods --all-namespaces

And we’ll have to create an RBAC for it, too. I used this gist as a reference.

[centos@kube-master ~]$ kubectl --namespace kube-system create sa tiller
serviceaccount "tiller" created
[centos@kube-master ~]$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding "tiller" created
[centos@kube-master ~]$ kubectl --namespace kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
deployment "tiller-deploy" patched

Go ahead and watch your pods, cause it’s going to restart the tiller pod, so do something like watch -n1 kubectl get pods --all-namespaces until it comes back.

Let’s run an example app

Let’s go ahead and update our repo.

[centos@kube-master ~]$ helm repo update

And then we can install, say… MongoDB (I'm wearing a MongoDB t-shirt today, so why not that one). If you'd like to install something else, check out the official "stable" repo and see what's available.

[centos@kube-master ~]$ helm install --set persistence.enabled=false stable/mongodb

Note that we’re already doing something that sets Helm apart from “just using a spec file”. Like, if you’re familiar with my other tutorials you may have seen me create pods from specs before, where I’ve created a yaml file, and then I tell kubernetes to create it with something like kubectl create -f mongodb.yaml.

So running that helm install is going to give you some output like this (I clipped out some of the output)…

[centos@kube-master ~]$ helm install --set persistence.enabled=false stable/mongodb
[...snip...]
NOTES:
MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
silly-ladybird-mongodb.default.svc.cluster.local

To connect to your database run the following command:

   kubectl run silly-ladybird-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo --host silly-ladybird-mongodb

So let's go ahead and use mongo for fun. Note, this command is going to take a while because Kube is going to pull a new image for you. (Also, use the release name from your own output as the --host; you'll notice mine changed between runs.)

[centos@kube-master ~]$ kubectl run silly-ladybird-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo --host fallacious-giraffe-mongodb

You might have to hit enter, and it lets you know to do that too.

Let’s do something with it while we’re here.

> use kitchen;
switched to db kitchen
> db.kitchen.insert({"beer": {"heady topper": 4,"sip of sunshine": "awww yeah"}})
> db.kitchen.find().pretty()
{
  "_id" : ObjectId("591cb31956ed4d11bd5b82c0"),
  "beer" : {
    "heady topper" : 4,
    "sip of sunshine" : "awww yeah"
  }
}

Ok, cool, it works!

So what about some visibility of what charts we have deployed? Run helm list to check it out for yourself. This is a list of what are referred to as “releases”.

[centos@kube-master ~]$ helm list
NAME        REVISION  UPDATED                   STATUS    CHART           NAMESPACE
aged-uakari 1         Wed May 17 20:26:18 2017  DEPLOYED  mongodb-0.4.10  default  

Then you can go ahead and remove this sample one.

[centos@kube-master ~]$ helm delete aged-uakari
release "aged-uakari" deleted

Let’s create our own chart

I got a little help for creating a first chart from this blog post. Let’s go ahead and create our own.

We’re going to try to create an nginx instance that serves a photograph of a pickle. Because, that is absurd enough for me.

Scaffolding your chart, and basic commands

The first thing you’ll do is scaffold your chart.

[centos@kube-master ~]$ helm create pickle-chart

That will create a directory ./pickle-chart with the contents you need to create a chart. The contents look about like so:

[centos@kube-master ~]$ find pickle-chart/
pickle-chart/
pickle-chart/Chart.yaml
pickle-chart/templates
pickle-chart/templates/ingress.yaml
pickle-chart/templates/deployment.yaml
pickle-chart/templates/service.yaml
pickle-chart/templates/NOTES.txt
pickle-chart/templates/_helpers.tpl
pickle-chart/charts
pickle-chart/values.yaml
pickle-chart/.helmignore

You can check if the syntax is ok with a helm lint like so:

[centos@kube-master ~]$ helm lint pickle-chart
==> Linting pickle-chart
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures

And you can wrap it all up with helm package, which will make a tarball for you.

[centos@kube-master ~]$ helm package pickle-chart
[centos@kube-master ~]$ ls -lh pickle-chart-0.1.0.tgz 
-rw-rw-r--. 1 centos centos 2.2K May 18 17:54 pickle-chart-0.1.0.tgz

Let’s edit the charts to make them our own

Change your directory into the newly created ./pickle-chart dir. First let’s look at Chart.yaml in this directory – this is a bunch of meta data for our chart. I edited mine to look like:

apiVersion: v1
description: An nginx instance that serves a pickle photo
name: pickle-chart
version: 0.0.1

Now, move into the ./templates/ directory and you're going to see a few things here – YAML files, but they're templates. They're templated using Go templates with Sprig functions.

If you’ve created pod specs before, these won’t seem too too weird, at least in name. Especially deployment.yaml and service.yaml. As you could imagine, these define a deployment, and a service. Feel free to surf around these and explore for yourself to get an idea of what you could customize, or better yet, add to.

Let’s modify the values.yaml – this is where the values of the majority of the parameters for the template come from.

That includes the Docker image that we're going to use, which is dougbtv/pickle-nginx – should you care to build the image yourself, I posted the Dockerfile and context as a gist.

We're going to leave the majority of values.yaml as the default. I changed the image section and also added a pickletype.

IMPORTANT: GitHub Pages didn't like the embedded templates here in the markdown for my blog; it would fail to build them. So you'll have to pick up these two files from this gist. Copy out both the values.yaml and deployment.yaml, and use them here.

Now, modify the ./templates/deployment.yaml. Again, most of it is default, but you'll see that I added an env section. This is used by the image to do something more than "just statically deploy" – we'll get to that in a moment.
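
To give a rough idea of the shape (this is an illustrative sketch only; grab the real values.yaml and deployment.yaml from the gist above), the values file points at the image and adds a pickletype, and the deployment template passes that through to the container as an environment variable:

  # values.yaml (excerpt, illustrative)
  image:
    repository: dougbtv/pickle-nginx
    tag: latest
    pullPolicy: IfNotPresent
  pickletype: pickle

  # templates/deployment.yaml (excerpt, illustrative)
          env:
          - name: PICKLE_TYPE
            value: {{ .Values.pickletype | quote }}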

Cool, that’s all set for now.

Let’s run our brand spankin’ new Helm charts!

Alright, so, now make sure you’re up a directory from the ./pickle-chart directory, and let’s fire it off.

Install the chart like so:

[centos@kube-master ~]$ helm install ./pickle-chart

Now, wait until it’s fully deployed, I do this by watching like this:

[centos@kube-master ~]$ watch -n1 kubectl get pods --show-all

And wait until it’s showing as running.

Now – it creates a service for us, so let’s check out what that service is with kubectl get svc.

Here’s the IP it’s listening on:

[centos@kube-master ~]$ kubectl get svc | grep -i pickle | awk '{print $2}'

We’ll save that as a variable and curl it.

[centos@kube-master ~]$ pickle_ip=$(kubectl get svc | grep -i pickle | awk '{print $2}')
[centos@kube-master ~]$ curl -s $pickle_ip | grep -i img
    <img src="pickle.png" />

Great, now note that the img src is pickle.png. This – we have made configurable, so let’s deploy our chart differently.

First I’ll go and delete the release. So list the charts and delete, a la:

[centos@kube-master ~]$ helm list
NAME                REVISION  UPDATED                   STATUS    CHART               NAMESPACE
interesting-buffalo 1         Thu May 18 19:51:51 2017  DEPLOYED  pickle-chart-0.0.1  default  
[centos@kube-master ~]$ helm delete interesting-buffalo
release "interesting-buffalo" deleted

Now – we’re going to run this differently by changing a default value in our template.

[centos@kube-master ~]$ helm install --set pickletype=pickle-man ./pickle-chart

This sets the pickletype, which will change something in our application.

Now, go ahead and pick up the IP from the service again, and we’ll curl it…

[centos@kube-master ~]$ pickle_ip=$(kubectl get svc | grep -i pickle | awk '{print $2}')
[centos@kube-master ~]$ curl -s $pickle_ip | grep -i img
    <img src="pickle-man.png" />

We can now see that we’re serving a different photo – this time a pickle cartoon that is a “pickle man” as opposed to… Just a pickle.

Oh yeah – and you can deploy from a tarball…

[centos@kube-master ~]$ rm -f pickle-chart-0.0.1.tgz   # clean up any previously packaged tarball
[centos@kube-master ~]$ helm package pickle-chart/
[centos@kube-master ~]$ helm install pickle-chart-0.0.1.tgz 

Or you can install from an absolute URL containing the tarball, too.
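
For example (the URL here is just a placeholder):

[centos@kube-master ~]$ helm install https://example.com/charts/pickle-chart-0.0.1.tgz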

And there you have it – you’ve gone ahead and…

  • Installed Helm
  • Installed a sample application (mongodb)
  • Created your own helm chart
  • Deployed a release
  • Changed the parameters for the templated values to create a new release with different parameters.

Good luck sailing the 7 seas!

Gleaming the Kube - Building Kubernetes from source

Time to bust out your kicktail skateboards Christian Slater fans, we’re about to gleam the kube, or… really, fire up our terminals and build Kubernetes from source. Then we’ll manually install what we built and configure it and get it running. Most of the time, you can just yum install or docker run the things you need, but, sometimes… That’s just not enough when you’re going to need some finer grain control. Today we’re going to look at exactly how to build Kubernetes from source, and then how to deploy a working Kubernetes given the pieces therein. I base this tutorial on using the official build instructions from the Kube docs to start. But, the reality is as much as it’s easy to say the full instructions are git clone and make release – that just won’t cut the mustard. We’ll need to do a couple ollies and pop-shove-its to really get it to go. Ready? Let’s roll…

The goals here today are to:

  • Build binaries from Kubernetes source code
  • Install the binaries on a system
  • Configure the Kubernetes system (for a single node)
  • Run a pod to verify that it’s all working.

Requirements

This walk-through assumes a CentOS 7.3 machine, for one. And I use a VM to isolate this from my usual workstation environment. I wind up in this tutorial spinning up two boxes – one to build Kubernetes, and one to deploy Kubernetes on. You can do it all on one host if you choose.

I often set up my VMs using this great article for spinning up CentOS cloud images, but it relies on the default disk size for the CentOS cloud images – which is 8 gigs, and that isn't enough.

You definitely need a lot of memory, too – the build bombed out for me at 8 gigs, so I bumped up to 16 gigs.

Then – you need a lot more disk than I had. I've been using VMs spun up with virsh to set up this environment, so I'll have to add some storage, which is outlined below. You need AT LEAST 20 gigs free. I'm not sure what it's doing to need that much disk, but, it needs it. I go with 24 gigs below, and outline how to add an extra disk for the /var/lib/docker/ directory.

Spinning up a VM and adding a disk

If you’ve got another machine to use, that’s fine – and you can skip this section. Make sure you have at least 24 gig of free disk space available. You can even potentially use your workstation, if you’d like.

Go ahead and spin up a VM, in my case I’m spinning up CentOS cloud images. So let’s attach a spare disk, and we’ll use it for /var/lib/docker/.

Spin up a VM however you like with virsh and let’s create a spare disk image. I’m going to create one that’s 24 gigs.

Looking at your virtual machine host, here's my running instance, called buildkube:

$ virsh list | grep -Pi "( Id|buildkube|\-\-\-)"
 Id    Name                           State
----------------------------------------------------
 5     buildkube                      running

Create a spare disk:

$ dd if=/dev/zero of=/var/lib/libvirt/images/buildkube-docker.img bs=1M count=24576

Then we can attach it.

$ virsh attach-disk "buildkube" /var/lib/libvirt/images/buildkube-docker.img vdb --cache none
Disk attached successfully

Now, let’s ssh to the guest virtual machine and we’ll format and mount the disk.

From there we’ll use fdisk to make a new partition

$ fdisk /dev/vdb

Choose:

  • n: New partition
  • p: Primary partition
  • Accept defaults for partition number, first and last sector.
  • w: Write the changes (this also exits fdisk).
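
To double-check that the new partition showed up as expected, something like lsblk does the trick:

$ lsblk /dev/vdb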

Now that you’ve got that set, let’s format it and give it a label.

$ mkfs -t ext3 /dev/vdb1
$ e2label /dev/vdb1 /docker

Add it to the fstab, and mount it. Create an entry in /etc/fstab like so:

/dev/vdb1   /var/lib/docker ext3    defaults    0   0

Now, create a directory for the mount point, and mount it.

$ mkdir -p /var/lib/docker
$ mount /var/lib/docker/

I then like to create a file and edit it to make sure things are OK.
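
For instance, a quick sanity check along these lines confirms the mount is there and writable:

$ df -h /var/lib/docker
$ touch /var/lib/docker/testfile && rm /var/lib/docker/testfile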

Alright – you should be good to go.

Readying for install

Alright, first things first – go ahead and do a yum update -y and install any utils you want (your editor, etc) – and give ‘er a reboot.

Let’s install some utilities we’re going to use; git & tmux.

$ sudo yum install -y tmux git

Install docker-ce per the docs

$ sudo yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo docker -v

Now that we’ve got docker ready, we can start onto the builds (they use Docker, fwiw.)

Kick off the build

Ok, get everything ready, clone a specific tag in this instance (with a shallow depth) and move into that dir and fire up tmux (cause it’s gonna take a while)

$ git clone -b v1.6.3 --depth 1 https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ tmux new -s build

Now let’s choose a release style. If you want bring up the Makefile in an editor and surf around, the official docs recommend make release, but, in order to quicken this up (still takes a while, about 20 minutes on my machine), we’ll do a quick-release (which doesn’t cross-compile or run the full test suite)

$ make quick-release

You can now detach from the tmux session while you do other things (or… get yourself a coffee, that's what I'm going to do) with ctrl+b then d. (And you can return to it with tmux a – assuming it's the only session, or tmux a -t build if there are multiple sessions.) Should this fail – read through the output and see what's going on. Mostly, I recommend starting with a tagged release (master isn't always going to build, I've found), that way you know it should in theory be clean enough to build; and then, assuming that's OK, check your disk and memory situation – the requirements there are rather stringent, I've found.

Alright, now… Back with your coffee and it’s finished? Awesome, let’s peek around and see what we created. It creates some tarballs and some binaries, so we can see where they are.

$ find . | grep gz
$ find . | grep -iP "(client|server).bin"

For now we’re just concerned with those binaries, and we’ll move ‘em into the right place.

Let’s deploy it

I spun up another host with standard disk (8gig in my case) and I’m going to put the pieces together there.

Surprise! We need another yum update -y and a docker install. So let’s perform that on this “deploy host”

$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce
$ sudo systemctl enable docker --now
$ sudo yum update -y 
$ sudo reboot

A place for everything, and everything in its place

Now, from your build host, scp some things… Let’s try to make it as close to the rpm install as possible.

$ pwd
/home/centos/kubernetes
$ scp -i ~/.ssh/id_vms ./_output/release-stage/client/linux-amd64/kubernetes/client/bin/kubectl centos@192.168.122.147:~       
$ scp -i ~/.ssh/id_vms ./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kubeadm centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./_output/release-stage/server/linux-amd64/kubernetes/server/bin/kubelet centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./build/debs/kubelet.service centos@192.168.122.147:~
$ scp -i ~/.ssh/id_vms ./build/debs/kubeadm-10.conf centos@192.168.122.147:~

Alright, that’s everything but CNI, basically.

Now back to the deploy host, let's place the binaries and configs in the right places…


# Move binaries
[root@deploykube centos]# mv kubeadm /usr/bin/
[root@deploykube centos]# mv kubectl /usr/bin/
[root@deploykube centos]# mv kubelet /usr/bin/

# Move systemd unit and make directories
[root@deploykube centos]# mv kubelet.service /etc/systemd/system/kubelet.service
[root@deploykube centos]# mkdir -p /etc/kubernetes/manifests
[root@deploykube centos]# mkdir -p /etc/systemd/system/kubelet.service.d/
[root@deploykube centos]# mv kubeadm-10.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Edit kubeadm config, with two lines from below
[root@deploykube centos]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

When you edit that 10-kubeadm.conf, add these two lines above the ExecStart= line (and double-check that the ExecStart= line actually references $KUBELET_AUTHZ_ARGS and $KUBELET_CGROUP_ARGS – if it doesn't, these Environment lines won't take effect):

Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Install CNI binaries

Now, we gotta set up CNI. First, clone and build it (you need git and golang).

[root@deploykube centos]# yum install -y git golang
[root@deploykube centos]# git clone -b v0.5.2 --depth 1 https://github.com/containernetworking/cni.git
[root@deploykube centos]# cd cni/
[root@deploykube cni]# ./build.sh 

With that complete, we can now copy out the binary plugins.

[root@deploykube cni]# mkdir -p /opt/cni/bin
[root@deploykube cni]# cp bin/* /opt/cni/bin/

Ok, looking like… we’re potentially close. Reload units and start & enable kubelet.

[root@deploykube cni]# systemctl daemon-reload
[root@deploykube cni]# systemctl enable kubelet --now

Now, let’s try a kubeadm. We’re going to aim to use flannel.

[root@deploykube cni]# kubeadm init --pod-network-cidr 10.244.0.0/16

Amazingly…. that completed for me on the first go, guess I did my homework!

From that output you’ll also get your join commands should you want to expand the cluster beyond this one node. You don’t need to run this if you’re just running a single node like me in this tutorial. The command will look like:

  kubeadm join --token 49cb93.48ac0d64e3f6ccf6 192.168.122.147:6443

Follow the steps to use the cluster with kubectl – this must be done as a regular non-root user, so we'll use the centos user in this case.

[centos@deploykube ~]$ sudo cp /etc/kubernetes/admin.conf $HOME/
[centos@deploykube ~]$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
[centos@deploykube ~]$ export KUBECONFIG=$HOME/admin.conf
[centos@deploykube ~]$ kubectl get nodes
NAME                       STATUS     AGE       VERSION
deploykube.example.local   NotReady   52s       v1.6.3

And let’s watch that for a while… Get yourself a coffee here for a moment.

[centos@deploykube ~]$ watch -n1 kubectl get nodes

Install pod networking, flannel here.

Tricked ya! Hope you didn’t get coffee already. Wait, that won’t be ready until we have a pod network. So let’s add that.

I put the yamls in a gist, so we can curl ‘em like so:

$ curl https://gist.githubusercontent.com/dougbtv/a6065c316019642ecc1706d6e785a037/raw/16554948e306359090c3d52c3c7b0bcffea2e450/flannel-rbac.yaml > flannel-rbac.yaml
$ curl https://gist.githubusercontent.com/dougbtv/a6065c316019642ecc1706d6e785a037/raw/16554948e306359090c3d52c3c7b0bcffea2e450/flannel.yaml > flannel.yaml

And apply those…

[centos@deploykube ~]$ kubectl apply -f flannel-rbac.yaml 
clusterrole "flannel" created
clusterrolebinding "flannel" created
[centos@deploykube ~]$ kubectl apply -f flannel.yaml 
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

Now…. you should have a ready node.

[centos@deploykube ~]$ kubectl get nodes
NAME                       STATUS    AGE       VERSION
deploykube.example.local   Ready     21m       v1.6.3

Run a pod to verify we can actually use Kubernetes

Alright, so… That’s good, now, can we run a pod? Let’s use my favorite example, 2 nginx pods via a replication controller.

[centos@deploykube ~]$ cat nginx.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
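
Create that (assuming you've saved it as nginx.yaml, per the cat above):

[centos@deploykube ~]$ kubectl create -f nginx.yaml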

And watch until they come up… You should have a couple…

[centos@deploykube ~]$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-5hmxs   1/1       Running   0          41s
nginx-m42cz   1/1       Running   0          41s

And let’s curl something from them…

[centos@deploykube ~]$ kubectl describe pod nginx-5hmxs | grep -P "^IP"
IP:     10.244.0.3
[centos@deploykube ~]$ curl -s 10.244.0.3 | grep -i thank
<p><em>Thank you for using nginx.</em></p>

Word! That’s a running kubernetes from source.


Some further inspection of the built pieces

What follows are some notes of mine from getting all the pieces together to make the build work. If you're interested in some of the details, it might be worthwhile reading; however, these are somewhat raw notes, and I didn't overly groom them before posting.

The kubernetes.tar.gz tarball

Let’s take a look at the kubernetes.tar.gz, that sounds interesting. It kind of is. It looks like a lot of setup goods.

$ cd _output/release-tars/
$ cp kubernetes.tar.gz /tmp/

And I’m in the ./cluster/centos and doing ./build.sh – failed. So I started reading about it, and it points at this old doc for a centos cluster which then says “Oh yeah, that’s deprecated.” and then pointed to the contemporary kubeadm install method (which I’ve been using in kube-centos-ansible).

So, how are we going to install it so we can use it?

Kubernetes isn’t terribly bad to deal with regarding deps, why? Golang. Makes it simple for us.

So let’s see how a rpm install looks after all, we’ll use that as our basis for installing and configuring kube.

Usually in kube-centos-ansible, I install these rpms

- kubelet
- kubeadm
- kubectl
- kubernetes-cni

Here’s what results from these.

[root@kubecni-master centos]# rpm -qa | grep -i kub
kubernetes-cni-0.5.1-0.x86_64
kubectl-1.6.2-0.x86_64
kubeadm-1.6.2-0.x86_64
kubelet-1.6.2-0.x86_64

[root@kubecni-master centos]# rpm -qa --list kubernetes-cni
/opt/cni
/opt/cni/bin
/opt/cni/bin/bridge
/opt/cni/bin/cnitool
/opt/cni/bin/dhcp
/opt/cni/bin/flannel
/opt/cni/bin/host-local
/opt/cni/bin/ipvlan
/opt/cni/bin/loopback
/opt/cni/bin/macvlan
/opt/cni/bin/noop
/opt/cni/bin/ptp
/opt/cni/bin/tuning

[root@kubecni-master centos]# rpm -qa --list kubectl
/usr/bin/kubectl

[root@kubecni-master centos]# rpm -qa --list kubeadm
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/usr/bin/kubeadm

[root@kubecni-master centos]# rpm -qa --list kubelet
/etc/kubernetes/manifests
/etc/systemd/system/kubelet.service
/usr/bin/kubelet

So the binaries, that makes sense, but where do the etc pieces come from?

So it appears that, in the clone, these map as follows:

  • ./build/debs/kubelet.service == /etc/systemd/system/kubelet.service
  • ./build/debs/kubeadm-10.conf ~= /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

The kubeadm-10.conf is… similar, but not an exact match. I modify it to add a cgroup driver, and there are also two additional lines that come from the RPM:

Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Didn’t find it in the git clone with grep -Prin "cgroup-driver"

I also realize in my playbooks I do this…

- name: Add custom kubadm.conf ExecStart
  lineinfile:
    dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    regexp: 'systemd$'
    line: 'ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS --cgroup-driver=systemd'

So I accounted for that as well.

Let's run Homer on Kubernetes!

I have to say that Homer is a favorite of mine. Homer is VoIP analysis & monitoring – on steroids. Not only has it saved my keister a number of times when troubleshooting VoIP platforms, but it has an awesome (and helpful) open source community. In my opinion – it should be an integral part of your devops plan if you’re deploying VoIP apps (really important to have visibility of your… Ops!). Leif and I are using Homer as part of our (still WIP) vnf-asterisk demo VNF (virtualized network function). We want to get it all running in OpenShift & Kubernetes. Our goal for this walk-through is to get Homer up and running on Kubernetes, and generate some traffic using HEPgen.js, and then view it on the Homer Web UI. So – why postpone joy? Let’s use homer-docker to go ahead and get Homer up and running on Kubernetes.

Do you just want to play? You can skip down to the “requirements” section and put some hands on the keyboard.

Some background

First off, I really enjoy working upstream with the sipcapture crew – they're really nice, and have created quite a fine community around the world of software that comprises Homer. They're a friendly bunch, always looking to make Homer better – and I'm a proud regular contributor, as evidenced by the badge showing membership of the sipcapture org on my GitHub profile!

This hasn’t yet landed in the official upstream homer-docker repo yet. It will eventually, maybe even by the time you’re reading this. So, look for a ./k8s directory in the official homer-docker repository. There’s a couple things I need to change in order to get it in there – in part I need to get a build pipeline to get the images into a registry. Because, a registry is required, and frankly it’s easy to “just use dockerhub”. If you’ve got a registry – use your own! You can trust it. Later on in the article, I’ll encourage you to use your own registry or dockerhub images if you please, but, also give you the option of using the images I have already built – so you can just get it to work.

I also need to document it – that's partially why I'm writing this article, cause I can generate some good docs for the repo proper! And there are at least a few rough edges to sand off (secret management, usage of cron jobs).

That being said, currently – I have the pieces for this in a fork of the homer-docker repo, in the ‘k8s’ branch on dougbtv/homer-docker

Eventually – I’d like to make these compatible with OpenShift – which isn’t a long stretch. I’m a fan of running OpenShift; it encourages a lot of good practices, and I think as an organization it can help lower your bus number. It’s also easier to manage and maintain, but… It is a little more strict, so I like to mock-up a deployment in vanilla Kubernetes.

Requirements

The steepest of requirements being that you need Kubernetes running – actually getting Homer up and going afterwards is just a couple commands! Here’s my short list:

  • Kubernetes, 1.6.0 or greater (1.5 might work, I haven’t tested it)
  • Master node has git installed (and maybe your favorite text editor)

That’s a short list, right? Well… Installing Kubernetes isn’t that hard (and I’ve got the ansible playbooks to do it). But, we also need to have some persistent storage to use.

Why’s that? Well… Containers are for the most part ephemeral in nature. They come, and they go, and they don’t leave a lot of much around. We love them for that! They’re very reproducible, and we love that. But – with that we lose our data everytime they die. There’s certain stuff we want to keep around with Homer – especially: All of our database data. So we create persistent storage in order to keep it around. There’s many plugins for persistent volumes you can use with Kubernetes, such as NFS, iSCSI, CephFS, Flocker (and proprietary stuff, too), etc.

I highly recommend you follow my guide for installing Kubernetes with persistent volume storage backed by GlusterFS. If you can follow through my guide successfully – that will get you to exactly the place you need to be to follow the rest of this guide. It also puts you in a place that's feasible for actually running in production. The other option is to use "host path volumes" – the Kubernetes docs have a nice tutorial on how to use them – however, we lose a lot of the great value of Kubernetes when we use host path volumes: they're not portable across hosts, so they're effectively only good for a simple development use-case. If you do follow my guide – I recommend that you stop before you actually create the MariaDB stuff. That way you don't have to clean it up (but you can leave it there and it won't cause any harm, tbh).

My guide on Kubernetes with GlusterFS-backed persistent volumes also builds upon another one of my guides, for installing Kubernetes on CentOS, which may also be helpful.

Both of these guides use a CentOS 7.3 host, which we run Ansible playbooks against, to then run 4 virtual machines which comprise our Kubernetes 1.6.1 (at the time of writing) cluster.

If you’re looking for something smaller (e.g. not a cluster), maybe you want to try minikube.

So, got your Kubernetes up and running?

Alright, if you’ve read this far, let’s make sure we’ve got a working kubernetes, change your namespace, or if you’re like me, I’ll just mock this up in a default namespace.

So, SSH into the master and just go ahead and check some basics….

[centos@kube-master ~]$ kubectl get nodes
[centos@kube-master ~]$ kubectl get pods

Everything looking to your liking? E.g. no errors and no pods running that you don’t want running? Great, you’re ready to rumble.

Clone my fork, using the k8s branch.

Ok, next up, we’re going to clone my fork of homer-docker, so go ahead and get that going…

[centos@kube-master ~]$ git clone -b k8s https://github.com/dougbtv/homer-docker.git
[centos@kube-master ~]$ cd homer-docker/

Alright, now that we’re there, let’s take a small peek around.

First off, in the root directory there's a k8s-docker-compose.yml. It's a Docker Compose file that's really only there for a single purpose – to build images from. The docker-compose.yml file that's also there is for a standard kind of deployment with just docker/docker-compose.

Optional: Build your Docker images

If you want to build your own docker images and push them to a registry (say Dockerhub) – now’s the time to do that. It’s completely optional – if you don’t, you’ll just wind up pulling my images from Dockerhub, they’re in the dougbtv/* namespace. Go ahead and skip ahead to the “persistent volumes” section if you don’t want to bother with building your own.

So, first off you need Docker compose…

[centos@kube-master homer-docker]$ sudo /bin/bash -c 'curl -L https://github.com/docker/compose/releases/download/1.12.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose'
[centos@kube-master homer-docker]$ sudo chmod +x /usr/local/bin/docker-compose
[centos@kube-master homer-docker]$ sudo /usr/local/bin/docker-compose -v
docker-compose version 1.12.0, build b31ff33

And add it to your path if you wish.

Now – go ahead and replace my namespace with your own, replacing YOURNAME with, well, your own name (which is the namespace used in your registry, for example):

[centos@kube-master homer-docker]$ find . -type f -print0 | xargs -0 sed -i 's/dougbtv/YOURNAME/g'

Now you can kick off a build.

[centos@kube-master homer-docker]$ sudo /usr/local/bin/docker-compose -f k8s-docker-compose.yml build

Now you’ll have a bunch of images in your docker images list, and you can docker login and docker push yourname/each-image as you like.

Persistent volumes

Alright, now this is rather important, we’re going to need persistent volumes to store our data in. So let’s get those going.

I’m really hoping you followed my tutorial on using Kubernetes with GlusterFS because you’ll have exactly the volumes we need. If you haven’t – I’m leaving this as an excersize for the reader to create host path volumes, say if you’re using minikube or otherwise. If you do choose that adventure, think about modifying my glusterfs-volumes.yaml file.

During my tutorial where we created volumes, there's a file in the centos user's home @ /home/centos/glusterfs-volumes.yaml – and we ran kubectl create -f /home/centos/glusterfs-volumes.yaml.

Once we've run that, we have volumes available to use; you can check them out with:

[centos@kube-master homer-docker]$ kubectl get pv
NAME               CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
gluster-volume-1   600Mi      RWO           Delete          Available             storage                  3h
gluster-volume-2   300Mi      RWO           Delete          Available             storage                  3h
gluster-volume-3   300Mi      RWO           Delete          Available             storage                  3h
gluster-volume-4   100Mi      RWO           Delete          Available             storage                  3h
gluster-volume-5   100Mi      RWO           Delete          Available             storage                  3h

Noting that in the above command kubectl get pv – the pv means “persistent volumes”. Once you have these volumes in your install – you’re good to proceed to the next steps.

Drum roll please – start a deploy of Homer!

Alright, now… with that in place there are just three steps we need to perform, and we'll look at the results of those after we run them. Those steps are:

  • Make persistent volume claims (to stake a claim to the space in those volumes)
  • Create service endpoints for the Homer services
  • Start the pods to run the Homer containers

Alright, so go ahead and move yourself to the k8s directory in the clone you created earlier.

[centos@kube-master k8s]$ pwd
/home/centos/homer-docker/k8s

Now, there’s 3-4 files here that really matter to us, go ahead and check them out if you so please.

These are the one’s I’m talking about:

[centos@kube-master k8s]$ ls -1 *yaml
deploy.yaml
hepgen.yaml
persistent.yaml
service.yaml

The purpose of each of these files is…

  • persistent.yaml: Defines our persistent volume claims.
  • deploy.yaml: Defines which pods we have, and also configurations for them.
  • service.yaml: Defines the exposed services from each pod.

Then there’s hepgen.yaml – but we’ll get to that later!

Alright – now that you get the gist of the lay of the land, let's run each one.

Changing some configuration options…

Should you need to change any options, they're generally environment variables, and they live in the ConfigMap section of the deploy.yaml. Some of those environment variables are really secrets – handling them properly (e.g. with Kubernetes Secrets) is an improvement that could be made to this deployment.
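
Once the deploy has been created (a couple of steps from now), you can also double-check what configuration actually landed in that ConfigMap – it shows up as env-config – with something like:

[centos@kube-master k8s]$ kubectl describe configmap env-config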

Create Homer Persistent volume claims

Alright, we’re going to need the persistent volume claims, so let’s create those.

[centos@kube-master k8s]$ kubectl create -f persistent.yaml 
persistentvolumeclaim "homer-data-dashboard" created
persistentvolumeclaim "homer-data-mysql" created
persistentvolumeclaim "homer-data-semaphore" created

Now we can check out what was created.

[centos@kube-master k8s]$ kubectl get pvc
NAME                   STATUS    VOLUME             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
homer-data-dashboard   Bound     gluster-volume-4   100Mi      RWO           storage        18s
homer-data-mysql       Bound     gluster-volume-2   300Mi      RWO           storage        18s
homer-data-semaphore   Bound     gluster-volume-5   100Mi      RWO           storage        17s

Great!

Create Homer Services

Ok, now we need to create services – which allow our containers to interact with one another, and us to interact with the services they create.

[centos@kube-master k8s]$ kubectl create -f service.yaml 
service "bootstrap" created
service "cron" created
service "kamailio" created
service "mysql" created
service "webapp" created

Now, let’s look at what’s there.

[centos@kube-master k8s]$ kubectl get svc
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
bootstrap           None             <none>        55555/TCP   6s
cron                None             <none>        55555/TCP   6s
glusterfs-cluster   10.107.123.112   <none>        1/TCP       23h
kamailio            10.105.142.140   <none>        9060/UDP    5s
kubernetes          10.96.0.1        <none>        443/TCP     1d
mysql               None             <none>        3306/TCP    5s
webapp              10.101.132.226   <none>        80/TCP      5s

You’ll notice some familiar faces if you’re used to deploying Homer with the homer-docker docker-compose file – there’s kamailio, mysql, the web app, etc.

Create Homer Pods

Ok, now – we can create the actual containers to get Homer running.

[centos@kube-master k8s]$ kubectl create -f deploy.yaml 
configmap "env-config" created
job "bootstrap" created
deployment "cron" created
deployment "kamailio" created
deployment "mysql" created
deployment "webapp" created

Now, go ahead and watch them come up – this could take a while during the phase where the images are pulled.

So go ahead and watch patiently…

[centos@kube-master k8s]$ watch -n1 kubectl get pods --show-all

Wait until the STATUS for all pods is either completed or running. Should one of the pods fail, you might want to get some more information about it. Let’s say the webapp isn’t running; you could describe the pod, and get logs from the container with:

[centos@kube-master k8s]$ kubectl describe pod webapp-3002220561-l20v4
[centos@kube-master k8s]$ kubectl logs webapp-3002220561-l20v4

Verify the backend – generate some traffic with HEPgen.js

Alright, now that we have all the pods up – we can create a hepgen job.

[centos@kube-master k8s]$ kubectl create -f hepgen.yaml 
job "hepgen" created
configmap "sample-hepgen-config" created

And go and watch that until the STATUS is Completed for the hepgen job.

[centos@kube-master k8s]$ watch -n1 kubectl get pods --show-all

Now that it has run, we can verify that the calls are in the database, let’s look at something simple-ish. Remember, the default password for the database is literally secret.

So go ahead and enter the command line for MySQL…

[centos@kube-master k8s]$ kubectl exec -it $(kubectl get pods | grep mysql | awk '{print $1}') -- mysql -u root -p
Enter password: 

And then you can check out the number of entries in today's sip_capture_call_* table. There should, in theory, be 3 entries here.

mysql> use homer_data;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables like '%0420%';
+-----------------------------------+
| Tables_in_homer_data (%0420%)     |
+-----------------------------------+
| isup_capture_all_20170420         |
| rtcp_capture_all_20170420         |
| sip_capture_call_20170420         |
| sip_capture_registration_20170420 |
| sip_capture_rest_20170420         |
| webrtc_capture_all_20170420       |
+-----------------------------------+
6 rows in set (0.01 sec)

mysql> SELECT COUNT(*) FROM sip_capture_call_20170420;
+----------+
| COUNT(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

It’s all there! That means our backend is generally working. But… That’s the hard dirty work for Homer, we want to get into the good stuff – some visualization of our data, so let’s move on to the front-end.

Expose the front-end

Alright in theory now you can look at kubectl get svc and see the service for the webapp, and visit that URL.

But, following along with my tutorial, if you've run these in VMs on a CentOS host, well… you have a little more to do to expose the front-end. This is also (at least somewhat) similar to how you'd expose an external IP address to access this in a more production-like setup.

So, let’s go ahead and change the service for the webapp to match the external IP address of the master.

Note, you’ll see it has an internal IP address assigned by your CNI networking IPAM.

[centos@kube-master k8s]$ kubectl get svc | grep -Pi "^Name|webapp"
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
webapp              10.101.132.226   <none>        80/TCP      32m

Given that, right from the master (or likely anywhere on Kube nodes) you could just curl 10.101.132.226 and there’s your dashboard, but, man it’s hard to navigate the web app using curl ;)

So let’s figure out the IP address of our host. Mine is in the 192.168.122range and yours will be too if you’re using my VM method here.

[centos@kube-master k8s]$ kubectl delete svc webapp
service "webapp" deleted
[centos@kube-master k8s]$ ipaddr=$(ip a | grep 192 | awk '{print $2}' | perl -pe 's|/.+||')
[centos@kube-master k8s]$ kubectl expose deployment webapp --port=80 --target-port=80 --external-ip $ipaddr
service "webapp" exposed

Now you’ll see we have an external address, so anyone who can access 192.168.122.14:80 can see this.

[centos@kube-master k8s]$ kubectl get svc | grep -Pi "^Name|webapp"
NAME                CLUSTER-IP       EXTERNAL-IP      PORT(S)     AGE
webapp              10.103.140.63    192.168.122.14   80/TCP      53s

However if you’re using my setup, you might have to tunnel traffic from your desktop to the virtual machine host in order to do that. So, I did so with something like:

ssh root@virtual_machine_host -L 8080:192.168.122.14:80

Now – I can type in localhost:8080 in my browser, and… Voila! There is Homer!

Remember to login with username admin and password test123.

Change your date to “today” and hit search, and you’ll see all the information we captured from running HEPgen.

And there… You have it! Now you can wrangle in what’s going on with your Kubernetes VoIP platform :)