Let's spin up k8s 1.5 on CentOS (with CNI pod networking, too!)

Alright, so you’ve seen my blog post about installing Kubernetes by hand on CentOS. Now… let’s make that easier and do it with an Ansible playbook, specifically my kube-centos-ansible playbook. This time we’ll have Kubernetes 1.5 running on a cluster of 3 VMs, and we’ll use Weave as a CNI plugin to handle our pod network. And to make it more fun, we’ll even expose some pods to the ‘outside world’, so we can actually (kinda) do something with them. Ready? Let’s go!

Note: After writing this article, I later figured out how to support either Weave or Flannel. The playbook now reflects that, and uses Flannel as the default. I didn’t overly edit the article to reflect this; it shouldn’t change the instructions herein. I’ll add a note during the steps where you can change it if you’d like.

Why Flannel as default? I prefer it – for no particular reason other than that I’m from Vermont, and we love our flannels here. These stereotypes are basically 99% true, and yep, I have a closet full of flannel.

What’s inside?

Alright, so here are the parts of this playbook; it…

  1. Configures a machine to use as a virtual machine host (and you can skip this part if you want to run on baremetal, or an inventory of machines created otherwise, say on OpenStack)
  2. Installs all the deps necessary on the hosts
  3. Runs kubeadm init to bootstrap the cluster (kubeadm docs)
  4. Installs a CNI plugin for pod networking (for now, it’s weave)
  5. Joins the hosts to a cluster.

What do you need?

Along with the below, you need a client machine from which to run your ansible playbooks. It can be the same host as one of the below if you want, but you’ll need to install ansible & git on that machine, whichever one it may be. Once you’ve got that machine, go ahead and clone this repo.

$ git clone https://github.com/dougbtv/kube-centos-ansible.git
$ cd kube-centos-ansible

In choose-your-own-adventure style, pick one of the two options below.

A. Pick a single host and use it to host your virtual machines. We’ll call this machine either the “virt host” or “virtual machine host” throughout here. This assumes that you have a CentOS 7 machine (that’s generally up to the latest packages). You’ll need an SSH key into this machine as root (or modify the inventory later on if you’re sshing in as another user, who’ll need sudo access). Go to section “A: Virtual machine host and VM spin-up”

B. Create your own inventory. Spin up some CentOS machines, either baremetal or virtual machines, and make note of the IP addresses. Skip on over to section “B: Define the inventory of kubernetes nodes”

A: Virtual machine host and VM spin-up

Ok, let’s first modify the inventory. Get the IP address of your virt-host, and we’ll modify the ./inventory/virthost.inventory and enter in the IP address there (or hostname, should you have some fancy-pants DNS setup).

The line you’re looking to modify is right up at the top and looks like:

kubehost ansible_host= ansible_ssh_user=root
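Filled in, that line might look like the following (the address here is a made-up example – use your own virt host’s IP or hostname):

```
kubehost ansible_host=192.168.1.119 ansible_ssh_user=root
```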

Now we can run this playbook. It should be fairly straightforward: it installs the virtualization deps for KVM/libvirt, then spins up the VMs for you and reports their IP addresses.

You run the playbook like so:

$ ansible-playbook -i inventory/virthost.inventory virt-host-setup.yml 

When it completes you’ll get some output that looks about like this, yours will more-than-likely have different IP addresses, so make sure to note those:

TASK [vm-spinup : Here are the IPs of the VMs] *********************************
ok: [kubehost] => {
    "msg": {
        "kube-master": "", 
        "kube-minion-1": "", 
        "kube-minion-2": ""
    }
}

You can also find them in the /etc/hosts on the virt-host for convenience, like so:

$ cat /etc/hosts | grep -i kube
 kube-master
 kube-minion-2
 kube-minion-1

This playbook also creates an ssh key pair that’s used to access these machines. This key lives in root’s home @ /root/.ssh/. The machines that are spun up are CentOS Generic cloud images and you’ll need to ssh as the centos user.

So you can ssh to the master from this virt host like so:

ssh -i .ssh/id_vm_rsa centos@kube-master

Note that by default the playbook creates 3 nodes. You can get fancy and use more nodes by modifying the list of nodes in ./vars/all.yml should you wish, and modifying the inventory appropriately in the next section.

Continue onto section B below with the IP addresses you’ve seen come up.

B: Define the inventory of kubernetes nodes

Alright, now you’re going to need to modify the ./inventory/vms.inventory file.

First modify the topmost lines – usually three, if you’re doing the default 3 as recommended earlier.

$ head -n3 ./inventory/vms.inventory 
kube-master ansible_host=
kube-minion-1 ansible_host=
kube-minion-2 ansible_host=

Modify these to suit your inventory.

Towards the end of the file there’s some host vars setup; you’ll want to modify these, too. If you used the virt-host method, change the address in ansible_ssh_common_args – unless you’re running ansible from the virt host itself, in which case comment it out. Also scp the /root/.ssh/id_vm_rsa key from the virt host to your client machine and point ansible_ssh_private_key_file at it.

If you brought your own inventory, you’ll typically comment out both of the last two lines: ansible_ssh_common_args and ansible_ssh_private_key_file

$ tail -n6 ./inventory/vms.inventory 
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p root@"'
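For illustration, filled in for the virt-host workflow those vars might look like this (the 192.168.1.119 address and the key path are made-up examples; substitute your own):

```
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p root@192.168.1.119"'
ansible_ssh_private_key_file=/path/to/id_vm_rsa
```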

Now we can install k8s

Alright, now that the ./inventory/vms.inventory file is set up, we can move along to installing k8s! Honestly, the hardest stuff is complete at this point.

Remember, flannel will be the default pod networking at this point. If you’d like, check out ./vars/all.yml – near the top there’s an option to change it to weave if you’d prefer.

Let’s run it!

$ ansible-playbook -i inventory/vms.inventory kube-install.yml

(Be prepared to accept the host keys by typing ‘yes’ when prompted if you haven’t ssh’d to these machines before. And be forewarned not to type “yes” too many times, because you might accidentally run the command yes, which will just fill your terminal with a ton of ‘y’ characters!)

Alright, you’re good to go! SSH to the master and let’s see that everything looks good.

On the master, let’s look at the nodes…

[root@virthost ~]# ssh -i .ssh/id_vm_rsa centos@kube-master
[centos@kube-master ~]$ kubectl get nodes
NAME            STATUS         AGE
kube-master     Ready,master   4m
kube-minion-1   Ready          2m
kube-minion-2   Ready          2m

There’s a number of pods running to support the pod networking, you can check those out with:

# All the pods
[centos@kube-master ~]$ kubectl get pods --all-namespaces
[... lots of pods ...]
# Specifically the kube-system pods
[centos@kube-master ~]$ kubectl get pods --namespace=kube-system

And we wanted k8s 1.5 right? Let’s check that out.

[centos@kube-master ~]$ kubectl version | grep -i server
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Alright, that looks good, so let’s move on and do something interesting with it…

Let’s run some pods!

Ok, we’ll do the same thing as the previous blog article and we’ll run some nginx pods.

Let’s create an nginx_pod.yaml like so:

[centos@kube-master ~]$ cat nginx_pod.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Then we can run it…

[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 

And we can see the two instances come up…

[centos@kube-master ~]$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-34vhj   1/1       Running   0          1m
nginx-tkh4h   1/1       Running   0          1m

And we can get some details, should we want to…

[centos@kube-master ~]$ kubectl describe pod nginx-34vhj

And this is no fun if we can’t put these pods on the network, so let’s expose a pod.

First off, get the IP address of the master.

[centos@kube-master ~]$ master_ip=$(ifconfig | grep 192 | awk '{print $2}')
[centos@kube-master ~]$ echo $master_ip
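That `ifconfig | grep 192` pipeline is a bit of a blunt instrument (it matches any line containing “192”), but here’s what the awk step is pulling out, exercised against a canned ifconfig-style line so you can try it anywhere (the address is a made-up example):

```shell
# Canned ifconfig-style line (the address is a made-up example):
sample='        inet 192.168.122.10  netmask 255.255.255.0  broadcast 192.168.122.255'
# grep keeps the line, awk prints the second whitespace-separated field:
master_ip=$(echo "$sample" | grep 192 | awk '{print $2}')
echo "$master_ip"   # prints 192.168.122.10
```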

And let’s use that as an external address… And expose a service.

[centos@kube-master ~]$ kubectl expose rc nginx --port=8999 --target-port=80 --external-ip $master_ip
service "nginx" exposed

And we can see it in our list of services…

[centos@kube-master ~]$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes                <none>        443/TCP    20m
nginx                                   8999/TCP   4s

And we can describe that service should we want more details…

[centos@kube-master ~]$ kubectl describe service nginx

Now we can access the load-balanced nginx pods from the virt-host (or from your client machine, should you have brought your own inventory):

[root@virthost ~]# curl -s | grep -i thank
<p><em>Thank you for using nginx.</em></p>

Voila! There we go, we have exposed nginx pods running on port 8999, an external IP on the master node, with Weave for the pod network using CNI.

Let's (manually) run k8s on CentOS!

So sometimes it’s handy to have a plain-old-Kubernetes running on CentOS 7. Either for development purposes, or to check out something new. Our goal today is to install Kubernetes by hand on a small cluster of 3 CentOS 7 boxen. We’ll spin up some libvirt VMs running CentOS generic cloud images, get Kubernetes spun up on those, and then we’ll run a test pod to prove it works. Also, this gives you some exposure to some of the components that are running ‘under the hood’.

Let’s follow the official Kubernetes guide for CentOS to get us started.

But, before that, we’ll need some VMs to use as the basis of our three machine cluster.

Let’s spin up a couple VM’s

So, we’re going to assume you have a machine with libvirt to spin up some VMs. In this case I’m going to use a CentOS Cloud Image, and I’m going to spin the machines up using this guide, which makes it easy.

So let’s make sure we have the prerequisites. Firstly, I am using Fedora 25 as my workstation, and I’m going to spin up the machines there.

$ sudo dnf install libvirt-client virt-install genisoimage

I have a directory called /home/vms and I’m going to put everything there (this basic qcow2 cloud image, and my virtual machine disk images), so let’s make sure we download the cloud image there, too.

# In case you need somewhere to store your VM "things"
$ mkdir /home/vms

# Download the image
$ cd /home/vms/
$ wget -O /home/vms/CentOS-7-x86_64-GenericCloud.qcow2.xz https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1612.qcow2.xz

# Extract the downloaded image...
$ xz -d CentOS-7-x86_64-GenericCloud.qcow2.xz

I originally had this in the wrong place, so just make sure the image winds up in the right spot: it should be at /home/vms/CentOS-7-x86_64-GenericCloud.qcow2.

Now let’s download the gist for spinning up a cloud image in libvirt, and we’ll change its mode so we can execute it.

# Download the Gist
$ wget -O spin-up-generic.sh https://gist.githubusercontent.com/giovtorres/0049cec554179d96e0a8329930a6d724/raw/f7520fbbf1e4a54f898cf8cc51e3eaac9167f178/virt-install-centos

# Make it executable
$ chmod 0755 spin-up-generic.sh 

# Change the default image directory to the one we created earlier.
$ sed -i -e 's|~/virt/images|/home/vms|g' spin-up-generic.sh

But, wait! There’s more. Go ahead and make sure you have an SSH public key you can add to the spin-up-generic.sh script. Make sure you cat the appropriate public key.

# Chuck your ssh public key into a variable...
$ sshpub=$(cat ~/.ssh/id_rsa.pub)

# Sed the file and replace the dummy public key with your own
# (You could also edit the file by hand and do a find for "ssh-rsa")
$ sed -i -e "/ssh-rsa/c\  - $sshpub" spin-up-generic.sh
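If you want to see what that sed `c\` (change line) command does before pointing it at the real script, here’s the same edit run against a throwaway file; OLDKEY and NEWKEY are dummy placeholders standing in for real public keys:

```shell
# Make a scratch file with a dummy cloud-init style key line:
printf 'users:\n  - ssh-rsa OLDKEY old@host\n' > /tmp/spin-up-demo.yml
sshpub="ssh-rsa NEWKEY me@laptop"
# The c\ command replaces any line matching /ssh-rsa/ with our text:
sed -i -e "/ssh-rsa/c\  - $sshpub" /tmp/spin-up-demo.yml
grep ssh-rsa /tmp/spin-up-demo.yml   # the line now carries NEWKEY
```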

Now, we can spin up a few VMs, we’re going to spin up a master and 2 minions. You’ll note that you get an IP address from this script for each machine, take note of those cause we’ll need it in the next steps. Depending on your setup for libvirt you might have to use sudo.

[root@yoda vms]# ./spin-up-generic.sh centos-master
Wed, 08 Feb 2017 16:28:21 -0500 DONE. SSH to centos-master using with  username 'centos'.

[root@yoda vms]# ./spin-up-generic.sh centos-minion-1
Wed, 08 Feb 2017 16:28:49 -0500 DONE. SSH to centos-minion-1 using with  username 'centos'.

[root@yoda vms]# ./spin-up-generic.sh centos-minion-2
Wed, 08 Feb 2017 16:29:16 -0500 DONE. SSH to centos-minion-2 using with  username 'centos'.

Alright, now you should be able to SSH to these guys; ssh into the master node to test it out…

$ ssh centos@

Let’s start installing k8s!

Alrighty, so there are things we’re going to want to do across multiple hosts. Since the goal here is to do this manually (e.g. not creating an ansible playbook), we’re going to use a few for loops to do this stuff efficiently. So, set a variable with the final octet from each of the IPs above. (And one for the master & the minions, too; we’ll use those later.)

class_d="21 18 208"

And for a test, just go and run this…

$ for i in $class_d; do ssh centos@192.168.122.$i 'cat /etc/redhat-release'; done

You may have to accept the key finger print for each box.
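Nothing magic in that loop: it just glues each stored octet back onto the 192.168.122.0/24 prefix. You can dry-run the expansion locally, with the ssh call removed, like so (these octets are the example values from above):

```shell
# Expand the final octets into full addresses (ssh removed for a dry run):
class_d="21 18 208"
for i in $class_d; do echo "192.168.122.$i"; done
# prints:
# 192.168.122.21
# 192.168.122.18
# 192.168.122.208
```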

Install Kubernetes RPM requirements

Now we’re creating some repo files for the k8s components.

$ for i in $class_d; do ssh centos@192.168.122.$i 'echo "[virt7-docker-common-release]
" | sudo tee /etc/yum.repos.d/virt7-docker-common-release.repo'; done

Now install etcd, kubernetes & flannel on all the boxen.

$ for i in $class_d; do ssh centos@192.168.122.$i 'sudo yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel'; done

Setup /etc/hosts

Now we need to add the hostnames for each of these three machines to our hosts files, so let’s mock up the lines we want to add. In my case, the lines I’ll add look like: centos-master centos-minion-1 centos-minion-2

So I’ll append using tee in a loop like:

for i in $class_d; do ssh centos@192.168.122.$i 'echo " centos-master centos-minion-1 centos-minion-2" | sudo tee -a /etc/hosts'; done
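Here’s the same tee -a append pattern, pointed at a scratch file instead of /etc/hosts so you can see the behavior without touching anything real (the 192.168.122.x addresses are examples):

```shell
# tee -a appends to the file; > /dev/null hides the echoed-back text
hostsfile=/tmp/demo-hosts
echo "127.0.0.1 localhost" > $hostsfile
printf '192.168.122.21 centos-master\n192.168.122.18 centos-minion-1\n192.168.122.208 centos-minion-2\n' | tee -a $hostsfile > /dev/null
wc -l < $hostsfile   # 4 lines: the original plus our three appends
```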

Setup Kubernetes configuration

Now we’re going to chuck in a /etc/kubernetes/config file, the same across all boxes. So let’s make a local version of it and scp it over. I tried to do it in one command, but there’s too much trickery between looping SSH, heredocs, and whatnot. So, make this file…

cat << EOF > ./kubernetes.config
# Comma separated list of nodes in the etcd cluster

# logging to stderr means we get it in the systemd journal

# journal message level, 0 is debug

# Should this cluster be allowed to run privileged docker containers

# How the replication controller and scheduler find the kube-apiserver
EOF

Now scp it to all the hosts…

for i in $class_d; do scp ./kubernetes.config centos@192.168.122.$i:~/kubernetes.config; done

And finally move it into place.

for i in $class_d; do ssh centos@192.168.122.$i 'sudo mv /home/centos/kubernetes.config /etc/kubernetes/config'; done

Wave goodbye to your security

So the official docs do things that generally… I’d say “Don’t do that.”, but, alas, we’re going with the official docs, and this likely simplifies some things. So, while we’re here we’re going to follow those instructions, and we’re going to setenforce 0 and then disable the firewalls.

for i in $class_d; do ssh centos@192.168.122.$i 'sudo setenforce 0; sudo systemctl disable iptables-services firewalld; sudo systemctl stop iptables-services firewalld; echo'; done

Configure Kube services on the master

Here we setup etcd on the master…

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/etcd/etcd.conf
# [member]
EOF
"'


And the Kubernetes API server…

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/kubernetes/apiserver
# The address on the local server to listen to.

# The port on the local server to listen on.

# Port kubelets listen on

# Address range to use for services

# Add your own!
EOF
"'

And we start etcd and specify some keys, remember from the docs:

Warning: This network must be unused in your network infrastructure! Pick a range that is free in your network.

So go ahead and start etcd and add the keys, assuming that warning is OK…

ssh centos@$master_ip 'sudo systemctl start etcd; sudo etcdctl mkdir /kube-centos/network; sudo etcdctl mk /kube-centos/network/config "{ \"Network\": \"\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"'

If you’d like to check that etcd key, you can do:

ssh centos@$master_ip 'etcdctl get /kube-centos/network/config'
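Since that JSON is easy to typo inside all the escaped quoting, it can be worth sanity-checking it locally before writing it into etcd. A quick sketch (the 10.20.0.0/16 range is an example – pick one that’s actually unused in your network):

```shell
# Validate the flannel network config JSON before pushing it to etcd:
config='{ "Network": "10.20.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
echo "$config" | python3 -m json.tool > /dev/null && echo "valid JSON"
```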

Now, configure flannel… (later we’ll do this on the nodes as well)

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment

# Any additional options that you want to pass
EOF
"'

And then restart and enable the services we need…

ssh centos@$master_ip 'sudo /bin/bash -c "
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart \$SERVICES
    systemctl enable \$SERVICES
    systemctl status \$SERVICES
done
"'

Mildly verifying the services on the master

There’s a lot going on above, right? In fact, I made a few mistakes while performing the above actions – I had a typo. So, let’s make sure the services are active.

ssh centos@$master_ip 'sudo /bin/bash -c "
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl status \$SERVICES | grep -P \"(\.service \-|Active)\"
done
"'

Make sure each entry shows an “Active” state of “active”. If for some reason one isn’t, go check the journald logs for it on the master with:

journalctl -f -u kube-apiserver

(Naturally replacing the service name with the one in trouble from above.)
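In case it isn’t obvious what that grep -P filter keeps, here it is run against canned systemctl-style output (the sample text is made up for illustration):

```shell
# The filter keeps the ".service -" title line and the "Active:" line:
sample='etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled)
   Active: active (running)'
echo "$sample" | grep -P "(\.service \-|Active)"
```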

Configure the minion nodes

Ok, first thing, we’re going to manually set each of the hostnames for the minions. Our VM spin-up script names them “your_name.example.local”, which is not quite good enough. So let’s manually set each of those.

ssh centos@ 'sudo hostnamectl set-hostname centos-minion-1'
ssh centos@ 'sudo hostnamectl set-hostname centos-minion-2'

Now just double check those

for i in $minion_ips; do ssh centos@$i 'hostname'; done

Ok cool, that means we can simplify a few steps following.

Now we can go ahead and configure the kubelet.

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
cat << EOF > /etc/kubernetes/kubelet
# The address for the info server to serve on

# The port for the info server to serve on

# You may leave this blank to use the actual hostname
# Check the node number!
# KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

# Location of the api-server

# Add your own!
EOF
"'; done

Now, setup flannel…

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
cat << EOF > /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment

# Any additional options that you want to pass
EOF
"'; done

And get the services running….

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart \$SERVICES
    systemctl enable \$SERVICES
    systemctl status \$SERVICES
done
"'; done

And we’ll double check those

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl status \$SERVICES | grep -P \"(\.service \-|Active)\"
done
"'; done


Drum roll please…. Let’s see if it’s all running!

So OK, one more step… Let’s set some defaults in kubectl; we’ll do this from the master. I’m going to ssh directly to that machine and work from there…

$ ssh centos@

And then we’ll perform:

kubectl config set-cluster default-cluster --server=http://centos-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
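For reference, here’s a rough sketch (assuming defaults; kubectl generates this itself) of what those three commands leave behind in ~/.kube/config:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://centos-master:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
```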

Here’s… the moment of truth. Let’s see if we can see all the nodes…

[centos@centos-master ~]$ kubectl get nodes
NAME              STATUS    AGE
centos-minion-1   Ready     2m
centos-minion-2   Ready     2m

Yours should look about like the above!

So, you wanna run a pod?

Well this isn’t much fun without having a pod running, so let’s at least get something running.

Create an nginx pod

Let’s create an nginx pod… Create a pod spec anywhere you want on the master, here’s what mine looks like

[centos@centos-master ~]$ cat nginx_pod.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Now you can create it, given that yaml file.

[centos@centos-master ~]$ kubectl create -f nginx_pod.yaml 

And you can see them being created when you get pods…

[centos@centos-master ~]$ kubectl get pods
NAME          READY     STATUS              RESTARTS   AGE
nginx-8rajt   0/1       ContainerCreating   0          10s
nginx-w2yja   0/1       ContainerCreating   0          10s

And you can get details about the pod with:

[centos@centos-master ~]$ kubectl describe pod nginx-8rajt
Name:       nginx-8rajt
Namespace:  default
Node:       centos-minion-2/
Start Time: Thu, 09 Feb 2017 19:39:14 +0000
Labels:     app=nginx
Status:     Pending

In this case you can see this is running on centos-minion-2. And there’s two instances of this pod! We specified replicas: 2 in our pod spec, and that’s the job of the replication controller – making sure that many instances are running; in this case, it’s going to make sure 2 are running across our hosts.

Create a service to expose nginx.

Now that’s all well and good, but… what if we want to, y’know, serve something? (Omitting, uhhh, content!) We can do that by exposing the pods with a service.

So let’s go and expose it… Let’s create a service spec. Here’s what mine looks like:

[centos@centos-master ~]$ cat nginx_service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
    # The port that this service should run on.
    - port: 9090
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: LoadBalancer

And then we create that…

[centos@centos-master ~]$ kubectl create -f nginx_service.yaml
service "nginxservice" created

And we can see what’s running by getting the services and describing the service.

[centos@centos-master ~]$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes                  <none>        443/TCP    1h
nginxservice                <pending>     9090/TCP   58s

[centos@centos-master ~]$ kubectl describe service nginxservice
Name:           nginxservice
Namespace:      default
Labels:         name=nginxservice
Selector:       app=nginx
Type:           LoadBalancer
Port:           <unset> 9090/TCP
NodePort:       <unset> 32702/TCP
Session Affinity:   None
No events.

Oh so you want to actually curl it? Next time :) Leaving you with a teaser for the following installments. Maybe next time we’ll do this all with Ansible instead of these tedious ssh commands.

Bootstrap a kpm registry to run a kpm registry

Yo dawg… I heard you like kpm-registries. So I bootstrapped a kpm-registry so you can deploy a kpm-registry from a kpm-registry.

So, I was deploying my kpm registry using a public, beta kpm registry – right about the time I was about to give a demo of spinning up stackanetes, for which I need a kpm registry. But the beta kpm registry (beta.kpm.sh) was down. Argh, fiddlesticks! So I went through and deployed a kpm registry myself, so I could push a kpm-registry package to run it. In the meanwhile, I also opened a kpm issue.

Why the extra steps here? Like… if you can run a kpm registry without a kpm registry, why would you do it? The thing is, then I’m managing it myself (between a single docker container and a gunicorn web app), instead of having Kubernetes (k8s) manage it for me. And I want k8s to do the work. So I just bootstrap it, and then I can deploy it as k8s pods.

This already assumes that you have kpm (the client) installed. If you don’t, go ahead and use my ansible galaxy role to do so, which will give you a clone of the kpm client in /usr/src/kpm/

Also make sure you have gunicorn (the “green unicorn”, a Python web server gateway interface) installed.

$ sudo yum install -y python-gunicorn

It requires etcd to be present, so spin up etcd first.

$ docker run --name tempetcd -dt -p 2379:2379 -p 2380:2380 quay.io/coreos/etcd:v3.0.6 /usr/local/bin/etcd -listen-client-urls, -advertise-client-urls http://$,

Now you can run the registry API server with gunicorn, a la:

$ pwd
$ gunicorn kpm.api.wsgi:app -b :5555

And then you can push the kpm-registry packages, but only after you set the proper tag in the manifest, because there isn’t a pushed image for this particular tag.

$ pwd
$ sed -i 's/v0.21.2/v0.21.1/' manifest.jsonnet 
$ kpm push -H http://localhost:5555 -f
package: coreos/kpm-registry (0.21.2-4) pushed

Can we deploy the kpm-registry now? Not quite… We also have to push the coreos/etcd package to our bootstrapping registry. I found the manifest for it in the kubespray/kpm-packages repo.

$ cd /usr/src/
$ git clone https://github.com/kubespray/kpm-packages.git
$ cd kpm-packages/
$ cd coreos/etcdv3
$ pwd
$ kpm push -H http://localhost:5555 -f
$ kpm list -H http://localhost:5555
app                  version    downloads
-------------------  ---------  -----------
coreos/etcd          3.0.6-1    -
coreos/kpm-registry  0.21.2-4   -

Now you should be able to deploy a kpm registry from the bootstrapping registry via:

$ kpm deploy coreos/kpm-registry --namespace kpm -H http://localhost:5555
create coreos/kpm-registry 

 01 - coreos/etcd:
 --> kpm (namespace): created
 --> etcd-kpm-1 (deployment): created
 --> etcd-kpm-2 (deployment): created
 --> etcd-kpm-3 (deployment): created
 --> etcd-kpm-1 (service): created
 --> etcd-kpm-2 (service): created
 --> etcd-kpm-3 (service): created
 --> etcd (service): created

 02 - coreos/kpm-registry:
 --> kpm (namespace): ok
 --> kpm-registry (deployment): created
 --> kpm-registry (service): created

Voila! Now you can tear down the bootstrapping registry if you’d like, e.g. stop the docker container and the API server as run by gunicorn.

Running Stackanetes on Openshift

Stackanetes is an open-source project that aims to run OpenStack on top of Kubernetes. Today we’re going to use a project of mine, openshift-stackanetes, which uses ansible plays to set up Stackanetes on OpenShift. We’ll use an all-in-one server approach to setting up OpenShift in this article to simplify that aspect; later articles will provide playbooks to launch Stackanetes on a cluster, with a focus on HA requirements.

If you’re itching to get into the walk-through, head yourself down to the requirements section and you can get hopping. Otherwise, we’ll start out with an intro and overview of what’s involved to get the components together in order to make all that good stuff down in that section work in concert.

During this year’s OpenStack summit, and announced on October 26th 2016, Stackanetes was demonstrated as a technical preview. Up until this point, I don’t believe it has been documented as being run on OpenShift. I wouldn’t be able to document this myself if it weren’t for the rather gracious assistance of the crew from the CoreOS and Stackanetes projects, who helped me through this issue on GitHub. Big thanks go to ss7pro, PAStheLod, ant31, and Quentin-M. Really appreciated the help crew, big time!

On terminology – while the Tech Crunch article considers the name Stackanetes unfortunate, I disagree – I like the name. It kind of rolls off the tongue. Also, if you say it fast enough, someone might say “Gesundheit!” after you say it. Also, theoretically using the construct of i18n (internationalization) or, better yet, k8s (Kubernetes), you could also say this is s9s (stackanetes), which I’d use in my commit messages and whatnot because… it’s a bit of typing! You might see s9s here and again in this article, too. Also, you might hear me say “OpenShift” a huge number of times – I really mean “OpenShift Origin” whenever I say it.

Scope of this walk-through

First things first – openshift-stackanetes is the project we’ll focus on to spin up Stackanetes on OpenShift; it’s a series of Ansible roles that help us get Stackanetes onto OpenShift.

Primarily we’ll focus on using an all-in-one OpenShift instance, that is one that uses the oc cluster up command to run OpenShift all on a single host, as outlined in the local cluster management documentation. My “openshift on openstack in easy mode” goes into some of those details as well. However, the playbooks will take care of this setup for you in this case.

Things we do cover:

  • Getting OpenShift up (all-in-one style, or what I like to call “easy mode”)
  • Spinning up a KPM registry
  • Setting up proper permissions for Stackanetes to run under OpenShift
  • Getting Stackanetes running in OpenShift

Things we don’t cover:

  • High availability (hopefully we’ll look at this in a further article)
  • For now, tenant / external networking – we’ll just run OpenStack cloud instances in their own isolated network. (This is kind of a project on its own)
  • In depth usage of OpenStack – we’ll just do enough to get some cloud instances up
  • Spinning up Ceph
  • A sane way of exposing DNS externally (we’ll just use a hosts file for our client machines outside of the s9s box)
  • Details of how to navigate OpenShift, surf this blog for some basics if you need them.
  • Changing out the container runtime (e.g. using rkt, we just use Docker this time around)
  • Ansible installation and basic usage, we will however give you all the ansible commands to run this playbook.

Considerations of using Stackanetes on OpenShift

Some of the primary considerations I had to overcome for using Stackanetes on OpenShift is managing the SCCs (security context constraints).

I’m not positive that the SCCs I have defined herein are ideal – in some ways, I can point out that they are insufficient. However, my initial focus has been to get Stackanetes to run properly.

Components of openshift-stackanetes

So, first off there’s a lot of components of Stackanetes, especially the veritable cornucopia of pieces that comprise OpenStack. If you’re interested in those, you might want to check out the Wikipedia article on OpenStack which has a fairly comprehensive list.

One very interesting part of what stackanetes is that it leverages KPM registry.

KPM is described as “a tool to deploy and manage application stacks on Kubernetes”. I like to think of it as “k8s package manager”, and while never exactly branded that way, that makes sense to me. In my own words – it’s a way to take the definition YAML files you’d use to build k8s resources and parameterize them, and then store them in a registry so that you can access them later. In a word: brilliant.

Something I did in the process of creating openshift-stackanetes was to create an Ansible Galaxy role for KPM on CentOS, to get a contemporary revision of the kpm client running; it’s included in the openshift-stackanetes ansible project as a requirement.

Another really great component of s9s is that they’ve gone ahead and implemented Traefik – which is a fairly amazing “modern reverse proxy” (Traefik’s words). This doles out the HTTP requests to the proper services.

Let’s give a quick sweeping overview of the roles as ran by the openshift-stackanetes playbooks:

  • docker-install installs the latest Docker from the official Docker RPM repos for CentOS.
  • dougbtv.kpm-install installs the KPM client to the OpenShift host machine.
  • openshift-install preps the machine with the deps to get OpenShift up and running.
  • openshift-up generally runs the oc cluster up command.
  • kpm-registry creates a namespace for the KPM registry and spins up the pods for it.
  • openshift-aio-dns-hack is my “all-in-one” OpenShift DNS hack.
  • stackanetes-configure preps the pieces to go into the kpm registry for stackanetes and spins up the pods in their own namespace.
  • stackanetes-routing creates routes in OpenShift for the stackanetes services that we need to expose.


Requirements

  • A machine with CentOS 7.3 installed
  • 50 gig HDD minimum (64+ gigs recommended)
  • 12 gigs of RAM minimum
  • 4 cores recommended
  • Networking pre-configured to your liking
  • SSH keys to root on this machine from a client machine
  • A client machine with git and ansible installed.

You can use a virtual machine or bare metal; it’s your choice. I do highly recommend doubling all those above requirements, though, and using a bare metal machine, as your experience will be improved.

If you use a virtual machine, you’ll need to make sure that you have nested virtualization passthrough. I was able to make this work, and while I won’t go into super detail here, the gist of what I did was to check if there were virtual machine extensions on the host, and also the guest. You’ll note I was using an AMD machine.

# To check if you have virtual machine extensions (on host and guest)
$ cat /proc/cpuinfo | grep -Pi "(vmx|svm)"

# Then check that you have nesting enabled
$ cat /sys/module/kvm_amd/parameters/nested

And then I needed to use the host-passthrough CPU mode to get it to work.

$ virsh dumpxml stackanetes | grep -i pass
  <cpu mode='host-passthrough'/>

All that said, I still recommend the bare metal machine, and my notes were double checked against bare metal… I think your experience will be improved, but I realize that isn’t always a convenient option.

Let’s run some playbooks!

So, we’re assuming that you’ve got your CentOS 7.3 machine up, you know its IP address and you have SSH keys to the root user. (Don’t like the root user? I don’t really, feel free to contribute updates to the playbooks to properly use become!)

git clone and basic ansible setup

First things first, make sure you have ansible installed on your client machine, and then we’ll clone the repo.

$ git clone https://github.com/dougbtv/openshift-stackanetes.git
$ cd openshift-stackanetes

Now that we have it cloned, let’s go ahead and modify the inventory file in the root directory. In theory, all you should need to do is change the IP address there to the CentOS OpenShift host machine.

It looks about like this:

$ cat inventory && echo
stackanetes ansible_ssh_host= ansible_ssh_user=root


Ansible variable setup

Now that you’ve got that good to go, you can modify some of our local variables, check out the vars/main.yml file to see the variables you can change.

There are two important variables you may need to change:

  • facter_ipaddress
  • minion_interface_name

First, the facter_ipaddress variable. This is important because its value determines how we’re going to find your IP address. By default it’s set to ipaddress. If you’re unsure what to put here, go ahead and install facter and check which fact returns the IP address you’d like to use for external access to the machine.

[root@labstackanetes ~]# yum install -y epel-release
[root@labstackanetes ~]# yum install -y facter
[root@labstackanetes ~]# facter | grep -i ipaddress
ipaddress =>
ipaddress_enp1s0f1 =>
ipaddress_lo =>

In this case, you’ll see that either ipaddress or ipaddress_enp1s0f1 looks like a valid choice; however, ipaddress isn’t reliable, so choose the one based on your NIC.

Next, the minion_interface_name, which is additionally important because this is the interface we’re going to tell Stackanetes to use for networking the pods it deploys. This should generally be the same interface that the above IP address came from.

You can either edit the ./vars/main.yml file or you can pass them in as extra vars e.g. --extra-vars "facter_ipaddress=ipaddress_enp1s0f1 minion_interface_name=enp1s0f1"
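Since both variables refer to the same NIC, here’s a quick sketch of how they line up (enp1s0f1 is a hypothetical interface name; substitute your own):

```shell
# Pick your NIC once; both variables derive from it.
NIC="enp1s0f1"

# The facter fact name is just "ipaddress_" plus the NIC name.
echo "facter_ipaddress=ipaddress_${NIC} minion_interface_name=${NIC}"
```

The echoed string is exactly what you’d hand to --extra-vars.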

Let’s run that playbook!

Now that you’re set up, you should be able to run the playbook…

The default way you’d run the playbook is with…

$ ansible-playbook -i inventory all-in-one.yml

Or if you’re specifying the --extra-vars, insert that before the yaml filename.

If everything has gone well!

It very well may have! If everything has gone as planned, there should be some output that will help you get going…

It should list:

  • The location of the openshift dashboard, e.g. https://yourip:8443
  • The location of the KPM registry (a cluster.local URL)
  • A series of lines representing a /etc/hosts file to put on your client machine.

You should be able to check out the OpenShift dashboard (cockpit) and take a little peek around to see what has happened.

Possible “gotchyas” and troubleshooting

First things first: you can log into the OpenShift host and issue:

oc projects
oc project openstack
oc get pods

And see if any pods are in error.

The most likely thing to have gone wrong is that etcd in the kpm package didn’t come up properly. This happens to me intermittently, and I haven’t debugged it, nor opened an issue with the KPM folks. (I’m unsure if it’s how they instantiate etcd or etcd itself; I do know that spinning up an etcd cluster can be a precarious thing, so, it happens.)

In the case that this happens, go ahead and delete the KPM namespace and run the playbook again, e.g.

# Change away from the kpm project in case you're on it
oc project default
# Delete the project / namespace
oc delete project kpm
# List the projects to see if it's gone before you re-run
oc projects

Let’s access OpenStack!

Alright! You got this far, nice work… You’re fairly brave. I’ve been having good luck, but I still appreciate your bravado!

First up: did you make an /etc/hosts file on your local machine? We’re not worrying about external DNS yet, so you’ll have to do that. It will have an entry that looks somewhat similar to this, but with the IP address of your OpenShift host: identity.openstack.cluster horizon.openstack.cluster image.openstack.cluster network.openstack.cluster volume.openstack.cluster compute.openstack.cluster novnc.compute.openstack.cluster search.openstack.cluster
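If you’d rather generate that hosts line than type it, here’s a small sketch (the IP is a placeholder; use your OpenShift host’s actual address):

```shell
# Placeholder address; substitute your OpenShift host's IP.
OPENSHIFT_IP="192.168.1.100"

# The stackanetes hostnames routed by Traefik, joined onto one hosts line.
line="$OPENSHIFT_IP"
for h in identity horizon image network volume compute novnc.compute search; do
  line="$line ${h}.openstack.cluster"
done
echo "$line"
```

Append the echoed line to /etc/hosts on your client machine.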

So, you can access Horizon (the OpenStack dashboard) by pointing your browser at:


Great, now just login with username “admin” and password “password”, aka SuperSecure(TM).

Surf around that until you’re satisfied that the GUI isn’t powerful enough and you now need to hit up the command line ;)

Using the openstack client

Go ahead and SSH into the OpenShift host machine, and in root’s home directory you’ll find that there’s a stackanetesrc file available there. It’s based on the /usr/src/stackanetes/env_openstack.sh file that comes in the Stackanetes git clone.
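If you’re curious what such an rc file holds, here’s an illustrative sketch; these exact values are assumptions on my part (the real ones come from env_openstack.sh), though the admin/password pair matches the Horizon login above:

```shell
# Hypothetical contents of ~/stackanetesrc; illustrative only.
export OS_USERNAME="admin"
export OS_PASSWORD="password"
export OS_PROJECT_NAME="admin"
export OS_AUTH_URL="http://identity.openstack.cluster:5000/v3"
export OS_IDENTITY_API_VERSION="3"
```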

So you can use it like so and get kickin’

[root@labstackanetes ~]# source ~/stackanetesrc 
[root@labstackanetes ~]# openstack hypervisor list
| ID | Hypervisor Hostname        | Hypervisor Type | Host IP       | State |
|  1 | identity.openstack.cluster | QEMU            | | down  |
|  2 | labstackanetes             | QEMU            | | up    |

So how about a cloud instance!?!?!!!

Alright, now that we’ve sourced our run commands, we can go ahead and configure up our OpenStack so we can run some instances. There’s a handy file with a suite of demo commands to spin up some instances, packaged in Stackanetes itself; my demo here is based on the same. You can find that configuration at /usr/src/stackanetes/demo_openstack.sh.

First up, we download the infamous CirrOS image & upload it to Glance.

$ source ~/stackanetesrc 
$ curl -o /tmp/cirros.qcow2 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image create --disk-format qcow2  --container-format bare  --file /tmp/cirros.qcow2 cirros

Now let’s create our networks

# External Net
$ openstack network create ext-net --external --provider-physical-network physnet1 --provider-network-type flat
$ openstack subnet create ext-subnet --no-dhcp --allocation-pool start=,end= --network=ext-net --subnet-range --gateway

# Internal Net
$ openstack network create int
$ openstack subnet create int-subnet --allocation-pool start=,end= --network int --subnet-range --gateway --dns-nameserver --dns-nameserver
$ openstack router create demo-router
$ neutron router-interface-add demo-router $(openstack subnet show int-subnet -c id -f value)
$ neutron router-gateway-set demo-router ext-net

Alright, now let’s at least add a flavor.

$ openstack flavor create --public m1.tiny --ram 512 --disk 1 --vcpus 1

And a security group

$ openstack security group rule create default --protocol icmp
$ openstack security group rule create default --protocol tcp --dst-port 22

…Drum roll please. Here comes an instance!

openstack server create cirros1 \
  --image $(openstack image show cirros -c id -f value) \
  --flavor $(openstack flavor show m1.tiny -c id -f value) \
  --nic net-id=$(openstack network show int -c id -f value)

Check that it hasn’t errored out with a nova list, and then give it a floating IP.

# This should come packaged with a few new deprecation warnings.
$ openstack ip floating add $(openstack ip floating create ext-net -c floating_ip_address -f value) cirros1

Let’s do something with it!

So, you want to SSH into it? Well… not yet. Go ahead and use Horizon to console into the machine, then ping the gateway. There you go! You did something with it, and over the network.

Currently, I haven’t got the provider network working, just a small isolated tenant network. So, we’re saving that for next time. We didn’t want to spoil all the fun for now, right!?

Diagnosing a failed Nova instance creation

So, the Nova instance didn’t spin up, huh? There are a few reasons that can happen. To figure out which, first run:

nova list
nova show $failed_uuid

That will likely give you a whole lot of nothing more than a “no valid host found”, which is essentially nothing. So you’re going to want to look at the Nova compute logs. We can get those with kubectl or the oc commands.

# Make sure you're on the openstack project
oc projects
# Change to that project
oc project openstack
# List the pods to find the "nova-compute" pod
oc get pods
# Get the logs for that pod
oc logs nova-compute-3298216887-sriaa | tail -n 10

Or, in short:

$ oc logs $(oc get pods | grep compute | awk '{print $1}') | tail -n 50

Now you should be able to see something.

A few things that have happened to me intermittently:

  1. You’ve sized your cluster wrong, or you’re using a virtual machine as the container host and it doesn’t have nested virtualization. There might not be enough RAM or processors for the instance, even though we’re using a pretty darn micro instance here.

  2. Something busted with openvswitch

I’d get an error like:

ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)

So what I would do is delete the neutron-openvswitch pod, and it’d automatically deploy again and usually that’d do the trick.

  3. One time I had a bad Glance image; I just deleted it and uploaded it to Glance again. I lost the notes for this error, but it was something along the lines of writing to a “.part” file erroring out.

Deploy a custom builder image on OpenShift

In the last article on creating custom s2i builder images we created the (intentionally ridiculous) pickle-http sample, and today we’re going to go ahead and deploy it under OpenShift. It’s the easy part, when it comes down to it. It’s rather fast, and cockpit (the web GUI) provides some nice clean information about the builds, including logs and links to webhooks to trigger builds.

Push custom builder image to registry

First, I pushed my local image to a public repository in this case (you can push it to your local registry if you want, or feel free to use the public image named bowline/pickle-http). I tagged the image and pushed it; oh yeah, and I logged into Dockerhub (not shown).

[openshift@test-cluster-master-0 stackanetes]$ docker tag pickle-http bowline/pickle-http
[openshift@test-cluster-master-0 stackanetes]$ docker push bowline/pickle-http

Create a new project and deploy new app!

Next I created a play project to work under in OpenShift; I also added the admin role to the admin user, so that I can see the project on cockpit.

[openshift@test-cluster-master-0 stackanetes]$ oc new-project pickler
[openshift@test-cluster-master-0 stackanetes]$ oc policy add-role-to-user admin admin -n pickler

Then we create a new app using our custom builder image. This is… as easy as it gets.

[openshift@test-cluster-master-0 stackanetes]$ oc new-app bowline/pickle-http~https://github.com/octocat/Spoon-Knife.git

Basically it’s just in the format oc new-app ${your_custom_builder_image_name}~${your_git_url}.
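With the pieces broken out as variables (using the image and repo from this article), the composition looks like:

```shell
# The custom s2i builder image and the source repo to build from.
BUILDER_IMAGE="bowline/pickle-http"
GIT_URL="https://github.com/octocat/Spoon-Knife.git"

# Print the oc command we'd run; the tilde glues builder to source.
echo "oc new-app ${BUILDER_IMAGE}~${GIT_URL}"
```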

Inspect the app’s status and configure a readiness probe

It should be up at this point (after a short wait to pull the image). Great, it’s fast! Granted, we have the simplest use case: “just clone the code into my container”. So in this particular case, if you don’t have the image pulled yet, that’s going to be the longest wait.

Let’s look at its status.

[openshift@test-cluster-master-0 stackanetes]$ oc status
In project pickler on server

svc/spoon-knife -
  dc/spoon-knife deploys istag/spoon-knife:latest <-
    bc/spoon-knife source builds https://github.com/octocat/Spoon-Knife.git on istag/pickle-http:latest 
    deployment #1 deployed 6 minutes ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

We have a warning, and it’s because we don’t have a “readiness probe”. A “probe” is a k8s action that periodically runs a diagnostic check against a container. Let’s go ahead and add ours to be complete.

Pick up on some help on the topic with:

[openshift@test-cluster-master-0 stackanetes]$ oc help set probe
oc set probe dc/spoon-knife --readiness --get-url=http://:8080/

In this case we’ll just look at the index on port 8080. You can run oc status again and see that we’re clear.

Look at the details of the build on cockpit

Now that we have a custom build going for us, there’s a lot more on the UI that’s going to be interesting to us. Firstly navigate to Builds -> Builds. From there choose “spoon-knife”.

There’s a few things here that are notable:

  • Summary -> Logs: check out what happened in the s2i custom building process (in this case, just a git clone)
  • Configuration: Has links to triggers to automatically trigger a new build (e.g. in a git webhook), details on the git source repository

That’s that! Now you can both create your own custom builder image and go forward with deploying pods crafted from just source (no Dockerfile!) on OpenShift.