A (happy happy joy joy) ansible-container hello world!

Today we’re going to explore ansible-container, a project that gives you an Ansible workflow for Docker. It provides a method of managing container images using ansible commands (so you can avoid a bunch of dirty bash-y Dockerfiles), and then provides a specification of “services” which is eerily similar (on purpose) to docker-compose. It also has paths forward for managing the instances of these containers on Kubernetes & OpenShift – that’s pretty tight. We’ll build two images, “ren” and “stimpy”, which contain nginx and output some Ren & Stimpy quotes so we can get a grip on how it’s all put together. It’s better than bad – it’s good!

These steps were generally learned from both the ansible-container demo github page and the getting started guide. It also leverages this github project with demo ansible-container files I created, which has all the files you need so you don’t have to baby them all in an editor.

My editorial is that… This is really a great project. However, I don’t consider it the be-all-end-all. I think it has an awesome purpose in the context of a larger ansible project. It’s squeaky clean when you use it that way. Except for the directory structure which I find a little awkward. Maybe I’m doing that part slightly wrong, it’s not terrible. I also think that Dockerfiles have their place. I like them, and in terms of some simpler apps (think, a Go binary) ansible-container is overkill, and your run of the mill pod spec when using k8s, raw and unabstracted isn’t so bad to deal with – in fact, it may be confusing in some places to abstract that. So, choose the right tool for the job is my advice. A-And I’d like a bike, and a Betsy Wetsherself doll, and a Cheesy-Bake Oven, and a Pulpy The Pup doll, and a gajillion green army men.

Ok, enough editorializing – let’s get some space madness and move onto getting this show on the road!


Fairly simple – as per usual, we’re using a CentOS 7.3 based virtual machine to run all these on. Feel free to use your workstation, but, I put this all in a VM so I could isolate it, and see what you needed given a stock CentOS 7.3 install. Just as a note, my install is from a CentOS 7.3 generic cloud, and the basics are based on that.

Also – you need a half dozen gigs free of disk, and a minimum of 4 gigs of memory. I had a 2 gig memory VM and it was toast (ahem, powdered toast) when I went to do the image builds, so, keep that in mind.

Since I have a throw-away VM, I did all this as root, you can be a good guy and use sudo if you please.

Install Docker!

Yep, you’re going to need a docker install. We’ll just use the latest docker from CentOS repos, that’s all we need for now.

yum install -y docker
systemctl enable docker
systemctl start docker
docker images
docker -v

Install ansible-container

We’re going to install ansible-container from source so that we can have the 0.3.0 version (because we want the docker-compose v2 specification).

Now, go and install the everyday stuff you need (like, your editor). I also installed tree so I could use it for the output here. Oh yeah, and you need git!

So we’re going to need to update some python-ish things, especially epel to get python-pip, then update python-pip, then upgrade setuptools.

yum install -y epel-release
yum install -y python-pip git
pip install --upgrade pip
pip install -U setuptools

Now, let’s clone the project and install it. These steps were generally learned from these official installation directions.

git clone https://github.com/ansible/ansible-container.git
cd ansible-container/
python ./setup.py install

And now you should have a version 0.3.0-ish ansible-container.

[user@host ansible-container]$ ansible-container version
Ansible Container, version 0.3.0-pre

Let’s look at ansible-container init

In a minute here we’re going to get into using a full-out project, but, typically when you start a project, there’s a few things you’re going to do.

  1. You’re going to use ansible-container init to scaffold the pieces you need.
  2. You’ll use ansible-container install some.project to install ansible galaxy modules into your project.

So let’s give that a test drive before we go onto our custom project.

Firstly, make a directory to put this in as we’re going to throw it out in a minute.

[user@host ~]$ cd ~
[user@host ~]$ mkdir foo
[user@host ~]$ cd foo/
[user@host foo]$ ansible-container init
Ansible Container initialized.

Alright, now you can see it created a ./ansible/ directory there, and it has a number of files therein.

Installing ansible galaxy modules for ansible-container

Now let’s say we’re going to install an nginx module from ansible galaxy. We’d do it like so…

[user@host foo]$ ansible-container install j00bar.nginx-container

Note that this will take a while the first time because it’s pulling some ansible-specific images.

Once it’s done with the pull, let’s inspect what’s there.

Inspecting what’s there.

Let’s take a look at what it looks like with ansible-container init and then an installed role.

[user@host foo]$ tree
└── ansible
    ├── ansible.cfg
    ├── container.yml
    ├── main.yml
    ├── meta.yml
    ├── requirements.txt
    └── requirements.yml

1 directory, 6 files

Here’s what each file does.

  • container.yml: a combination of both inventory and “docker-compose” definitions
  • main.yml: your main playbook, which runs plays against the containers defined in container.yml
  • requirements.{txt,yml}: your Python and role dependencies, respectively
  • meta.yml: for ansible galaxy (should you publish there).
  • ansible.cfg: your… ansible config.
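For instance, if you wanted the j00bar.nginx-container role from earlier as a dependency of your own project, the requirements.yml could look something like this (a sketch; any version pinning is optional):

```yaml
# ansible/requirements.yml -- role dependencies, installed from Ansible Galaxy
- src: j00bar.nginx-container
```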

Let’s make our own custom playbooks & roles!

Alright, so go ahead and move back to home and clone my demo-ansible-container repo.

The job of this role is to create two nginx instances (in containers, naturally) that each serve their own custom HTML file (it’s more like a text file).

So let’s clone it and inspect a few things.

$ cd ~
$ git clone https://github.com/dougbtv/demo-ansible-container.git
$ cd demo-ansible-container/

Inspecting the project

Now that we’re in there, let’s show the whole directory structure. It’s basically the same as earlier when we did ansible-container init (as I started that way), plus it adds a ./ansible/roles/ directory, which contains roles just as you’d have in your run-of-the-mill ansible project.

├── ansible
│   ├── ansible.cfg
│   ├── container.yml
│   ├── main.yml
│   ├── meta.yml
│   ├── requirements.txt
│   ├── requirements.yml
│   └── roles
│       ├── nginx-install
│       │   └── tasks
│       │       └── main.yml
│       ├── ren-html
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── index.html
│       └── stimpy-html
│           ├── tasks
│           │   └── main.yml
│           └── templates
│               └── index.html
└── README.md

You’ll note there’s everything we had before, plus three roles.

  • nginx-install: which installs (and generally configures) nginx
  • ren-html & stimpy-html: which place specific HTML files in each container

Now, let’s look specifically at the most important pieces.

First, our container.yml

[user@host demo-ansible-container]$ cat ansible/container.yml 
version: '2'
services:
  ren:
    image: centos:centos7
    ports:
      - "8080:80"
    # user: nginx
    command: "nginx" # [nginx, -c, /etc/nginx/nginx.conf]
    dev_overrides:
      ports: []
      command: bin/false
    options:
      kube:
        runAsUser: 997
        replicas: 2
      openshift:
        replicas: 3
  stimpy:
    image: centos:centos7
    ports:
      - "8081:80"
    # user: nginx
    command: [nginx, -c, /etc/nginx/nginx.conf]
    dev_overrides:
      ports: []
      command: bin/false
    options:
      kube:
        runAsUser: 997
        replicas: 2
      openshift:
        replicas: 3
registries: {}

Whoa whoa, whoa Doug! There’s too much there. Yeah, there kind of is. I also put in some goodies to tempt you to look further ;) So, you’ll notice this looks very very much like a docker-compose yaml file.

Mostly though for now, looking at the services section, there’s two services listed ren & stimpy.

These comprise the inventory we’ll be using. They specify things like what ports we’re going to run the containers on: here, ports 8080 and 8081, which both map to port 80 inside the container.

Those are the most important for now.

So let’s move onto looking at the main.yml. This is sort of your site playbook for all your containers.

[user@host demo-ansible-container]$ cat ansible/main.yml 
- hosts: all
  gather_facts: false
- hosts:
    - ren
    - stimpy
  roles:
    - role: nginx-install
- hosts: ren
  roles:
    - role: ren-html
- hosts: stimpy
  roles:
    - role: stimpy-html

So, looks like any other ansible playbook, awesome! The gist is that we use the “host” names ren & stimpy and we run roles against them.

You’ll see that both ren & stimpy have nginx installed into them, but, then use a specific role to install some HTML into each container image.

Feel free to deeply inspect the roles if you so please, they’re simple.

Onto the build!

Now that we’ve got that all setup, we can go ahead and build these container images.

Let’s do that now. Make sure you’re in the ~/demo-ansible-container working dir and not the ~/demo-ansible-container/ansible dir, or this won’t work (one of my pet peeves with ansible-container, tbh).

[user@host demo-ansible-container]$ ansible-container build

You’ll see that it spins up some containers and then runs those plays, and you can see it having some specificity to each “host” (each container, really).

When it’s finished, it will commit the images to save the results of what it did.

Let’s look at the results of what it did.

[user@host demo-ansible-container]$ docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
demo-ansible-container-ren                    20170316142619      ba5b90f9476e        5 seconds ago       353.9 MB
demo-ansible-container-ren                    latest              ba5b90f9476e        5 seconds ago       353.9 MB
demo-ansible-container-stimpy                 20170316142619      2b86e0872fa7        12 seconds ago      353.9 MB
demo-ansible-container-stimpy                 latest              2b86e0872fa7        12 seconds ago      353.9 MB
docker.io/centos                              centos7             98d35105a391        16 hours ago        192.5 MB
docker.io/ansible/ansible-container-builder   0.3                 b606876a2eaf        12 weeks ago        808.7 MB

As you can see it’s got its special ansible-container-builder image which it uses to bootstrap our images.

Then we’ve got our demo-ansible-container-ren and demo-ansible-container-stimpy, each with two tags: one for latest and another tag with the date and time.

And we run it.

Ok, everything’s built, let’s run it.

ansible-container run --detached --production

You can run without --production and it will just run /bin/false in the container, which may be confusing, but, it’s basically a no-operation and you could use it to inspect the containers in development if you wanted.

When that completes, you should see two containers running.

[user@host demo-ansible-container]$ docker ps
CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS              PORTS                  NAMES
7b160322dc26        demo-ansible-container-ren:latest      "nginx"                  27 seconds ago      Up 24 seconds       0.0.0.0:8080->80/tcp   ansible_ren_1
a2f8dabe8a6f        demo-ansible-container-stimpy:latest   "nginx -c /etc/nginx/"   27 seconds ago      Up 24 seconds       0.0.0.0:8081->80/tcp   ansible_stimpy_1

Great! Two containers up running on ports 8080 & 8081, just as we wanted.

Finally, verify the results.

You can now see you’ve got Ren & Stimpy running, let’s see what they have to say.

[user@host demo-ansible-container]$ curl localhost:8080
You fat, bloated eeeeediot!

[user@host demo-ansible-container]$ curl localhost:8081
Happy Happy Happy! Joy Joy Joy!

And there we go, two images built, two containers running, all with ansible instructions on how they’re built!

Makes for a very nice paradigm to create images & spin up containers in the context of an Ansible project.

Kuryr-Kubernetes will knock your socks off!

Seeing kuryr-kubernetes in action in my “Dr. Octagon NFV laboratory” has got me feeling that barefoot feeling – and henceforth has completely knocked my socks off. Kuryr-Kubernetes provides Kubernetes integration with OpenStack networking, and today we’ll walk through the steps so you can get your own instance of it up and running and check it out for yourself. We’ll spin up kuryr-kubernetes with devstack, create some pods and a VM, inspect Neutron, and verify the networking is working like a charm.

As usual with these blog posts, I’m kind of standing on the shoulders of giants. I was able to get some great exposure to kuryr-kubernetes through Luis Tomas’s blog post. And then a lot of the steps here you’ll find familiar from this OpenStack superuser blog post. Additionally, I always wind up finding a good show-stopper or two, and Antoni Segura Puimedon (celebdor) was a huge help in diagnosing my setup, which I greatly appreciated.


You might be able to do this with a VM, but, you’ll need some kind of nested virtualization – because we’re going to spin up a VM, too. In my case, I used baremetal and the machine is likely overpowered (48 gig RAM, 16 cores, 1TB spinning disk). I’d recommend no less than 4-8 gigs of RAM and at least a few cores, and maybe 20-40 gigs free (which is still overkill)

One requirement that’s basically for sure is a CentOS 7.3 (or later) install somewhere. I assume you’ve got this setup. Also, make sure it’s pretty fresh, because I’ve run into problems with devstack when I tried to put it on an existing machine and it fought with, say, an existing Docker install.

That box needs git, and maybe your favorite text editor (and I use screen).

Get your devstack up and kickin’

The gist here is that we’ll clone devstack, setup the stack user, create a local.conf file, and then kick off the stack.sh

So here’s where we clone devstack, use it to create a stack user, and move the devstack clone into the stack user’s home and then assume that user.

[root@droctagon3 ~]# git clone https://git.openstack.org/openstack-dev/devstack
[root@droctagon3 ~]# cd devstack/
[root@droctagon3 devstack]# ./tools/create-stack-user.sh 
[root@droctagon3 devstack]# cd ../
[root@droctagon3 ~]# mv devstack/ /opt/stack/
[root@droctagon3 ~]# chown -R stack:stack /opt/stack/
[root@droctagon3 ~]# su - stack
[stack@droctagon3 ~]$ pwd

Ok, now that we’re there, let’s create a local.conf to parameterize our devstack deploy. You’ll note that my config is a portmanteau of Luis’s and the one from the superuser blog post. I’ve left my comments in so you can check it out and compare against the references. Go ahead and put this in with an echo heredoc or your favorite editor, here’s mine:

[stack@droctagon3 ~]$ cd devstack/
[stack@droctagon3 devstack]$ pwd
[stack@droctagon3 devstack]$ cat local.conf 


# Credentials
# Enable Keystone v3

# Q_PLUGIN=ml2

# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
enable_service q-lbaasv2

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes refs/changes/45/376045/12

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes

# [[post-config|/$Q_PLUGIN_CONF_FILE]]
# [securitygroup]
# firewall_driver = openvswitch

Now that we’ve got that set, let’s at least take a look at one parameter. The one in question is:

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes refs/changes/45/376045/12

You’ll note that this is version pinned. I ran into a bit of a hitch that Toni helped get me out of, and we’ll use that work-around in a bit. There’s a patch coming along that should fix this up. I didn’t have luck with it yet, but, it was just submitted the evening before this blog post.

Now, let’s run that devstack deploy, I run mine in a screen, that’s optional for you, but, I don’t wanna have connectivity lost during it and wonder “what happened?”.

[stack@droctagon3 devstack]$ screen -S devstack
[stack@droctagon3 devstack]$ ./stack.sh 

Now, relax… This takes ~50 minutes on my box.

Verify the install and make sure the kubelet is running

Alright, that should finish up and show you some timing stats and some URLs for your devstack instances.

Let’s just mildly verify that things work.

[stack@droctagon3 devstack]$ source openrc 
[stack@droctagon3 devstack]$ nova list
| ID | Name | Status | Task State | Power State | Networks |

Great, so we have some stuff running at least. But, what about Kubernetes?

It’s likely almost there.

[stack@droctagon3 devstack]$ kubectl get nodes

That’s going to be empty for now. It’s because the kubelet isn’t running. So, open the devstack “screens” with:

screen -r

Now, tab through those screens: hit Ctrl+a then n, and it will go to the next screen. Keep going until you get to the kubelet screen. It will be at the lower left-hand side and/or have an * next to it.

It will likely be a screen with “just a prompt” and no logging. This is because the kubelet fails to run in this iteration, but, we can work around it.

First off, get your IP address, mine is on my interface enp1s0f1 so I used ip a and got it from there. Now, put that into the below command where I have YOUR_IP_HERE

Issue this command to run the kubelet:

sudo /usr/local/bin/hyperkube kubelet \
        --allow-privileged=true \
        --api-servers=http://YOUR_IP_HERE:8080 \
        --v=2 \
        --address='' \
        --enable-server \
        --network-plugin=cni \
        --cni-bin-dir=/opt/stack/cni/bin \
        --cni-conf-dir=/opt/stack/cni/conf \
        --cert-dir=/var/lib/hyperkube/kubelet.cert

Now you can detach from the screen by hitting Ctrl+a then d. You’ll be back to your regular old prompt.

Let’s list the nodes…

[stack@droctagon3 demo]$ kubectl get nodes
NAME         STATUS    AGE
droctagon3   Ready     4s

And you can see it’s ready to rumble.

Build a demo container

So let’s build something to run here. We’ll use the same container in a pod as shown in the superuser article.

Let’s create a python script that runs an http server and will report the hostname of the node it runs on (in this case when we’re finished, it will report the name of the pod in which it resides)

So let’s create those two files, we’ll put them in a “demo” dir.

[stack@droctagon3 demo]$ pwd

Now make the Dockerfile:

[stack@droctagon3 demo]$ cat Dockerfile 
FROM alpine
RUN apk add --no-cache python bash openssh-client curl
COPY server.py /server.py
ENTRYPOINT ["python", "server.py"]

And the server.py

[stack@droctagon3 demo]$ cat server.py 
import BaseHTTPServer as http
import platform

class Handler(http.BaseHTTPRequestHandler):
  def do_GET(self):
    self.send_response(200)
    self.send_header('Content-Type', 'text/plain')
    self.end_headers()
    self.wfile.write("%s\n" % platform.node())

if __name__ == '__main__':
  httpd = http.HTTPServer(('', 8080), Handler)
  httpd.serve_forever()
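As an aside, BaseHTTPServer is Python 2 only; purely as a sketch (not part of the original demo image), the same idea under Python 3 would look like this:

```python
# Python 3 sketch of the same server: replies to any GET with the hostname.
# (The original demo uses Python 2's BaseHTTPServer inside the alpine image.)
from http.server import BaseHTTPRequestHandler, HTTPServer
import platform

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        # platform.node() reports the hostname -- the pod name inside k8s
        self.wfile.write(("%s\n" % platform.node()).encode())

def serve(port=8080):
    # Bind to all interfaces and serve forever, as the demo entrypoint does
    HTTPServer(('', port), Handler).serve_forever()
```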

And kick off a Docker build.

[stack@droctagon3 demo]$ docker build -t demo:demo .

Kick up a Pod

Now we can launch a pod from that image; we’ll even skip the step of making a yaml pod spec since this is so simple.

[stack@droctagon3 demo]$ kubectl run demo --image=demo:demo

And in a few seconds you should see it running…

[stack@droctagon3 demo]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          45s

Kick up a VM

Cool, that’s kind of awesome. Now, let’s create a VM.

So first, download a cirros image.

[stack@droctagon3 ~]$ curl -o /tmp/cirros.qcow2 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Now, you can upload it to glance.

glance image-create --name cirros --disk-format qcow2  --container-format bare  --file /tmp/cirros.qcow2 --progress

And we can kick off a pretty basic nova instance, and we’ll look at it a bit.

[stack@droctagon3 ~]$ nova boot --flavor m1.tiny --image cirros testvm
[stack@droctagon3 ~]$ openstack server list -c Name -c Networks -c 'Image Name'
| Name   | Networks                                                | Image Name |
| testvm | private=fdae:9098:19bf:0:f816:3eff:fed5:d769, | cirros     |

Kuryr magic has happened! Let’s see what it did.

So, now Kuryr has performed some cool stuff, we can see that it created a Neutron port for us.

[stack@droctagon3 ~]$ openstack port list --device-owner kuryr:container -c Name
| Name                  |
| demo-2945424114-pi2b0 |
[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          5m

You can see that the port name is the same as the pod name – cool!

And that pod has an IP address on the same subnet as the nova instance. So let’s inspect that.

[stack@droctagon3 ~]$ pod=$(kubectl get pods -l run=demo -o jsonpath='{.items[].metadata.name}')
[stack@droctagon3 ~]$ pod_ip=$(kubectl get pod $pod -o jsonpath='{.status.podIP}')
[stack@droctagon3 ~]$ echo Pod $pod IP is $pod_ip
Pod demo-2945424114-pi2b0 IP is

Expose a service for the pod we launched

Ok, let’s go ahead and expose a service for this pod. We’ll expose it and see what the results are.

[stack@droctagon3 ~]$ kubectl expose deployment demo --port=80 --target-port=8080
service "demo" exposed
[stack@droctagon3 ~]$ kubectl get svc demo
demo    <none>        80/TCP    13s
[stack@droctagon3 ~]$ kubectl get endpoints demo
demo   1m

And we have an LBaaS (load balancer as a service) which we can inspect with neutron…

[stack@droctagon3 ~]$ neutron lbaas-loadbalancer-list -c name -c vip_address -c provider
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
| name                   | vip_address | provider |
| Endpoints:default/demo |   | haproxy  |
[stack@droctagon3 ~]$ neutron lbaas-listener-list -c name -c protocol -c protocol_port
[stack@droctagon3 ~]$ neutron lbaas-pool-list -c name -c protocol
[stack@droctagon3 ~]$ neutron lbaas-member-list Endpoints:default/demo:TCP:80 -c name -c address -c protocol_port
[stack@droctagon3 ~]$ neutron lbaas-member-list Endpoints:default/demo:TCP:80 -c name -c address -c protocol_port

Scale up the replicas

You can now scale up the number of replicas of this pod, and Kuryr will follow suit. Let’s do that now.

[stack@droctagon3 ~]$ kubectl scale deployment demo --replicas=2
deployment "demo" scaled
[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS              RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running             0          14m
demo-2945424114-rikrg   0/1       ContainerCreating   0          3s

We can see that more ports were created…

[stack@droctagon3 ~]$ openstack port list --device-owner kuryr:container -c Name -c 'Fixed IP Addresses'
[stack@droctagon3 ~]$ neutron lbaas-member-list Endpoints:default/demo:TCP:80 -c name -c address -c protocol_port

Verify connectivity

Now – as if the earlier goodies weren’t fun, this is the REAL fun part. We’re going to enter a pod, e.g. via kubectl exec and we’ll go ahead and check out that we can reach the pod from the pod, and the VM from the pod, and the exposed service (and henceforth both pods) from the VM.

Let’s do it! So go and exec the pod, and we’ll give it a cute prompt so we know where we are since we’re about to enter the rabbit hole.

[stack@droctagon3 ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
demo-2945424114-pi2b0   1/1       Running   0          21m
demo-2945424114-rikrg   1/1       Running   0          6m
[stack@droctagon3 ~]$ kubectl exec -it demo-2945424114-pi2b0 /bin/bash
bash-4.3# export PS1='[user@pod_a]$ '

Before you continue on – you might want to note some of the IP addresses we showed earlier in this process. Collect those or chuck ‘em in a note pad and we can use them here.

Now that we have that, we can verify our service locally.

[user@pod_a]$ curl

And verify it with the pod IP

[user@pod_a]$ curl

And verify we can reach the other pod

[user@pod_a]$ curl

Now we can verify the service, note how you get different results from each call, as it’s load balanced between pods.

[user@pod_a]$ curl
[user@pod_a]$ curl

Cool, how about the VM? We should be able to ssh to it since it uses the default security group which is pretty wide open. Let’s ssh to that (reminder, password is cubswin:)) and also set the prompt to look cute.

[user@pod_a]$ ssh cirros@
The authenticity of host ' (' can't be established.
RSA key fingerprint is SHA256:Mhz/s1XnA+bUiCZxVc5vmD1C6NoeCmOmFOlaJh8g9P8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
cirros@'s password: 
$ export PS1='[cirros@vm]$ '

Great, so that definitely means we can get to the VM from the pod. But, let’s go and curl that service!

[cirros@vm]$ curl
[cirros@vm]$ curl

Voila! And that concludes our exploration of kuryr-kubernetes for today. Remember that you can find the Kuryr crew on the openstack mailing lists, and also in Freenode @ #openstack-kuryr.

So you want to expose a pod to multiple network interfaces? Enter Multus-CNI

Sometimes, one isn’t enough. Especially when you’ve got network requirements that aren’t just “your plain old HTTP API”. By default in Kubernetes, a pod is exposed only to a loopback and a single interface as assigned by your pod networking. In the telephony world, something we love to do is isolate our signalling, media, and management networks. If you’ve got those in separate NICs on your container host, how do you expose them to a Kubernetes pod? Let’s plug in the CNI (container network interface) plugin called multus-cni into our Kubernetes cluster and we’ll expose multiple network interfaces to a (very simple) pod.

Our goal here is going to be to spin up a pod using the techniques described in this article I wrote about spinning up Kubernetes 1.5 on CentOS – from there, we’ll install multus-cni and configure pod networking so that we expose a pod to two interfaces: 1. To Flannel, and 2. To the host’s eth0 nic.

We’ll cover two methods here – the first being to use my kube-centos-ansible playbooks and spin it up with “no CNI networking configured” and configure this CNI plugin by hand – this will allow us to familiarize ourselves with the components in detail. Later, a second method will be introduced where the playbooks set up multus-cni automatically, too.

If you’re itching to get to the “how to” skip down to the “Let’s get started” section below.

You’ll notice that I refer to multus-cni interchangeably through this article as “multus-cni” (the Git clone’s name) or “Multus”; which I guess I inferred from their documentation which reads “MULTUS CNI Plugin”. Their docs then describe that “Multus” is Latin – and I looked it up myself and it generally translates to “many” or “numerous”, and their documentation tends to hint at that it may be the root of the prefix “multi-“ – so I checked the etymology on Merriam-Webster and they’re right – it is indeed!

What about the future? Some of this functionality may wind up in the core of CNI or maybe integrated into included plugins with k8s distributions. I recently was made aware of the Kubernetes SIG Networking mailing list and I also saw there’s a spec/proposal for “k8s-multiple-networks” which mentions a number of NFV use cases.

Taking a look at CNI

Through this process, I was exposed to a number of different pieces of CNI that became more and more valuable through the process. And maybe you’ll want to learn some more about CNI, too. I won’t belabor what CNI is here, but, quickly…

One of the first things is that CNI is not libnetwork (the default way Docker connects containers). You might be wondering “why doesn’t k8s use libnetwork?” And if you want to hear it straight from the horse’s mouth, check out the CNI specifications.

But the most concise way to describe CNI is (quoted from the spec):

[CNI is] a generic plugin-based networking solution for application containers on Linux
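To make that concrete: a CNI network configuration is just a JSON document describing one network, handed on stdin to a plugin binary. A minimal config for the stock bridge plugin might look like this (a sketch based on the CNI spec’s examples; the name, bridge, and subnet here are made up):

```json
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```

The runtime finds configs like this in the CNI conf directory and executes the plugin named by "type" from the CNI bin directory (e.g. /opt/cni/bin).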

So, what’s multus-cni?

That being said multus-cni is a plugin for CNI – one that allows a pod to be connected to multiple network interfaces, or as the GitHub project description reads rather succinctly, it’s “Multi-homed pod cni”. And basically what we’re going to do is build some (of their existing) Go code for it, and then go ahead and put the binary in the right place so that CNI can execute it. It’s… That easy!


Multus is actually fairly simple to use, but, it requires that you understand some other portions of CNI. One of the most important places you’ll need to go is the documentation for the included CNI plugins. Because, in my own words as a user – basically Multus is a wrapper for combining other CNI plugins and basically lets you define a list of plugins you’re going to use to expose multiple interfaces to a pod.

I struggled at first especially because I didn’t exactly grok that. I was trying to modify a configuration that I thought was specific to multus-cni, but, I was missing that it was wrapping the configuration for other CNI plugins.

Luckily for me, I picked up a little community help along the way and got the last few pieces sorted out. Kuralamudhan gave me some input here in this GitHub issue, and he was very friendly about offering some assistance. Additionally, in the Kubernetes slack, Yaron Haviv shared his sample configuration. Between “my own way” (which you’ll see in a little bit), Kuralamudhan pointing out that sections of the config are related to other plugins, and having a spare reference from Yaron I was able to get Multus firing on all pistons.

Requirements for this walk-through

The technique used here is based on the technique used in this article to spin up k8s 1.5 on CentOS. And this technique leverages my kube-centos-ansible playbooks available on GitHub. By default it spins up 3 virtual machines on a virtual machine host. You can bring your own virtual machines (or bare metal machines) just make sure they’re (generally the latest) CentOS 7 installs. That article may familiarize you with the structure of the playbooks – especially if you need some more detail on bringing your own inventory.

Also, this uses Ansible playbooks, so you’ll need an ansible machine.

Note that I’m going to skip over some of the details of how to customize the Ansible inventories, so, refer to the previous article if you’re lost.

Let’s get started

Alright, let’s pull the rip cord! Ok, first things first: clone my kube-centos-ansible repo.

$ git clone https://github.com/dougbtv/kube-centos-ansible.git
$ cd kube-centos-ansible

Go ahead and modify ./inventory/virthost.inventory to suit your virtual host, and let’s spin up some virtual machines.

$ ansible-playbook -i inventory/virthost.inventory virt-host-setup.yml

Based on the results of that playbook, modify ./inventory/vms.inventory. Now that it’s up, we’re going to run kube-install.yml – but, with a twist. By default this playbook uses Flannel only. So we’re going to pass in a variable that says to the playbook “skip setting up any CNI plugins”. So we’ll run it like below.

Note: This kicks off the playbooks in a way that allows us to manually configure multus-cni so we can inspect it. If you’d like to let the playbook do all the install for you, you can – skip down to the section near the bottom titled: ‘Welcome to “easy mode”’.

$ ansible-playbook -i inventory/vms.inventory kube-install.yml --extra-vars "pod_network_type=none"

If you ssh into the master, you should be able to kubectl get nodes and see a master and two nodes at this point.

Now, we need to compile and install multus-cni, so let’s run that playbook. It runs on all the VMs.

$ ansible-playbook -i inventory/vms.inventory multus-cni.yml

Basically all this playbook does is install the dependencies (like golang and git), clone the multus-cni repo, and build the Go binaries. It then copies those binaries into /opt/cni/bin/ so that CNI can run the plugins from there.

Inspecting the multus-cni configuration

Alright, so now let’s ssh into the master, and we’ll get a config downloaded here and take a look.

We're going to use this yaml file I've posted as a gist.

Let’s curl that down to the master, and then we’ll take a look at a few parts of it.

[centos@kube-master ~]$ curl https://gist.githubusercontent.com/dougbtv/cf05026e48e5b8aa9068a7f6fcf91a56/raw/dd3dfbf5e440abea8781e27450bb64c31e280857/multus-working.yaml > multus.yaml

Generally, my idea was to take the Flannel pod networking yaml and modify it to suit multus-cni, seeing as they play together. In fact, I couldn't get it to work with just a multus-cni config alone. If you compare and contrast the two, you'll notice the Flannel yaml (say that out loud three times in a row) has been borrowed from heavily.

Go ahead and cat the multus.yaml file so we can look at it, or bring it up in your favorite editor as long as it’s not emacs. If it is indeed emacs, the next step in this walk-through is for you to go jump in a lake and think about your life for a little while ;) (JK, I love you emacs brethren, you’re just… weird.)

The Multus configuration is JSON packed inside a yaml configuration for Flannel, generally. According to the CNI spec “The network configuration is in JSON format and can easily be stored in a file”. We’re defining a k8s ConfigMap which has the Multus configuration within. You see, Multus works in concert with other CNI plugins.

First up, looking at lines 17-45 this is the Multus configuration proper.

Note there's a JSON list in here called delegates, which is a list of two items.

The top-most element is "type": "macvlan" and uses the CNI plugin macvlan. This is what we use to map a bridge to eth0 in the virtual machine. We also then specify a network range which is that of the default libvirt br0.

The second element is "type": "flannel", but it carries one element specific to multus-cni, which is "masterplugin": true.
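For reference, the delegates section boils down to something shaped roughly like the following. Treat this as an illustrative sketch rather than a copy of the gist: the network name and the ipam block are assumptions, though the 192.168.122.0/24 range matches the default libvirt network used elsewhere in this walk-through.

```json
{
  "name": "multus-demo-network",
  "type": "multus",
  "delegates": [
    {
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.122.0/24"
      }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}
```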

Continuing through the file, we'll see a DaemonSet defined which has the pods for Flannel's networking. Further on in that file, there's a command which is run on line 95. This basically takes the JSON from the ConfigMap and copies it to the proper place on the host.

What's great about this step is that the config winds up in the right place on each machine in the cluster. Without it, I'm not sure how we'd get that configuration properly set up on each machine, short of using something like Ansible to put the config where it should be.

Applying the multus configuration

Great, so you’ve already downloaded the multus.yaml file onto the master, let’s go ahead and apply it.

[centos@kube-master ~]$ kubectl apply -f multus.yaml 
serviceaccount "multus" created
configmap "kube-multus-cfg" created
daemonset "kube-multus-ds" created

Let’s watch the pods, and wait until each instance of the pod is running on each node in the cluster.

[centos@kube-master ~]$ watch -n1 kubectl get pods --all-namespaces

In theory you should have three lines which look about like this when they're ready.

[centos@kube-master ~]$ kubectl get pods --all-namespaces | grep multus
kube-system   kube-multus-ds-cgtr6                  2/2       Running   0          1m
kube-system   kube-multus-ds-qq2tm                  2/2       Running   0          1m
kube-system   kube-multus-ds-vkg3r                  2/2       Running   0          1m

So, now multus is applied! Now what?

Time to run a pod!

Let’s use our classic nginx pod, again. We’re going to run this guy, and then we’ll inspect some goodies.

Go ahead and make an nginx_pod.yaml file like so, then run a kubectl create -f against it.

[centos@kube-master ~]$ cat nginx_pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 

Now watch the pods until you see the nginx containers running.

[centos@kube-master ~]$ watch -n1 kubectl get pods

What if you don’t ever see the pods come up? Uh oh, that means something went wrong. As of today, this is all working swimmingly for me, but… It could go wrong. If that’s the case, go ahead and describe one of the pods (remember, this ReplicationController yaml spins up 2 instances of nginx).

[centos@kube-master ~]$ kubectl describe pod nginx-vp516

Let’s inspect a pod and run ip addr to see the interfaces on it

Now that we have some pods up… we can go ahead and check out what’s inside them.

So pick either one of the nginx pods and let's execute the ip addr command in it.

[centos@kube-master ~]$ kubectl exec nginx-vp516 -it ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:01:02 brd ff:ff:ff:ff:ff:ff
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::11:99ff:fe68:c8ba/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 0a:58:c0:a8:7a:c8 brd ff:ff:ff:ff:ff:ff
    inet scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:c0ff:fea8:7ac8/64 scope link 
       valid_lft forever preferred_lft forever

Woo hoo! That’s great news. We’ve got 3 interfaces here.

  1. The loopback
  2. eth0@if6 which is flannel.
  3. net0@if2 which is the bridge to eth0

You'll note that net0 has an IP address assigned from the macvlan network's range. Awesomeness.

It's also got the IP address for the Flannel overlay on eth0, which matches what we see in a kubectl describe pod, like:

[centos@kube-master ~]$ kubectl describe pod nginx-vp516 | grep ^IP

Now that we’ve done that, we can go onto the virtual machine host, and we can curl that nginx instance!

So from the virtual machine host:

[user@virt-host ~]$ curl -s | grep -i thank
<p><em>Thank you for using nginx.</em></p>


Welcome to “easy mode”

Ok, so we just did all of that manually – but, you can also use this playbook to do the “heavy lifting” (if it’s that) for you.

We’ll assume you already kicked off the virt-host-setup.yml playbook, let’s continue at the point where you’ve got your ./inventory/vms.inventory all setup.

Basically the usual kube-install, but we're going to specify that the pod network type is multus:

ansible-playbook -i inventory/vms.inventory kube-install.yml --extra-vars "pod_network_type=multus"

When that’s complete, you should see 3 “kube-multus-ds” pods, one on each node in the cluster when you perform a kubectl get pods --all-namespaces.

From there you can follow the above steps to run a pod and verify that you've got the multiple network interfaces and whatnot.


Let's spin up k8s 1.5 on CentOS (with CNI pod networking, too!)

Alright, so you've seen my blog post about installing Kubernetes by hand on CentOS, now… Let's make that easier and do it with an Ansible playbook, specifically my kube-centos-ansible playbook. This time we'll have Kubernetes 1.5 running on a cluster of 3 VMs, and we'll use Weave as a CNI plugin to handle our pod network. And to make it more fun, we'll even expose some pods to the 'outside world', so we can actually (kinda) do something with them. Ready? Let's go!

Note: After writing this article, I later figured out how to use Weave or Flannel. So the playbook now reflects that, and uses Flannel as a default. I didn’t overly edit the article to reflect this, however, it shouldn’t change the instructions herein. I’ll add a note during the steps where you can change it if you’d like.

Why Flannel as default? I prefer it, but, for no particular reason than I’m from Vermont, and we love our flannels here. These stereotypes are basically 99% true, and yep, I have a closet full of flannel.

What’s inside?

Alright, so here are the parts of this playbook; it…

  1. Configures a machine to use as a virtual machine host (and you can skip this part if you want to run on baremetal, or an inventory of machines created otherwise, say on OpenStack)
  2. Installs all the deps necessary on the hosts
  3. Runs kubeadm init to bootstrap the cluster (kubeadm docs)
  4. Installs a CNI plugin for pod networking (for now, it’s weave)
  5. Joins the hosts to a cluster.

What do you need?

Along with the below you need a client machine from which to run your ansible playbooks. It can be the same host as one of the below if you want, but you'll need to install ansible & git on that machine, whichever one it may be. Once you've got that machine, go ahead and clone this repo.

$ git clone https://github.com/dougbtv/kube-centos-ansible.git
$ cd kube-centos-ansible

In a choose-your-own-adventure style, pick one of the options below.

A. Pick a single host and use it to host your virtual machines. We’ll call this machine either the “virt host” or “virtual machine host” throughout here. This assumes that you have a CentOS 7 machine (that’s generally up to the latest packages). You’ll need an SSH key into this machine as root (or modify the inventory later on if you’re sshing in as another user, who’ll need sudo access). Go to section “A: Virtual machine host and VM spin-up”

B. Create your own inventory. Spin up some CentOS machines, either baremetal or virtual machines, and make note of the IP addresses. Skip on over to section “B: Define the inventory of kubernetes nodes”

A: Virtual machine host and VM spin-up

Ok, let’s first modify the inventory. Get the IP address of your virt-host, and we’ll modify the ./inventory/virthost.inventory and enter in the IP address there (or hostname, should you have some fancy-pants DNS setup).

The line you’re looking to modify is right up at the top and looks like:

kubehost ansible_host= ansible_ssh_user=root

Now we can run this playbook. It should be fairly straightforward: it installs the virtualization deps for KVM/libvirt, then spins up the VMs for you and reports their IP addresses.

You run the playbook like so:

$ ansible-playbook -i inventory/virthost.inventory virt-host-setup.yml 

When it completes you’ll get some output that looks about like this, yours will more-than-likely have different IP addresses, so make sure to note those:

TASK [vm-spinup : Here are the IPs of the VMs] *********************************
ok: [kubehost] => {
    "msg": {
        "kube-master": "", 
        "kube-minion-1": "", 
        "kube-minion-2": ""

You can also find them in the /etc/hosts on the virt-host for convenience, like so:

$ cat /etc/hosts | grep -i kube
 kube-master
 kube-minion-2
 kube-minion-1

This playbook also creates an ssh key pair that’s used to access these machines. This key lives in root’s home @ /root/.ssh/. The machines that are spun up are CentOS Generic cloud images and you’ll need to ssh as the centos user.

So you can ssh to the master from this virt host like so:

ssh -i .ssh/id_vm_rsa centos@kube-master

Note that the default way the playbook runs is to create 3 nodes. You can get fancy if you want and use more nodes by modifying the list of nodes in ./vars/all.yml, and modifying the inventory appropriately in the next section.

Continue onto section B below with the IP addresses you’ve seen come up.

B: Define the inventory of kubernetes nodes

Alright, now you’re going to need to modify the ./inventory/vms.inventory file.

First modify the top-most lines; usually that's 3 of them if you're doing the default 3 nodes as recommended earlier.

$ head -n3 ./inventory/vms.inventory 
kube-master ansible_host=
kube-minion-1 ansible_host=
kube-minion-2 ansible_host=

Modify these to suit your inventory.

Towards the end of the file, there's some host vars setup; you'll also want to modify these. If you used the virt-host method, you'll want to change the IP address in ansible_ssh_common_args (unless you're running ansible from the virt-host itself, in which case comment that line out). Also scp the /root/.ssh/id_vm_rsa from the virt-host to your client machine and put its path in ansible_ssh_private_key_file.

If you brought your own inventory, you'd typically comment out both of the last two lines: ansible_ssh_common_args and ansible_ssh_private_key_file.

$ tail -n6 ./inventory/vms.inventory 
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p root@"'

Now we can install k8s

Alright, now that the ./inventory/vms.inventory file is set up, we can move along to installing k8s! Honestly, the hardest stuff is complete at this point.

Remember, Flannel will be the default pod networking at this point. If you'd like, check out ./vars/all.yml and you'll see that near the top there's an option to change it to Weave if you'd prefer.

Let’s run it!

$ ansible-playbook -i inventory/vms.inventory kube-install.yml

(Be prepared to accept the host keys by typing 'yes' when prompted if you haven't ssh'd to these machines before. And be forewarned that you don't type "yes" too many times, because you might put in the command yes, which will just fill your terminal with a ton of 'y' characters!)

Alright, you’re good to go! SSH to the master and let’s see that everything looks good.

On the master, let’s look at the nodes…

[root@virthost ~]# ssh -i .ssh/id_vm_rsa centos@kube-master
[centos@kube-master ~]$ kubectl get nodes
NAME            STATUS         AGE
kube-master     Ready,master   4m
kube-minion-1   Ready          2m
kube-minion-2   Ready          2m

There’s a number of pods running to support the pod networking, you can check those out with:

# All the pods
[centos@kube-master ~]$ kubectl get pods --all-namespaces
[... lots of pods ...]
# Specifically the kube-system pods
[centos@kube-master ~]$ kubectl get pods --namespace=kube-system

And we wanted k8s 1.5 right? Let’s check that out.

[centos@kube-master ~]$ kubectl version | grep -i server
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Alright, that looks good, so let’s move on and do something interesting with it…

Let’s run some pods!

Ok, we’ll do the same thing as the previous blog article and we’ll run some nginx pods.

Let’s create an nginx_pod.yaml like so:

[centos@kube-master ~]$ cat nginx_pod.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Then we can run it…

[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 

And we can see the two instances come up…

[centos@kube-master ~]$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-34vhj   1/1       Running   0          1m
nginx-tkh4h   1/1       Running   0          1m

And we can get some details, should we want to…

[centos@kube-master ~]$ kubectl describe pod nginx-34vhj

And this is no fun if we can’t put these pods on the network, so let’s expose a pod.

First off, get the IP address of the master.

[centos@kube-master ~]$ master_ip=$(ifconfig | grep 192 | awk '{print $2}')
[centos@kube-master ~]$ echo $master_ip
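As an aside, that grep for "192" is a little fragile (it matches any line containing 192). If you want to see exactly what's being extracted, here's the same idea demonstrated with `ip addr`-style output and awk; the canned inet line below is just a made-up sample:

```shell
# Pull the IPv4 address out of an `ip addr`-style line by stripping the /prefix
sample='    inet 192.168.122.11/24 brd 192.168.122.255 scope global eth0'
echo "$sample" | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
# → 192.168.122.11
```

On a live box you'd feed it real output, e.g. `ip -4 addr show eth0` piped into the same awk program (assuming the interface really is named eth0 on your VM).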

And let’s use that as an external address… And expose a service.

[centos@kube-master ~]$ kubectl expose rc nginx --port=8999 --target-port=80 --external-ip $master_ip
service "nginx" exposed

And we can see it in our list of services…

[centos@kube-master ~]$ kubectl get svc
kubernetes      <none>           443/TCP    20m
nginx   8999/TCP   4s

And we can describe that service should we want more details…

[centos@kube-master ~]$ kubectl describe service nginx

Now, we can access the load balanced nginx pods from the virt-host (or your client machine should you have brought your own inventory)

[root@virthost ~]# curl -s | grep -i thank
<p><em>Thank you for using nginx.</em></p>

Voila! There we go, we have exposed nginx pods running on port 8999, an external IP on the master node, with Weave for the pod network using CNI.

Let's (manually) run k8s on CentOS!

So sometimes it’s handy to have a plain-old-Kubernetes running on CentOS 7. Either for development purposes, or to check out something new. Our goal today is to install Kubernetes by hand on a small cluster of 3 CentOS 7 boxen. We’ll spin up some libvirt VMs running CentOS generic cloud images, get Kubernetes spun up on those, and then we’ll run a test pod to prove it works. Also, this gives you some exposure to some of the components that are running ‘under the hood’.

Let’s follow the official Kubernetes guide for CentOS to get us started.

But, before that, we’ll need some VMs to use as the basis of our three machine cluster.

Let’s spin up a couple VM’s

So, we're going to assume you have a machine with libvirt to spin up some VMs. In this case I'm going to use a CentOS cloud image, and I'm going to spin the VMs up in a novel way, using this guide to spin them up easily.

So let’s make sure we have the prerequisites. Firstly, I am using Fedora 25 as my workstation, and I’m going to spin up the machines there.

$ sudo dnf install libvirt-client virt-install genisoimage

I have a directory called /home/vms and I’m going to put everything there (this basic qcow2 cloud image, and my virtual machine disk images), so let’s make sure we download the cloud image there, too.

# In case you need somewhere to store your VM "things"
$ mkdir /home/vms

# Download the image
$ cd /home/vms/
$ wget -O /home/vms/CentOS-7-x86_64-GenericCloud.qcow2.xz https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1612.qcow2.xz

# Extract the downloaded image...
$ xz -d CentOS-7-x86_64-GenericCloud.qcow2.xz

I originally had this in the wrong place, so just make sure the image winds up in the right place, it should be @ /home/vms/CentOS-7-x86_64-GenericCloud.qcow2.

Now let's download the gist for spinning up a cloud image in libvirt, and we'll change its mode so we can execute it.

# Download the Gist
$ wget -O spin-up-generic.sh https://gist.githubusercontent.com/giovtorres/0049cec554179d96e0a8329930a6d724/raw/f7520fbbf1e4a54f898cf8cc51e3eaac9167f178/virt-install-centos

# Make it executable
$ chmod 0755 spin-up-generic.sh 

# Change the default image directory to the one we created earlier.
$ sed -i -e 's|~/virt/images|/home/vms|g' spin-up-generic.sh

But, wait! There’s more. Go ahead and make sure you have an SSH public key you can add to the spin-up-generic.sh script. Make sure you cat the appropriate public key.

# Chuck your ssh public key into a variable...
$ sshpub=$(cat ~/.ssh/id_rsa.pub)

# Sed the file and replace the dummy public key with your own
# (You could also edit the file by hand and do a find for "ssh-rsa")
$ sed -i -e "/ssh-rsa/c\  - $sshpub" spin-up-generic.sh

Now, we can spin up a few VMs; we're going to spin up a master and 2 minions. You'll note that you get an IP address from this script for each machine; take note of those because we'll need them in the next steps. Depending on your setup for libvirt you might have to use sudo.

[root@yoda vms]# ./spin-up-generic.sh centos-master
Wed, 08 Feb 2017 16:28:21 -0500 DONE. SSH to centos-master using with  username 'centos'.

[root@yoda vms]# ./spin-up-generic.sh centos-minion-1
Wed, 08 Feb 2017 16:28:49 -0500 DONE. SSH to centos-minion-1 using with  username 'centos'.

[root@yoda vms]# ./spin-up-generic.sh centos-minion-2
Wed, 08 Feb 2017 16:29:16 -0500 DONE. SSH to centos-minion-2 using with  username 'centos'.

Alright, now you should be able to SSH to these guys, ssh into the master node to test it out…

$ ssh centos@

Let’s start installing k8s!

Alrighty, so there are things we're going to want to do across multiple hosts. Since the goal here is to do this manually (e.g. not creating an ansible playbook), we're going to have a few for loops to do this stuff efficiently for us. So, set a variable with the last octet from each of the IPs above. (And one each for the master & the minions, too; we'll use those later.)

class_d="21 18 208"

And for a test, just go and run this…

$ for i in $class_d; do ssh centos@192.168.122.$i 'cat /etc/redhat-release'; done

You may have to accept the key fingerprint for each box.

Install Kubernetes RPM requirements

Now we’re creating some repo files for the k8s components.

$ for i in $class_d; do ssh centos@192.168.122.$i 'echo "[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
" | sudo tee /etc/yum.repos.d/virt7-docker-common-release.repo'; done

Now install etcd, kubernetes & flannel on all the boxen.

$ for i in $class_d; do ssh centos@192.168.122.$i 'sudo yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel'; done

Setup /etc/hosts

Now, we need to add the hostnames for each of these three machines to our hosts files, so let's mock up the lines we want to add. In my case, the lines I'll add look like:

 centos-master
 centos-minion-1
 centos-minion-2

So I’ll append using tee in a loop like:

for i in $class_d; do ssh centos@192.168.122.$i 'echo " centos-master centos-minion-1 centos-minion-2" | sudo tee -a /etc/hosts'; done

Setup Kubernetes configuration

Now we're going to chuck a /etc/kubernetes/config file onto each box, the same across all of them. So let's make a local version of it and scp it over. I tried to do it in one command, but there was too much trickery between looping SSH and heredocs and whatnot. So, make this file…

cat << EOF > ./kubernetes.config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
EOF

Now scp it to all the hosts…

for i in $class_d; do scp ./kubernetes.config centos@192.168.122.$i:~/kubernetes.config; done

And finally move it into place.

for i in $class_d; do ssh centos@192.168.122.$i 'sudo mv /home/centos/kubernetes.config /etc/kubernetes/config'; done

Wave goodbye to your security

So the official docs do things that generally… I’d say “Don’t do that.”, but, alas, we’re going with the official docs, and this likely simplifies some things. So, while we’re here we’re going to follow those instructions, and we’re going to setenforce 0 and then disable the firewalls.

for i in $class_d; do ssh centos@192.168.122.$i 'sudo setenforce 0; sudo systemctl disable iptables-services firewalld; sudo systemctl stop iptables-services firewalld; echo'; done

Configure Kube services on the master

Here we setup etcd on the master…

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"
ETCD_LISTEN_CLIENT_URLS=\"http://0.0.0.0:2379\"
# [cluster]
ETCD_ADVERTISE_CLIENT_URLS=\"http://0.0.0.0:2379\"
EOF
"'

And the kubernetes apiserver…

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS=\"--address=0.0.0.0\"

# The port on the local server to listen on.
KUBE_API_PORT=\"--port=8080\"

# Port kubelets listen on
KUBELET_PORT=\"--kubelet-port=10250\"

# Address range to use for services
KUBE_SERVICE_ADDRESSES=\"--service-cluster-ip-range=10.254.0.0/16\"

# Add your own!
KUBE_API_ARGS=\"\"
EOF
"'

And we start etcd and specify some keys. Remember, from the docs:

Warning: This network must be unused in your network infrastructure! (The range used here is free in our network.)

So go ahead and start that and add the keys, assuming that warning is OK…

ssh centos@$master_ip 'sudo systemctl start etcd; sudo etcdctl mkdir /kube-centos/network; sudo etcdctl mk /kube-centos/network/config "{ \"Network\": \"\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"'

If you’d like to check that etcd key, you can do:

ssh centos@$master_ip 'etcdctl get /kube-centos/network/config'
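Pretty-printed, that stored value is just a small JSON document like the one below; the Network range was elided above, so the one shown here is purely a placeholder:

```json
{
  "Network": "172.30.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}
```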

Now, configure flannel… (later we’ll do this on the nodes as well)

ssh centos@$master_ip 'sudo /bin/bash -c "
cat << EOF > /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS=\"http://centos-master:2379\"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX=\"/kube-centos/network\"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=\"\"
EOF
"'

And then restart and enable the services we need…

ssh centos@$master_ip 'sudo /bin/bash -c "
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart \$SERVICES
    systemctl enable \$SERVICES
    systemctl status \$SERVICES
done
"'

Mildly verifying the services on the master

There’s a lot going on above, right? I, in fact, made a few mistakes while performing the above actions. I had a typo. So, let’s make sure the services are active.

ssh centos@$master_ip 'sudo /bin/bash -c "
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl status \$SERVICES | grep -P \"(\.service \-|Active)\"
done
"'

Make sure each entry has an "Active" state of "active". If for some reason one isn't, go and check the journald logs for it on the master with:

journalctl -f -u kube-apiserver

(Naturally replacing the service name with the one in trouble from above.)

Configure the minion nodes

Ok, first thing: we're going to manually set each of the hostnames for the minions. Our VM spin-up script names them "your_name.example.local", which isn't quite good enough. So let's manually set each of those.

ssh centos@ 'sudo hostnamectl set-hostname centos-minion-1'
ssh centos@ 'sudo hostnamectl set-hostname centos-minion-2'

Now just double check those

for i in $minion_ips; do ssh centos@$i 'hostname'; done

Ok cool, that means we can simplify a few steps following.

Now we can go ahead and configure the kubelet.

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
cat << EOF > /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS=\"--address=0.0.0.0\"

# The port for the info server to serve on
KUBELET_PORT=\"--port=10250\"

# You may leave this blank to use the actual hostname
# Check the node number!
# KUBELET_HOSTNAME=\"--hostname-override=centos-minion-n\"

# Location of the api-server
KUBELET_API_SERVER=\"--api-servers=http://centos-master:8080\"

# Add your own!
KUBELET_ARGS=\"\"
EOF
"'; done

Now, setup flannel…

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
cat << EOF > /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS=\"http://centos-master:2379\"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX=\"/kube-centos/network\"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=\"\"
EOF
"'; done

And get the services running….

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart \$SERVICES
    systemctl enable \$SERVICES
    systemctl status \$SERVICES
done
"'; done

And we’ll double check those

for i in $minion_ips; do ssh centos@$i 'sudo /bin/bash -c "
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl status \$SERVICES | grep -P \"(\.service \-|Active)\"
done
"'; done


Drum roll please…. Let’s see if it’s all running!

So OK, one more step… Let's set some defaults in kubectl; we'll do this from the master. So I'm going to ssh directly to that machine and work from there…

$ ssh centos@

And then we’ll perform:

kubectl config set-cluster default-cluster --server=http://centos-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
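Those three commands just write entries into ~/.kube/config; the resulting file is shaped roughly like this (a structural sketch, not a byte-for-byte dump):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: http://centos-master:8080
contexts:
- name: default-context
  context:
    cluster: default-cluster
    user: default-admin
current-context: default-context
```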

Here’s… the moment of truth. Let’s see if we can see all the nodes…

[centos@centos-master ~]$ kubectl get nodes
NAME              STATUS    AGE
centos-minion-1   Ready     2m
centos-minion-2   Ready     2m

Yours should look about like the above!

So, you wanna run a pod?

Well this isn’t much fun without having a pod running, so let’s at least get something running.

Create an nginx pod

Let's create an nginx pod… Create a pod spec anywhere you want on the master; here's what mine looks like:

[centos@centos-master ~]$ cat nginx_pod.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Now you can create it, given that yaml file.

[centos@centos-master ~]$ kubectl create -f nginx_pod.yaml 

And you can see it being created when you get pods…

[centos@centos-master ~]$ kubectl get pods
NAME          READY     STATUS              RESTARTS   AGE
nginx-8rajt   0/1       ContainerCreating   0          10s
nginx-w2yja   0/1       ContainerCreating   0          10s

And you can get details about the pod with:

[centos@centos-master ~]$ kubectl describe pod nginx-8rajt
Name:       nginx-8rajt
Namespace:  default
Node:       centos-minion-2/
Start Time: Thu, 09 Feb 2017 19:39:14 +0000
Labels:     app=nginx
Status:     Pending

In this case you can see this is running on centos-minion-2. And there are two instances of this pod! We specified replicas: 2 in our pod spec, and it's the replication controller's job to make sure that many instances are running, in this case 2 spread across our hosts.

Create a service to expose nginx.

Now that’s all well and good, but… What if we want to, y’know, serve something? (Omitting, uhhh, content!) But, we can do that by exposing this to a service.

So let’s go and expose it… Let’s create a service spec. Here’s what mine looks like:

[centos@centos-master ~]$ cat nginx_service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
    # The port that this service should run on.
    - port: 9090
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: LoadBalancer

And then we create that…

[centos@centos-master ~]$ kubectl create -f nginx_service.yaml
service "nginxservice" created

And we can see what’s running by getting the services and describing the service.

[centos@centos-master ~]$ kubectl get services
kubernetes       <none>        443/TCP    1h
nginxservice   <pending>     9090/TCP   58s

[centos@centos-master ~]$ kubectl describe service nginxservice
Name:           nginxservice
Namespace:      default
Labels:         name=nginxservice
Selector:       app=nginx
Type:           LoadBalancer
Port:           <unset> 9090/TCP
NodePort:       <unset> 32702/TCP
Session Affinity:   None
No events.

Oh so you want to actually curl it? Next time :) Leaving you with a teaser for the following installments. Maybe next time we’ll do this all with Ansible instead of these tedious ssh commands.