Whereabouts -- A cluster-wide CNI IP Address Management (IPAM) plugin

Something that’s a real challenge when you’re trying to attach multiple networks to pods in Kubernetes is getting the right IP addresses assigned to those interfaces. Sure, you’d think, “Oh, give it an IP address, no big deal” – but, turns out… It’s less than trivial. That’s why I came up with the IP Address Management (IPAM) plugin that I call “Whereabouts” – you can think of it like a DHCP replacement: it assigns IP addresses dynamically to interfaces created by CNI plugins in Kubernetes. Today, we’ll walk through how to use Whereabouts, and highlight some of the issues that it overcomes. First – a little background.

The “multi-networking problem” in Kubernetes is something that’s been near and dear to me. Basically what it boils down to is the question “How do you access multiple networks from networking-based workloads in Kube?” As a member of the Network Plumbing Working Group, I’ve helped to write a specification for how to express your intent to attach to multiple networks, and I’ve contributed to Multus CNI in the process. Multus CNI is a reference implementation of that spec and it gives you the ability to create additional interfaces in pods, each one of those interfaces created by CNI plugins. This kind of functionality is critical for creating network topologies that provide control and data plane isolation (for example). If you’re a follower of my blog – you’ll know that I’m apt to use telephony examples (especially with Asterisk!) usually to show how you might isolate signal, media and control.

I’ll admit to being somewhat biased (being a Multus maintainer), but typically I see community members pick up Multus and have some nice success with it rather quickly. However, sometimes they get tripped up when it comes to getting IP addresses assigned on their additional interfaces. Usually they start with the quick-start guide. The examples for Multus CNI are focused on a quick start in a lab, and for IP address assignment, we use the host-local reference plugin from the CNI maintainers. It works flawlessly for a single node.

host-local with a single node

But… Once they get through the quickstart guide in a lab, they’re like “Great! Ok, now let’s expand the scale a little bit…” and once that happens, they’re using more than one node, and… It all comes crumbling down.

host-local with multiple nodes

See – the reason why host-local doesn’t work across multiple nodes is actually right in the name “host-local” – the storage for the IP allocations is local to each node. That is, it stores which IPs have been allocated in a flat file on each node, and it doesn’t know if IPs in the same range have been allocated on a different node. This is… Frustrating, and it’s really the core reason why I originally created Whereabouts. That’s not to say there’s anything inherently wrong with host-local – it works great for the purpose for which it’s designed, and its purview (from my view) is local configurations for each node (which isn’t necessarily the paradigm used with a technology like Multus CNI, where CNI configurations aren’t local to each node).
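To make that failure mode concrete, here’s a hedged little sketch in Python (this is NOT host-local’s actual code – the range is a hypothetical example, and sets stand in for the per-node flat files):

```python
import ipaddress

def host_local_alloc(node_store, cidr):
    """Allocate the next free IP using only THIS node's local records."""
    for ip in ipaddress.ip_network(cidr).hosts():
        if str(ip) not in node_store:
            node_store.add(str(ip))
            return str(ip)
    raise RuntimeError("range exhausted")

# Two nodes, each with their own "flat file" of allocations:
node1_store, node2_store = set(), set()
a = host_local_alloc(node1_store, "10.10.0.0/24")
b = host_local_alloc(node2_store, "10.10.0.0/24")
print(a, b)  # both nodes hand out 10.10.0.1 -> a cross-node collision
```

Since neither node can see the other’s records, both happily assign the same first address – exactly the collision you hit when you scale the quickstart past one node.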

Of course, the next thing you might ask is “Why not just DHCP?” and actually that’s what people typically try next. They’ll try to use the DHCP CNI plugin. And you know, the DHCP CNI plugin is actually pretty great (and aside from the README, these rkt docs kind of explain it pretty well in the IP Address management section). But, some of it is less than intuitive. Firstly, it requires two parts – one of which is to run the DHCP CNI plugin in “daemon mode”. You’ve gotta have this running on each node, so you’ll need a recipe to do just that. But… It’s a “DHCP CNI Plugin in Daemon Mode” – it’s not a “DHCP Server”. Soooo – if you don’t already have a DHCP server you can use, you’ll also need to set up a DHCP server itself. The “DHCP CNI Plugin in Daemon Mode” just gives you a way to listen for DHCP messages.

And personally – I think managing a DHCP server is a pain in the gluteus maximus. And it’s the beginning of ski season, and I’m a telemark skier, so I have enough of those pains.

I’d also like to give some BIG THANKS! I’d like to point out that Christopher Randles has made some monstrous contributions to Whereabouts – especially but not limited to the engine which provides the Kubernetes-backed data store (Thanks Christopher!). Additionally, I’d also like to thank Tomofumi Hayashi who is the author of the static IPAM CNI plugin. I originally based Whereabouts on the structure of the static IPAM CNI plugin as it had all the basics, and also I could leverage what was built there to allow Whereabouts users to also use the static features alongside Whereabouts.

How Whereabouts works

From a user perspective, it’s pretty easy – basically, you add a section to your CNI configuration(s). The CNI specification has a construct for “ipam” – IP Address management.

Here’s an example of what a Whereabouts configuration looks like:

"ipam": {
    "type": "whereabouts",
    "datastore": "kubernetes",
    "kubernetes": { "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig" },
    "range": ""
}

Here, we’re essentially saying:

  • We choose whereabouts as a value for type which defines which IPAM plugin we’re calling.
  • We’d like to use kubernetes for our datastore (where we’ll store the IP addresses we’ve allocated), and we’ll provide a kubeconfig for it so Whereabouts can access the kube API
  • And we’d like an IP address range that’s a /24 – we’re asking Whereabouts to assign us IP addresses within that range

Behind the scenes, honestly… It’s not much more complex than what you might assume from the exposed knobs from the user perspective. Essentially – it’s storing the IP address allocations in a data store. It can use the Kubernetes API natively to do so, or, it can use an etcd instance. This provides a method to access what’s been allocated across the cluster – so you can assign IP addresses across nodes in the cluster (unlike being limited to a single host, with host-local). Otherwise, regarding internals – I have to admit it was kind of satisfying to program the logic to scan through IP address ranges with bitwise operations, ok I’m downplaying it… Let’s be honest, it was super satisfying.
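For a flavor of that range scanning, here’s a simplified, hedged sketch in Python (the real plugin is written in Go, handles IPv6, and reads reservations from the datastore – this just shows the integer math over an IPv4 range, with example addresses of my own choosing):

```python
import socket
import struct

def ip_to_int(ip):
    # Pack dotted-quad into 4 bytes, then read it as a big-endian integer
    return struct.unpack(">I", socket.inet_aton(ip))[0]

def int_to_ip(n):
    return socket.inet_ntoa(struct.pack(">I", n))

def next_free(network_addr, prefix_len, reserved):
    """Scan the range with plain integer math for the first unreserved IP."""
    base = ip_to_int(network_addr)
    size = 1 << (32 - prefix_len)        # number of addresses in the range
    for offset in range(1, size - 1):    # skip network & broadcast addresses
        candidate = int_to_ip(base + offset)
        if candidate not in reserved:
            return candidate
    return None                          # range exhausted

print(next_free("192.0.2.0", 24, {"192.0.2.1"}))  # -> 192.0.2.2
```

The cluster-wide datastore is what makes the `reserved` set meaningful: every node consults the same record of what’s taken before handing out the next candidate.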


Requirements

  • A Kubernetes Cluster v1.16 or later
  • You need a default network CNI plugin installed (like Flannel [or Weave, or Calico, etc, etc])
  • Multus CNI
    • I’ll cover a basic installation here, so you don’t need to have it right now. But, if you already have it installed, you’ll save a step.
    • If you’re using OpenShift – you already have all of the above out of the box, so you’re all set.

Essentially, all of the commands will be run from wherever you have access to kubectl.

Let’s install Multus CNI

You can always refer to the quick start guide if you’d like more information about it, but, I’ll provide the cheat sheet here.

Basically we just clone the Multus repo and then apply the daemonset for it…

git clone https://github.com/intel/multus-cni.git && cd multus-cni
cat ./images/multus-daemonset.yml | kubectl apply -f -

You can check to see that it’s been installed by watching the pods for it come up, with watch -n1 kubectl get pods --all-namespaces. When you see the kube-multus-ds-* pods in a Running state you’re good. If you’re a curious type you can check out the contents (on any or all nodes) of /etc/cni/net.d/00-multus.conf to see how Multus was configured.

Let’s fire up Whereabouts!

The installation for it is easy, it’s basically the same as Multus, we clone it and apply the daemonset. This is copied directly from the Whereabouts README.

git clone https://github.com/dougbtv/whereabouts && cd whereabouts
kubectl apply -f ./doc/daemonset-install.yaml -f ./doc/whereabouts.cni.k8s.io_ippools.yaml

Same drill as above, just wait for the pods to come up with watch -n1 kubectl get pods --all-namespaces, they’re named whereabouts-* (usually in the kube-system namespace).

Time for a test drive

The goal here is to create a configuration to add an extra interface on a pod, add a Whereabouts configuration to that, spin up two pods, have those pods on different nodes, and show that they’ve been assigned IP addresses as we’ve specified.

Alright, what I’m going to do next is to give my nodes some labels so I can be assured that pods wind up on different nodes – this is mostly just used to illustrate that Whereabouts works with multiple nodes (as opposed to how host-local works).

$ kubectl get nodes
$ kubectl label node kube-whereabouts-demo-node-1 side=left
$ kubectl label node kube-whereabouts-demo-node-2 side=right
$ kubectl get nodes --show-labels

Now what we’re going to do is create a NetworkAttachmentDefinition – this is a custom resource that we’ll create to express that we’d like to attach an additional interface to a pod. Basically what we do is pack a CNI configuration inside our NetworkAttachmentDefinition. In this CNI configuration we’ll also include our whereabouts config.

Here’s how I created mine:

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "datastore": "kubernetes",
        "kubernetes": { "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig" },
        "range": "",
        "log_file" : "/tmp/whereabouts.log",
        "log_level" : "debug"
      }
    }'
EOF

What we’re doing here is creating a NetworkAttachmentDefinition for a macvlan-type interface (using the macvlan CNI plugin).

NOTE: If you’re copying and pasting the above configuration (and I hope you are!) make sure you set the master parameter to match the name of a real interface name as available on your nodes.

Then we specify an ipam section, and we say that we want to use whereabouts as our type of IPAM plugin. We specify where the kubeconfig lives (this gives whereabouts access to the Kube API).

And maybe most important to us as users – we specify the range we’d like to have IP addresses assigned in. You can use CIDR notation here, and… If you need to use other options to exclude ranges, or other range formats – check out the README’s guide to the core parameters.
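As a rough illustration of how a range narrows when you exclude a sub-range (the CIDRs here are documentation-example values, not anything from this article – see the README for the actual option names and formats):

```python
import ipaddress

def assignable(range_cidr, exclude_cidrs):
    """List host addresses in range_cidr that fall outside the exclusions."""
    net = ipaddress.ip_network(range_cidr)
    excluded = [ipaddress.ip_network(e) for e in exclude_cidrs]
    return [str(ip) for ip in net.hosts()
            if not any(ip in e for e in excluded)]

# A /24 with the first /28 carved out for something else:
pool = assignable("192.0.2.0/24", ["192.0.2.0/28"])
print(pool[0], len(pool))  # first assignable IP is 192.0.2.16; 239 remain
```

Same idea as the plugin’s exclusion options: the pool Whereabouts draws from is the range minus anything you’ve told it to skip.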

After we’ve created this configuration, we can list it too – in case we need to remove or change it later, such as:

$ kubectl get network-attachment-definitions.k8s.cni.cncf.io

Alright, we have all our basic setup together, now let’s finally spin up some pods…

Note that we have annotations here that include k8s.v1.cni.cncf.io/networks: macvlan-conf – that value of macvlan-conf matches the name of the NetworkAttachmentDefinition that we created above.

Let’s create the first pod for our “left side” label:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod-left
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod-left
    command: ["/bin/bash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: dougbtv/centos-network
  nodeSelector:
    side: left
EOF

And again for the right side:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod-right
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod-right
    command: ["/bin/bash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: dougbtv/centos-network
  nodeSelector:
    side: right
EOF

I then wait for the pods to come up with watch -n1 kubectl get pods --all-namespaces, or I look at the details of one pod with watch -n1 'kubectl describe pod samplepod-left | tail -n 50'.

Also – you’ll note that if you run kubectl get pods -o wide, the pods are indeed running on different nodes.

Once the pods are up and in a Running state, we can interact with them.

The first thing I do is check out that the IPs have been assigned:

$ kubectl exec -it samplepod-left -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 3e:f7:4b:a1:16:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether b6:42:18:70:12:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global net1
       valid_lft forever preferred_lft forever

You’ll note there are three interfaces: a local loopback, an eth0 that’s for our “default network” (where we have pod-to-pod connectivity by default), and an additional interface – net1. This is our macvlan connection AND it’s got an IP address assigned dynamically by Whereabouts.

Let’s check out the right side, too:

$ kubectl exec -it samplepod-right -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 96:28:58:b9:a4:4c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 7a:31:a7:57:82:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global net1
       valid_lft forever preferred_lft forever

Great, we’ve got another dynamically assigned address that doesn’t collide with the address already reserved on the left side!

And while connectivity is kind of outside the scope of this article – in most cases it should generally work right out of the box, and you should be able to ping from one pod to the next!

[centos@kube-whereabouts-demo-master whereabouts]$ kubectl exec -it samplepod-right -- ping -c5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.438 ms
64 bytes from icmp_seq=2 ttl=64 time=0.217 ms
64 bytes from icmp_seq=3 ttl=64 time=0.316 ms
64 bytes from icmp_seq=4 ttl=64 time=0.269 ms
64 bytes from icmp_seq=5 ttl=64 time=0.226 ms

And that’s how you can determine your pod’s Whereabouts (by assigning it a dynamic address without the pain of running DHCP!).

High Performance Networking with KubeVirt - SR-IOV device plugin to the rescue!

If you’ve got workloads that live in VMs, and you want to get them into your Kubernetes environment (because, I don’t wish maintaining two platforms even on the worst of the supervillains!) – you might also have networking workloads that require you to really push some performance… KubeVirt with the SR-IOV device plugin might be just the hero you need to save the day. Not all heroes wear capes; sometimes those heroes just wear a t-shirt with a KubeVirt logo that they got at KubeCon. Today we’ll spin up KubeVirt with the SR-IOV device plugin and we’ll run a VoIP workload on it, so jump into a phone booth, change into your KubeVirt t-shirt and fire up a terminal!

I’ll be giving a talk at KubeCon EU 2019 in Barcelona titled High Performance Networking with KubeVirt. Presenting with me is the guy with the best Yoda drawing on all of GitHub, Abdul Halim from Intel, and I’ll give a demonstration of what’s going on here in this article. This material will be provided to attendees too, so that they can follow the bouncing ball and get the same demo working in their environment.

Part of the talk is this recorded demo on YouTube. It’ll give you a preview of all that we’re about to do here in this article. Granted, this recorded demo skips over some of the most interesting configuration, but it shows the results. We’ll cover all the details herein to get you to the same point.

We’ll look at spinning up KubeVirt, with SR-IOV capabilities. We’ll walk through what the physical installation and driver setup looks like, we’ll fire up KubeVirt, spin up VMs running in Kube, and then we’ll put our VoIP workload (using Asterisk) in those pods – which isn’t complete until we terminate a phone call over a SIP trunk! The only thing that’s on you is to install Kubernetes (but, I’ll have pointers to get you started there, too). Just a quick note that I’m just using Asterisk as an example of a VoIP workload, it’s definitely NOT limited to running in a VM, it also works well in a container, even as a containerized VNF. You might be getting the point that I love Asterisk! (Shameless plugin, it’s a great open source telephony solution!)

So – why VMs? The thing is, maybe you’re stuck with them. Maybe it’s how your vendor shipped the software you bought and deploy. Maybe the management of the application is steeped in the history of it being virtualized. Maybe your software has legacies that simply just can’t be easily re-written into something that’s containerized. Maybe you like having pets (I don’t always love pets in my production deployments – but, I do love my cats Juniper & Otto, who I trained using know-how from The Trainable Cat! …Mostly I just trained them to come inside on command as they’re indoor-outdoor cats.)

Something really cool about the KubeVirt ecosystem is that it REALLY leverages some other heroes in the open source community. A good hero works well in a team, for sure. In this case KubeVirt leverages Multus CNI, which enables us to connect multiple network interfaces to pods (which also means VMs, in the case of KubeVirt!), and we also use the SR-IOV Device Plugin – this plugin gives the Kubernetes scheduler awareness of which limited resources on our worker nodes have been exhausted – specifically, which SR-IOV virtual functions (VFs) have been used up – so that we schedule workloads on machines that have sufficient resources.
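To illustrate that accounting idea, here’s a toy model in Python (this is NOT the device plugin’s actual code – the real bookkeeping happens between the device plugin, the kubelet, and the scheduler via extended resources, and the node names and VF counts here are hypothetical):

```python
# Hypothetical per-node VF capacities:
nodes = {"node-1": {"vf_capacity": 2, "vf_used": 0},
         "node-2": {"vf_capacity": 8, "vf_used": 0}}

def schedule_vm(nodes, vfs_needed=1):
    """Place a workload on the first node with enough free VFs."""
    for name, counts in nodes.items():
        if counts["vf_capacity"] - counts["vf_used"] >= vfs_needed:
            counts["vf_used"] += vfs_needed
            return name
    return None  # no node has free VFs -> the pod would stay Pending

placements = [schedule_vm(nodes) for _ in range(3)]
print(placements)  # -> ['node-1', 'node-1', 'node-2']
```

Without that awareness, the scheduler would happily land a VM on a node whose VFs are already exhausted – which is exactly the problem the device plugin solves.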

I’d like to send a HUGE thanks to Booxter – Ihar from the KubeVirt team at Red Hat helped me get all of this going, and I could not have gotten nearly as far as I did without his help. Also thanks to SchSeba & Phoracek, too!


Requirements

Not a ton of requirements – I think the heaviest two here are that you’re going to need:

  • Some experience with Kubernetes (you know how to use kubectl for some basic stuff, at least), and a way to install Kubernetes.
  • SR-IOV capable devices on bare metal machines (and make them part of the Kubernetes cluster that you create)

I’m not going to cover the Kubernetes install here, I have some other material I will share with you on how to do so, though.

In my case, I spun up a cluster with kubeadm. Additionally, I also used my kube-ansible playbooks. If you’d like to use those playbooks, I also have another blog article on how to use kube-ansible.

Install a “default network”

Once you have Kubernetes installed – you’re going to need to have some CNI plugin installed to act as the default network for your cluster. This will provide network connectivity between pods in the regular old fashioned way that you’re used to. Why am I calling it the “default network”, you ask? Because we’re going to add additional network interfaces and attachments to other networks on top of this.

I used Flannel, and installed it like so:

$ curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > flannel.yml
$ kubectl apply -f flannel.yml 

When it’s installed you should see all nodes in a “ready” state when you issue kubectl get nodes.

SR-IOV Setup

Primarily, I followed the KubeVirt docs for SR-IOV setup. In my opinion, this is maybe the biggest adventure in this whole process – mostly because depending on what SR-IOV hardware you have, and what mobo & CPU you have, etc… It might require you to dig deeply into your BIOS and figure out what to enable.

Mostly – I will leave this adventure to you, but, I will give you a quick overview of how it went on my equipment.

It’s a little like making a witch’s brew: “Less eye of newt, more hair of frog… nope. Ok let’s try that again, blackcat_iommu=no ravensbreath_pci=on…”

Or as my co-worker Anton Ivanov said:

It’s just like that old joke about SCSI. How many places do you terminate a SCSI cable? Three. Once on each end and a black goat with a silver knife at full moon in the middle

Mostly, I first had to modify my kernel parameters, so, I added an extra menuentry in my /etc/grub2.cfg, and set it as the default with grubby --set-default-index=0, and made sure my linux line included:

amd_iommu=on pci=realloc

Make sure to do this on each node in your cluster that has SR-IOV hardware.

Note that I was using an AMD based motherboard and CPU, so you might have intel_iommu=on if you’re using Intel, and the KubeVirt docs suggest a couple other parameters you can try.

If you need more help with Grub configurations, the Fedora docs on working with the GRUB2 bootloader are very helpful.

Then, in my BIOS I had to enable a number of things, I had to make sure SR-IOV support was on, as well as enabling IOMMU, and PCIe ARI Support.

After I had that up, I was able to find the VFs like so:

$ find /sys -name "*vfs*"

Then I checked a device’s sriov_totalvfs and echoed a number (up to that total) into its sriov_numvfs:

$ cat /sys/devices/pci0000:00/0000:00:03.2/0000:2f:00.2/sriov_totalvfs
$ echo 32 > /sys/devices/pci0000:00/0000:00:03.2/0000:2f:00.2/sriov_numvfs

If it errors out, you might get a hint by following your journal with journalctl -f. I almost thought I was going to have to modify my BIOS (gulp!) – I had found this Reddit thread – but, luckily it never got that far for me. It took me a few iterations of fixing my kernel parameters and finding all the hidden bits in my BIOS, but… With patience I got there.

…Last but not least, make sure your physical ports on your SR-IOV card are connected to something. I had forgotten to connect mine initially and I couldn’t get SR-IOV capable interfaces in my VMs to come up. So, back to our roots – check layer 1!

Make sure to modprobe vfio-pci

Make sure you have the vfio-pci kernel module loaded…

I did:

# modprobe vfio-pci

And then verified it with:

# lsmod | grep -i vfio

And then I added vfio-pci to /etc/modules so it loads on boot.

KubeVirt installation

First we install the cluster-network-addons operator; this will install Multus CNI and the SR-IOV device plugin.

Before we get any further, let’s open the SR-IOV feature gate. So, on your machine where you use kubectl, issue:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
  namespace: kubevirt
  labels:
    kubevirt.io: ""
data:
  feature-gates: "SRIOV"
EOF

It’s assumed you’d generally do this on the master, or, wherever you run kubectl from.

Let’s follow the add-on operator deployment

kubectl apply -f https://raw.githubusercontent.com/kubevirt/cluster-network-addons-operator/master/manifests/cluster-network-addons/0.7.0/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubevirt/cluster-network-addons-operator/master/manifests/cluster-network-addons/0.7.0/network-addons-config.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/kubevirt/cluster-network-addons-operator/master/manifests/cluster-network-addons/0.7.0/operator.yaml

And we make an example custom resource…

kubectl apply -f https://raw.githubusercontent.com/kubevirt/cluster-network-addons-operator/master/manifests/cluster-network-addons/0.7.0/network-addons-config-example.cr.yaml

Watch for it all to come up…

$ watch -n1 kubectl get pods --all-namespaces -o wide

You can also use this wait condition…

$ kubectl wait networkaddonsconfig cluster --for condition=Ready

Install the KubeVirt operator

Next we’ll follow instructions from the KubeVirt docs for installing the KubeVirt operator. In this case we’ll follow the “#2” instructions here for the “Alternative flow (aka Operator flow)”.

It was suggested to me to use the latest version, as of this writing on the KubeVirt releases it’s shown to be v0.17.0.

$ export VERSION=v0.17.0
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-operator.yaml
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-cr.yaml

Watch the pods to be ready, kubectl get pods and all that good stuff.

Then we wait for this to be readied up…

$ kubectl wait kv kubevirt --for condition=Ready

(Mine never became ready?)

[centos@kube-nonetwork-master ~]$ kubectl wait kv kubevirt --for condition=Ready
Error from server (NotFound): kubevirts.kubevirt.io "kubevirt" not found

Install virtctl

$ wget https://github.com/kubevirt/kubevirt/releases/download/v0.17.0/virtctl-v0.17.0-linux-amd64
$ chmod +x virtctl-v0.17.0-linux-amd64
$ sudo mv virtctl-v0.17.0-linux-amd64 /usr/bin/virtctl

Alright cool, at this point you’ve got KubeVirt installed!

Setup SR-IOV on-disk configuration file /etc/pcidp/config.json

For this step, we’re going to use a helper script. I took this from an existing (and open at the time of writing this article) pull request, and I put it into this gist.

I went ahead and did this as root on each node that has SR-IOV devices (in my case, just one machine)

# curl -s https://gist.githubusercontent.com/dougbtv/1d83c233975e3444957e318f39949d14/raw/ef0bcad7e4a318b3791934ff60a87cc40c4233a9/sriov-helper.sh > sriov-helper.sh
# chmod +x sriov-helper.sh
# ./sriov-helper.sh

Now we can inspect the contents of the file…

# cat /etc/pcidp/config.json

On my machine I can see that the rootDevices matches what I initialized in my SR-IOV setup way above in this article, specifically 2f:00.2.

Restart the SR-IOV device plugin pods…

Now that this is set up, you have to delete the SR-IOV pods… Back to the master (or wherever your kubectl command is run from).

Give this a try…

$ kubectl get pods --namespace=sriov | grep device-plugin | awk '{print $1}' | xargs -L 1 -i kubectl delete pod {} --namespace=sriov

If it stalls out (full disclosure, mine did), you can just list them and delete one-by-one.

$ kubectl get pods --namespace=sriov -o wide | grep device-plugin

and then with each one:

$ kubectl delete pod $each_pod_name_here --namespace=sriov

And then just to make sure, I took the one pod running on my host with SR-IOV devices and looked at the logs…

$ kubectl logs kube-sriov-device-plugin-nblww --namespace=sriov

In this case, I could see the last line was a ListAndWatch(sriov) log and it had content about my device, looked something like this:


Let’s start a (vanilla) Virtual Machine!

Move back to your master (or wherever you run kubectl from), and we’re going to spin up a vanilla VM just to get the commands down and make sure everything’s looking hunky dory.

First we’ll clone the kubevirt repo (word to the wise, it’s pretty big, maybe 400 meg clone).

$ git clone https://github.com/kubevirt/kubevirt.git --depth 50 && cd kubevirt

Let’s move into the example VMs section…

$ cd cluster/examples/

And edit a file in there – let’s edit vm-cirros.yaml, a classic test VM image. Bring it up in your editor first if you’re curious, but we’ll edit it in place like so:

$ sed -ie "s|registry:5000/kubevirt/cirros-container-disk-demo:devel|kubevirt/cirros-container-disk-demo:latest|" vm-cirros.yaml

Kubectl create from that file…

$ kubectl create -f vm-cirros.yaml

And let’s look at the vms custom resources, and we’ll see that it’s created, but, not yet running.

$ kubectl get vms
vm-cirros   2m13s   false     

Yep, it’s not started yet, let’s start it…

$ virtctl start vm-cirros
VM vm-cirros was scheduled to start
$ kubectl get vms
vm-cirros   3m17s   true      

Wait for it to come up (watch the pods…), and then we’ll console in (you can see that the password is listed right there in the MOTD, gocubsgo). You might have to hit <enter> to see the prompt.

[centos@kube-nonetwork-master examples]$ virtctl console vm-cirros
Successfully connected to vm-cirros console. The escape sequence is ^]

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
vm-cirros login: cirros
$ echo "foo"

(You can hit ctrl+] to get back to your command line, btw.)

Presenting… a VM with an SR-IOV interface!

Ok, back into your master, and still in the examples directory… Let’s create the SR-IOV example. First we change the image location again…

sed -ie "s|registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel|kubevirt/fedora-cloud-container-disk-demo:latest|" vmi-sriov.yaml

Create a network configuration, a NetworkAttachmentDefinition for this one…

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
spec:
  config: '{
    "type": "sriov",
    "name": "sriov-net",
    "ipam": {
      "type": "host-local",
      "subnet": "",
      "rangeStart": "",
      "rangeEnd": "",
      "routes": [{
        "dst": ""
      }],
      "gateway": ""
    }
  }'
EOF

(Side note: The IPAM section here isn’t actually doing a lot for us, in theory you can have "ipam": {}, instead of this setup with the host-local plugin – I struggled with that a little bit, so, I included here an otherwise dummy IPAM section)

Then create the VMI from the file we just edited, and wait for it to come up:

$ kubectl create -f vmi-sriov.yaml

Console in with:

virtctl console vmi-sriov

Login as fedora (with password fedora), become root (sudo su -), and create an ifcfg-eth1 script:

[root@vmi-sriov2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1


# ifup eth1

You can now check out what the configs look like with: ip a.

Now – repeat this for a second VM. I copied the vmi-sriov.yaml to another file and changed the metadata->name to vmi-sriov2.

I then also created a /etc/sysconfig/network-scripts/ifcfg-eth1 and assigned a static IP address.

We’ll reference that IP address later when we create our VoIP workload.

Once you have those two together – you can probably make a ping between the two workloads, and… You can put your own workload in!

Or, if you like, you can also create a VoIP workload using Asterisk as I did.

Asterisk configuration

Install Asterisk from RPM in both VMs, like so:

yum install -y asterisk-pjsip asterisk asterisk-sounds-core-en-ulaw

Next, we’re going to setup our /etc/asterisk/pjsip.conf file on both VMs. This creates a SIP trunk between each machine.








Once you’ve loaded that, console into the VM and issue:

# asterisk -rx 'pjsip reload'

Next we’re going to create a file /etc/asterisk/extensions.conf which is our “dialplan” – this tells Asterisk how to behave when a call comes in our trunk. In our case, we’re going to have it answer the call, play a sound file, and then hangup.

Create the file as so:

exten => _X.,1,NoOp()
  same => n,Answer()
  same => n,SayDigits(1)
  same => n,Hangup()

Next, you’re going to tell asterisk to reload this with:

# asterisk -rx 'dialplan reload'

Now, from the first VM, go ahead and console in and run asterisk -rvvv to get an Asterisk console. We’ll set some debugging output on, and then we’ll originate a phone call:

vmi-sriov*CLI> pjsip set logger on
vmi-sriov*CLI> rtp set debug on
vmi-sriov*CLI> channel originate PJSIP/333@bob application saydigits 1

You should see a ton of output now! You’ll see the SIP messages to initiate the phone call, and then you’ll see information about the RTP (real-time protocol) packets that include the voice media going between the machines!

Awesome! Thanks for sticking with it, now… For your workload to the rescue!

A Kubernetes Operator Tutorial? You got it, with the Operator-SDK and an Asterisk Operator!

So you need a Kubernetes Operator Tutorial, right? I sure did when I started. So guess what? I got that b-roll! In this tutorial, we’re going to use the Operator SDK, and I definitely got myself up-and-running by following the Operator Framework User Guide. Once we have all that setup – oh yeah! We’re going to run a custom Operator. One that’s designed for Asterisk, it can spin up Asterisk instances, discover them as services and dynamically create SIP trunks between n-number-of-instances of Asterisk so they can all reach one another to make calls between them. Fire up your terminals, it’s time to get moving with Operators.

What exactly are Kubernetes Operators? In my own description – Operators are applications that manage other applications, specifically with tight integration with the Kubernetes API. They allow you to build your own “operational knowledge” into them, and perform automated actions when managing those applications. You might also want to see what CoreOS has to say on the topic, read their blog article where they introduced operators.

Sidenote: Man, what an overloaded term, Operators! In the telephony world, well, we have operators, like… a switchboard operator (I guess that one’s at least a little obsolete). Then we have platform operators, like… sysops. And we have how things operate, and the operations they perform… Oh my.

A guy on my team said (paraphrased): “Well if they’re applications that manage applications, then… Why write them in Go? Why not just write them in bash?”. He was… Likely kidding. However, it always kind of stuck with me and got me thinking about it a lot. One of the main reasons why you’ll see these written in Go is that Go is the default choice for interacting with the Kubernetes API. There are likely other ways to do it – but all of the popular tools for interacting with it are written in Go, just like Kubernetes itself. The thing here is – you probably care about managing your application running in Kubernetes with an operator because you care about integrating with the Kubernetes API.

One more thing to keep in mind here as we continue along – the idea of CRDs – Custom Resource Definitions. These are the lingua franca of Kubernetes Operators. We often watch what these are doing and take actions based on them. What’s a CRD? It’s often described as “a way to extend the Kubernetes API”, which is true. The thing is – that sounds SO BIG. It sounds daunting. It’s not, really. CRDs, in the end, are just a way for you to store some of your own custom data, and then access it through the Kubernetes API. Think of it as some metadata you can push into the Kube API and then access – so if you’re already interacting with the Kube API, it’s simple to store some of your own data, without having to roll your own way of otherwise storing (and reading & writing) that data.
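To make that concrete, here’s what such a blob of custom data can look like – note that the group (example.com), the kind (MyAppConfig), and the spec fields below are all invented for illustration:

```yaml
apiVersion: "example.com/v1alpha1"   # an API group you define
kind: "MyAppConfig"                  # a hypothetical custom kind
metadata:
  name: "my-config"
spec:
  greeting: "hello"                  # arbitrary custom data of your own
  replicas: 2
```

Once a matching CRD has been registered, kubectl get myappconfigs reads this back just like any built-in resource.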

Today we have a big agenda for this blog article… Here’s what we’re going to do:

  • Create a development environment where we can use the operator-sdk
  • Create our own application as scaffolded by the Operator SDK itself.
  • Spin up the asterisk-operator, dissect it a little bit, and then we’ll run it and see it in action.
  • Lastly, we’ll introduce the Helm Operator, a way to kind of lower the barrier to entry: it allows you to create a Kubernetes Operator using Helm, and it might solve some of the problems you’d use an Operator for without having to slang any golang.


Here are a few articles that I used when I was building this article myself. To follow along, you’ll need:


  • A CentOS 7 machine to use for development
    • These commands all reference CentOS, if you use Fedora (or something else), then it might take some conversion to get all the deps.
  • Access to Kubernetes version 1.9 or later cluster
    • Need a tute for that? Check out my latest Kubernetes install tutorial.
    • We will also cover a quick minikube installation
  • Your favorite text editor.
  • A rubber duck for debugging.

Basic development environment setup

Alright, we’ve got some deps to work through. Including, ahem, dep. I didn’t mark each command as “root or your regular user”, but in short: generally just the yum & systemctl lines here require su; otherwise run commands as your regular user.

Make sure you have git, and this is a good time to install whatever usual goodies you use.

$ yum install -y git
$ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"

Firstly, install Docker.

$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install docker-ce -y
$ systemctl enable docker
$ systemctl start docker

Install kubectl.

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ yum install -y kubectl

Double check that you’ve got bridge-nf-call-iptables all good.

$ sudo /bin/bash -c 'echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables'

Install minikube (optional: if this is part of a cluster or otherwise have access to another cluster). I’m not generally a huge minikube fan, however, in this case we’re working on a development environment (seeing that we’re looking into building an operator), so it’s actually appropriate here.

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.2/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
$ sudo /usr/local/bin/minikube start --vm-driver=none

It’ll take a few minutes while it downloads a few container images from which it runs Kubernetes.

If something went wrong and you need to restart minikube from scratch you can do so with:

$ sudo /usr/local/bin/minikube stop; cd /etc/kubernetes/; sudo rm -f *.conf; /usr/local/bin/minikube delete; cd -

Follow the instructions from minikube for setting up your .kube folder. I didn’t have great luck with it, so I performed a sudo su - in order to run say, kubectl get nodes to see that the cluster was OK. In my case, this also meant that I had to bring the cluster up as root as well.

You can test that your minikube is operational with:

kubectl get nodes

It should list just a single node.

Install a nice-and-up-to-date-golang.

$ rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
$ curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
$ yum install -y golang

I changed root’s ~/.bash_profile path (given my above Minikube situation) to:

export GOPATH=/home/centos/go
PATH=$PATH:$HOME/bin:$(go env GOPATH)/bin
export PATH

If you do the same thing you might want to be mindful of the /home/user in that path.

Set up your go environment a little; the goal here is being able to run binaries that are in your GOPATH’s bin directory.

$ mkdir -p ~/go/bin
$ export GOPATH=~/go
$ export PATH=$PATH:$(go env GOPATH)/bin

Ensure that directory exists…

mkdir -p $GOPATH/bin

Install dep.

$ curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh

Install the operator-sdk.

$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
$ git checkout master
$ export PATH=$PATH:$GOPATH/bin && make dep && make install

Create your new project

We’re going to create a sample project using the operator-sdk CLI tool. Note – I used my own GitHub namespace here; feel free to replace it with yours. If not – cool, you can also get a Halloween costume of me (and scare kids and neighbors!)

$ mkdir -p $GOPATH/src/github.com/dougbtv
$ cd $GOPATH/src/github.com/dougbtv
$ operator-sdk new hello-operator --kind=Memcached
$ operator-sdk add api  --api-version=cache.example.com/v1alpha1 --kind=Memcached
$ cd hello-operator

Sidenote: For what it’s worth, at some point I had tried a few versions of the operator-sdk tools to try to fix another issue. During this, I got a complaint (when running operator-sdk new ...) that something didn’t meet constraints (No versions of k8s.io/gengo met constraints), and it turned out it was a stale dep package cache. You can clear it as such:

[centos@operator-box github.com]$ rm -Rf $GOPATH/pkg/dep/sources

Also, ignore it if it complains that it can’t complete the git actions; they’re so simple you can just manage it as a git repo however you please.

Inspecting the scaffolded project

Let’s modify the types package to define what our CRD looks like…

Modify ./pkg/apis/cache/v1alpha1/types.go, replace the two structs at the bottom (that say // Fill me) like so:

type MemcachedSpec struct {
    // Size is the size of the memcached deployment
    Size int32 `json:"size"`
}

type MemcachedStatus struct {
    // Nodes are the names of the memcached pods
    Nodes []string `json:"nodes"`
}

And then update the generated code for the custom resources…

operator-sdk generate k8s

Then let’s update the handler, it’s @ ./pkg/stub/handler.go

We’ll replace that file in its entirety with this example memcached deployment code from github. Just copy-pasta it, or curl it down, whatever you like.

You’ll also need to change the github namespace in that file, replace it with your namespace + the project name you used during operator-sdk new $name_here. I changed mine like so:

$ sed -i -e 's|example-inc/memcached-operator|dougbtv/hello-operator|' pkg/stub/handler.go

Now, let’s create the CRD. First, let’s just cat (I’m a cat person, like, seriously I love cats, if you’re a dog person you can stop reading this article right now, or, you probably use less as a pager too, dog people, seriously!) it and take a look…

$ cat deploy/crd.yaml

Now you can create it…

$ kubectl create -f deploy/crd.yaml

Once it has been created, you can see it’s listed, but, there’s no CRD objects yet…

$ kubectl get memcacheds.cache.example.com

In the Operator-SDK user guide they list two options for running your operator. Of course, the production way to do it is to create a docker image and push it up to a registry, but… we haven’t even compiled this yet, so let’s go one step at a time and run it locally against our cluster.

$ operator-sdk up local

Cool, you’ll see it initialize, and you might get an error you can ignore for now:

ERRO[0000] failed to initialize service object for operator metrics: OPERATOR_NAME must be set 

Alright, so what has it done? Ummm, nothing yet! Let’s create a custom resource and we’ll watch what it does… Create a custom resource yaml file like so:

$ cat deploy/my.crd.yaml 
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 3

Now let’s apply it:

$ kubectl apply -f deploy/my.crd.yaml 

And we can go and watch what’s happening here…

$ watch -n1 kubectl get deployment

You’ll see that it’s creating a bunch of memcached pods from a deployment! Hurray! Now we can modify that…

Let’s edit the ./deploy/my.crd.yaml to have a size: 4, like so:

$ cat deploy/my.crd.yaml 
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 4

We can apply that, and then we’ll take another look…

$ kubectl apply -f deploy/my.crd.yaml 
$ watch -n1 kubectl get deployment

Awesome, 4 instances going. Alright cool, we’ve got an operator running! So… Can we create our own?

Creating our own operator!

Well, almost! What we’re going to do now is use Doug’s asterisk-operator. Hopefully there are some portions here that you can use as a springboard for your own Operator.

How the operator was created

Some of the things that I modified after I had the scaffold were:

  • Updated the types.go to include the fields I needed.
  • I moved the /pkg/apis/cache/ to /pkg/apis/voip/
    • And changed references to memcached to asterisk
  • Created a scheme to discover all IPs of the Asterisk pods
  • Created REST API calls to Asterisk to push the configuration

Some things to check out in the code…

Aside from what we reviewed earlier when we were scaffolding the application – which is arguably the most interesting part from the standpoint of “How do I create any operator that I want?” – the second most interesting part (or, potentially the most interesting, if you’re here for Asterisk) is how we handle service discovery and dynamically push configuration to Asterisk.

You can find the bulk of this in the handler.go. Give it a skim through, and you’ll find where it performs these actions:

  1. Creating the deployment and giving it a proper size based on the CRDs
  2. How it figures out the IP addresses of each pod, and then goes through and uses those to cycle through all the instances and create SIP trunks to all of the other Asterisk instances.

But… What about making it better? This Operator is mostly provided as an example – to “do a cool thing with Asterisk & Operators” – so some of the things here are clearly in the proof-of-concept realm. A few of the things that could use improvement are:

  1. It’s not very graceful with how it handles waiting for the Asterisk instances to become ready. There are some timing issues between when the pod is created and when the IP address is assigned. It’s not the cleanest in that regard.
  2. There’s a complete “brute force” method by which it creates all the SIP trunks. If you start with, say, 2 instances and change to 3 instances – well… it creates all of the SIP trunks all over again, instead of just creating the couple of new ones it needs. I went along with the idea of “don’t prematurely optimize”, but this could really justify some optimization.
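For what it’s worth, the “just the new trunks” version is a small diff over the desired and existing meshes. Here’s a sketch – the pod names are invented, and the real operator tracks state via the Kube API rather than an in-memory map:

```go
package main

import "fmt"

// mesh returns the set of directed trunk pairs for a full mesh:
// every instance trunks to every other instance, in both directions.
func mesh(pods []string) map[[2]string]bool {
	pairs := map[[2]string]bool{}
	for _, from := range pods {
		for _, to := range pods {
			if from != to {
				pairs[[2]string{from, to}] = true
			}
		}
	}
	return pairs
}

func main() {
	existing := mesh([]string{"ast-a", "ast-b"})
	desired := mesh([]string{"ast-a", "ast-b", "ast-c"})

	// only push config for trunks that don't exist yet
	created := 0
	for pair := range desired {
		if !existing[pair] {
			fmt.Printf("create trunk %s -> %s\n", pair[0], pair[1])
			created++
		}
	}
	fmt.Println("new trunks:", created)
}
```

Scaling 2 → 3 instances only needs the 4 trunks involving the new pod, rather than re-creating all 6.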

What’s the application doing?

Asterisk Operator diagram

In short, the application really just does three things:

  1. Watches a CRD to see how many Asterisk instances to create
  2. Figures out the IP addresses of all the Asterisk instances, using the Kube API
  3. Creates SIP trunks from each Asterisk instance to each other Asterisk instance, using ARI push configuration, allowing us to make calls from any Asterisk instance to any other Asterisk instance.

Let’s give the Asterisk Operator a spin!

This assumes that you’ve completed creating the development environment above, and have it all running – you know, with golang and GOPATH all set, minikube running and the operator-sdk binaries available.

First things first – make sure you pull the image we’ll use in advance; it’ll make for a lot less confusing waiting when you first start the operator itself.

docker pull dougbtv/asterisk-example-operator

Then, clone the asterisk-operator git repo:

mkdir -p $GOPATH/src/github.com/dougbtv && cd $GOPATH/src/github.com/dougbtv
git clone https://github.com/dougbtv/asterisk-operator.git && cd asterisk-operator

We’ll need to create the CRD for it:

kubectl create -f deploy/crd.yaml

Next… We’ll just start the operator itself!

operator-sdk up local

Ok, cool, now we’ll create a custom resource so that the operator sees it and spins up Asterisk instances – open up a new terminal window for this.

cat <<EOF | kubectl apply -f -
apiVersion: "voip.example.com/v1alpha1"
kind: "Asterisk"
metadata:
  name: "example-asterisk"
spec:
  size: 2
  config: "an unused field."
EOF

Take a look at the output from the operator – you’ll see it logging a number of things. It waits for each Asterisk pod’s IP to be found and for the Asterisk instances to boot – and then it’ll log that it’s creating some trunks for us.

Check out the deployment to see that all of the instances are up:

watch -n1 kubectl get deployment

You should see that it wants 2 instances, and that it has fulfilled them – it does this via a deployment it created.

Let’s go ahead and exec into one of the Asterisk pods, and we’ll run the Asterisk console…

kubectl exec -it $(kubectl get pods -o wide | grep asterisk | head -n1 | awk '{print $1}') -- asterisk -rvvv
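That inner pipeline just plucks the first Asterisk pod name out of the listing. You can see the mechanics with canned output (the pod names here are made up):

```shell
# simulated `kubectl get pods -o wide` output
printf 'example-asterisk-aaa 1/1 Running\nexample-asterisk-bbb 1/1 Running\n' \
  | grep asterisk | head -n1 | awk '{print $1}'
# prints: example-asterisk-aaa
```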

Let’s show the AORs (addresses of record):

example-asterisk-6c6dff544-2wfwg*CLI> pjsip show aors

      Aor:  <Aor..............................................>  <MaxContact>
    Contact:  <Aor/ContactUri............................> <Hash....> <Status> <RTT(ms)..>

      Aor:  example-asterisk-6c6dff544-wnkpx                     0
    Contact:  example-asterisk-6c6dff544-wnkpx/sip:anyuser 1a830a6772 Unknown         nan

Ok, cool, this has a trunk set up for us; the trunk name in the Aor field is example-asterisk-6c6dff544-wnkpx. Go ahead and copy that value from your own terminal (yours will be different – if it’s not different, leave your keyboard right now and go buy a lotto ticket).

We can use that to originate a call, I do so with:

example-asterisk-6c6dff544-2wfwg*CLI> channel originate PJSIP/333@example-asterisk-6c6dff544-wnkpx application wait 2
    -- Called 333@example-asterisk-6c6dff544-wnkpx
    -- PJSIP/example-asterisk-6c6dff544-wnkpx-00000000 answered

And we can see that there’s a call that’s been originated, and it has been answered by the other end! Go ahead and quit for now.

Ok – but, here comes the cool stuff. Let’s increase the size of our cluster, we requested 2 instances of Asterisk earlier, now we’ll bump it up to 3.

cat <<EOF | kubectl apply -f -
apiVersion: "voip.example.com/v1alpha1"
kind: "Asterisk"
metadata:
  name: "example-asterisk"
spec:
  size: 3
  config: "an unused field."
EOF

Now our kubectl get deployment will show us that we have three, but! Better yet, we have all the SIP trunks created for us. Let’s exec in and look at the AORs again.

kubectl exec -it $(kubectl get pods -o wide | grep asterisk | head -n1 | awk '{print $1}') -- asterisk -rvvv

Then we’ll do the same and show the AORs:

example-asterisk-6c6dff544-2wfwg*CLI> pjsip show aors

      Aor:  <Aor..............................................>  <MaxContact>
    Contact:  <Aor/ContactUri............................> <Hash....> <Status> <RTT(ms)..>

      Aor:  example-asterisk-6c6dff544-k2m7z                     0
    Contact:  example-asterisk-6c6dff544-k2m7z/sip:anyuser 0d391d57b2 Unknown         nan

      Aor:  example-asterisk-6c6dff544-wnkpx                     0
    Contact:  example-asterisk-6c6dff544-wnkpx/sip:anyuser 1a830a6772 Unknown         nan

Ah ha! Now there are 2 trunks available; the operator went and created a new one for us to the new Asterisk instance.

And we can originate a call to it, too!

example-asterisk-6c6dff544-2wfwg*CLI> channel originate PJSIP/333@example-asterisk-6c6dff544-wnkpx application wait 2
    -- Called 333@example-asterisk-6c6dff544-wnkpx
    -- PJSIP/example-asterisk-6c6dff544-wnkpx-00000001 answered

And there you have it – you can do it for n-number of instances. I tested it out with 33 instances, which works out to 1056 trunks (counting both sides), and… while it took like 15-ish minutes, which felt like forever… it takes me longer than that to create 2 or 3 by hand! So… not a terrible trade-off.
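That trunk count checks out, too – a full mesh of 33 instances, with a trunk in each direction between every pair, is 33 × 32:

```shell
# 33 instances, a trunk in each direction between every pair
echo $((33 * 32))
# prints: 1056
```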

Bonus: Helm Operator!

Let’s follow the 15-minute “operator with Helm” tutorial and see how far we can get. This uses the helm-app-operator-kit.

Clone the operator kit, we’ll use their example.

$ git clone https://github.com/operator-framework/helm-app-operator-kit.git
$ cd helm-app-operator-kit/

Now, build a Docker image. Note: You’ll probably want to change the name (from -t dougbtv/... to your name, or someone else’s name if that’s how you roll).

docker build \
  --build-arg HELM_CHART=https://storage.googleapis.com/kubernetes-charts/tomcat-0.1.0.tgz \
  --build-arg API_VERSION=apache.org/v1alpha1 \
  --build-arg KIND=Tomcat \
  -t dougbtv/tomcat-operator:latest .

Docker login and then push the image.

$ docker login
$ docker push dougbtv/tomcat-operator:latest

Alright, now there’s a series of things we’ve got to customize. There’s more instructions on what needs to be customized, too, if you need it.

# the chart name can stay as "tomcat"
$ sed -i -e 's/<chart>/tomcat/' helm-app-operator/deploy/operator.yaml 

# this you should change to your docker namespace
$ sed -i -e 's|quay.io/<namespace>|dougbtv|' helm-app-operator/deploy/operator.yaml

# Change the group & kind to match what we had in the docker build.
$ sed -i -e 's/group: example.com/group: apache.org/' helm-app-operator/deploy/crd.yaml 
$ sed -i -e 's/kind: ExampleApp/kind: Tomcat/' helm-app-operator/deploy/crd.yaml 

# And the name has to match that, too
$ sed -i -e 's/name: exampleapps.example.com/name: exampleapps.apache.org/' helm-app-operator/deploy/crd.yaml

# Finally update the Custom Resource to be what we like.
$ sed -i -e 's|apiVersion: example.com/v1alpha1|apiVersion: apache.org/v1alpha1|' helm-app-operator/deploy/cr.yaml
$ sed -i -e 's/kind: ExampleApp/kind: Tomcat/' helm-app-operator/deploy/cr.yaml

Now let’s deploy all that stuff we created!

$ kubectl create -f helm-app-operator/deploy/crd.yaml
$ kubectl create -n default -f helm-app-operator/deploy/rbac.yaml
$ kubectl create -n default -f helm-app-operator/deploy/operator.yaml
$ kubectl create -n default -f helm-app-operator/deploy/cr.yaml

Spin up a Kubernetes cluster on CentOS, a choose-your-own-adventure

So you want to install Kubernetes on CentOS? Awesome, I’ve got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose to not install Kubernetes, skip to the very bottom of the article. I’ve got just the recipe for you to brew it up. It’s been a year since my last article on installing Kubernetes on CentOS, and while it’s still probably useful – some of the Ansible playbooks we were using have changed significantly. Today we’ll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (and we’ll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything’s working swimmingly.

What’s inside?

Our goal here is to spin up a development cluster of Kubernetes machines to experiment here. If you’re looking for something that’s a little bit more production grade, you might want to consider using OpenShift – the bottom line is that it’s a lot more opinionated, and will guide you to make some good decisions for production, especially in terms of reliability and maintenance. What we’ll spin up here is more-or-less the bleeding edge of Kubernetes. This project is more appropriate for infrastructure experimentation, and is generally a bit more fragile.

We’ll be using Ansible – but you don’t have to be an Ansible expert. If you can get it installed (which should be as easy as a pip install or dnf install) – you’re well on your way. I’ll give you the command-by-command rundown here, and I’ll provide example inventories (which tell Ansible which machines to operate on). We use kube-ansible extensively here to do the job for us.

Generally – what these playbooks do is bootstrap some hosts for you so they’re readied for a Kubernetes install, and then they use kubeadm to do the install itself. If you have more interest in this, follow that previous link to the official docs, or check out my (now likely a bit dated) article on manually installing Kubernetes on CentOS.

Then, post install, the playbooks can install some CNI plugins – the plugins that Kubernetes uses to configure the networking on the cluster. By default we spin up the cluster with Flannel.

Brief overview of the adventure.

So what exactly are we going to do?

  • You’ll clone a repo to help install Kube on CentOS.
  • You’ll make a choice:
    • To provision a CentOS host to use as a virtual machine host which hosts the virtual guests which will comprise your cluster
    • Install CentOS on any number of machines (2+ recommended) which will become the nodes which comprise your cluster.
  • Install Kubernetes
  • Verify the installation by running a couple pods.


Overall you’re required to have:

  • Some box with Ansible installed – you don’t need to be an Ansible expert.
  • Git.
  • You guessed it, a coffee in hand. Beans must have been ground at approximately the time of brewing, and your coffee poured from 12” or higher into your drinking vessel to help aerate the coffee. Seeing as it’s a choose-your-own-adventure – you may also choose tea. You’ll just be suffering a little. But, grab some Smith Teamaker’s Rooibos, it’s pretty fine.

Secondarily, there’s a choose-your-own-adventure part. Basically, you can choose to either:

  1. Provision a host that can run virtual machines, or
  2. Spin up whatever CentOS hosts yourself.

Generally – I’d suggest #2. Hopefully you have a way to spin up hosts in your own environment. You could use anything from spacewalk, to bifrost, or… If you’re hipster cool, maybe you’re even using matchbox.

Mostly the playbooks used to spin up virtual machines for you herein are for my own quick iteration when I’m quickly building (and destroying) clusters, and trying different setups, configurations, new features, CNI plugins, etc. Feel free to use it, but, it could just slow you down if you otherwise have a workflow for spinning up boxen. Sidenote: For years I called a virtualization host I was using in a development environment “deathstar” because the rebels kept destroying the damn thing. Side-sidenote: I was a rebel.

If you’ve chosen “1. Provision a host that can run virtual machines.” – then you’re just required to have a host that can run virtual machines. I assume there’s already a CentOS operating system on it. You should have approximately 60-120+ gigs of disk space free, and maybe 16-32 gigs of RAM. That should be more than enough.

If you chose the adventure “2. Spin up whatever CentOS hosts yourself.” – then go ahead and spin those CentOS machines up yourself, and I’d recommend 3 of them. 2 is fine too; 1 will just not be nearly as much fun. Generally, I’d recommend 4 gigs of RAM apiece, and maybe 20+ gigs of disk free for each node.

I admit that the box sizing recommendations are fairly arbitrary. You’d likely size them according to your workloads, but, these are essentially “medium range guesses” to make sure it works.

Clone the kube-ansible repo.

Should be fairly simple, just clone ‘er right up:

$ git clone -b v0.5.0 https://github.com/redhat-nfvpe/kube-ansible.git && cd kube-ansible

You’ll note that we’re cloning at a particular tag – v0.5.0. If you want, omit the -b v0.5.0, which will make it so you’re on the master branch. In theory, it should be fine. I chose a particular tag for this article so it’ll still be relevant in the case that we (inevitably) make changes to the kube-ansible repo.

That copy-and-pasted command also changes into the cloned directory, and from there you can initialize the included roles…

$ ansible-galaxy install -r requirements.yml

I’m hopeful that there’s some maturity in these playbooks and that cloning at master HEAD would work fine too, but at this tag it’ll match your experience with this article. Granted – we’ll be installing the latest and greatest Kubernetes, so that will change.

So, what exactly do these playbooks do?

  1. Configures a machine to use as a virtual machine host (which is optional, you’ll get to choose this later on) on which the nodes run.
  2. Installs all the deps necessary on the hosts
  3. Runs kubeadm init to bootstrap the cluster (kubeadm docs)
  4. Installs a CNI plugin for pod networking (by default, it’s flannel.)
  5. Joins the hosts to a cluster.

You chose the adventure: Provision a host that can run virtual machines

If you chose the adventure “2. Spin up whatever CentOS hosts yourself.” head down to the next header topic, you’ve just saved yourself some work. (Unless you had to manually install CentOS like, twice – then you didn’t. But I’m hopeful you have a good way to spin up nodes in your environment.)

If you chose “1. Provision a host that can run virtual machines.”, continue reading from here.

I recommended adventure #2, to spin them up yourself, so I’m only going to glance over this part. I think it’s handy for iterating on Kubernetes setups, but there are really a bunch of options here. For the time being – I’m only going to cover a setup that uses a NAT’d network for the VMs. IMO – it’s less convenient, but it’s more normalized to document generally. So that’s what we’ll get today.

Alright – so you’ve got CentOS all setup on this new host, and you can SSH to it, and at least sudo root from there. That’s necessary for our Ansible playbook.

Let’s create a small inventory, and we’ll use that.

We can copy out a sample inventory, and we’ll go from there.

$ cp inventory/examples/virthost/virthost.inventory inventory/your_virthost.inventory

All edited, mine looks like:

vmhost ansible_host= ansible_ssh_user=root


This assumes you can SSH as root to that ansible_host specified there.

If you’ve got that all set – it shouldn’t be hard to spin up some VMs, now.

Just go ahead and run the virthost-setup playbook, such as:

$ ansible-playbook -i inventory/your_virthost.inventory -e "ssh_proxy_enabled=true" playbooks/virthost-setup.yml

By default this will spin up 4 hosts for us to use. If you’d like different VMs, you can specify them; you’ll find the default list in the variable called virtual_machines in the ./playbooks/ka-init/group_vars/all.yml file, which you’re intended to override (instead of edit) – you can specify the memory & CPU requirements for those VMs there, too.

Let that puppy run, and you’ll find that it creates a new inventory file for you – ./inventory/vms.local.generated.

It has also created a private key for SSHing to these VMs. So if you want to ssh to one, you can do something like:

$ ssh -i ~/.ssh/vmhost/id_vm_rsa -o ProxyCommand="ssh -W %h:%p root@" centos@


  • `~/.ssh/vmhost/id_vm_rsa` is the private key, and `vmhost` is the name of the host from the first inventory we used.
  • The address after root@ is the IP address of the virtualization host.
  • The address after centos@ is the IP address of the VM (which you discovered from looking at the vms.local.generated file)

Check that out – we’re going to use it in the “Installing Kubernetes” step (which you can skip to now).

You chose the adventure: Spin up whatever CentOS hosts yourself

If you chose “1. Provision a host that can run virtual machines.”, continue to the next header.

Go ahead and spin up N+1 boxes. I recommend at least 2; 3 makes it more interesting. And even more for the brave. You need at least a master, and I recommend another machine as a node.

Make sure that you can SSH to these boxes, and let’s create a sample inventory.

Create yourself an inventory, which you can base on this inventory:

kube-master ansible_host=
kube-node-1 ansible_host=
kube-node-2 ansible_host=




Go ahead and put that inventory file in the ./inventory directory at whatever name you choose, I’d choose ./inventory/myname.inventory – you can replace myname with your name, your dogs name, your favorite cheese – actually that’s the official suggested name of the inventory now… manchego.inventory.

So place that file at ./inventory/manchego.inventory.

(sidenote, I actually prefer a sharp cheddar, or a brie-style cheese like Jasper Hill’s Moses Sleeper)

Installing Kubernetes

Alright – you’ve gotten this far, you’re on the path to success. Let’s kick off an install.

In the following command, replace ./inventory/your.inventory with:

  • ./inventory/vms.local.generated if you chose #1, build a virtualization host
  • ./inventory/manchego.inventory if you chose #2, provision your own machines.

$ ansible-playbook -i ./inventory/your.inventory playbooks/kube-install.yml

Wait! Did you already run that? If you didn’t, there’s another mini-adventure you can choose: go to the next header, “Run the kube-install with Multus for networking”.

And you’re on the way to success! And if you’ve finished your coffee now… It’s time to skip down to “Verify your Kubernetes setup!”

(Optional) Run the kube-install with Multus for networking

If you aren’t going to use Multus, skip down to “Verify your Kubernetes setup!”, otherwise, continue here.

Alright, so this is an optional one, some of my audience for this blog gets here because they’re looking for a way to use Multus CNI. I’m a big fan of Multus, it allows us to attach multiple network interfaces to pods. If you’re following Multus, I urge you to check out what’s happening with the Network Plumbing Working Group (NPWG) – an offshoot of Kubernetes SIG-Network (the special interest group for networking). Up in the NPWG, we’re working on standardizing how multiple network attachments for pods work, and I’m excited to be trying Multus.

Ok, so you want to use Multus! Great. Let’s create an extra vars file that we can use.

$ cat inventory/multus-extravars.yml 
pod_network_type: "multus"
multus_use_crd: false
spare_rpms:
  - tcpdump
  - bind-utils
multus_ipam_subnet: ""
multus_ipam_rangeStart: ""
multus_ipam_rangeEnd: ""
multus_ipam_gateway: ""

Our Multus demo uses macvlan – so you’ll want to change the multus_ipam_* variables to match your network. This one matches the default NAT’ed setup for libvirt VMs in CentOS.
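Under the hood, those multus_ipam_* values feed a host-local IPAM section of the generated macvlan CNI config. Roughly, the rendered shape looks like this (a sketch only; the keys follow the standard host-local IPAM schema, and the angle-bracket placeholders stand in for your values):

```json
{
  "type": "macvlan",
  "ipam": {
    "type": "host-local",
    "subnet": "<multus_ipam_subnet>",
    "rangeStart": "<multus_ipam_rangeStart>",
    "rangeEnd": "<multus_ipam_rangeEnd>",
    "gateway": "<multus_ipam_gateway>",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```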

Now that we have that file in place, we can kick off the install like so:

$ ansible-playbook -i ./inventory/vms.local.generated -e "@./inventory/multus-extravars.yml" playbooks/kube-install.yml

If you created your own inventory, replace ./inventory/vms.local.generated with ./inventory/manchego.inventory (or whatever you called yours, if you didn’t pick my cheesy inventory name).

Verify your Kubernetes setup!

Go ahead and SSH to the master node and view which nodes have registered. If everything is good, it should look something like:

[centos@kube-master ~]$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    30m       v1.9.3
kube-node-1   Ready     <none>    22m       v1.9.3
kube-node-2   Ready     <none>    22m       v1.9.3
kube-node-3   Ready     <none>    22m       v1.9.3

Let’s create a pod to make sure things are working a-ok.

Create a yaml file that looks like so:

[centos@kube-master ~]$ cat nginx_pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

And tell kube to create the pods with:

[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 

Watch them come up with:

[centos@kube-master ~]$ watch -n1 kubectl get pods -o wide

Assuming you have multiple nodes, these should come up on separate nodes. Once they’re up, go ahead and find the IP of one of them…

[centos@kube-master ~]$ IP=$(kubectl describe pod $(kubectl get pods | grep nginx | head -n1 | awk '{print $1}') | grep -P "^IP" | awk '{print $2}')
[centos@kube-master ~]$ echo $IP
[centos@kube-master ~]$ curl -s $IP | grep -i thank
<p><em>Thank you for using nginx.</em></p>

And there you have it, an instance of nginx running on Kube!
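If that one-liner looks opaque, here’s the same grep/awk extraction run against a canned snippet of describe output (the pod name and IP here are hypothetical, just to show what each stage matches):

```shell
# A few lines in the style of `kubectl describe pod` output (hypothetical values):
describe_output='Name:         nginx-x2k9p
Namespace:    default
IP:           10.244.1.7
Controlled By:  ReplicationController/nginx'

# grep keeps only the line beginning with "IP"; awk takes its second field.
IP=$(printf '%s\n' "$describe_output" | grep -P "^IP" | awk '{print $2}')
echo "$IP"
```

The real one-liner just feeds live kubectl describe output through that same pipe.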

For Multus verification…

(If you haven’t installed with Multus, skip down to the “Some other adventures you can choose” section.)

You can kick off a pod and exec ip a in it. The nginx pods that we spun up don’t have the right tools to inspect the network, so let’s kick off a pod with some better tools.

Create a yaml file like so:

[centos@kube-master ~]$ cat check_network.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: debugging
spec:
  containers:
    - name: debugging
      command: ["/bin/bash", "-c", "sleep 2000000000000"]
      image: dougbtv/centos-network
      ports:
      - containerPort: 80

Then have Kubernetes create that pod for you…

[centos@kube-master ~]$ kubectl create -f check_network.yaml 

You can watch it come up with watch -n1 kubectl get pods -o wide, then you can verify that it has multiple interfaces…

[centos@kube-master ~]$ kubectl exec -it debugging -- ip a | grep -Pi "^\d|^\s*inet\s"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    inet scope host lo
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    inet scope global eth0
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    inet scope global net0

Hurray! There’s your Kubernetes install up and running showing multiple network attachments per pod using Multus.
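The grep pattern in that exec command keeps interface header lines (which start with a digit) and IPv4 inet lines, while dropping the inet6 noise. Here it is run against a canned ip a style snippet (the addresses are hypothetical):

```shell
# A minimal `ip a`-style snippet to filter (hypothetical output):
ip_output='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host'

# "^\d" matches the interface headers; "^\s*inet\s" matches inet but not
# inet6 (the "6" fails the trailing \s).
filtered=$(printf '%s\n' "$ip_output" | grep -Pi "^\d|^\s*inet\s")
printf '%s\n' "$filtered"
```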

Some other adventures you can choose…

This is just the tip of the iceberg for more advanced scenarios you can spin up…

If you made the first decision in this article to install Kube, congrats! THE END.

You have chosen: Do not install Kubernetes

It is pitch black. You are likely to be eaten by a grue. You have been eaten by a grue. THE END.

Kubernetes multiple network interfaces -- but! With different configs per pod; Multus CNI has your back.

You need multiple network interfaces in each pod – because you, like me, have some more serious networking requirements for Kubernetes than your average bear. The thing is, if you have different specifications for each pod – different network interfaces depending on each pod’s role – well… previously you were fairly limited. At least with my previous (and somewhat dated) method of using Multus CNI (a CNI plugin that enables multiple interfaces per pod), you could only apply one configuration to all pods (or at best, with multiple CNI configs, one configuration per box). Thanks to Kural and crew, Multus now includes the functionality to use Kubernetes Custom Resources (also known as “CRDs”). These “custom resource definitions” are a way to extend the Kubernetes API, and today we’ll take advantage of that functionality: the CRD implementation in Multus lets us specify exactly which network interfaces each pod gets, based on annotations attached to the pod. Our goal here is to spin up a Kubernetes cluster complete with Multus CNI (including the CRD functionality), then spin up some pods with a single interface and some with multiple interfaces, and inspect them.

Not familiar with Multus CNI? The short version is that it’s (in my own words) a “meta plugin” – one that lets you call multiple CNI plugins, and assign an interface in a pod to each of those plugins. This allows us to create multiple interfaces.

Have an older Kubernetes? At the time of writing, Kubernetes 1.9.0 was hot off the presses. So CRDs are well established, but if you have an older edition, Multus also supports “TPRs” – third party resources, which were an earlier incarnation of what are now CRDs. You’ll have to modify things for those to work, but this might be a fair reference point.

A lot of what I learned here is directly from the Multus CNI readme. Mostly I have just automated it with kube-ansible, and then documented up my way of doing it. Make sure to check out what’s in the official readme to further extend your knowledge of what you can do with Multus.

In short, what’s cool about this?

  • Multus CNI can give us multiple interfaces per Kubernetes pod
  • The CRD functionality for Multus allows us to specify which pods get which interfaces, enabling different interfaces depending on the use case.

I originally really wanted to do something neat with a realistic use-case, like the separate networks I used to set up frequently for telephony – different network segments for management, signalling and media. I was going to set up a neat VoIP configuration here, but, alas… I kept yak shaving to get there. So instead we’ll get to the point: today we’re just going to spin up some example pods, and maybe next time around I’ll have a more realistic use-case rather than just saying “There it is, it works!”. But today, it’s just “there it is!”


Requirements
  • A CentOS 7 box capable of running some virtual machines.
  • Ansible installed on a workstation.
  • Git.
  • Your favorite text editor.
  • Some really good coffee.
    • Tea is also a fair substitute, but, if herbal – it must be a rooibos.

This tutorial will use kube-ansible, an Ansible playbook that I reference often in this blog. It’s a way to quickly spin up a vanilla Kubernetes development cluster for yourself (on CentOS), and it includes some extra scenarios.

In this case we’re going to spin up a couple virtual machines and deploy to those. You don’t need a high powered machine for this, just enough to get a couple light VMs to use for our experiment.

Get your clone on.

Go ahead and clone kube-ansible, and move into its directory.

$ git clone -b v0.1.8 git@github.com:redhat-nfvpe/kube-ansible.git && cd kube-ansible/

Install the required galaxy roles for the project.

$ ansible-galaxy install -r requirements.yml

Setup your inventory and extra vars.

Make sure you can SSH to the CentOS 7 machine we’ll use as a virtualization host (referred to heavily as “virthost” in the Ansible playbooks, and docs, and probably here in this article). Then create yourself an inventory for that host. For a reference, here’s what mine looks like:

$ cat inventory/virthost.inventory 
the_virthost ansible_host= ansible_ssh_user=root


We’re also going to create some extra variables to use. So let’s define those.

Pay attention to these parts:

  • bridge_ variables define how we’ll bridge to the network of your virthost. In this case I want to bridge to the device called enp1s0f1 on that host, which I specify as bridge_physical_nic. I then specify a bridge_network_cidr which matches the DHCP range on that network (in this example case I have a SOHO-type setup with a typical home subnet).
  • multus_ipam_ variables define how we’re going to use some networking with a plugin (it’ll be macvlan, a little more on that later) that this playbook automatically sets up for us. Generally this should match what your network looks like; in my SOHO-type example, we have a gateway on the home network, and we match that.

The rest of the variables can likely stay the same.

$ cat inventory/multus-extravars.yml 
bridge_networking: true
bridge_name: br0
bridge_physical_nic: "enp1s0f1"
bridge_network_name: "br0"
pod_network_type: "multus"
virtual_machines:
  - name: kube-master
    node_type: master
  - name: kube-node-1
    node_type: nodes
spare_rpms:
  - tcpdump
  - bind-utils
multus_use_crd: true
multus_ipam_subnet: ""
multus_ipam_rangeStart: ""
multus_ipam_rangeEnd: ""
multus_ipam_gateway: ""

Initial setup of the virtualization host

Cool, with those in place, we can now begin our initial virthost setup. Let’s run that with the inventory and extra vars we just created.

$ ansible-playbook -i inventory/virthost.inventory -e "@./inventory/multus-extravars.yml" virthost-setup.yml

This has done a few things for us: it has spun up some virtual machines, created a local inventory of those virtual machines, and put an SSH key in ~/.ssh/the_virthost/id_vm_rsa – which we can use if we want to SSH to one of those hosts (which we’ll do here in a minute).
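If you’ll be hopping onto the VMs often, a ~/.ssh/config entry saves retyping the key path each time (the HostName here is a placeholder; fill in the address from inventory/vms.local.generated):

```
Host kube-master
    HostName <address-from-vms.local.generated>
    User centos
    IdentityFile ~/.ssh/the_virthost/id_vm_rsa
```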

Now, let’s kick off a deployment of Kubernetes – it will set up Multus for us, too. This is the part of the tute where you’ll need that coffee I mentioned earlier.

$ ansible-playbook -i inventory/vms.local.generated -e "@./inventory/multus-extravars.yml" kube-install.yml 

Finished your coffee yet? Ok, heat it up, we’re going to enter a machine and take a look around.

Overview of what’s happened.

I highly suggest you take a peek around the Ansible playbooks if you want some details of what has happened for you. Sure, they’re pretty big, but, you don’t need to be an Ansible genius to figure out what’s going on.

As a quick recap, here’s some of the things the playbook has done for us:

  • Installed the basic packages we need for Kubernetes
  • Initialized a Kubernetes cluster using kubeadm
  • Compiled Multus CNI
  • Configured some RBAC so that our nodes can query the Kubernetes API (which Multus needs in order to use CRDs)
  • Added some CRDs to our setup that Multus can use to figure out which pods get which treatments for their network configuration.

Inspecting the setup.

Here’s one way that you can use to ssh to the master…

$ ssh -i ~/.ssh/the_virthost/id_vm_rsa centos@$(grep -m1 "kube-master" inventory/vms.local.generated | cut -d"=" -f 2)
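That one-liner digs the master’s address out of the generated inventory with grep and cut. Here’s the same parsing run against a canned inventory line (the address is hypothetical):

```shell
# A line in the style of inventory/vms.local.generated (hypothetical address):
line='kube-master ansible_host=192.168.122.11'

# grep -m1 takes the first kube-master line; cut splits on "=" and keeps the value.
master_ip=$(printf '%s\n' "$line" | grep -m1 "kube-master" | cut -d"=" -f 2)
echo "$master_ip"
```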

You might first want to check out the health of the cluster with a kubectl get nodes and make sure that it’s generally functioning OK. In this case we’re building a cluster with a single master and a single node.

Let’s peek around at a few things that the playbook has setup for us… Before anything else – the CNI config.

[centos@kube-master centos]$ sudo cat /etc/cni/net.d/10-multus.conf 
{
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/kubelet.conf"
}

You’ll see that it’s just a skeleton for Multus. The real configs live in the CRDs.

The Custom Resource Definitions (CRDs)

Check this out – we have a CRD, networks.kubernetes.com:

[centos@kube-master ~]$ kubectl get crd
NAME                      AGE
networks.kubernetes.com   46m
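For reference, the playbook created that CRD with a definition roughly like this (a sketch reconstructed from the group and version visible in the describe output further down; the 1.9-era apiextensions v1beta1 API is assumed):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networks.kubernetes.com
spec:
  group: kubernetes.com
  version: v1
  scope: Namespaced
  names:
    plural: networks
    singular: network
    kind: Network
```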

We can kubectl that, too.

[centos@kube-master ~]$ kubectl get networks
NAME           AGE
flannel-conf   46m
macvlan-conf   46m

Great, now let’s describe one of the networks…

[centos@kube-master ~]$ kubectl describe networks flannel-conf
Name:         flannel-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  kubernetes.com/v1
Args:         [ { "delegate": { "isDefaultGateway": true } } ]
[...snip ...]

You can also describe the macvlan-conf, too. With kubectl describe networks macvlan-conf.

So check this out: there’s a really simple CNI configuration right there in the Args:. It’s just a config that points to flannel. That’s it.

Spin up a pod!

That being the case, let’s setup a pod from this spec.

[centos@kube-master ~]$ cat flannel.pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: flannelpod
  annotations:
    networks: '[
        { "name": "flannel-conf" }
    ]'
spec:
  containers:
  - name: flannelpod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
    ports:
    - containerPort: 80

Create that pod spec YAML however you’d like, and then we’ll create from it.

[centos@kube-master ~]$ kubectl create -f flannel.pod.yaml 
pod "flannelpod" created

Watch it come up if you wish, with watch -n1 kubectl get pods -o wide. Or even get some detail with watch -n1 kubectl describe pod flannelpod.

Now, let’s look at the interfaces therein… In this case, we have a vanilla flannel setup for this pod. There’s a lo loopback interface, and then eth0, which has an IP address assigned from the CIDR range the playbooks set up for us.

[centos@kube-master ~]$ kubectl exec -it flannelpod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 0a:58:0a:f4:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8a0:b3ff:febd:4e0a/64 scope link 
       valid_lft forever preferred_lft forever

How about… another pod!

Well naturally, this wouldn’t be a very good demonstration if we didn’t show you how you could create yet another pod – but with a different set of networks using CRD. So, let’s get on with it and create another!

This time, you’ll note that the annotation is different: instead of flannel-conf in the networks annotation, we have macvlan-conf, which you’ll notice correlates with the object we created (via the playbooks) in the CRDs.

Here’s my example pod spec…

[centos@kube-master ~]$ cat macvlan.pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: macvlanpod
  annotations:
    networks: '[
        { "name": "macvlan-conf" }
    ]'
spec:
  containers:
  - name: macvlanpod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
    ports:
    - containerPort: 80

And I create that…

kubectl create -f macvlan.pod.yaml 

And then I watch that come up too (much quicker this time, as in theory the image should’ve already been pulled to the same node).

$ watch -n1 kubectl describe pod macvlanpod

Now let’s check out the ip a on that pod, too.

[centos@kube-master ~]$ kubectl exec -it macvlanpod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 96:ea:41:2b:38:23 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::94ea:41ff:fe2b:3823/64 scope link 
       valid_lft forever preferred_lft forever

Cool! It’s got an address on the macvlan network. In theory, you could ping this pod from elsewhere on that network. In my case, I’m going to open up a ping stream to this pod from my workstation (which is VPN’d in and presents an address on the same network), and then I’m going to sniff some packets with tcpdump while I’m at it.

On my workstation…

$ ping -c 100

And then from the pod…

[centos@kube-master ~]$ kubectl exec -it macvlanpod -- /bin/bash
[root@macvlanpod /]# tcpdump -i any icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
18:30:21.195765 IP > macvlanpod: ICMP echo request, id 695, seq 43, length 64
18:30:21.195814 IP macvlanpod > ICMP echo reply, id 695, seq 43, length 64
18:30:22.197676 IP > macvlanpod: ICMP echo request, id 695, seq 44, length 64
18:30:22.197721 IP macvlanpod > ICMP echo reply, id 695, seq 44, length 64

Hey, did you notice anything yet? These pods aren’t truly multi-interface!

Hey you duped me, this isn’t multi-interface!

Ah ha! Now this is the part where we bring it all together, my good friend. Let’s create a pod that has BOTH macvlan and flannel… All we have to do is create a list in the annotations – the astute eye may have noticed that the JSON already had the brackets for a list.
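Since the networks annotation is just a JSON list, you can sanity-check which networks a pod spec references before creating it. A quick (and admittedly crude) grep sketch, run against a hypothetical copy of the annotation value:

```shell
# The annotation value from a pod spec (hypothetical copy):
networks='[ { "name": "macvlan-conf" }, { "name": "flannel-conf" } ]'

# Pull out each "name" value with a PCRE lookbehind.
names=$(printf '%s\n' "$networks" | grep -oP '(?<="name": ")[^"]+')
printf '%s\n' "$names"
```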

$ cat both.pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: bothpod
  annotations:
    networks: '[
        { "name": "macvlan-conf" },
        { "name": "flannel-conf" }
    ]'
spec:
  containers:
  - name: bothpod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
    ports:
    - containerPort: 80

And create with that…

kubectl create -f both.pod.yaml

Of course, I watch it come up with watch -n1 kubectl describe pod bothpod.

And I can see that there’s now multiple interfaces – loopback, flannel, and macvlan!

[centos@kube-master multus-resources]$ kubectl exec -it bothpod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether c6:bc:74:df:80:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c4bc:74ff:fedf:807b/64 scope link 
       valid_lft forever preferred_lft forever
4: net0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 0a:58:0a:f4:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c4e:c5ff:fe5d:64f8/64 scope link 
       valid_lft forever preferred_lft forever

Here you can see both the 10. network for flannel (net0) and the macvlan network (eth0).

Thanks for giving it a try! If you run into any issues, make sure to post ’em on the kube-ansible GitHub issues, or, if they’re Multus-specific (and not setup-specific), on the Multus CNI repo.