If you’re looking at developing (or debugging!) CNI plugins, you’re going to need a workflow that really lets you get in there and see exactly what a CNI plugin is doing. You’re going to need a bit of a swiss army knife – something that slices, dices, and makes julienne fries. cnitool is just the thing for the job. Today we’ll walk through setting up cnitool, then we’ll make a “dummy” CNI plugin to use with it, and we’ll run a reference CNI plugin.
We’ll also cover some of the basics of the information that’s passed to and from the CNI plugins and CNI itself, and how you might interact with that information, and how you might inspect a container that’s been plumbed with interfaces as created by a CNI plugin.
In this article, we’ll do this entirely without interacting with Kubernetes (we’ll save that for another time!). In fact, we’ll do it without a container runtime at all – no docker, no crio. We’ll just create the network namespace by hand. But the same principles apply with both a container runtime (docker, crio) and a container orchestration engine (e.g. k8s).
I used a Fedora environment; these steps should work elsewhere, too.
Setting up cnitool and the reference CNI plugins.
Basically, all the steps necessary to install cnitool are available in the cnitool README. I’ll summarize them here, but it may be worth referring to.
Install cnitool…
go get github.com/containernetworking/cni
go install github.com/containernetworking/cni/cnitool
You can test if it’s in your path and operational with:
cnitool --help
Next, we’ll compile the “reference CNI plugins” – these are a series of plugins offered by the CNI maintainers that create network interfaces for pods (as well as a number of “meta” type plugins that alter the properties and attributes of a particular container’s network). We also set our CNI_PATH variable, which cnitool uses to find these plugin executables.
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh
export CNI_PATH=$(pwd)/bin
echo $CNI_PATH
Alright, you’re basically all set up at this point.
Creating a netns and running cnitool against it
We’ll need to create a CNI configuration. For testing purposes, we’re going to create a configuration for the ptp CNI plugin.
Create a directory and file at /tmp/cniconfig/10-myptp.conf with these contents:
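Mine looks approximately like this – a minimal ptp configuration (the subnet here is an arbitrary example I picked for illustration; choose a range that doesn’t collide with anything on your network):

```json
{
  "cniVersion": "0.4.0",
  "name": "myptp",
  "type": "ptp",
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.16.29.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```

Note that the `name` field (`myptp`) is what we’ll refer to when we invoke cnitool later.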
And then set your CNI configuration directory by exporting this variable as:
export NETCONFPATH=/tmp/cniconfig/
First we create a netns – a network namespace. This is kind of a privately sorta-jailed space in which network components live, and it’s the basis of networking in containers: “here’s your private namespace in which to do your network-y things”. From a CNI point of view, this is equivalent to the “sandbox” – the infrastructure container that’s the basis of pods that run in Kubernetes. In k8s we’d have one or more containers running inside this sandbox, and they’d share the networks in this network namespace.
sudo ip netns add myplayground
You can go and list them to see that it’s there…
sudo ip netns list | grep myplayground
Now we’re going to run cnitool with sudo so it has the appropriate permissions, and we’re going to need to pass it along our environment variables and our path to cnitool (in case your root user doesn’t have a go environment, or isn’t configured that way). For me it looks like:

sudo CNI_PATH=$CNI_PATH NETCONFPATH=$NETCONFPATH $(which cnitool) add myptp /var/run/netns/myplayground
$(which cnitool) figures out the path of cnitool so that inside your sudo environment, you don’t need your GOPATH (you’re rad if you have that setup, though)
add myptp /var/run/netns/myplayground says that add is the CNI method which is being invoked, myptp is our configuration, and the /var/run/... is the path to the netns that we created.
You can then actually do a ping out that interface, with:
sudo ip -n myplayground addr
sudo ip netns exec myplayground ping -c 1 4.2.2.2
And you can use nsenter to more interactively play with it, too…
sudo nsenter --net=/var/run/netns/myplayground /bin/bash
[root@host dir]# ip a
[root@host dir]# ip route
[root@host dir]# ping -c 5 4.2.2.2
Let’s interactively look at a CNI plugin running with cnitool.
What we’re going to do is create a shell script that is a CNI plugin. You see, CNI plugins can be executables of any variety – they just need to be able to read from stdin, and write to stdout and stderr.
This is kind of a blank slate for a CNI plugin made with bash. You could use this approach, but in reality you’ll probably write these applications in go. Why? Largely because of the CNI libraries (especially libcni), which let you express some of these ideas about CNI in a more elegant fashion. Take a look at how Multus uses CNI’s skel (skeletal components, the framework of your CNI plugin) in its main routine to dispatch the methods as CNI calls them. Just read through Multus’ main.go and look at how it imports skel and then uses skel to call its add method when CNI ADD is invoked.
First, let’s make a cni configuration for our dummy plugin. I made mine at /tmp/cniconfig/05-dummy.conf.
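Something along these lines – the `name` and `type` values are just what I chose for this example; `type` has to match the filename of your plugin executable in $CNI_PATH:

```json
{
  "cniVersion": "0.4.0",
  "name": "mydummy",
  "type": "dummy"
}
```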
The first thing to note is that the majority of this file is actually just setting up some logging for looking at the CNI parameters – all the magic happens in the last 3-4 lines.
Mainly, we want to output 3 environment variables using these three lines. These are environment variables that CNI sends to us, and which a CNI plugin can use to figure out the netns, the container id, and the CNI command.
Importantly – even though we have this DEBUG variable turned on, we’re careful not to output via stderr… if there’s any stderr output during a CNI plugin run, it’s considered a failure, since outputting to stderr is what a plugin is supposed to do when it errors out.
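Here’s a sketch of the idea (the file paths and the hardcoded result are my own illustrative choices; a real plugin would honor the requested cniVersion and build a real result). We write the plugin, then exercise it by hand the same way cnitool would – environment variables plus the network config on stdin:

```shell
# Write a dummy CNI plugin: it logs what CNI hands it, then emits a
# minimal "success" result on stdout.
cat > /tmp/dummy-plugin <<'EOF'
#!/bin/bash
DEBUG=true
LOGFILE=/tmp/cni-dummy.log

log() {
  # Debug output goes to a file -- NOT to stderr, since any stderr
  # output marks the plugin run as failed.
  if [ "$DEBUG" = "true" ]; then echo "$@" >> "$LOGFILE"; fi
}

# The environment variables CNI passes to every plugin invocation.
log "CNI_COMMAND: ${CNI_COMMAND}"
log "CNI_CONTAINERID: ${CNI_CONTAINERID}"
log "CNI_NETNS: ${CNI_NETNS}"

# The network configuration arrives on stdin.
log "config: $(cat)"

# All the magic: report a (nearly empty) CNI result on stdout.
echo '{"cniVersion": "0.4.0", "dns": {}}'
EOF
chmod +x /tmp/dummy-plugin

# Exercise it by hand, the same way cnitool would invoke it:
echo '{"cniVersion": "0.4.0", "name": "mydummy", "type": "dummy"}' | \
  CNI_COMMAND=ADD CNI_CONTAINERID=example \
  CNI_NETNS=/var/run/netns/dummyplayground /tmp/dummy-plugin
# -> {"cniVersion": "0.4.0", "dns": {}}
```

The log file then shows you everything CNI passed to the plugin, while stdout stays reserved for the result JSON.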
Here we’ll see a lot of information that we as humans already know (since we’re the ones executing cnitool), but it demonstrates how a CNI plugin interacts with this information. It’s telling us that it:
Knows that we’re doing a CNI ADD operation.
Is using a netns that’s called dummyplayground.
It’s outputting a CNI result.
These are the general basics of what a CNI plugin needs in order to operate. And then… from there, the sky’s the limit. A more realistic plugin might create interfaces inside the netns, assign IP addresses to them, and set up routes.
And to learn a bit more, you might think about looking at some of the reference CNI plugins, and see what they do to create interfaces inside these network namespaces.
But what if my CNI plugin interacts with Kubernetes!?
…And that’s for next time! You’ll need a Kubernetes environment of some sort.
Something that’s a real challenge when you’re trying to attach multiple networks to pods in Kubernetes is trying to get the right IP addresses assigned to those interfaces. Sure, you’d think, “Oh, give it an IP address, no big deal” – but, turns out… It’s less than trivial. That’s why I came up with the IP Address Management (IPAM) plugin that I call “Whereabouts” – you can think of it like a DHCP replacement, it assigns IP addresses dynamically to interfaces created by CNI plugins in Kubernetes. Today, we’ll walk through how to use Whereabouts, and highlight some of the issues that it overcomes. First – a little background.
The “multi-networking problem” in Kubernetes is something that’s been near and dear to me. Basically what it boils down to is the question “How do you access multiple networks from networking-based workloads in Kube?” As a member of the Network Plumbing Working Group, I’ve helped to write a specification for how to express your intent to attach to multiple networks, and I’ve contributed to Multus CNI in the process. Multus CNI is a reference implementation of that spec and it gives you the ability to create additional interfaces in pods, each one of those interfaces created by CNI plugins. This kind of functionality is critical for creating network topologies that provide control and data plane isolation (for example). If you’re a follower of my blog – you’ll know that I’m apt to use telephony examples (especially with Asterisk!) usually to show how you might isolate signal, media and control.
I’ll admit to being somewhat biased (being a Multus maintainer), but typically I see community members pick up Multus and have some nice success with it rather quickly. However, sometimes they get tripped up when it comes to getting IP addresses assigned on their additional interfaces. Usually they start by using the quick-start guide. The examples for Multus CNI are focused on a quick start in a lab, and for IP address assignment, we use the host-local reference plugin from the CNI maintainers. It works flawlessly for a single node.
But… Once they get through the quickstart guide in a lab, they’re like “Great! Ok, now let’s expand the scale a little bit…” and once that happens, they’re using more than one node, and… It all comes crumbling down.
See – the reason why host-local doesn’t work across multiple nodes is actually right in the name “host-local” – the storage for the IP allocations is local to each node. That is, it stores which IPs have been allocated in a flat file on the node, and it doesn’t know if IPs in the same range have been allocated on a different node. This is… Frustrating, and it’s really the core reason I originally created Whereabouts. That’s not to say there’s anything inherently wrong with host-local – it works great for the purpose for which it’s designed, and its purview (from my view) is local configurations for each node (which isn’t necessarily the paradigm used with a technology like Multus CNI, where CNI configurations aren’t local to each node).
Of course, the next thing you might ask is “Why not just DHCP?” – and actually, that’s what people typically try next. They’ll try to use the DHCP CNI plugin. And you know, the DHCP CNI plugin is actually pretty great (and aside from the README, these rkt docs kind of explain it pretty well in the IP Address Management section). But some of it is less than intuitive. Firstly, it requires two parts – one of which is running the DHCP CNI plugin in “daemon mode”. You’ve gotta have this running on each node, so you’ll need a recipe to do just that. But… It’s a “DHCP CNI Plugin in Daemon Mode”, not a “DHCP Server”. Soooo – if you don’t already have a DHCP server you can use, you’ll also need to set up a DHCP server itself. The “DHCP CNI Plugin in Daemon Mode” just gives you a way to listen for DHCP messages.
And personally – I think managing a DHCP server is a pain in the gluteus maximus. And it’s the beginning of ski season, and I’m a telemark skier, so I have enough of those pains.
I’d also like to give some BIG THANKS! I’d like to point out that Christopher Randles has made some monstrous contributions to Whereabouts – especially but not limited to the engine which provides the Kubernetes-backed data store (Thanks Christopher!). Additionally, I’d also like to thank Tomofumi Hayashi who is the author of the static IPAM CNI plugin. I originally based Whereabouts on the structure of the static IPAM CNI plugin as it had all the basics, and also I could leverage what was built there to allow Whereabouts users to also use the static features alongside Whereabouts.
We choose whereabouts as a value for type which defines which IPAM plugin we’re calling.
We’d like to use kubernetes for our datastore (where we’ll store the IP addresses we’ve allocated) (and we’ll provide a kubeconfig for it, so Whereabouts can access the kube API)
And we’d like an IP address range that’s a /24 – we’re asking Whereabouts to assign us IP addresses in the range of 192.168.2.1 to 192.168.2.255.
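Putting those three pieces together, the ipam section of the CNI configuration looks approximately like this (the kubeconfig path shown is the one the Whereabouts daemonset typically lays down – double-check yours against your install):

```json
"ipam": {
  "type": "whereabouts",
  "datastore": "kubernetes",
  "kubernetes": {
    "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
  },
  "range": "192.168.2.0/24"
}
```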
Behind the scenes, honestly… It’s not much more complex than what you might assume from the knobs exposed to the user. Essentially – it’s storing the IP address allocations in a data store. It can use the Kubernetes API natively to do so, or it can use an etcd instance. This provides a method to access what’s been allocated across the cluster – so you can assign IP addresses across nodes in the cluster (unlike being limited to a single host, as with host-local). Otherwise, regarding internals – I have to admit it was kind of satisfying to program the logic to scan through IP address ranges with bitwise operations… ok, I’m downplaying it… Let’s be honest, it was super satisfying.
Requirements
A Kubernetes Cluster v1.16 or later
You can use a lesser version of Kubernetes, but, you might have to tweak some deployments.
I’d recommend 2 or more worker nodes, to make it more interesting.
You can check to see that it’s been installed by watching for its pods to come up, with watch -n1 kubectl get pods --all-namespaces. When you see the kube-multus-ds-* pods in a Running state you’re good. If you’re a curious type you can check out the contents (on any or all nodes) of /etc/cni/net.d/00-multus.conf to see how Multus was configured.
Let’s fire up Whereabouts!
The installation for it is easy, it’s basically the same as Multus, we clone it and apply the daemonset. This is copied directly from the Whereabouts README.
Same drill as above, just wait for the pods to come up with watch -n1 kubectl get pods --all-namespaces, they’re named whereabouts-* (usually in the kube-system namespace).
Time for a test drive
The goal here is to create a configuration to add an extra interface on a pod, add a Whereabouts configuration to that, spin up two pods, have those pods on different nodes, and show that they’ve been assigned IP addresses as we’ve specified.
Alright, what I’m going to do next is to give my nodes some labels so I can be assured that pods wind up on different nodes – this is mostly just used to illustrate that Whereabouts works with multiple nodes (as opposed to how host-local works).
Now what we’re going to do is create a NetworkAttachmentDefinition – this is a custom resource that we’ll create to express that we’d like to attach an additional interface to a pod. Basically what we do is pack a CNI configuration inside our NetworkAttachmentDefinition. In this CNI configuration we’ll also include our whereabouts config.
What we’re doing here is creating a NetworkAttachmentDefinition for a macvlan-type interface (using the macvlan CNI plugin).
NOTE: If you’re copying and pasting the above configuration (and I hope you are!) make sure you set the master parameter to match the name of a real interface name as available on your nodes.
Then we specify an ipam section, and we say that we want to use whereabouts as our type of IPAM plugin. We specify where the kubeconfig lives (this gives whereabouts access to the Kube API).
And maybe most important to us as users – we specify the range we’d like to have IP addresses assigned in. You can use CIDR notation here, and… If you need to use other options to exclude ranges, or other range formats – check out the README’s guide to the core parameters.
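All together, the NetworkAttachmentDefinition looks something like this (remember: master must name a real interface on your nodes, and the kubeconfig path should match your Whereabouts install – both are values you may need to adjust):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
        },
        "range": "192.168.2.225/28"
      }
    }'
```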
After we’ve created this configuration, we can list it too – in case we need to remove or change it later, such as:
$ kubectl get network-attachment-definitions.k8s.cni.cncf.io
Alright, we have all our basic setup together, now let’s finally spin up some pods…
Note that we have annotations here that include k8s.v1.cni.cncf.io/networks: macvlan-conf – that value of macvlan-conf matches the name of the NetworkAttachmentDefinition that we created above.
Let’s create the first pod for our “left side” label:
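Something along these lines – note that the nodeSelector label key/value and the container image are illustrative; match the label to however you labeled your nodes in the earlier step:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod-left
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  nodeSelector:
    side: left
  containers:
  - name: samplepod
    image: alpine
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
```

The “right side” pod is the same, with the name and nodeSelector swapped over.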
I then wait for the pods to come up with watch -n1 kubectl get pods --all-namespaces or I look at the details of one pod with watch -n1 'kubectl describe pod samplepod-left | tail -n 50'
Also – you’ll note if you kubectl get pods -o wide the pods are indeed running on different nodes.
Once the pods are up and in a Running state, we can interact with them.
The first thing I do is check out that the IPs have been assigned:
$ kubectl exec -it samplepod-left -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 3e:f7:4b:a1:16:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.2.4/24 scope global eth0
valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether b6:42:18:70:12:6e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.2.225/28 scope global net1
valid_lft forever preferred_lft forever
You’ll note there are three interfaces: a local loopback, an eth0 for our “default network” (where we have pod-to-pod connectivity by default), and an additional interface – net1. This is our macvlan connection AND it’s got an IP address assigned dynamically by Whereabouts – in this case, 192.168.2.225.
Let’s check out the right side, too:
$ kubectl exec -it samplepod-right -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 96:28:58:b9:a4:4c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.1.3/24 scope global eth0
valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 7a:31:a7:57:82:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.2.226/28 scope global net1
valid_lft forever preferred_lft forever
Great, we’ve got another dynamically assigned address that does not collide with our already reserved IP address from the left side! Our address on the right side here is 192.168.2.226.
And while connectivity is kind of outside the scope of this article – in most cases it should generally work right out of the box, and you should be able to ping from one pod to the next!
[centos@kube-whereabouts-demo-master whereabouts]$ kubectl exec -it samplepod-right -- ping -c5 192.168.2.225
PING 192.168.2.225 (192.168.2.225) 56(84) bytes of data.
64 bytes from 192.168.2.225: icmp_seq=1 ttl=64 time=0.438 ms
64 bytes from 192.168.2.225: icmp_seq=2 ttl=64 time=0.217 ms
64 bytes from 192.168.2.225: icmp_seq=3 ttl=64 time=0.316 ms
64 bytes from 192.168.2.225: icmp_seq=4 ttl=64 time=0.269 ms
64 bytes from 192.168.2.225: icmp_seq=5 ttl=64 time=0.226 ms
And that’s how you can determine your pod’s Whereabouts (by assigning it a dynamic address without the pain of running DHCP!).
If you’ve got workloads that live in VMs, and you want to get them into your Kubernetes environment (because I wouldn’t wish maintaining two platforms on even the worst of the supervillains!) – you might also have networking workloads that require you to really push some performance…. KubeVirt with the SR-IOV device plugin might be just the hero you need to save the day. Not all heroes wear capes; sometimes those heroes just wear a t-shirt with a KubeVirt logo that they got at Kubecon. Today we’ll spin up KubeVirt with the SR-IOV device plugin and we’ll run a VoIP workload on it – so jump into a phonebooth, change into your KubeVirt t-shirt, and fire up a terminal!
I’ll be giving a talk at Kubecon EU 2019 in Barcelona titled High Performance Networking with KubeVirt. Presenting with me is the guy with the best Yoda drawing on all of GitHub, Abdul Halim from Intel. I’ll give a demonstration of what’s going on here in this article, and this material will be provided to attendees too, so they can follow the bouncing ball and get the same demo working in their environment.
Part of the talk is this recorded demo on YouTube. It’ll give you a preview of all that we’re about to do here in this article. Granted this recorded demo does skip over some of the most interesting configuration, but, shows the results. We’ll cover all the details herein to get you to the same point.
We’ll look at spinning up KubeVirt, with SR-IOV capabilities. We’ll walk through what the physical installation and driver setup looks like, we’ll fire up KubeVirt, spin up VMs running in Kube, and then we’ll put our VoIP workload (using Asterisk) in those pods – which isn’t complete until we terminate a phone call over a SIP trunk! The only thing that’s on you is to install Kubernetes (but, I’ll have pointers to get you started there, too). Just a quick note that I’m just using Asterisk as an example of a VoIP workload, it’s definitely NOT limited to running in a VM, it also works well in a container, even as a containerized VNF. You might be getting the point that I love Asterisk! (Shameless plugin, it’s a great open source telephony solution!)
So – why VMs? The thing is, maybe you’re stuck with them. Maybe it’s how your vendor shipped the software you bought and deploy. Maybe the management of the application is steeped in the history of it being virtualized. Maybe your software has legacies that simply just can’t be easily re-written into something that’s containerized. Maybe you like having pets (I don’t always love pets in my production deployments – but, I do love my cats Juniper & Otto, who I trained using know-how from The Trainable Cat! …Mostly I just trained them to come inside on command as they’re indoor-outdoor cats.)
Something really cool about the KubeVirt ecosystem is that it REALLY leverages some other heroes in the open source community – a good hero works well in a team, for sure. In this case KubeVirt leverages Multus CNI, which enables us to connect multiple network interfaces to pods (which also means VMs, in the case of KubeVirt!), and we also use the SR-IOV Device Plugin. This plugin gives the Kubernetes scheduler awareness of which limited resources on our worker nodes have been exhausted – specifically, which SR-IOV virtual functions (VFs) have been used up – so that we schedule workloads on machines that have sufficient resources.
I’d like to send a HUGE thanks to Booxter – Ihar from the KubeVirt team at Red Hat helped me get all of this going, and I could not have gotten nearly as far as I did without his help. Also thanks to SchSeba & Phoracek, too!
Requirements
Not a ton of requirements; I think the heaviest two here are that you’re going to need:
Some experience with Kubernetes (you know how to use kubectl for some basic stuff, at least), and a way to install Kubernetes.
SR-IOV capable devices on bare metal machines (and make them part of the Kubernetes cluster that you create)
I’m not going to cover the Kubernetes install here, I have some other material I will share with you on how to do so, though.
Once you have Kubernetes installed – you’re going to need to have some CNI plugin installed to act as the default network for your cluster. This will provide network connectivity between pods in the regular old fashioned way that you’re used to. Why am I calling it the “default network”, you ask? Because we’re going to add additional network interfaces and attachments to other networks on top of this.
When it’s installed you should see all nodes in a “ready” state when you issue kubectl get nodes.
SR-IOV Setup
Primarily, I followed the KubeVirt docs for SR-IOV setup. In my opinion, this is maybe the biggest adventure in this whole process – mostly because depending on what SR-IOV hardware you have, and what mobo & CPU you have, etc… It might require you to have to dig deeply into your BIOS and figure out what to enable.
Mostly – I will leave this adventure to you, but, I will give you a quick overview of how it went on my equipment.
It’s a little like making a witch’s brew, “Less eye of newt, more hair of frog… nope. Ok let’s try that again, blackcat_iommu=no ravensbreath_pci=on”
Or as my co-worker Anton Ivanov said:
It’s just like that old joke about SCSI.
How many places do you terminate a SCSI cable?
Three. Once on each end and a black goat with a silver knife at full moon in the middle
Mostly, I first had to modify my kernel parameters, so, I added an extra menuentry in my /etc/grub2.cfg, and set it as the default with grubby --set-default-index=0, and made sure my linux line included:
amd_iommu=on pci=realloc
Make sure to do this on each node in your cluster that has SR-IOV hardware.
Note that I was using an AMD based motherboard and CPU, so you might have intel_iommu=on if you’re using Intel, and the KubeVirt docs suggest a couple other parameters you can try.
If it errors out, you might get a hint from following your journal, that is, with journalctl -f. I almost thought I was going to have to modify my BIOS (gulp!) – I had found this Reddit thread – but luckily it never got that far for me. It took me a few iterations of fixing my kernel parameters and finding all the hidden bits in my BIOS, but… With patience, I got there.
…Last but not least, make sure your physical ports on your SR-IOV card are connected to something. I had forgotten to connect mine initially and I couldn’t get SR-IOV capable interfaces in my VMs to come up. So, back to our roots – check layer 1!
Make sure to modprobe vfio-pci
Make sure you have the vfio-pci kernel module loaded…
I did:
# modprobe vfio-pci
And then verified it with:
# lsmod | grep -i vfio
And then I added vfio-pci to /etc/modules
KubeVirt installation
First we install the cluster-network-addons, this will install Multus CNI, and the SR-IOV device plugin.
Before we get any further, let’s open the SR-IOV feature gate. So, on your machine where you use kubectl, issue:
Watch the pods to be ready, kubectl get pods and all that good stuff.
Then we wait for this to be readied up…
$ kubectl wait kv kubevirt --for condition=Ready
(Mine never became ready?)
[centos@kube-nonetwork-master ~]$ kubectl wait kv kubevirt --for condition=Ready
Error from server (NotFound): kubevirts.kubevirt.io "kubevirt" not found
For this step, we’re going to use a helper script. I took this from an existing (and open at the time of writing this article) pull request, and I put it into this gist.
I went ahead and did this as root on each node that has SR-IOV devices (in my case, just one machine)
Move back to your master (or wherever you run KubeVirt from), and we’re going to spin up a vanilla VM just to get the commands down and make sure everything’s looking hunky dory.
First we’ll clone the kubevirt repo (word to the wise, it’s pretty big, maybe 400 meg clone).
$ git clone https://github.com/kubevirt/kubevirt.git --depth 50 && cd kubevirt
Let’s move into the example VMs section…
$ cd cluster/examples/
And edit a file in there, let’s edit the vm-cirros.yaml – a classic test VM image. Bring it up in your editor first, but, we’ll edit in place like so:
$ sed -ie "s|registry:5000/kubevirt/cirros-container-disk-demo:devel|kubevirt/cirros-container-disk-demo:latest|" vm-cirros.yaml
Kubectl create from that file…
$ kubectl create -f vm-cirros.yaml
And let’s look at the vms custom resources, and we’ll see that it’s created, but, not yet running.
$ kubectl get vms
NAME AGE RUNNING VOLUME
vm-cirros 2m13s false
Yep, it’s not started yet, let’s start it…
$ virtctl start vm-cirros
VM vm-cirros was scheduled to start
$ kubectl get vms
NAME AGE RUNNING VOLUME
vm-cirros 3m17s true
Wait for it to come up (watch the pods…), and then we’ll console in (you can see that the password is listed right there in the MOTD, gocubsgo). You might have to hit <enter> to see the prompt.
[centos@kube-nonetwork-master examples]$ virtctl console vm-cirros
Successfully connected to vm-cirros console. The escape sequence is ^]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
vm-cirros login: cirros
Password:
$ echo "foo"
foo
(You can hit ctrl+] to get back to your command line, btw.)
Presenting… a VM with an SR-IOV interface!
Ok, back into your master, and still in the examples directory… Let’s create the SR-IOV example. First we change the image location again…
sed -ie "s|registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel|kubevirt/fedora-cloud-container-disk-demo:latest|" vmi-sriov.yaml
Create a network configuration, a NetworkAttachmentDefinition for this one…
(Side note: The IPAM section here isn’t actually doing a lot for us – in theory you can have "ipam": {} instead of this setup with the host-local plugin. I struggled with that a little bit, so I included an otherwise dummy IPAM section here.)
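Roughly like so – be warned that the resource name annotation and the subnet below are what suited my lab, and both may differ depending on your hardware and how your SR-IOV device plugin is configured:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
spec:
  config: '{
      "type": "sriov",
      "name": "sriov-net",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }'
```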
Console in with:
virtctl console vmi-sriov
Login as fedora (with password fedora), become root (sudo su -) create an ifcfg-eth1 script:
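Mine looked about like this – the address here is for the first VM; give the second VM a different address in the same subnet (e.g. 192.168.100.3 – that particular value is just my suggestion):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.2
NETMASK=255.255.255.0
```

Then bring it up with ifup eth1 (or a reboot).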
Once you’ve loaded that, console into the VM and issue:
# asterisk -rx 'pjsip reload'
Next we’re going to create a file /etc/asterisk/extensions.conf which is our “dialplan” – this tells Asterisk how to behave when a call comes in on our trunk. In our case, we’re going to have it answer the call, play a sound file, and then hang up.
Create the file as so:
[endpoints]
exten => _X.,1,NoOp()
same => n,Answer()
same => n,SayDigits(1)
same => n,Hangup()
Next, you’re going to tell asterisk to reload this with:
# asterisk -rx 'dialplan reload'
Now, from the first VM with the 192.168.100.2 address, go ahead and console into the VM and run asterisk -rvvv to get an Asterisk console, and we’ll set some debugging output on, and then we’ll originate a phone call:
vmi-sriov*CLI> pjsip set logger on
vmi-sriov*CLI> rtp set debug on
vmi-sriov*CLI> channel originate PJSIP/333@bob application saydigits 1
You should see a ton of output now! You’ll see the SIP messages to initiate the phone call, and then you’ll see information about the RTP (real-time protocol) packets that include the voice media going between the machines!
Awesome! Thanks for sticking with it, now… For your workload to the rescue!
So you need a Kubernetes Operator Tutorial, right? I sure did when I started. So guess what? I got that b-roll! In this tutorial, we’re going to use the Operator SDK, and I definitely got myself up-and-running by following the Operator Framework User Guide. Once we have all that set up – oh yeah! We’re going to run a custom Operator. One that’s designed for Asterisk: it can spin up Asterisk instances, discover them as services, and dynamically create SIP trunks between n-number-of-instances of Asterisk so they can all reach one another to make calls between them. Fire up your terminals, it’s time to get moving with Operators.
What exactly are Kubernetes Operators? In my own description – Operators are applications that manage other applications, specifically with tight integration with the Kubernetes API. They allow you to build your own “operational knowledge” into them, and perform automated actions when managing those applications. You might also want to see what CoreOS has to say on the topic – read their blog article where they introduced operators.
Sidenote: Man, what an overloaded term, Operators! In the telephony world, well, we have operators, like… a switchboard operator (I guess that one’s at least a little obsolete). Then we have platform operators, like… sysops. And we have how things operate, and the operations they perform… Oh my.
A guy on my team said (paraphrased): “Well if they’re applications that manage applications, then… Why write them in Go? Why not just write them in bash?” He was… Likely kidding. However, it always kind of stuck with me and got me thinking about it a lot. One of the main reasons you’ll see these written in Go is that Go is the default choice for interacting with the Kubernetes API. There are likely other ways to do it – but all of the popular tools for interacting with it are written in Go, just like Kubernetes itself. The thing here is – you probably care about managing your application running in Kubernetes with an operator because you care about integrating with the Kubernetes API.
One more thing to keep in mind here as we continue along – the idea of CRDs – Custom Resource Definitions. These are the lingua franca of Kubernetes Operators. We often watch what these are doing and take actions based on them. What’s a CRD? It’s often described as “a way to extend the Kubernetes API”, which is true. The thing is – that sounds SO BIG. It sounds daunting. It’s not, really. CRDs, in the end, are just a way for you to store some of your own custom data and then access it through the Kubernetes API. Think of it as metadata you can push into the Kube API and then access – so if you’re already interacting with the Kube API, it’s simple to store some of your own data without having to roll your own way of otherwise storing it (and otherwise reading & writing that data).
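For instance, a tiny (completely hypothetical – the group, names, and fields here are invented for illustration) CRD that stores trunk data might look like this; once applied, `kubectl get phonetrunks` works just like any built-in resource:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>
  name: phonetrunks.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: phonetrunks
    singular: phonetrunk
    kind: PhoneTrunk
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                host:
                  type: string
```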
Today we have a big agenda for this blog article… Here’s what we’re going to do:
Create a development environment where we can use the operator-sdk
Create our own application as scaffolded by the Operator SDK itself.
Spin up the asterisk-operator, dissect it a little bit, and then we’ll run it and see it in action.
Lastly, we'll introduce the Helm Operator – a way to lower the barrier of entry, letting you create a Kubernetes Operator using Helm. It might solve some of the problems you'd use an Operator for, without having to slang any golang.
References
Here’s a few articles that I used when I was building this article myself.
Alright, we've got some deps to work through. Including, ahem, dep. I haven't marked each command as "root or your regular user", but in short: generally just the yum & systemctl lines here require su; otherwise, run them as your regular user.
Make sure you have git, and this is a good time to install whatever usual goodies you use.
Install minikube (optional: skip this if this machine is part of a cluster, or you otherwise have access to another cluster). I'm not generally a huge minikube fan; however, in this case we're working on a development environment (seeing that we're looking into building an operator), so it's actually appropriate here.
It’ll take a few minutes while it downloads a few container images from which it runs Kubernetes.
If something went wrong and you need to restart minikube from scratch you can do so with:
$ sudo /usr/local/bin/minikube stop; cd /etc/kubernetes/; sudo rm -f *.conf; /usr/local/bin/minikube delete; cd -
Follow the instructions from minikube for setting up your .kube folder. I didn’t have great luck with it, so I performed a sudo su - in order to run say, kubectl get nodes to see that the cluster was OK. In my case, this also meant that I had to bring the cluster up as root as well.
You can test that your minikube is operational with kubectl get nodes. Next, clone and build the operator-sdk:
$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
$ git checkout master
$ export PATH=$PATH:$GOPATH/bin && make dep && make install
Create your new project
We're going to create a sample project using the operator-sdk CLI tool. Note – I used my own GitHub namespace here; feel free to replace it with yours. If not, cool – you can also get a Halloween costume of me (and scare kids and neighbors!)
$ mkdir -p $GOPATH/src/github.com/dougbtv
$ cd $GOPATH/src/github.com/dougbtv
$ operator-sdk new hello-operator --kind=Memcached
$ operator-sdk add api --api-version=cache.example.com/v1alpha1 --kind=Memcached
$ cd hello-operator
Sidenote: For what it's worth, at some point I had tried a few versions of the operator-sdk tools to try to fix another issue. During this, I had a complaint (when running operator-sdk new ...) that something didn't meet constraints (No versions of k8s.io/gengo met constraints), and it turned out it was a stale dep package cache. You can clear it by removing the cached packages (typically under $GOPATH/pkg/dep).
Also, ignore if it complains it can’t complete the git actions, they’re so simple you can just manage it as a git repo however you please.
Inspecting the scaffolded project
Let’s modify the types package to define what our CRD looks like…
Modify ./pkg/apis/cache/v1alpha1/types.go, replace the two structs at the bottom (that say // Fill me) like so:
type MemcachedSpec struct {
// Size is the size of the memcached deployment
Size int32 `json:"size"`
}
type MemcachedStatus struct {
// Nodes are the names of the memcached pods
Nodes []string `json:"nodes"`
}
And then update the generated code for the custom resources…
operator-sdk generate k8s
Then let’s update the handler, it’s @ ./pkg/stub/handler.go
You’ll also need to change the github namespace in that file, replace it with your namespace + the project name you used during operator-sdk new $name_here. I changed mine like so:
$ sed -i -e 's|example-inc/memcached-operator|dougbtv/hello-operator|' pkg/stub/handler.go
Now, let's create the CRD. First, let's just cat it (I'm a cat person – like, seriously, I love cats; if you're a dog person you can stop reading this article right now… or, you probably use less as a pager too. Dog people, seriously!) and take a look…
$ cat deploy/crd.yaml
Now you can create it…
$ kubectl create -f deploy/crd.yaml
Once it has been created, you can see it’s listed, but, there’s no CRD objects yet…
$ kubectl get memcacheds.cache.example.com
In the Operator-SDK user guide they list two options for running your operator. Of course, the production way to do it is to create a docker image and push it up to a registry, but… we haven't even compiled this yet, so let's go one step at a time and run it in our local cluster.
$ operator-sdk up local
Cool, you’ll see it initialize, and you might get an error you can ignore for now:
ERRO[0000] failed to initialize service object for operator metrics: OPERATOR_NAME must be set
Alright, so what has it done? Ummm, nothing yet! Let’s create a custom resource and we’ll watch what it does… Create a custom resource yaml file like so:
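Here's a minimal sketch of what that custom resource can look like (I'm writing it to /tmp for illustration – the scaffolded project has a similar file under deploy/ you can adapt). Note the size, which is the field we defined in MemcachedSpec:

```shell
# A custom resource of our new Memcached kind -- "size" maps to the
# Size field we added to MemcachedSpec earlier.
cat > /tmp/cr.yaml <<'EOF'
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 4
EOF

# Then you'd create it against the cluster with:
#   kubectl create -f /tmp/cr.yaml
grep "size:" /tmp/cr.yaml
```

Once it's created, the operator notices the resource and scales a deployment to match the size.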
Awesome, 4 instances going. Alright cool, we’ve got an operator running! So… Can we create our own?
Creating our own operator!
Well, almost! What we're going to do now is use Doug's asterisk-operator. Hopefully there are some portions here that you can use as a springboard for your own Operator.
How the operator was created
Some of the things that I modified after I had the scaffold were:
Updated the types.go to include the fields I needed.
I moved the /pkg/apis/cache/ to /pkg/apis/voip/
And changed references to memcached to asterisk
Created a scheme to discover all IPs of the Asterisk pods
Created REST API calls to Asterisk to push the configuration
Some things to check out in the code…
Aside from what we reviewed earlier when we were scaffolding the application – which is arguably the most interesting from the standpoint of "How do I create any operator that I want?" – the second most interesting part (or potentially the most interesting, if you're into Asterisk) is how we handle service discovery and dynamically push configuration to Asterisk.
You can find the bulk of this in the handler.go. Give it a skim through, and you’ll find where it makes the actions of:
Creating the deployment and giving it a proper size based on the CRDs
How it figures out the IP addresses of each pod, and then goes through and uses those to cycle through all the instances and create SIP trunks to all of the other Asterisk instances.
But… What about making it better? This Operator is mostly provided as an example – to "do a cool thing with Asterisk & Operators" – so some of the things here are clearly in the proof-of-concept realm. A few things that could use improvement:
It’s not very graceful with how it handles waiting for the Asterisk instances to become ready. There’s some timing issues with when the pod is created, and when the IP address is assigned. It’s not the cleanest in that regard.
It uses a completely "brute force" method to create all the SIP trunks. If you start with, say, 2 instances and change to 3 instances – well… it creates all of the SIP trunks all over again, instead of just creating the couple of new ones it needs. I went along with the idea of "don't prematurely optimize", but this could be a justified spot to optimize.
What’s the application doing?
In short the application really just does three things:
Watches a CRD to see how many Asterisk instances to create
Figures out the IP addresses of all the Asterisk instances, using the Kube API
Creates SIP trunks from each Asterisk instance to each other Asterisk instance, using ARI push configuration, allowing us to make calls from any Asterisk instance to any other Asterisk instance.
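That third step is a full mesh: with n instances you end up with n × (n − 1) trunks (counting both sides), which is why 33 instances works out to 33 × 32 = 1056 trunks. A little shell sketch of the pairing logic (the pod IPs here are made up):

```shell
# Full-mesh pairing: for each ordered pair of pod IPs, we'd configure
# one trunk (each side of a trunk is configured separately).
ips="10.244.1.10 10.244.2.11 10.244.3.12"

for src in $ips; do
  for dst in $ips; do
    if [ "$src" != "$dst" ]; then
      echo "trunk: $src -> $dst"
    fi
  done
done > /tmp/trunks.txt

wc -l < /tmp/trunks.txt   # 3 instances -> 3 * 2 = 6 trunks
```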
Let’s give the Asterisk Operator a spin!
This assumes that you’ve completed creating the development environment above, and have it all running – you know, with golang and GOPATH all set, minikube running and the operator-sdk binaries available.
First things first – make sure you pull the image we'll use in advance; this will save a lot of confusing waiting when you first start the operator itself.
docker pull dougbtv/asterisk-example-operator
Then, clone the asterisk-operator git repo:
mkdir -p $GOPATH/src/github.com/dougbtv && cd $GOPATH/src/github.com/dougbtv
git clone https://github.com/dougbtv/asterisk-operator.git && cd asterisk-operator
We’ll need to create the CRD for it:
kubectl create -f deploy/crd.yaml
Next… We’ll just start the operator itself!
operator-sdk up local
Ok, cool, now, we’ll create a CRD so that the operator sees it and spins up asterisk instances – open up a new terminal window for this.
Take a look at the output from the operator – you’ll see it logging a number of things. It has some waits to properly wait for Asterisk’s IP to be found, and for Asterisk instances to be booted – and then it’ll log that it’s creating some trunks for us.
Check out the deployment to see that all of the instances are up:
watch -n1 kubectl get deployment
You should see that it desires 2 instances, and that it has fulfilled those instances – it does this via the deployment it created.
Let’s go ahead and exec into one of the Asterisk pods, and we’ll run the Asterisk console…
Ok, cool, this has a trunk setup for us, the trunk name in the Aor field is example-asterisk-6c6dff544-wnkpx. Go ahead and copy that value in your own terminal (yours will be different, if it’s not different – leave your keyboard right now, and go buy a lotto ticket).
We can use that to originate a call, I do so with:
Now, after bumping the size in our custom resource, kubectl get deployment will show us that we have three, but! Better yet, we have all the SIP trunks created for us. Let's exec in and look at the AORs again.
And there you have it – you can do it for n-number of instances. I tested it out with 33 instances, which works out to 1056 trunks (counting both sides) and… While it took like 15ish minutes, which felt like forever… It takes me longer than that to create 2 or 3 by hand! So… Not a terrible trade off.
$ git clone https://github.com/operator-framework/helm-app-operator-kit.git
$ cd helm-app-operator-kit/
Now, build a Docker image. Note: You’ll probably want to change the name (from -t dougbtv/... to your name, or someone else’s name if that’s how you roll).
Alright, now there's a series of things we've got to customize. There are more instructions on what needs to be customized, too, if you need them.
# this can stay changed to "tomcat"
$ sed -i -e 's/<chart>/tomcat/' helm-app-operator/deploy/operator.yaml
# this you should change to your docker namespace
$ sed -i -e 's|quay.io/<namespace>|dougbtv|' helm-app-operator/deploy/operator.yaml
# Change the group & kind to match what we had in the docker build.
$ sed -i -e 's/group: example.com/group: apache.org/' helm-app-operator/deploy/crd.yaml
$ sed -i -e 's/kind: ExampleApp/kind: Tomcat/' helm-app-operator/deploy/crd.yaml
# And the name has to match that, too
$ sed -i -e 's/name: exampleapps.example.com/name: exampleapps.apache.org/' helm-app-operator/deploy/crd.yaml
# Finally update the Custom Resource to be what we like.
$ sed -i -e 's|apiVersion: example.com/v1alpha1|apiVersion: apache.org/v1alpha1|' helm-app-operator/deploy/cr.yaml
$ sed -i -e 's/kind: ExampleApp/kind: Tomcat/' helm-app-operator/deploy/cr.yaml
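If you'd like to sanity-check what those substitutions do before running them against the real files, here's the same style of edit run on a throwaway fragment (the file path is just for illustration):

```shell
# A throwaway stand-in for the relevant lines of deploy/crd.yaml:
cat > /tmp/crd-frag.yaml <<'EOF'
group: example.com
kind: ExampleApp
EOF

# The same in-place substitutions as above:
sed -i -e 's/group: example.com/group: apache.org/' /tmp/crd-frag.yaml
sed -i -e 's/kind: ExampleApp/kind: Tomcat/' /tmp/crd-frag.yaml

cat /tmp/crd-frag.yaml
```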
So you want to install Kubernetes on CentOS? Awesome, I've got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose to not install Kubernetes, skip to the very bottom of the article. I've got just the recipe for you to brew it up. It's been a year since my last article on installing Kubernetes on CentOS, and while it's still probably useful, some of the Ansible playbooks we were using have changed significantly. Today we'll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (and we'll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything's working swimmingly.
What’s inside?
Our goal here is to spin up a development cluster of Kubernetes machines to experiment here. If you’re looking for something that’s a little bit more production grade, you might want to consider using OpenShift – the bottom line is that it’s a lot more opinionated, and will guide you to make some good decisions for production, especially in terms of reliability and maintenance. What we’ll spin up here is more-or-less the bleeding edge of Kubernetes. This project is more appropriate for infrastructure experimentation, and is generally a bit more fragile.
We’ll be using Ansible – but you don’t have to be an Ansible expert. If you can get it installed (which should be as easy as a pip install or dnf install) – you’re well on your way. I’ll give you the command-by-command rundown here, and I’ll provide example inventories (which tell Ansible which machines to operate on). We use kube-ansible extensively here to do the job for us.
Generally – what these playbooks do is bootstrap some hosts for you so they’re readied for a Kubernetes install. They then use kubeadm. If you have more interest in this, follow that previous link to the official docs, or check out my (now likely a bit dated) article on manually installing Kubernetes on CentOS.
Then, post install, the playbooks can install some CNI plugins – the plugins that Kubernetes uses to configure the networking on the cluster. By default we spin up the cluster with Flannel.
Brief overview of the adventure.
So what exactly are we going to do?
You’ll clone a repo to help install Kube on CentOS.
You’ll make a choice:
To provision a CentOS host to use as a virtual machine host, which hosts the virtual guests that will comprise your cluster
Install CentOS on any number of machines (2+ recommended) which will become the nodes which comprise your cluster.
Install Kubernetes
Verify the installation by running a couple pods.
Requirements
Overall you’re required to have:
Some box with Ansible installed – you don’t need to be an Ansible expert.
Git.
You guessed it, a coffee in hand. Beans must have been ground at approximately the time of brewing, and your coffee was poured from 12” or higher into your drinking vessel to help aerate the coffee. Seeing it's a choose your own adventure – you may also choose tea. You'll just be suffering a little. But, grab some Smith Teamaker's Rooibos, it's pretty fine.
Secondarily, there’s a choose-your-own-adventure part. Basically, you can choose to either:
Provision a host that can run virtual machines, or
Spin up whatever CentOS hosts yourself.
Generally – I’d suggest #2. Hopefully you have a way to spin up hosts in your own environment. You could use anything from spacewalk, to bifrost, or… If you’re hipster cool, maybe you’re even using matchbox.
Mostly the playbooks used to spin up virtual machines for you herein are for my own quick iteration when I’m quickly building (and destroying) clusters, and trying different setups, configurations, new features, CNI plugins, etc. Feel free to use it, but, it could just slow you down if you otherwise have a workflow for spinning up boxen. Sidenote: For years I called a virtualization host I was using in a development environment “deathstar” because the rebels kept destroying the damn thing. Side-sidenote: I was a rebel.
If you've chosen "1. Provision a host that can run virtual machines." – then you're just required to have a host that can run virtual machines. I assume there's already a CentOS operating system on it. You should have approximately 60-120+ gigs of disk space free, and maybe 16-32 gigs of RAM. That should be more than enough.
If you chose the adventure “2. Spin up whatever CentOS hosts yourself.” – then go ahead and spin those CentOS machines up yourself, and I’d recommend 3 of them. 2 is fine too. 1 will just not be nearly as much fun. Generally, I’d recommend 4 gig of RAM a piece, and maybe 20+ gig free for each node.
I admit that the box sizing recommendations are fairly arbitrary. You’d likely size them according to your workloads, but, these are essentially “medium range guesses” to make sure it works.
Clone the kube-ansible repo.
Should be fairly simple, just clone ‘er right up:
$ git clone -b v0.5.0 https://github.com/redhat-nfvpe/kube-ansible.git && cd kube-ansible
You’ll note that we’re cloning at a particular tag – v0.5.0. If you want, omit the -b v0.5.0, which will make it so you’re on the master branch. In theory, it should be fine. I chose a particular tag for this article so it’ll still be relevant in the case that we (inevitably) make changes to the kube-ansible repo.
The copy-and-pasted command above will also change into that directory, and then you can initialize the included roles…
$ ansible-galaxy install -r requirements.yml
Again, we've pinned a particular tag so that things don't change and I can base the documentation on it. If you're feeling particularly, ahem, adventurous – you can choose the adventure of removing the -b v0.5.0 parameter and cloning at master HEAD. I'm hopeful that there's some maturity in these playbooks and it shouldn't matter much, but at least at this tag it'll match your experience with this article. Granted – we'll be installing the latest and greatest Kubernetes, so that will change.
So, what exactly do these playbooks do?
Configures a machine to use as a virtual machine host (which is optional, you’ll get to choose this later on) on which the nodes run.
Installs all the deps necessary on the hosts
Runs kubeadm init to bootstrap the cluster (kubeadm docs)
Installs a CNI plugin for pod networking (by default, it’s flannel.)
Joins the hosts to a cluster.
You chose the adventure: Provision a host that can run virtual machines
If you chose the adventure “2. Spin up whatever CentOS hosts yourself.” head down to the next header topic, you’ve just saved yourself some work. (Unless you had to manually install CentOS like, twice, then you didn’t but I’m hopeful you have a good way to spin up nodes in your environment.)
If you chose “1. Provision a host that can run virtual machines.”, continue reading from here.
I recommended adventure #2, to spin them up yourself. I'm only going to glance over this part; I think it's handy for iterating on Kubernetes setups, but there are really a bunch of options here. For the time being, I'm going to cover only a setup that uses NAT for the VMs. IMO it's less convenient, but it's the more standard setup to document. So that's what we'll get today.
Alright – so you’ve got CentOS all setup on this new host, and you can SSH to it, and at least sudo root from there. That’s necessary for our Ansible playbook.
Let’s create a small inventory, and we’ll use that.
We can copy out a sample inventory, and we’ll go from there.
By default this will spin up 4 hosts for us to use. If you'd like to use other hosts, you can specify them – you'll find the default list of VMs in the virtual_machines variable in the ./playbooks/ka-init/group_vars/all.yml file, which you're intended to override (instead of edit). You can specify the memory & CPU requirements for those VMs there, too.
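As a rough, hypothetical sketch of such an override – the real key names are whatever group_vars/all.yml defines at the tag you checked out, so crack that file open rather than trusting these names:

```yaml
# Hypothetical override file -- compare against
# ./playbooks/ka-init/group_vars/all.yml for the real schema.
virtual_machines:
  - name: kube-master
    node_type: master
  - name: kube-node-1
    node_type: nodes
  - name: kube-node-2
    node_type: nodes
```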
Let that puppy run, and you’ll find out that it will create a file for you with a new inventory – ./inventory/vms.local.generated.
It has also created a private key to SSH to these vms. So if you want to ssh to one, you can do something like:
~/.ssh/vmhost/id_vm_rsa is the private key, and vmhost is the name of the host from the first inventory we used.
192.168.1.119 is the IP address of the virtualization host.
and 192.168.122.58 is the IP address of the VM (which you discovered from looking at the vms.local.generated file)
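Putting those three pieces together, the invocation looks something like this – the ProxyCommand jump is one way to do it, assuming the VM's NAT'd network is only reachable via the virtualization host, and the IPs are the examples from above (yours will differ). Echo it first so you can eyeball it before running it:

```shell
# The three pieces: virtualization host IP, VM IP, and the generated key.
VMHOST_IP="192.168.1.119"
VM_IP="192.168.122.58"
KEY="$HOME/.ssh/vmhost/id_vm_rsa"

# Compose and print the ssh command (run it yourself once it looks right).
echo "ssh -i $KEY -o ProxyCommand=\"ssh -W %h:%p centos@$VMHOST_IP\" centos@$VM_IP" \
  | tee /tmp/ssh_cmd.txt
```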
Check that out, we're going to use it in the "Install Kubernetes" step (which you can skip to now.)
You chose the adventure: Spin up whatever CentOS hosts yourself
If you chose “1. Provision a host that can run virtual machines.”, continue to the next header.
Go ahead and spin up N+1 boxes. I recommend at least 2, 3 makes it more interesting. And even more for the brave. You need at least a master, and I recommend another as a node.
Make sure that you can SSH to these boxes, and let’s create a sample inventory.
Create yourself an inventory, which you can base on this inventory:
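As a hypothetical shape for that inventory – the group names and variables here are illustrative, so base yours on the sample inventory shipped in the repo:

```ini
; Hypothetical inventory -- compare with the samples in ./inventory/
kube-master ansible_host=192.168.1.140
kube-node-1 ansible_host=192.168.1.141
kube-node-2 ansible_host=192.168.1.142

[master]
kube-master

[nodes]
kube-node-1
kube-node-2

[all:vars]
ansible_user=centos
```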
Go ahead and put that inventory file in the ./inventory directory at whatever name you choose, I’d choose ./inventory/myname.inventory – you can replace myname with your name, your dogs name, your favorite cheese – actually that’s the official suggested name of the inventory now… manchego.inventory.
So place that file at ./inventory/manchego.inventory.
(sidenote, I actually prefer a sharp cheddar, or a brie-style cheese like Jasper Hill’s Moses Sleeper)
Installing Kubernetes
Alright – you’ve gotten this far, you’re on the path to success. Let’s kick off an install.
Replace ./inventory/your.inventory with:
./inventory/vms.local.generated if you chose #1, build a virtualization host
./inventory/manchego.inventory if you chose #2, provision your own machines.
Wait! Did you already run that? If you didn’t there’s another mini-adventure you can choose, go to the next header, “Run the kube-install with Multus for networking”.
And you’re on the way to success! And if you’ve finished your coffee now… It’s time to skip down to “Verify your Kubernetes setup!”
(Optional) Run the kube-install with Multus for networking
If you aren’t going to use Multus, skip down to “Verify your Kubernetes setup!”, otherwise, continue here.
Alright, so this is an optional one, some of my audience for this blog gets here because they’re looking for a way to use Multus CNI. I’m a big fan of Multus, it allows us to attach multiple network interfaces to pods. If you’re following Multus, I urge you to check out what’s happening with the Network Plumbing Working Group (NPWG) – an offshoot of Kubernetes SIG-Network (the special interest group for networking). Up in the NPWG, we’re working on standardizing how multiple network attachments for pods work, and I’m excited to be trying Multus.
Ok, so you want to use Multus! Great. Let’s create an extra vars file that we can use.
Our Multus demo uses macvlan – so you’ll want to change the multus_ipam_* variables to match your network. This one matches the default NAT’ed setup for libvirt VMs in CentOS.
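As a sketch, the extra vars file is plain Ansible YAML – the variable names here are illustrative apart from the multus_ipam_* prefix mentioned above, so verify them against kube-ansible's documented defaults at your tag:

```yaml
# Hypothetical extra-vars sketch -- verify names against kube-ansible's
# defaults before using. The addresses match the default libvirt NAT net.
pod_network_type: "multus"
multus_ipam_subnet: "192.168.122.0/24"
multus_ipam_range_start: "192.168.122.200"
multus_ipam_range_end: "192.168.122.216"
multus_ipam_gateway: "192.168.122.1"
```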
Now that we have that file in place, we can kick off the install like so:
If you created your own inventory, replace ./inventory/vms.local.generated with ./inventory/manchego.inventory (or whatever you called yours, if you didn't pick my cheesy inventory name).
Verify your Kubernetes setup!
Go ahead and SSH to the master node, and you can view which nodes have registered, if everything is good, it should look something like:
[centos@kube-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 30m v1.9.3
kube-node-1 Ready <none> 22m v1.9.3
kube-node-2 Ready <none> 22m v1.9.3
kube-node-3 Ready <none> 22m v1.9.3
Let’s create a pod to make sure things are working a-ok.
[centos@kube-master ~]$ watch -n1 kubectl get pods -o wide
Assuming you have multiple nodes, these should be coming up on separate nodes, once they’re up, go ahead and find the IP of one of them…
[centos@kube-master ~]$ IP=$(kubectl describe pod $(kubectl get pods | grep nginx | head -n1 | awk '{print $1}') | grep -P "^IP" | awk '{print $2}')
[centos@kube-master ~]$ echo $IP
10.244.3.2
[centos@kube-master ~]$ curl -s $IP | grep -i thank
<p><em>Thank you for using nginx.</em></p>
And there you have it, an instance of nginx running on Kube!
For Multus verification…
(If you haven’t installed with Multus, skip down to the “Some other adventures you can choose” section.)
To verify, we can kick off a pod and exec ip a in it. The nginx pods that we spun up don't have the right tools to inspect the network, so let's kick off a pod with some better tools.
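For instance, a throwaway pod like this will do – the name debugging is what we exec into below; the fedora image is just one option (if ip isn't present in your image of choice, install iproute in the container):

```yaml
# A throwaway "debugging" pod -- any image with the `ip` utility works.
apiVersion: v1
kind: Pod
metadata:
  name: debugging
spec:
  containers:
  - name: debugging
    image: fedora
    # Sleep "forever" so we can exec into it at our leisure.
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
```

Save it as debugging.yaml and kubectl create -f debugging.yaml.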
You can watch it come up with watch -n1 kubectl get pods -o wide, then you can verify that it has multiple interfaces…
[centos@kube-master ~]$ kubectl exec -it debugging -- ip a | grep -Pi "^\d|^\s*inet\s"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
inet 127.0.0.1/8 scope host lo
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
inet 10.244.3.2/24 scope global eth0
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
inet 192.168.122.200/24 scope global net0
Hurray! There’s your Kubernetes install up and running showing multiple network attachments per pod using Multus.
Some other adventures you can choose…
This is just the tip of the iceberg for more advanced scenarios you can spin up…