Using Koko to create vxlan interfaces for cross-host container network isolation -- and cross-connecting them with VPP!
03 Aug 2017

I’ve blogged about koko in the past – the container connector. Due to the awesome work put forward by my associate Tomofumi Hayashi – today we can run it and connect to FD.io VPP (vector packet processing), which is used for a fast data path, something we’re quite interested in for the NFV space. We’re going to set up vxlan links between containers (on separate hosts) back to a VPP forwarding host, where we’ll create cross-connects to forward packets between those containers. As a bonus, we’ll also compile koro, an auxiliary utility to use with koko for “container routing”, which we’ll use in a following companion article. Put your gloves on and start up your terminals – we’re going to put our hands right on it and have it all up and running.
Also – if you haven’t been paying attention, Tomo has been putting some awesome work into Koko. He’s working on getting it packaged into an RPM, he has significantly improved it by breaking out the go code so you can use it as a library and not just at the command line, and he’s even beautified the syntax for the arguments! …Among other great stuff. Great work, Tomo. Next time we can RPM install it instead of building it ourselves (it’s not hard, but it’s so handy to have the packages – I can’t wait).
Since we’ll be in the thick of it here, it’s almost free to compile koro while we’re at it. I’m excited to put my hands on koro, and we’ll cover it in the next article (spoiler alert: it’s got service chains in containers using koko and koro!). I’ll include the koro build here for those looking for it.
What are we building?
Here’s a diagram showing the layout of what we’re going to build today:
The gist: we’ll build three boxes (I used VMs). We’ll install VPP on one, and the two other hosts are container hosts where we run containers that we’ll modify using koko.
Some limitations
When we deploy VPP, we are deploying it directly on the host, and not in containers. Also, the containers use VXLAN interfaces that have pairs on the VPP host. This isn’t a limitation per-se, but, more that in the future we’d like to explore further the concepts around user-space networking with containers – so it can feel like a limitation when you know there’s more territory to explore!
Note that this is a manual process shown here, to show you the working parts of these applications. A likely end goal would be to automate these processes in order to have this happen for larger, more complex systems – and in much shorter time periods (and I mean MUCH shorter!) Tomo is working towards these implementations, but I won’t spoil all the fun yet.
Boxen setup & requirements.
For my setup, I used 3 boxes with 4 vcpus and 2048 megs of RAM each. I assume a CentOS 7 distro for each of them. I highly recommend CentOS, but you can probably mentally convert to another distro if you so please. Also, I used VMs; either VMs or baremetal will work. I tend to like this approach to spinning up a CentOS cloud image with virsh.
Specifically I had these hosts, so you can refer back if you need to:
- koko1 - 192.168.1.165
- koko2 - 192.168.1.143
- vpp1 - 192.168.1.179
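If you want a rough idea of how I spin those up, here’s a sketch using virt-install. Everything in it – the qcow2 path, the bridge name, the host – is an assumption for illustration, and it presumes you’ve already fetched a CentOS 7 cloud image to that path (and seeded it with cloud-init credentials):

# Hypothetical example: import an existing CentOS 7 cloud image as a VM.
# Adjust name, disk path, and bridge for your environment.
[root@virthost ~]# virt-install --name vpp1 --memory 2048 --vcpus 4 \
    --disk /var/lib/libvirt/images/vpp1.qcow2 --import \
    --os-variant centos7.0 --network bridge=br0 --noautoconsole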
VPP setup
Time to install VPP – it’s not too bad. But, one of the first things you’re going to need to do is enable hugepages.
Setup hugepages
First, take a look to see if huge pages are enabled, likely not:
[root@vpp1 vpp]# cat /proc/meminfo | grep Huge
AnonHugePages: 6144 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Ok, huge pages aren’t enabled. So what I did was…
[root@vpp1 centos]# echo 'vm.nr_hugepages = 1024' >> /etc/sysctl.conf
That should do the trick and live through a reboot.
If you want it to show up now, issue a:
[root@vpp1 centos]# sysctl -p
vm.nr_hugepages = 1024
And then check with:
[root@vpp1 centos]# cat /proc/meminfo | grep Huge
AnonHugePages: 4096 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
That’s what mine looked like. I also recommend, optionally, rebooting at this point to make sure it sticks.
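One more optional sanity check – this one’s my addition, not strictly part of the walkthrough – is to confirm there’s a hugetlbfs mount for DPDK to allocate those pages from. You should see something along these lines:

# Optional check (my addition): make sure a hugetlbfs mount exists.
[root@vpp1 centos]# grep hugetlbfs /proc/mounts
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0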
Compile VPP
Go ahead and install git, and clone the VPP repo from fd.io:
[root@vpp1 centos]# yum install -y git
[root@vpp1 centos]# git clone https://gerrit.fd.io/r/vpp
[root@vpp1 centos]# cd vpp/
Now you should be able to run the make commands, up to and including make run. It will install the deps for us in the first step (there’s a fair amount of them):
[root@vpp1 vpp]# yes | make install-dep
[root@vpp1 vpp]# make bootstrap
[root@vpp1 vpp]# make build
[root@vpp1 vpp]# make run
If you get something like this:
dpdk_config: not enough free huge pages
You didn’t properly setup huge pages in the first steps. For what it’s worth I got a few hints from this fd.io jira issue.
Getting your interfaces to show up in VPP
I didn’t see my interface, just a loopback, in the show interface command. What I saw it say was:
vlib_pci_bind_to_uio: Skipping PCI device 0000:00:03.0 as host interface eth0 is up
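In case you want to double-check which host NIC that PCI address maps to before you start downing interfaces (this check is my addition), sysfs will tell you:

# My addition: look up the kernel interface name behind PCI 0000:00:03.0.
[root@vpp1 centos]# ls /sys/bus/pci/devices/0000:00:03.0/net/
eth0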
Important: We’re going to bring an interface down on this machine, which may impact your ability to ssh to it. In my case, it’s just virtual machines with a single nic. That being the case, I assigned root a password (e.g. sudo su root and then passwd), then logged into the box with virsh console vpp1, did an ifdown eth0, and ran make run.
Now bring down your interface on the vpp1 host…
$ ifdown eth0
The interface will show as being in a down state in VPP, but it shows up in the list.
DBGvpp# show interface
Name Idx State Counter Count
GigabitEthernet0/3/0 1 down
local0 0 down
DBGvpp# show int GigabitEthernet0/3/0
Name Idx State Counter Count
GigabitEthernet0/3/0 1 down
And I set the interface up, but only after assigning it a static address…
DBGvpp# set interface state GigabitEthernet0/3/0 down
DBGvpp# set int ip address GigabitEthernet0/3/0 192.168.1.223/24
DBGvpp# set int state GigabitEthernet0/3/0 up
DBGvpp# ping 192.168.1.1
[...snip...]
Great! We’re mostly there. We’ll come back here to setup some vxlan and cross-connects in a little bit.
Compiling koko
Ahhh, go apps – they’re easy on us. We don’t need much to do it: mostly git and golang. So go ahead and install git.
[root@koko1 centos]# yum install -y git
However, the latest editions of Koko need go version 1.7 or greater (as documented in this issue). We’ll use repos from go-repo.io – which at the time of writing installs Go 1.8.3.
Install the .repo file, and then install golang.
[root@koko1 doug]# rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
[root@koko1 doug]# curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
[root@koko1 doug]# yum install -y golang
[root@koko1 doug]# go version
go version go1.8.3 linux/amd64
Set up your go path.
[root@koko1 centos]# mkdir -p /home/centos/gocode/{bin,pkg,src}
[root@koko1 centos]# export GOPATH=/home/centos/gocode/
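Note that the export only sticks for your current shell – if you want the GOPATH to survive a re-login (optional, my addition), toss it in your profile:

# Optional (my addition): persist GOPATH across logins.
[root@koko1 centos]# echo 'export GOPATH=/home/centos/gocode/' >> ~/.bashrc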
And go ahead and clone koko.
[root@koko1 centos]# git clone https://github.com/redhat-nfvpe/koko.git $GOPATH/src/koko
Get the koko deps, and then build it.
[root@koko1 centos]# go get koko
[root@koko1 centos]# go build koko
[root@koko1 centos]# ls $GOPATH/bin
koko
That results in a koko binary in $GOPATH/bin. So you can get some help out of it if you need:
[root@koko1 centos]# $GOPATH/bin/koko --help
Usage:
./koko -d centos1,link1,192.168.1.1/24 -d centos2,link2,192.168.1.2/24 #with IP addr
./koko -d centos1,link1 -d centos2,link2 #without IP addr
./koko -d centos1,link1 -c link2
./koko -n /var/run/netns/test1,link1,192.168.1.1/24 <other>
See https://github.com/redhat-nfvpe/koko/wiki/Examples for the detail.
Compiling koro
Next we’re going to compile koro, the tool for container routing (hence its name) – since you have all the setup done for koko already, it’s basically free (work-wise) to compile koro. We’re not going to dig into using it until the next article, though.
Alright, so assuming you’ve got koko installed, you’re most of the way there; using the same installed applications and the same $GOPATH, you can now clone it up.
[root@koko1 centos]# git clone https://github.com/s1061123/koro.git $GOPATH/src/koro
Get the deps, build it, and run the help.
[root@koko1 centos]# go get koro
[root@koko1 centos]# go build koro
[root@koko1 centos]# $GOPATH/bin/koro
Easy street.
Install a compatible Docker.
You’re going to need an up-to-date docker for koko to perform at its best. So let’s get that up and running for us.
These instructions are basically the verbatim Docker install instructions for Docker CE on CentOS.
[root@koko1 centos]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@koko1 centos]# yum install -y docker-ce
[root@koko1 centos]# systemctl enable docker
[root@koko1 centos]# systemctl start docker
[root@koko1 centos]# docker version | grep -A1 Server | grep Version
Version: 17.06.0-ce
Alright, that’s great.
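If you want a quick smoke test that the daemon is really answering before we build on it (my addition), hello-world does the trick:

# My addition: quick end-to-end test of the Docker daemon.
[root@koko1 centos]# docker run --rm hello-world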
Wash, rinse, and repeat on koko2
Now go ahead, and compile koko and koro and install Docker on the second koko host.
Fire up a few containers and run koko to create vxlan interfaces
Alright, let’s start some containers. First, we’ll pull my handy utility image (it’s just centos:centos7 but has a few handy packages installed, like… iproute).
[root@koko1 centos]# docker pull dougbtv/centos-network
Do that on both hosts, koko1 and koko2, and now we can run that rascal.
[root@koko1 centos]# docker run --name test1 --net=none -dt dougbtv/centos-network sleep 2000000
And on koko2.
[root@koko2 centos]# docker run --name test2 --net=none -dt dougbtv/centos-network sleep 2000000
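At this point each host should show its container running – a quick optional check (my addition):

# My addition: confirm the container is up on each host.
[root@koko1 centos]# docker ps --format '{{.Names}}: {{.Status}}'
test1: Up About a minute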
Now, let’s connect those to a vxlan interface using koko, on the first koko host.
[root@koko1 centos]# /home/centos/gocode/bin/koko -d test1,link1,10.0.1.1/24 -x eth0,192.168.1.223,11
Create vxlan link1
And on koko2 host.
[root@koko2 centos]# /home/centos/gocode/bin/koko -d test2,link2,10.0.1.2/24 -x eth0,192.168.1.223,12
Create vxlan link2
Dissecting the koko parameters
Let’s dissect the parameters we’ve used here. Looking at this command I had you run earlier:
/home/centos/gocode/bin/koko -d test1,link1,10.0.1.1/24 -x eth0,192.168.1.223,11
- /home/centos/gocode/bin/koko is the path to the compiled koko binary.
- -d is for the Docker arguments (“Docker == d”).
- test1 is the name of the container.
- link1 is the name of the interface we’ll create in the container.
- 10.0.1.1/24 is the IP address we’ll assign to link1.
- -x is for the vxlan argument (“v X lan = x”).
- eth0 is the parent interface that exists on the host.
- 192.168.1.223 is the address of the VPP host.
- 11 is the vxlan ID.
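Putting that all together – if you ever wanted to hang, say, a third container off the same VPP host, the invocation follows the same shape (test3, link3, and vni 13 here are hypothetical, not part of this build):

# Hypothetical example following the flags dissected above.
[root@koko1 centos]# $GOPATH/bin/koko -d test3,link3,10.0.1.3/24 -x eth0,192.168.1.223,13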
Inspecting the containers
Ok cool, let’s enter a container and see what’s been done. We can see that there’s a link1 interface created…
[root@koko1 centos]# docker exec -it test1 ip a
And we can see that it’s a vxlan interface
[root@koko1 centos]# docker exec -it test1 ip -d link show
[... snip ...]
vxlan id 11 remote 192.168.1.223 dev 2 srcport 0 0 dstport 4789 l2miss l3miss ageing 300 addrgenmode eui64
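You can also peek at the route that koko’s address assignment gave us (my addition) – the connected route for that 10.0.1.0/24 should point out link1:

# My addition: check the connected route inside the container.
[root@koko1 centos]# docker exec -it test1 ip route
10.0.1.0/24 dev link1 proto kernel scope link src 10.0.1.1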
Creating vxlan tunnels and cross-connects in VPP
It’s all well and good that the containers are setup, but right now they’re in a state where the vxlan isn’t actually working because the far end of these, the VPP host, isn’t aware of them. Now, we’ll have to set this up in VPP.
Go back to your VPP console. We’re going to create vxlan tunnels, and keep your cli docs for vxlan tunnels handy.
For your reference again, note that:
- 192.168.1.223 is the vpp host itself.
- 192.168.1.165 is koko1, and
- 192.168.1.143 is koko2.
- 11 & 12 are the vxlan IDs we have chosen.
Here’s how we create them:
DBGvpp# create vxlan tunnel src 192.168.1.223 dst 192.168.1.165 vni 11
DBGvpp# create vxlan tunnel src 192.168.1.223 dst 192.168.1.143 vni 12
(If you need to, you can delete those by issuing the same create command and then putting del on the end.)
And you can see what we created…
DBGvpp# show interface
DBGvpp# show interface vxlan_tunnel0
DBGvpp# show interface vxlan_tunnel1
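There’s also a summary view of the tunnels themselves – this check is my addition, and it’s handy for double-checking the src/dst addresses and vnis:

DBGvpp# show vxlan tunnel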
And that’s all well and good, but it’s not perfect until we set up the cross-connects.
DBGvpp# set interface l2 xconnect vxlan_tunnel0 vxlan_tunnel1
DBGvpp# set interface l2 xconnect vxlan_tunnel1 vxlan_tunnel0
Ok, now… Let’s exec a ping in the test1 container we created and applied koko to.
[root@koko1 centos]# docker exec -it test1 ping -c 5 10.0.1.2
Should be good to go!
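If you’re curious – or if that ping isn’t going through and you’re debugging (this check is my addition) – you can watch the encapsulated traffic leave the container host. It’s UDP to the VPP host on the vxlan port we saw earlier, 4789:

# My addition: watch the vxlan-encapsulated packets on the wire.
[root@koko1 centos]# tcpdump -nn -i eth0 udp port 4789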
In closing.
Alright, what we’ve done is:
- Installed koko on two hosts, and ran a container per host
- Created a vxlan interface inside the container that is switched at VPP
- Installed VPP on a host, and set up vxlan and cross-connects
Excellent! Next up we’re going to take some of these basics, and we’ll demonstrate creating a chain of services using koko and koro.