16 Jun 2023
Today we’re going to run Oobabooga – the text generation UI – to run large language models (LLMs) on your local machine. We’ll run it containerized so that everything else on your system can keep sitting pretty right where it is.
Requirements
Looks like we’ll need podman-compose if you don’t have it…
- Fedora 38
- An NVIDIA GPU
- Podman (typically included by default)
- podman-compose (optional)
- The NVIDIA drivers
If you want podman compose, pick up:
pip3 install --user podman-compose
Driver install
You’re also going to need to install the NVIDIA driver and the NVIDIA container tools.
Before you install CUDA, do a dnf update (otherwise I wound up with mismatched deps), then install the CUDA Toolkit (the link I used was for the F37 RPM, but it worked fine on F38). A sketch of the commands is below.
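Roughly, that looks like this – treat it as a sketch, since the repo URL and package name below are NVIDIA’s documented F37 bits (an assumption on my part) and may have moved by the time you read this:

sudo dnf update -y
# Add NVIDIA's CUDA repo (F37 repo shown here -- check NVIDIA's CUDA download page for the current one)
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# Package naming varies by repo version; "cuda" is the meta-package in NVIDIA's repo
sudo dnf install -y cuda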
And the container tools:
curl -s -L https://nvidia.github.io/libnvidia-container/centos8/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo dnf install nvidia-container-toolkit nvidia-docker2
(nvidia-docker2 might not be required.)
If you need more of a reference for GPUs on Red Hat-flavored Linuxes, this article from the Red Hat blog is very good.
Let’s get started
In my experience, you’ve gotta use podman for GPU support in Fedora 38 (and probably a few versions earlier, is my guess).
Go ahead and clone oobabooga/text-generation-webui.
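Which is to say:

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui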
From their README, you’ve gotta set this up to do the container build…
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
Importantly – you’ve got to set the TORCH_CUDA_ARCH_LIST. You can check that you’ve got the right one from this grid on Wikipedia.
DOUBLE CHECK – everything, but especially that you’re using the right .env file, because I really made this take longer than it should have when I got that wrong.
TORCH_CUDA_ARCH_LIST=8.6+PTX
First, try building it with podman – it worked for me on the second attempt. Unsure what went wrong the first time, but I built with…
podman build -t dougbtv/oobabooga .
WARNING: These are some BIG images. I think mine came out to ~16 gigs.
And then I loaded that image into podman…
I need to make a few mods before I can run it… Copy the .env file also to the docker folder (we could probably improve this with a symlink in an earlier step). And while we’re here, we’ll need to copy the template prompts and presets, too.
cp .env docker/.env
cp prompts/* docker/prompts/
cp presets/* docker/presets/
Now you’ll need at least a model, so to download one leveraging the container image…
podman-compose run --entrypoint "/bin/bash -c 'source venv/bin/activate; python download-model.py TheBloke/stable-vicuna-13B-GPTQ'" text-generation-webui
Naturally, change TheBloke/stable-vicuna-13B-GPTQ to whatever model you want.
You’ll find the model in…
I also modify the docker/.env to change this line to…
CLI_ARGS=--model TheBloke_stable-vicuna-13B-GPTQ --chat --model_type=Llama --wbits 4 --groupsize 128 --listen
However, I run it by hand with:
podman run \
--env-file /home/doug/ai-ml/text-generation-webui/docker/.env \
-v /home/doug/ai-ml/text-generation-webui/characters:/app/characters \
-v /home/doug/ai-ml/text-generation-webui/extensions:/app/extensions \
-v /home/doug/ai-ml/text-generation-webui/loras:/app/loras \
-v /home/doug/ai-ml/text-generation-webui/models:/app/models \
-v /home/doug/ai-ml/text-generation-webui/presets:/app/presets \
-v /home/doug/ai-ml/text-generation-webui/prompts:/app/prompts \
-v /home/doug/ai-ml/text-generation-webui/softprompts:/app/softprompts \
-v /home/doug/ai-ml/text-generation-webui/docker/training:/app/training \
-p 7860:7860 \
-p 5000:5000 \
--gpus all \
-i \
--tty \
--shm-size=512m \
localhost/dougbtv/oobabooga:latest
(If you’re smarter than me, you can get it running with podman-compose at this point)
At this point, you should be done – grats! It should give you a web address (the UI is on the mapped port 7860), so fire it up and get on generating!
Mount your models somewhere
I wound up bind mounting some directories…
sudo mount --bind /home/doug/ai-ml/oobabooga_linux/text-generation-webui/models/ docker/models/
sudo mount --bind /home/doug/ai-ml/oobabooga_linux/text-generation-webui/presets/ docker/presets/
sudo mount --bind /home/doug/ai-ml/oobabooga_linux/text-generation-webui/characters/ docker/characters/
Bonus note: I also wound up changing my dockerfile to install a torch+cu118, in case that helps you.
So I changed out two lines that looked like this diff:
- pip3 install torch torchvision torchaudio && \
+ pip3 install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 -f https://download.pytorch.org/whl/cu118/torch_stable.html && \
I’m not sure how much it helped, but, I kept this change after I made it.
I’m hoping to submit a patch to https://github.com/RedTopper/Text-Generation-Webui-Podman (which isn’t building for me right now) integrating what I learned from this, and then have the whole thing in podman, later.
Don’t make my stupid mistakes
I ran into an issue where I got:
RuntimeError: CUDA error: no kernel image is available for execution on the device
I tried messing with the TORCH_CUDA_ARCH_LIST in the .env file and changed it to 8.6+PTX, 8.0, etc., the whole list, commented out – no luck.
I created an issue in the meanwhile: https://github.com/oobabooga/text-generation-webui/issues/2002
I also found this podman image repo – https://github.com/RedTopper/Text-Generation-Webui-Podman – and I forked it. It looks like it could use some updates; I’ll try to contribute my work back to it at some point.
06 May 2023
In today’s tutorial, we’re going to install Stable Diffusion on Fedora 38.
I’m putting together a lab machine for GPU workloads. And the first thing I wanted to do was get Stable Diffusion running, and I’m also hopeful to start using it for training LoRAs, embeddings, maybe even a fine-tuning checkpoint (we’ll see).
Fedora is my default home server setup, and I didn’t find a direct guide on how to do it, although it’s not terribly different from other distros. …Oddly enough, I actually fired this up on Fedora Workstation.
Requirements
- An install of Fedora 38
- An NVIDIA GPU (if someone has insight on AMD GPUs and wants to provide instructions, hit me up and I’ll update the article)
Installing Automatic Stable Diffusion WebUI on Fedora 38
I’m going to be using Vladmandic’s fork of the Automatic1111 SD web UI: https://github.com/vladmandic/automatic
Clone it.
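That’s just:

git clone https://github.com/vladmandic/automatic.git
cd automatic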
Fedora 38 ships with Python 3.11, but some dependency for Stable Diffusion requires Python 3.10, which will require a few extra steps.
Install python 3.10
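Fedora packages alternate Python interpreters alongside the default, so this should be as simple as (it worked for me, your mileage may vary):

sudo dnf install -y python3.10
python3.10 --version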
Also, before you install CUDA, do a dnf update (otherwise I wound up with mismatched deps for NetworkManager and couldn’t boot off a new kernel, and I had to wheel up a crash cart – just kidding, I don’t have a crash cart or a KVM for my Linux lab, so it’s much more annoying: I have to move my server over to my workstation area. Luckily it’s just a desktop server lab).
Install CUDA Toolkit (link is for F37 RPM, but it worked fine on F38)
And – follow the instructions there. You might need to reboot now.
Make a handler script to export the correct python version… I named mine user-webui.sh
#!/bin/bash
export python_cmd=python3.10
screen -S webui ./webui.sh --listen
NOTE: I fire it up in screen. If you don’t have Stockholm Syndrome for screen, you can decide to not be a luddite and modify it to use tmux. And if you need a cheat sheet for screen, there you go. I also use the --listen flag because I’m going to connect to this from other machines on my network.
Then run ./user-webui.sh once to get the venv created – it will likely fail at this point. Or, if you’re a smarter Python user, create the venv yourself.
Then enter the venv.
Then ensurepip…
And now you can fire up the script!
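Putting those last few steps together, the dance looks roughly like this – assuming the venv landed in ./venv inside the clone (that path is an assumption; check where the script actually created it):

# first run creates the venv (and probably fails partway, that's fine)
./user-webui.sh
# hop into the venv it created
source venv/bin/activate
# make sure pip is present inside the venv
python -m ensurepip --upgrade
deactivate
# and now fire it up for real
./user-webui.sh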
14 Apr 2023
Have you played with ChatGPT yet? Ummm, yeah, who hasn’t!? I have pirate-styled rap battles to make! So let’s get right to the point so we can get back to generating rap-battles as soon as possible!
Today we’re going to run a LLM (Large Language Model) locally on one of our own machines, and we’re going to set it up so that we can interface with it via API, and we’ll even write a small program to test it out.
I have some ideas where I want to take some software I’m building and hook it up to one of these, later I’d like to train it on custom data and then query it. Maybe even have some like real-ish-time data fed to it and then be able to query it in near-real-time too. I also have some ideas for populating it with logs and then being like “yo, tell me what’s up with this machine?”
But yeah, instead of relying on a GPT service, I want to run the LLM myself, using open source tools.
Pre-requisites
Ok, so I’m not going to go deep into the details of the installation – I’m just going to give some pointers. It’s not necessarily rocket science.
First up, we’re going to install a webUI, OobaBooga: https://github.com/oobabooga/text-generation-webui
This is one of the few times I’m going to say the word “windows” on this blog, but I actually installed mine on Windows, because it’s a Windows box that’s an art and music workstation where I’ve got my decent GPU (for gaming, and also for Stable Diffusion and the associated windoze-y art tools). I followed this YouTube video by @TroubleChute. I even used his opinionated script to automatically install Vicuna!
But you can also just install it with the instructions in the README, which appear to be really straightforward.
The model we’re going to use is Vicuna – you can find it @ https://vicuna.lmsys.org/ – the thing that’s interesting about Vicuna is that it’s trained on crowd-sourced GPT output, and it claims to be 90% as good as GPT, which seems like a lofty statement. But so far it does seem pretty decent, even if it does parrot a lot of the “walled garden” stuff that ChatGPT says (“As an AI language model, I can’t tell you what you asked me for” kind of stuff).
Quick tip: After you install it and you’re playing with it on the WebUI, assuming you’ve installed Vicuna… Go to the bottom of the chat and find the radio button for “mode” and switch it to “instruct” and then in the “instruction template” drop down, select “Vicuna” – this will parse the output from the LLM so it makes more sense in context with your queries/conversation/prompts to it.
Well, first, in your text-generation-webui git clone, go and check out the ./extensions/api folder and make sure there’s a script there – there should be by default.
Next, we’re going to enable that by adding a flag to our startup script.
In your startup script, add an --extensions api flag, and also a --listen flag (if you want to listen on all addresses and not just your loopback, so you can access the API from your LAN/WAN).
So now, for example, mine looks like:
python server.py --extensions api --listen --auto-devices --chat --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128
Now your API should be ready to query.
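You can give it a quick poke with curl to make sure it’s listening. Heavy hedging here: the api extension’s endpoint and payload have changed across versions, so this assumes the KoboldAI-style blocking endpoint on port 5000 – check the extension’s docs for your build:

curl -s http://localhost:5000/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Suggest three interesting cheeses:", "max_new_tokens": 80}'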
Calling the API
Then I whipped together a node.js script (with ChatGPT’s help!) to query the API.
I’ve got the script here in this github gist.
Lucky you, the script asks for suggestions for cheese. If you ask me, I’d tell you you have to have a Bayley Hazen Blue, or maybe the Cabot Clothbound, both from Jasper Hill Farms (they’ve got the raddest cheese cave, super good cheese).
And you can just run that with
node oobabooga_api_query.js
And away you go. Or if you don’t like Node, just have ChatGPT re-write it for you in assembly or something!
(The script still needs a little help parsing, but it’s close! I think it’s missing the “mode” + “template” settings from the UI, so that’s next to figure out.)
B-b-b-bonus hip hop battle
Just for you, I asked it to:
write a rap battle about api clients written in node.js vs. assembly
And it gave me:
Verse 1 (Node.js):
I'm Node, I'm fast and sleek,
When it comes to APIs, I can't be beat!
With JavaScript at my core,
I make building apps a breeze, no need to feel bore-ot.
Chorus:
API clients, we rule the land,
Node.js or Assembly, which one takes the stand?
We may not be as old school,
But when it comes to speed and coolness, we'll never lack cool.
Verse 2 (Assembly):
I may be older than you think,
But don't underestimate me, I still have some tricks.
With raw power and precision,
My code runs smoothly without any friction.
Chorus:
API clients, we rule the land,
Node.js or Assembly, which one takes the stand?
We may not be as old school,
But when it comes to speed and coolness, we'll never lack cool.
Outro:
In the end, it doesn't matter who wins this battle,
Because both Node.js and Assembly are great for different purposes, so let's celebrate!
Thanks for your “everyone gets an award!” style indifference LLM!
…But not too shabby for something running on my local!
14 Jul 2022
The gist of Chainsaw CNI (brum-brum-brum-brum-brrrrrrrrr) is that it’s a CNI plugin that runs in a CNI chain (more on that soon), and it allows you to run arbitrary ip commands against your Kubernetes pods to either manipulate or inspect networking. You can do this at run-time by annotating a pod with the commands you want to run.
For example, you can annotate a pod with:
k8s.v1.cni.cncf.io/chainsaw: >
["ip route","ip addr"]
And then get the output of ip route and ip addr for your pod.
I named it Chainsaw because:
- It works using CNI Chains.
- It’s powerful, but kind of dangerous.
Today, we’re going to:
- Talk about why I made it.
- Look at what CNI chains are.
- See what the architecture is comprised of.
- And of course, engage the choke, pull the rope start and fire up this chainsaw.
We’ll be using it with network attachment definitions – that is, the custom resource type that’s used by Multus CNI.
Why do you say it’s dangerous? Well, like a chainsaw, you can do permanent harm to something. You could totally turn off networking for a pod. Or, potentially, you open up a way for some user of your system to do something more privileged than you thought. I’m still thinking about how to better address this part, but for now… I’d advise that you use it carefully, and in lab situations rather than production, until these aspects are more fully considered.
Also, as an aside… I am a physical chainsaw user. I have one and, I use it. But I’m appropriately afraid of it. I take a long long time to think about it before I use it. I’ve watched a bunch of videos about it, but I really want to take a Game Of Logging course so I can really operate it safely. Typically, I’m just using my Silky Katanaboy (awesome Japanese pull saw!) for trail work and what not.
Last but not least, a quick disclaimer: This is… a really new project. So it’s missing all kinds of stuff you might take for granted: unit tests, automatic builds, all that. Just a proof of concept, really.
Why, though?
I was originally inspired by hearing this particular discussion:
Person: “Hey I want to manipulate a route on a particular pod”
Me: “Cool, that’s totally possible, use the route override CNI” (it’s another chained plugin!)
Person: “But I don’t want to manipulate the net-attach-def, there’s tons of pods using them, and I only want to manipulate for a specific site, so I want to do it at runtime, adding more net-attach-defs makes life harder”.
Well, this kinda bothered me! I talked to a co-worker who said “Sure, next they’re going to want to change EVERYTHING at runtime!”
I was thinking: “hey, what if you COULD change whatever you wanted at runtime?”
And I figured, it could be a really handy tool, even if just for CNI developers, or network tinkerers as it may be.
CNI Chains
┌──────────────────┐ ┌────────────────┐
│ │ │ │
│ │ ┌───────────┐ │ │
│ CNI Plugin A │ │ │ │ CNI Plugin B │
│ ├───► cni result├───► │
│ │ │ │ │ │
│ │ └───────────┘ │ │
└──────────────────┘ └────────────────┘
CNI chains are… sometimes confusing to people. But they don’t need to be – it’s basically as simple as saying, “You can chain as many CNI plugins together as you want, and each CNI plugin gets all the CNI results of the plugins before it.”
This functionality was introduced in CNI 0.3.0 and is available in all later versions of CNI, naturally.
You can tell whether you have a CNI plugin chain by looking at your CNI configuration: if the top-level JSON has the "type" field, then it’s not a chain. If it has the "plugins": [] array, then it’s a chain of plugins, which run in the order they appear within the array. As of CNI 1.0, you’ll always be using the plugins field, and always have chains, even if it’s a “chain of one”.
Why would you use chained plugins? The best example I can usually think of is the tuning plugin, which allows you to set network sysctls, or manipulate other parameters of networks – such as setting an interface into promiscuous mode. This is typically done after the work of your main plugin, which does the plumbing to set up the networking for you (say, a vxlan tunnel, or a macvlan interface, etc.). A sketch of such a chain is below.
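For a concrete picture, a chained configuration might look something like this – a hand-written sketch (the master interface and subnet here are made up), where macvlan does the plumbing and the tuning plugin then flips the resulting interface into promiscuous mode:

{
  "cniVersion": "0.4.0",
  "name": "macvlan-promisc-chain",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.0.2.0/24"
      }
    },
    {
      "type": "tuning",
      "promisc": true
    }
  ]
}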
The architecture
Not a whole lot to say, but it’s a “sort of thick plugin” – thick CNI plugins are those that have a resident daemon, as opposed to “thin CNI plugins”, which run as a one-shot (all of the reference CNI plugins are one-shots). But in this case, we just use the resident daemonset for looking at the log output, to inspect our results.
Other than that, it’s similar to Multus CNI in that it knows how to talk to the k8s API and get the annotations, and it uses a generated kubeconfig to authorize itself against the k8s API.
Let’s get to using it!
Requirements:
- A k8s cluster, the newer the better.
- Multus CNI must be installed
That’s about it. Don’t use a production cluster ;)
So go ahead and clone dougbtv/chainsaw-cni.
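Assuming the GitHub URL, that’s:

git clone https://github.com/dougbtv/chainsaw-cni.git
cd chainsaw-cni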
Then create the daemonset with:
kubectl create -f deployments/daemonset.yaml
NOTE: Are you an OpenShift user? Use the deployments/daemonset_openshift.yaml deployment instead :thumbsup:
Now, let’s create a net-attach-def which implements chainsaw in a chain – note the plugins array! Also note the use of the special token CURRENT_INTERFACE, which will use the current interface name as opposed to you having to know it in advance.
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: test-chainsaw
spec:
config: '{
"cniVersion": "0.4.0",
"name": "test-chainsaw-chain",
"plugins": [{
"type": "bridge",
"name": "mybridge",
"bridge": "chainsawbr0",
"ipam": {
"type": "host-local",
"subnet": "192.0.2.0/24"
}
}, {
"type": "chainsaw",
"foo": "bar"
}]
}'
---
apiVersion: v1
kind: Pod
metadata:
name: chainsawtestpod
annotations:
k8s.v1.cni.cncf.io/networks: test-chainsaw
k8s.v1.cni.cncf.io/chainsaw: >
["ip route add 192.0.3.0/24 dev CURRENT_INTERFACE", "ip route"]
spec:
containers:
- name: chainsawtestpod
command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
image: alpine
Next, check which node the pod is running on – something like this should do it:
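kubectl get pod chainsawtestpod -o wide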
You can then find the output from the results of the ip commands from the chainsaw daemonset that is running on that node, e.g.
kubectl get pods -n kube-system -o wide | grep -iP "status|chainsaw"
And looking at the logs for the daemonset pod that correlates to the node on which the pod resides, for example:
kubectl logs kube-chainsaw-cni-ds-kgx69 -n kube-system
You’ll see that we have added a route to 192.0.3.0/24 and then shown the IP route output!
So my results look like:
Detected commands: [route add 192.0.3.0/24 dev CURRENT_INTERFACE route]
Running ip netns exec 901afa16-48e7-4f22-b2b1-7678fa3e9f5e ip route add 192.0.3.0/24 dev net1 ===============
Running ip netns exec 901afa16-48e7-4f22-b2b1-7678fa3e9f5e ip route ===============
default via 10.129.2.1 dev eth0
10.128.0.0/14 dev eth0
10.129.2.0/23 dev eth0 proto kernel scope link src 10.129.2.64
172.30.0.0/16 via 10.129.2.1 dev eth0
192.0.2.0/24 dev net1 proto kernel scope link src 192.0.2.51
192.0.3.0/24 dev net1 scope link
224.0.0.0/4 dev eth0
14 May 2021
If you’re looking at developing (or debugging!) CNI plugins, you’re going to need a workflow for developing CNI plugins – something that really lets you get in there, and see exactly what a CNI plugin is doing. You’re going to need a bit of a swiss army knife, or something that slices, dices, and makes julienne fries. cnitool is just the thing to do the job. Today we’ll walk through setting up cnitool, and then we’ll make a “dummy” CNI plugin to use it with, and we’ll run a reference CNI plugin.
We’ll also cover some of the basics of the information that’s passed to and from the CNI plugins and CNI itself, and how you might interact with that information, and how you might inspect a container that’s been plumbed with interfaces as created by a CNI plugin.
In this article, we’ll do this entirely without interacting with Kubernetes (and save that for another time!). And we actually do it without a container runtime at all – no docker, no crio. We just create the network namespace by hand. But the same kinds of principles apply with both a container runtime (docker, crio) and a container orchestration engine (e.g. k8s).
You might remember my blog article about a workflow for developing CNI plugins. That article uses the docker-run.sh, which is still totally valid. You might look at it for a reference, but CNI tool gives a bit more granularity.
Prerequisites
- Golang installed and configured on your system.
- I used a Fedora environment, these steps probably work elsewhere.
Basically, all the steps necessary to install cnitool are available in the cnitool README. I’ll summarize them here, but, it may be worth a reference.
Install cnitool…
go get github.com/containernetworking/cni
go install github.com/containernetworking/cni/cnitool
You can test if it’s in your path and operational with:
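cnitool

(Run with no arguments, it should just print its usage – which is enough to prove it’s installed and on your PATH.)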
Next, we’ll compile the “reference CNI plugins” – these are a series of plugins that are offered by the CNI maintainers that create network interfaces for pods (as well as provide a number of “meta” type plugins that alter the properties, attributes, and whatnot of a particular container’s network). We also set our CNI_PATH variable (which is used by cnitool to know where these plugin executables are).
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh
export CNI_PATH=$(pwd)/bin
echo $CNI_PATH
Alright, you’re basically all setup at this point.
We’ll need to create a CNI configuration. For testing purposes, we’re going to create a configuration for the ptp CNI plugin.
Create a directory and file at /tmp/cniconfig/10-myptp.conf with these contents:
{
"cniVersion": "0.4.0",
"name": "myptp",
"type": "ptp",
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "172.16.29.0/24",
"routes": [{
"dst": "0.0.0.0/0"
}]
}
}
And then set your CNI configuration directory by exporting this variable as:
export NETCONFPATH=/tmp/cniconfig/
First we create a netns – a network namespace. This is kind of a privately sorta-jailed space in which network components live, and is the basis of networking in containers, “here’s your private namespace in which to do your network-y things”. This, from a CNI point of view, is equivalent to the “sandbox” which is the basis container of pods that run in kubernetes. In k8s we’d have one or more containers running inside this sandbox, and they’d share the networks as in this network namespace.
sudo ip netns add myplayground
You can go and list them to see that it’s there…
sudo ip netns list | grep myplayground
Now we’re going to run cnitool with sudo so it has the appropriate permissions, and we’re going to need to pass it along our environment variables and our path to cnitool (if your root user doesn’t have a go environment, or isn’t configured that way). For me it looks like:
sudo NETCONFPATH=$(echo $NETCONFPATH) CNI_PATH=$(echo $CNI_PATH) $(which cnitool) add myptp /var/run/netns/myplayground
Let’s break down what this is doing, more or less…
- NETCONFPATH=$(echo $NETCONFPATH) CNI_PATH=$(echo $CNI_PATH) sets our environment variables to tell cnitool where to find the configs and the plugin executables.
- $(which cnitool) figures out the path of cnitool, so that inside your sudo environment you don’t need your GOPATH (you’re rad if you have that setup, though).
- add myptp /var/run/netns/myplayground says that add is the CNI method which is being invoked, myptp is our configuration, and the /var/run/... path is the netns that we created.
You should get some output that looks like:
{
"cniVersion": "0.4.0",
"interfaces": [
{
"name": "veth20b2acac",
"mac": "62:22:15:72:b2:29"
},
{
"name": "eth0",
"mac": "42:48:16:0b:e9:98",
"sandbox": "/var/run/netns/myplayground"
}
],
"ips": [
{
"version": "4",
"interface": 1,
"address": "172.16.29.3/24",
"gateway": "172.16.29.1"
}
],
"routes": [
{
"dst": "0.0.0.0/0"
}
],
"dns": {}
}
You can then actually do a ping out that interface, with:
sudo ip -n myplayground addr
sudo ip netns exec myplayground ping -c 1 4.2.2.2
And you can use nsenter to more interactively play with it, too…
sudo nsenter --net=/var/run/netns/myplayground /bin/bash
[root@host dir]# ip a
[root@host dir]# ip route
[root@host dir]# ping -c 5 4.2.2.2
What we’re going to do is create a shell script that is a CNI plugin. You see, CNI plugins can be executables of any variety – they just need to be able to read from stdin, and write to stdout and stderr.
This is kind of a blank slate for a CNI plugin that’s made with bash. You could use this approach, but in reality you’ll probably write these applications in Go. Why? Well, especially because there are the CNI libraries (especially libcni), which you would use to express some of these ideas about CNI in a more elegant fashion. Take a look at how Multus uses CNI’s skel (skeletal components, for the framework of your CNI plugin) in its main routine to call the methods as CNI has called them. Just read through Multus’ main.go and look at how it imports skel and then uses skel to call its add method when CNI ADD is used.
First, let’s make a CNI configuration for our dummy plugin. I made mine at /tmp/cniconfig/05-dummy.conf.
{
"cniVersion": "0.4.0",
"name": "mydummy",
"type": "dummy"
}
There’s not a lot to pay attention to here; the most important things are:
- the type field, which must have the same name as our executable on disk – both are going to be dummy
- the name field, which is the name we’ll reference in our cnitool command – that will be mydummy
Now, in the path where we have our reference CNI plugins, let’s add another file, name it dummy, and then make sure it’s executable. In my case I did a:
vi ./bin/dummy
chmod 0755 ./bin/dummy
I made mine with the contents from this gist.
The first thing to note is that the majority of this file is to actually just setup some logging for looking at the CNI parameters, and all the magic happens in the last 3-4 lines.
Mainly, we want to output three environment variables using these three lines. These are environment variables that are sent to us from CNI, and that a CNI plugin can use to figure out the netns, the container id, and the CNI command.
Importantly – since we have this DEBUG variable turned on, we’re outputting via stderr… if there’s any stderr output during a CNI plugin run, it’s considered a failure, since writing to stderr is what you’re supposed to do when you error out.
And last but not least, the bottom line outputs a CNI result, by calling a function that prints a (sorta kinda realistic) CNI result.
You can turn that off, but we have it on for demonstrative purposes so you can easily see what those variables are.
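If you don’t want to click through, here’s a minimal sketch of the same idea – my own reconstruction, not the gist verbatim – which echoes a few of the CNI_* environment variables to stderr and prints a barely-plausible CNI result on stdout:

#!/bin/bash
# Minimal "dummy" CNI plugin sketch (a reconstruction, not the actual gist).
DEBUG=true

if [ "$DEBUG" = "true" ]; then
  # CNI hands the plugin its context via environment variables.
  echo "CNI method: $CNI_COMMAND" >&2
  echo "CNI container id: $CNI_CONTAINERID" >&2
  echo "CNI netns: $CNI_NETNS" >&2
fi

# A (sorta kinda realistic) CNI result goes to stdout.
cat <<EOF
{
  "cniVersion": "0.4.0",
  "interfaces": [
    {
      "name": "dummy"
    }
  ],
  "dns": {}
}
EOF

(And if you haven’t already, give it a netns to point at: sudo ip netns add dummyplayground.)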
So, let’s run it!
sudo NETCONFPATH=$(echo $NETCONFPATH) CNI_PATH=$(echo $CNI_PATH) $(which cnitool) add mydummy /var/run/netns/dummyplayground
And you can see output that looks like:
CNI method: ADD
CNI container id: cnitool-06764c511c35893f831e
CNI netns: /var/run/netns/dummyplayground
{
"cniVersion": "0.4.0",
"interfaces": [
{
"name": "dummy"
}
],
"dns": {}
}
Here we’ll see that there’s a lot of information that we as humans already know, since we’re executing cnitool ourselves, but it demonstrates how a CNI plugin interacts with this information. It’s telling us that it:
- Knows that we’re doing a CNI ADD operation.
- Knows we’re using a netns that’s called dummyplayground.
- Is outputting a CNI result.
These are the general basics of what a CNI plugin needs in order to operate. And then… from there, the sky’s the limit. A more realistic plugin might create an interface inside that netns, assign it an address, and return a real result describing what it did.
And to learn a bit more, you might think about looking at some of the reference CNI plugins, and see what they do to create interfaces inside these network namespaces.
But what if my CNI plugins interacts with Kubernetes!?
…And that’s for next time! You’ll need a Kubernetes environment of some sort.