Bootstrap a kpm registry to run a kpm registry

Yo dawg… I heard you like kpm-registries. So I bootstrapped a kpm-registry so you can deploy a kpm-registry from a kpm-registry.

So, I was deploying my kpm registry using a public (and beta) kpm registry, and this happened right about the time I was about to give a demo of spinning up Stackanetes, which needs a kpm registry… But the beta kpm registry (beta.kpm.sh) was down – argh, fiddlesticks! So I went and bootstrapped a kpm registry of my own so I could push the kpm-registry package and run it. In the meanwhile, I also opened a kpm issue.

Why the extra steps here? Like… If you can run a kpm registry without a kpm registry, why bother? The thing is, then I’m managing it myself (a single docker container plus a gunicorn web app) instead of having Kubernetes (k8s) manage it for me. And I want k8s to do the work. So I bootstrap a throwaway registry, and then I can deploy the real one as k8s pods.

This assumes that you already have kpm (the client) installed. If you don’t, go ahead and use my ansible galaxy role to install it, which will give you a clone of the kpm client in /usr/src/kpm/.
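
Something like this should do it (a sketch – I’m assuming the dougbtv.kpm-install galaxy role that the openshift-stackanetes playbooks pull in later in this article):

# install the galaxy role, then apply it to your host with a small playbook of your own
$ ansible-galaxy install dougbtv.kpm-install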

Also make sure you have gunicorn (the “green unicorn”, a Python web server gateway interface) installed.

$ sudo yum install -y python-gunicorn

The registry requires etcd to be present, so get etcd up first.

$ docker run --name tempetcd -dt -p 2379:2379 -p 2380:2380 quay.io/coreos/etcd:v3.0.6 /usr/local/bin/etcd -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001

Now you can run the registry API server with gunicorn, a la:

$ pwd
/usr/src
$ gunicorn kpm.api.wsgi:app -b :5555

And then you can push the kpm-registry package itself, but only after you set the proper tag in the manifest, because there isn’t a pushed image for the tag it ships with.

$ pwd
/usr/src/kpm/deploy/kpm-registry
$ sed -i 's/v0.21.2/v0.21.1/' manifest.jsonnet 
$ kpm push -H http://localhost:5555 -f
package: coreos/kpm-registry (0.21.2-4) pushed

Can we deploy kpm-registry now? Not quite… We also have to push the coreos/etcd package to our bootstrapping registry. I found the manifest for it in the kubespray/kpm-packages repo.

$ cd /usr/src/
$ git clone https://github.com/kubespray/kpm-packages.git
$ cd kpm-packages/
$ cd coreos/etcdv3
$ pwd
/usr/src/kpm-packages/coreos/etcdv3
$ kpm push -H http://localhost:5555 -f
$ kpm list -H http://localhost:5555
app                  version    downloads
-------------------  ---------  -----------
coreos/etcd          3.0.6-1    -
coreos/kpm-registry  0.21.2-4   -

Now you should be able to deploy a kpm registry from the bootstrapping registry via:

$ kpm deploy coreos/kpm-registry --namespace kpm -H http://localhost:5555
create coreos/kpm-registry 

 01 - coreos/etcd:
 --> kpm (namespace): created
 --> etcd-kpm-1 (deployment): created
 --> etcd-kpm-2 (deployment): created
 --> etcd-kpm-3 (deployment): created
 --> etcd-kpm-1 (service): created
 --> etcd-kpm-2 (service): created
 --> etcd-kpm-3 (service): created
 --> etcd (service): created

 02 - coreos/kpm-registry:
 --> kpm (namespace): ok
 --> kpm-registry (deployment): created
 --> kpm-registry (service): created

Voila! Now you can tear down the bootstrapping registry if you’d like, e.g. stop the docker container and the API server as run by gunicorn.
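
For what it’s worth, my teardown amounts to something like this (adjust to however you started gunicorn):

# remove the throwaway etcd container from earlier
$ docker rm -f tempetcd
# stop the gunicorn API server (Ctrl+C in its terminal also works; pkill is blunt)
$ pkill -f gunicorn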

Running Stackanetes on Openshift

Stackanetes is an open-source project that aims to run OpenStack on top of Kubernetes. Today we’re going to use a project I created, openshift-stackanetes, which uses Ansible plays to set up Stackanetes on OpenShift. In this article we’ll use an all-in-one server approach to setting up OpenShift to keep that part simple; playbooks that launch Stackanetes on a cluster and address HA requirements will come in the future.

If you’re itching to get into the walk-through, head yourself down to the requirements section and you can get hopping. Otherwise, we’ll start out with an intro and an overview of what’s involved in getting the components together so that all the good stuff down in that section works in concert.

Stackanetes was demonstrated as a technical preview at this year’s OpenStack Summit, announced on October 26th, 2016. Up until this point, I don’t believe it has been documented running on OpenShift. I wouldn’t have been able to document this myself if it weren’t for the rather gracious assistance of the crew from the CoreOS and Stackanetes projects as they helped me through this issue on GitHub. Big thanks go to ss7pro, PAStheLod, ant31, and Quentin-M. Really appreciated the help, crew, big time!

On terminology – while the Tech Crunch article considers the name Stackanetes unfortunate, I disagree – I like the name. It kind of rolls off the tongue. Also, if you say it fast enough, someone might say “Gesundheit!” afterwards. Also, theoretically, using the construct of i18n (internationalization) or, better yet, k8s (Kubernetes), you could say this is s9s (Stackanetes), which I use in my commit messages and whatnot because… “stackanetes” is a bit of typing! You might see s9s here and again in this article, too. Also, you might hear me say “OpenShift” a huge number of times – I really mean “OpenShift Origin” whenever I say it.


Scope of this walk-through

First things first – openshift-stackanetes is the project we’ll use to spin up Stackanetes on OpenShift; it’s a series of Ansible roles that get Stackanetes running on OpenShift.

Primarily we’ll focus on using an all-in-one OpenShift instance – that is, one that uses the oc cluster up command to run OpenShift on a single host, as outlined in the local cluster management documentation. My “openshift on openstack in easy mode” article goes into some of those details as well. However, in this case the playbooks will take care of that setup for you.
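
If you’re curious what that step looks like by hand, it boils down to roughly this (a sketch – the playbook may pass extra flags):

# approximately what the openshift-up role does for you
$ oc cluster up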

Things we do cover:

  • Getting OpenShift up (all-in-one style, or what I like to call “easy mode”)
  • Spinning up a KPM registry
  • Setting up proper permissions for Stackanetes to run under OpenShift
  • Getting Stackanetes running in OpenShift

Things we don’t cover:

  • High availability (hopefully we’ll look at this in a further article)
  • Tenant / external networking – for now, we’ll just run OpenStack cloud instances in their own isolated network. (This is kind of a project on its own.)
  • In depth usage of OpenStack – we’ll just do enough to get some cloud instances up
  • Spinning up Ceph
  • A sane way of exposing DNS externally (we’ll just use a hosts file for our client machines outside of the s9s box)
  • Details of how to navigate OpenShift – surf this blog for some basics if you need them.
  • Changing out the container runtime (e.g. using rkt, we just use Docker this time around)
  • Ansible installation and basic usage – we will, however, give you all the ansible commands needed to run this playbook.

Considerations of using Stackanetes on OpenShift

One of the primary considerations I had to overcome in getting Stackanetes running on OpenShift was managing the SCCs (security context constraints).

I’m not positive that the SCCs I have defined herein are ideal – I can point out a few ways in which they’re insufficient. However, my initial focus has been to get Stackanetes to run properly.


Components of openshift-stackanetes

So, first off, there are a lot of components to Stackanetes, especially the veritable cornucopia of pieces that comprise OpenStack. If you’re interested in those, you might want to check out the Wikipedia article on OpenStack, which has a fairly comprehensive list.

One very interesting part of Stackanetes is that it leverages the KPM registry.

KPM is described as “a tool to deploy and manage application stacks on Kubernetes”. I like to think of it as “k8s package manager”, and while never exactly branded that way, that makes sense to me. In my own words – it’s a way to take the definition YAML files you’d use to build k8s resources and parameterize them, and then store them in a registry so that you can access them later. In a word: brilliant.

Something I did in the process of creating openshift-stackanetes was to create an Ansible Galaxy role, KPM on CentOS, to get a contemporary revision of the kpm client running on CentOS; it’s included in the openshift-stackanetes ansible project as a requirement.

Another really great component of s9s is that they’ve gone ahead and integrated Traefik – a fairly amazing “modern reverse proxy” (Traefik’s words). It doles out the HTTP requests to the proper services.

Let’s give a quick sweeping overview of the roles as run by the openshift-stackanetes playbooks:

  • docker-install installs the latest Docker from the official Docker RPM repos for CentOS.
  • dougbtv.kpm-install installs the KPM client to the OpenShift host machine.
  • openshift-install preps the machine with the deps to get OpenShift up and running.
  • openshift-up generally runs the oc cluster up command.
  • kpm-registry creates a namespace for the KPM registry and spins up the pods for it.
  • openshift-aio-dns-hack is my “all-in-one” OpenShift DNS hack.
  • stackanetes-configure preps the pieces to go into the kpm registry for stackanetes and spins up the pods in their own namespace.
  • stackanetes-routing creates routes in OpenShift for the stackanetes services that we need to expose.

Requirements

  • A machine with CentOS 7.3 installed
  • 50 gig HDD minimum (64+ gigs recommended)
  • 12 gigs of RAM minimum
  • 4 cores recommended
  • Networking pre-configured to your liking
  • SSH keys to root on this machine from a client machine
  • A client machine with git and ansible installed.

You can use a virtual machine or bare metal; it’s your choice. I do highly recommend doubling all of the above requirements, though, and using a bare metal machine, as your experience will be much improved.

If you use a virtual machine, you’ll need to make sure that you have nested virtualization passthrough. I was able to make this work, and while I won’t go into super detail here, the gist of what I did was to check whether there were virtual machine extensions on the host, and also on the guest. You’ll note I was using an AMD machine.

# To check if you have virtual machine extensions (on host and guest)
$ cat /proc/cpuinfo | grep -Pi "(vmx|svm)"

# Then check that you have nesting enabled
$ cat /sys/module/kvm_amd/parameters/nested
1

And then I needed to use the host-passthrough CPU mode to get it to work.

$ virsh dumpxml stackanetes | grep -i pass
  <cpu mode='host-passthrough'/>
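
If you need to set that yourself (assuming libvirt and a domain named “stackanetes” like mine), one way is:

# edit the domain definition...
$ virsh edit stackanetes
# ...and set the CPU element to:
#   <cpu mode='host-passthrough'/>
# then shut the guest down and start it again so the change takes effect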

All that said, I still recommend the bare metal machine, and my notes were double checked against bare metal… I think your experience will be improved, but I realize that isn’t always a convenient option.


Let’s run some playbooks!

So, we’re assuming that you’ve got your CentOS 7.3 machine up, you know its IP address, and you have SSH keys to the root user. (Don’t like the root user? I don’t really either – feel free to contribute updates to the playbooks to properly use become!)

git clone and basic ansible setup

First things first, make sure you have ansible installed on your client machine, and then we’ll clone the repo.

$ git clone https://github.com/dougbtv/openshift-stackanetes.git
$ cd openshift-stackanetes

Now that we have the repo cloned, let’s go ahead and modify the inventory file in its root directory. In theory, all you should need to do is change the IP address there to that of the CentOS OpenShift host machine.

It looks about like this:

$ cat inventory && echo
stackanetes ansible_ssh_host=192.168.1.100 ansible_ssh_user=root

[allinone]
stackanetes

Ansible variable setup

Now that you’ve got that good to go, you can modify some of the local variables – check out the vars/main.yml file to see the variables you can change.

There are two important variables you may need to change:

  • facter_ipaddress
  • minion_interface_name

First, the facter_ipaddress variable. This is important because its value determines how we’re going to find your IP address. By default it’s set to ipaddress. If you’re unsure what to put here, go ahead and install facter and check which value returns the IP address you’d like to use for external access to the machine.

[root@labstackanetes ~]# yum install -y epel-release
[root@labstackanetes ~]# yum install -y facter
[root@labstackanetes ~]# facter | grep -i ipaddress
ipaddress => 192.168.1.100
ipaddress_enp1s0f1 => 192.168.1.100
ipaddress_lo => 127.0.0.1

In this case, you’ll see that both ipaddress and ipaddress_enp1s0f1 look like valid choices – however, plain ipaddress isn’t reliable, so choose the one based on your NIC.

Next, minion_interface_name, which is also important because it’s the interface we’re going to tell Stackanetes to use for the networking of the pods it deploys. This should generally be the same interface the above IP address came from.
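
A quick sanity check that the NIC and the IP line up (enp1s0f1 is just the interface from my example):

$ ip -4 addr show enp1s0f1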

You can either edit the ./vars/main.yml file or you can pass them in as extra vars e.g. --extra-vars "facter_ipaddress=ipaddress_enp1s0f1 minion_interface_name=enp1s0f1"

Let’s run that playbook!

Now that you’re set up, you should be able to run the playbook…

The default way you’d run the playbook is with…

$ ansible-playbook -i inventory all-in-one.yml

Or if you’re specifying the --extra-vars, insert that before the yaml filename.
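
For example, using the variable values from above, that would look like:

$ ansible-playbook -i inventory --extra-vars "facter_ipaddress=ipaddress_enp1s0f1 minion_interface_name=enp1s0f1" all-in-one.yml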

If everything has gone well!

Hopefully it has! If everything has gone as planned, there should be some output at the end that will help you get going…

It should list:

  • The location of the openshift dashboard, e.g. https://yourip:8443
  • The location of the KPM registry (a cluster.local URL)
  • A series of lines representing a /etc/hosts file to put on your client machine.

You should be able to check out the OpenShift dashboard (cockpit) and take a little peek around to see what has happened.

Possible “gotchyas” and troubleshooting

First things first – you can log into the OpenShift host and issue:

oc projects
oc project openstack
oc get pods

And see if any pods are in error.

The most likely thing to have gone wrong is that etcd in the kpm package didn’t come up properly. This happens to me intermittently, and I haven’t debugged it, nor opened an issue with the KPM folks. (I’m unsure if it’s how they instantiate etcd or etcd itself; I do know, however, that spinning up an etcd cluster can be a precarious thing, so, it happens.)

In the case that this happens, go ahead and delete the kpm namespace and run the playbook again, e.g.

# Change away from the kpm project in case you're on it
oc project default
# Delete the project / namespace
oc delete project kpm
# List the projects to see if it's gone before you re-run
oc projects

Let’s access OpenStack!

Alright! You got this far – nice work, you’re fairly brave. I’ve been having good luck with it, but I still appreciate your bravado!

First up – did you create the /etc/hosts entries on your local machine? We’re not worrying about external DNS yet, so you’ll have to do that; the entries will look somewhat similar to this, but with the IP address of your OpenShift host:

192.168.1.100 identity.openstack.cluster
192.168.1.100 horizon.openstack.cluster
192.168.1.100 image.openstack.cluster
192.168.1.100 network.openstack.cluster
192.168.1.100 volume.openstack.cluster
192.168.1.100 compute.openstack.cluster
192.168.1.100 novnc.compute.openstack.cluster
192.168.1.100 search.openstack.cluster

So, you can access Horizon (the OpenStack dashboard) by pointing your browser at:

http://horizon.openstack.cluster

Great, now just log in with username “admin” and password “password”, aka SuperSecure(TM).

Surf around that until you’re satisfied that the GUI isn’t powerful enough and you now need to hit up the command line ;)


Using the openstack client

Go ahead and SSH into the OpenShift host machine; in root’s home directory you’ll find a stackanetesrc file. It’s based on the /usr/src/stackanetes/env_openstack.sh file that comes in the Stackanetes git clone.

So you can use it like so and get kickin’

[root@labstackanetes ~]# source ~/stackanetesrc 
[root@labstackanetes ~]# openstack hypervisor list
+----+----------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname        | Hypervisor Type | Host IP       | State |
+----+----------------------------+-----------------+---------------+-------+
|  1 | identity.openstack.cluster | QEMU            | 192.168.1.100 | down  |
|  2 | labstackanetes             | QEMU            | 192.168.1.100 | up    |
+----+----------------------------+-----------------+---------------+-------+

So how about a cloud instance!?!?!!!

Alright, now that we’ve sourced our run commands, we can go ahead and configure up our OpenStack so we can run some instances. There’s a handy file with a suite of demo commands to spin up some instances packaged in Stackanetes itself, and my demo here is based on the same. You can find that script @ /usr/src/stackanetes/demo_openstack.sh.

First up, we download the infamous CirrOS image & upload it to Glance.

$ source ~/stackanetesrc 
$ curl -o /tmp/cirros.qcow2 http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image create --disk-format qcow2  --container-format bare  --file /tmp/cirros.qcow2 cirros

Now let’s create our networks

# External Net
$ openstack network create ext-net --external --provider-physical-network physnet1 --provider-network-type flat
$ openstack subnet create ext-subnet --no-dhcp --allocation-pool start=172.17.0.25,end=172.17.0.50 --network=ext-net --subnet-range 172.17.0.0/24 --gateway 172.17.0.1

# Internal Net
$ openstack network create int
$ openstack subnet create int-subnet --allocation-pool start=30.0.0.2,end=30.0.0.254 --network int --subnet-range 30.0.0.0/24 --gateway 30.0.0.1 --dns-nameserver 8.8.8.8 --dns-nameserver 8.8.4.4
$ openstack router create demo-router
$ neutron router-interface-add demo-router $(openstack subnet show int-subnet -c id -f value)
$ neutron router-gateway-set demo-router ext-net

Alright, now let’s at least add a flavor.

$ openstack flavor create --public m1.tiny --ram 512 --disk 1 --vcpus 1

And some security group rules

$ openstack security group rule create default --protocol icmp
$ openstack security group rule create default --protocol tcp --dst-port 22

…Drum roll please. Here comes an instance!

openstack server create cirros1 \
  --image $(openstack image show cirros -c id -f value) \
  --flavor $(openstack flavor show m1.tiny -c id -f value) \
  --nic net-id=$(openstack network show int -c id -f value)

Check that it hasn’t errored out with a nova list, and then give it a floating IP.

# This should come packaged with a few new deprecation warnings.
$ openstack ip floating add $(openstack ip floating create ext-net -c floating_ip_address -f value) cirros1

Let’s do something with it!

So, you want to SSH into it? Well… Not yet. Go ahead and use Horizon to access the machine, console into it, and ping the gateway (30.0.0.1 in this example). There you go! You did something with it, and over the network, no less.
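
From the instance’s console that’s simply (30.0.0.1 being the int-subnet gateway we created above):

$ ping -c 3 30.0.0.1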

Currently, I haven’t got the provider network working, just a small isolated tenant network. So, we’re saving that for next time. We didn’t want to spoil all the fun for now, right!?

Diagnosing a failed Nova instance creation

So, the Nova instance didn’t spin up, huh? There are a few possible reasons for that. To figure out which, first do a:

nova list
nova show $failed_uuid

That will likely give you a whole lot of nothing – probably just a “No valid host was found” error, which is essentially nothing. So you’re going to want to look at the nova-compute logs. We can get those with the kubectl or oc commands.

# Make sure you're on the openstack project
oc projects
# Change to that project
oc project openstack
# List the pods to find the "nova-compute" pod
oc get pods
# Get the logs for that pod
oc logs nova-compute-3298216887-sriaa | tail -n 10

Or, in short:

$ oc logs $(oc get pods | grep compute | awk '{print $1}') | tail -n 50

Now you should be able to see something.

A few things that have happened to me intermittently:

  1. You’ve sized your cluster wrong, or you’re using a virtual container host and it doesn’t have nested virtualization. There might not be enough RAM or processors for the instance, even though we’re using a pretty darn micro instance here.

  2. Something busted with openvswitch

I’d get an error like:

ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)

So what I would do is delete the neutron-openvswitch pod; it’d automatically be deployed again, and usually that’d do the trick.
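
That boils down to something like this (a sketch – adjust the grep to match your pod names):

# from the openstack project
$ oc project openstack
$ oc delete pod $(oc get pods | grep openvswitch | awk '{print $1}')
# then watch the replacement pod come back up
$ oc get pods -w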

  3. One time I had a bad Glance image; I just deleted it and uploaded it to Glance again. I lost the notes for this error, but it was something along the lines of writing to a “.part” file erroring out.

Deploy a custom builder image on OpenShift

In the last article on creating custom s2i builder images we created the (intentionally ridiculous) pickle-http sample, and today we’re going to go ahead and deploy it under openshift. It’s the easy part, when it comes down to it. It’s rather fast, and cockpit (the web GUI) provides some nice clean information about the builds, including logs and links to webhooks to trigger builds.

Push custom builder image to registry

First I went ahead and pushed my local image to a public repository in this case (you can push it to your local registry if you want, or feel free to use the public image named bowline/pickle-http). I tagged the image and pushed it – oh yeah, and I logged into Docker Hub first (not shown).

[openshift@test-cluster-master-0 stackanetes]$ docker tag pickle-http bowline/pickle-http
[openshift@test-cluster-master-0 stackanetes]$ docker push bowline/pickle-http

Create a new project and deploy new app!

Next I created a play project to work under in OpenShift. I also added the admin role to the admin user for the project, so that I can see it in cockpit.

[openshift@test-cluster-master-0 stackanetes]$ oc new-project pickler
[openshift@test-cluster-master-0 stackanetes]$ oc policy add-role-to-user admin admin -n pickler

Then we create a new app using our custom builder image. This is… as easy as it gets.

[openshift@test-cluster-master-0 stackanetes]$ oc new-app bowline/pickle-http~https://github.com/octocat/Spoon-Knife.git

Basically it’s just in the format oc new-app ${your_custom_builder_image_name}~${your_git_url}.

Inspect the app’s status and configure a readiness probe

It should be up at this point (after a short wait to pull the image). Great! It’s fast – really fast. Granted, we have the simplest use case – “just clone the code into my container” – so in this particular case, if you don’t have the image pulled yet, that’s going to be the longest wait.

Let’s look at its status.

[openshift@test-cluster-master-0 stackanetes]$ oc status
In project pickler on server https://192.168.17.4:8443

svc/spoon-knife - 172.30.236.145:8080
  dc/spoon-knife deploys istag/spoon-knife:latest <-
    bc/spoon-knife source builds https://github.com/octocat/Spoon-Knife.git on istag/pickle-http:latest 
    deployment #1 deployed 6 minutes ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

We have a warning, and it’s because we don’t have a “readiness probe”. A “probe” is a k8s mechanism that periodically runs a diagnostic action against a container. Let’s go ahead and add ours to be complete.

Pick up some help on the topic with:

[openshift@test-cluster-master-0 stackanetes]$ oc help set probe
oc set probe dc/spoon-knife --readiness --get-url=http://:8080/

In this case we’ll just look at the index on port 8080. You can run oc status again and see that we’re clear.

Look at the details of the build on cockpit

Now that we have a custom build going for us, there’s a lot more on the UI that’s going to be interesting to us. Firstly navigate to Builds -> Builds. From there choose “spoon-knife”.

There’s a few things here that are notable:

  • Summary -> Logs: check out what happened in the s2i custom building process (in this case, just a git clone)
  • Configuration: has links to triggers to automatically kick off a new build (e.g. via a git webhook), and details on the git source repository.

That’s that – now you can both create your own custom builder image and go forward with deploying pods crafted from just source (no Dockerfile!) on OpenShift.

Using OpenShift's s2i custom builder

Let’s use OpenShift’s s2i custom building functionality to make a custom builder image. Wait, what’s s2i? It’s “source-to-image”. The gist here is that you plug a git URL into OpenShift’s dashboard, and it combines your source with a builder image to produce a runnable image. There are already “builder images” pre-loaded into OpenShift, and while those are handy… if you’re doing anything more than a bog-standard web app, you’re going to need a little more horsepower to put together a custom image. That’s why we’re going to look at the work-flow to create a custom builder image using s2i.

This walk-through assumes that you have an openshift instance up and running. I have a couple tutorials available on this blog, or you can just run an all-in-one server.

A little background: I’m exploring a few different build pipelines for Docker images, in a couple of different cases (one of which being CIRA). Naturally my own Bowline comes to mind, and I think it still fills a particular need for build visibility / build logs, and also for publishing images. However, I’d like to explore the options for doing it all within OpenShift.

Our goal here is to make an image for a custom HTTP server that shows a raster graphic depicting a pickle, and then make a “custom application” – a git repo that gets cloned into it – so we can serve up more pickle images. So our custom Dockerfile has an index.html and a single pickle graphic, and when a custom build is triggered we clone in some more pickles that can be viewed. (Why all the pickles? Mostly just because it’s custom, really, and at least mildly more entertaining than just saying “hello world!”) For now we’re just going to build the image and run it manually. In another installment we’ll feed this into OpenShift Origin proper, use the builder image there, and deploy a pod.

I have an openshift cluster up, and I’m going to ssh into my master and perform these operations there.

Installing s2i

So you ssh’d to your master, and you tried running s2i for fun – “command not found” – so you do which s2i and it’s not there. Sigh. It’s a stand-alone tool, so you’ll have to install it.

Go ahead and browse to the latest release, then let’s download the tarball, extract it, and move the binaries into place.

[openshift@test-cluster-master-0 ~]$ curl -L -O https://github.com/openshift/source-to-image/releases/download/v1.1.3/source-to-image-v1.1.3-ddb10f1-linux-386.tar.gz
[openshift@test-cluster-master-0 ~]$ tar -xzvf source-to-image-v1.1.3-ddb10f1-linux-386.tar.gz
[openshift@test-cluster-master-0 ~]$ sudo mv {s2i,sti} /usr/bin/
[openshift@test-cluster-master-0 ~]$ s2i version

Setup the Dockerfile

Run s2i to create a Dockerfile and s2i templates for you. You’ll note that the first argument after create, “pickle-http”, is the name of the image, and the second and last argument is the name of the directory it creates.

[openshift@test-cluster-master-0 ~]$ s2i create pickle-http s2i-pickle-http
[openshift@test-cluster-master-0 ~]$ cd s2i-pickle-http/
[openshift@test-cluster-master-0 s2i-pickle-http]$ ls -l
total 8
-rw-------. 1 openshift openshift 1257 Dec  9 09:53 Dockerfile
-rw-------. 1 openshift openshift  175 Dec  9 09:53 Makefile
drwx------. 3 openshift openshift   48 Dec  9 09:53 test
[openshift@test-cluster-master-0 s2i-pickle-http]$ find

You’ll note in the find that the s2i command has bootstrapped a bunch of assets for us.
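
For reference, the scaffold looks roughly like this (a sketch based on the files we touch below, not verbatim find output):

./Dockerfile
./Makefile
./.s2i/bin/assemble
./.s2i/bin/run
./.s2i/bin/save-artifacts
./.s2i/bin/usage
./test/test-app/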

Important…. Now let’s download our pickle photograph. (Feel free to download your own pickle.)

[openshift@test-cluster-master-0 s2i-pickle-http]$ curl -o ./pickle.jpg http://i.imgur.com/m8R5SJX.jpg

And let’s create an index.html file

[openshift@test-cluster-master-0 ~]$ cat << EOF > index.html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1" />
    <title>Pickle Raster Graphic</title>
  </head>
  <body>
    <p>
      This is a <a href="/images/">collection of pickles</a>.
    </p>
    <p>
      <img src="/pickle.jpg" alt="that's a pickle." />
    </p>
  </body>
</html>
EOF

Let’s go ahead and edit the Dockerfile. Here’s what my Dockerfile looks like now.

[openshift@test-cluster-master-0 s2i-pickle-http]$ cat Dockerfile 
# pickle-http
FROM centos:centos7
MAINTAINER @dougbtv
ENV BUILDER_VERSION 1.0
LABEL io.k8s.description="It shows a freakin' pickle, dude." \
      io.k8s.display-name="pickle 0.2.4" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="pickle,preservation,demo,food,cucumber"

# Install apache and add our content
RUN yum install -y httpd && yum clean all -y
ADD index.html /var/www/html/
ADD pickle.jpg /var/www/html/
RUN mkdir -p /var/www/html/images/

# Configure apache to use port 8080 (this simplifies some OSP stuff for us)
RUN sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf

# TODO (optional): Copy the builder files into /opt/app-root
# COPY ./<builder_folder>/ /opt/app-root/

# Add the s2i scripts.
LABEL io.openshift.s2i.scripts-url=image:///usr/libexec/s2i
COPY ./.s2i/bin/ /usr/libexec/s2i

# Setup privileges for both s2i code insertion, and openshift arbitrary user
RUN mkdir -p /opt/app-root/src
ENV APP_DIRS /opt/app-root /var/www/ /run/httpd/ /etc/httpd/logs/ /var/log/httpd/
RUN chown -R 1001:1001 $APP_DIRS
RUN chgrp -R 0 $APP_DIRS
RUN chmod -R g+rwx $APP_DIRS

WORKDIR /opt/app-root/src

# This default user is created in the openshift/base-centos7 image
USER 1001

EXPOSE 8080

CMD /usr/sbin/httpd -D FOREGROUND

Modifying the s2i scripts

Great, let’s look at the ./.s2i/bin/assemble file, go ahead and cat that if you’d like. This is responsible for building the application.

I just added a single line to mine, to copy the cloned git repo into the /var/www document root for apache.

[openshift@test-cluster-master-0 s2i-pickle-http]$ tail -n 2 ./.s2i/bin/assemble
# TODO: Add build steps for your application, eg npm install, bundle install
cp -R /tmp/src/* /var/www/html/images

Now, time to modify the s2i run script. In this case we’ll basically just be running apache in the foreground. Here’s what mine looks like now.

[openshift@test-cluster-master-0 s2i-pickle-http]$ cat ./.s2i/bin/run
#!/bin/bash -e
#
# S2I run script for the 'pickle-http' image.
# The run script executes the server that runs your application.
#
# For more information see the documentation:
# https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
#

exec /usr/sbin/httpd -D FOREGROUND

There’s also a construct called “incremental builds” that we’re not using, so we’re going to remove that script.

[openshift@test-cluster-master-0 s2i-pickle-http]$ rm ./.s2i/bin/save-artifacts

There’s also a usage script that you can decorate to make for better usage instructions. We’re going to leave ours alone for now, but, you should update yours! Here’s where it is.

[openshift@test-cluster-master-0 s2i-pickle-http]$ cat ./.s2i/bin/usage

Build all the things!

First off, go ahead and build your pickle-http s2i image.

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo docker build -t pickle-http .

Let’s make a little placeholder, and put something that’s not exactly a pickle image into the test/test-app dir.

[openshift@test-cluster-master-0 s2i-pickle-http]$ echo "not a pickle image" > test/test-app/pickle.txt

Now we can run s2i to import code into this image.

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo s2i build test/test-app/ pickle-http sample-pickle

In all likelihood you’ll be cloning a git repo into this bad boy; here we’ll use the sample “Spoon-Knife” repo (GitHub’s example for learning to fork) and it’ll look more like…

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo s2i build https://github.com/octocat/Spoon-Knife.git pickle-http sample-pickle

Go ahead and finish this up using the git method there.

Run the image to test it

Alright, so now we have a few images. If the above has been going well for you, you should have a set of docker images that looks something like:

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
sample-pickle       latest              ffaa100d8fa2        About a minute ago   246.6 MB
pickle-http         latest              8e03db77f8e6        7 minutes ago        246.6 MB
docker.io/centos    centos7             0584b3d2cf6d        5 weeks ago          196.5 MB

Let’s go ahead and give that a run…

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo docker run -u 1234 -p 8080:8080 -d sample-pickle 

Wait… wait… Why are you using the -u parameter to run as user #1234? That my friend is to test to make sure this will actually run on OpenShift. Since OpenShift is going to pick an arbitrary user to run this image as, we’re going to test it here with a faked out user id. I’ve accounted for this in the Dockerfile above.

If all is working well, you should see it in your docker ps and you can view its logs:

[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2f650727ef43        sample-pickle       "/usr/libexec/s2i/run"   15 seconds ago      Up 14 seconds       0.0.0.0:8080->8080/tcp   distracted_bassi
[openshift@test-cluster-master-0 s2i-pickle-http]$ sudo docker logs distracted_bassi
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.131.0.2. Set the 'ServerName' directive globally to suppress this message

Now let’s go ahead and see that it is actually serving our content. This command should show the index HTML that we baked into the base image.

[openshift@test-cluster-master-0 s2i-pickle-http]$ curl localhost:8080

We have dynamic content in the /images directory in the document root, so let’s look at what’s there.

[openshift@test-cluster-master-0 s2i-pickle-http]$ curl -s localhost:8080/images/ | grep -i fork.me
  Fork me? Fork you, @octocat!

You can see that it’s the content from the git clone running in the container created from the sample-pickle docker image.

In another article, we’ll go into how to add this builder image to your running openshift cluster so that you can deploy pods/containers using it.

Editorial

The way this works is rather rapid when it comes to inserting the code – granted, we’re just adding some content to a simple flat HTTP server. I think this might be just the ticket if you’re deploying (say) 100 microservices all in the same programming language. Or 100 microservices across 5 programming languages. This could be very convenient.

I’ll probably have more to say when it comes to deploying the builder image, and I think this is fairly handy. Where I still like Bowline is that it’s rather visual and gives good visibility into a build process. It also has solid logging to show you what’s happening in your builds, and has a lot of opportunities for extensibility. They’re really… two different kinds of tools.

Hello Ansible CIRA!

Today we’re going to look at CIRA. CIRA is a tool to deploy a CI reference architecture to test OpenStack. I’m going to go with the Docker deployment option, as that’s the environment that I tend towards. We’ll get it up and running here.


Requirements

In short we’ll need these things:

  • Ansible
  • Docker
  • Docker-compose
  • Python shade module
  • A clone of ansible cira
  • The ansible-galaxy roles provided.

I’ve already got docker on my host, so let’s just go ahead and install docker-compose, which is a requirement.

[root@undercloud stack]# curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@undercloud stack]# chmod 0755 /usr/local/bin/docker-compose
[root@undercloud stack]# docker-compose --version

We also need to install shade; in my case I already have it installed, as it’s a requirement for openshift-ansible when you’re using the OpenStack method of deploying that. Speaking of which, you also need Ansible, which I already had for the same reason.

[stack@undercloud ~]$ pip install --user shade

Now we’ll clone the repository

[stack@undercloud ~]$ git clone https://github.com/redhat-nfvpe/ansible-cira.git
[stack@undercloud ~]$ cd ansible-cira

And finally, for the requirements, make sure you’ve got the ansible galaxy roles

[stack@undercloud ansible-cira]$ ansible-galaxy install -r requirements.yml

CIRA setup

We’re going to:

  • Create the clouds.yml user config file
  • Initialize the custom ansible vars

We’re going to need a clouds.yml file.

[stack@undercloud ~]$ mkdir ~/.config/openstack
[stack@undercloud ~]$ touch ~/.config/openstack/clouds.yml

Then let’s reference our overcloudrc to get the things we need in here.

[stack@undercloud ~]$ cat overcloudrc 

And then I’ll set up my clouds.yml file. Here’s what mine winds up looking like…

[stack@undercloud ~]$ cat ~/.config/openstack/clouds.yml
clouds:
    mycloud:
        auth:
            auth_url: http://192.168.1.150:5000/v2.0
            username: admin
            password: fDZmuDw6U2pR29TYvTyfpytsM
            project_name: "Doug's OpenShift-on-Openstack"

We need to init some ansible vars, so make sure you’re ready for this requirement by making a blank cira_vars.yml file.

[stack@undercloud ~]$ mkdir -p ~/.ansible/vars/
[stack@undercloud ~]$ touch ~/.ansible/vars/cira_vars.yml

I also preloaded my vars a little bit…

[stack@undercloud ansible-cira]$ cat ~/.ansible/vars/cira_vars.yml 
---
cloud_name_prefix: redhat                  # virtual machine name prefix
cloud_name: mycloud                        # same as specified in clouds.yml

We’re also going to add a Jenkins slave, as I think it’s required. Err – the first time I ran it without one, I got fatal: [jenkins_master]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'jenkins_slave'"}. For what it’s worth.

We’re going to use the undercloud itself as a slave. Unwise? Maybe. First, copy over an SSH key so it can be reached:

[stack@undercloud ansible-cira]$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@127.0.0.1

I went ahead and altered the hosts/containers to add a slave.

[stack@undercloud ansible-cira]$ cat hosts/containers 
# This inventry file is for container (docker case)
# these names map to container name
jenkins_master
logstash
elasticsearch
kibana

[jenkins_slave]
slave01 ansible_connection=ssh ansible_host=127.0.0.1 ansible_user=stack

[jenkins_slave:vars]
slave_description=CIRA Testing Node
slave_remoteFS=/home/stack
slave_port=22
slave_credentialsId=stack-credential
slave_label=cira

Start it up

Go ahead and run docker-compose to bring the composition up in daemon mode.

[stack@undercloud ansible-cira]$ docker-compose up -d
[stack@undercloud ansible-cira]$ docker ps
CONTAINER ID        IMAGE                        COMMAND             CREATED             STATUS              PORTS               NAMES
f45e3b3e5f1d        ansiblecira_logstash         "/sbin/init"        13 seconds ago      Up 9 seconds                            logstash
3e076480315c        ansiblecira_kibana           "/sbin/init"        13 seconds ago      Up 10 seconds                           kibana
257ec8b685e4        ansiblecira_elasticsearch    "/sbin/init"        13 seconds ago      Up 10 seconds                           elasticsearch
cc1dc506908b        ansiblecira_jenkins_master   "/sbin/init"        13 seconds ago      Up 10 seconds                           jenkins_master

Now we’ll fire off the playbook.

ansible-playbook site.yml -i hosts/containers -e use_openstack_deploy=false -e deploy_type='docker' -c docker

Alright, now you should be looking good. At the bottom of the output you’ll see some info about where the Jenkins & Kibana UIs are located; I pasted my snips below:

TASK [Where is Kibana located?] ************************************************
ok: [kibana] => {
    "msg": "Kibana can be reached at http://172.20.0.4:5601/"
}

[... snip ...]

TASK [Where is Jenkins Master located?] ****************************************
ok: [jenkins_master] => {
    "msg": "Jenkins Master can be reached at http://172.20.0.2:8080/"
}

Let’s connect to the web UIs

If you’re like me, this is running on a remote machine and talking over a new docker bridge that you don’t have access to over the network, so you’ll have to tunnel in to reach the UIs.

You’ll find the IPs to use in the output, and I tunnel like so:

[doug@localhost laboratoryb]$ ssh -L 5601:172.20.0.4:5601 stack@192.168.1.201

And point my browser on my local machine @ http://localhost:5601

…Although I don’t have anything logged to ES, so it’s complaining that there’s nothing to find, but, I can get there!

And I can get to Jenkins similarly

[doug@localhost laboratoryb]$ ssh -L 8080:172.20.0.2:8080 stack@192.168.1.201

And that, my friends, is CIRA up and running! Another time we’ll look at how to load it with jobs, and how to create jobs that fit the need for testing an OpenStack reference architecture.