Asterisk Autobuilder for Docker

I've gone ahead and expanded upon my Asterisk Docker image, to make a system that automatically builds a new image for it when it finds a new tarball available.

Here are the key features I was looking for:

  • Build the Asterisk docker image and make it available shortly after a release
  • Monitor the progress of the build process
  • Update the Asterisk-Docker git repo

To address the second bullet point, I made a REPL interface that's accessible via IRC -- and like any well-behaved IRC netizen, it posts logs to a pastebin.

Speaking of which! In the process, I made an NPM module for pasteall. If you don't know pasteall.org -- it's the best pastebin you'll ever use.

You can visit the bot in ##asterisk-autobuilder on freenode.net.

As for the last bullet point, when it finds a new tarball it dutifully updates the asterisk-docker github repo, and makes a pull request. Check out the first successful one here. You'll note that it keeps a link to the pasteall.org logs, so you can see the results of the build -- in all their gory detail, every step of the docker build.

I have bigger plans for this, but some of the shorter-term ones are:

  • Allow multiple branches / multiple builds of Asterisk (Hopefully before Asterisk 13!!)

Docker and Asterisk

Let's get straight to the goods, then we'll examine my methodology.

You can clone or fork my docker-asterisk project on GitHub. And/or you can pull the image from dockerhub.

Which is as simple as running:

docker pull dougbtv/asterisk

Let's inspect the important files in the clone:

.
|-- Dockerfile
|-- extensions.conf
|-- iax.conf
|-- modules.conf
`-- tools
    |-- asterisk-cli.sh
    |-- clean.sh
    `-- run.sh

In the root dir:

  • Dockerfile builds the dockerhub image dougbtv/asterisk
  • extensions.conf a very simple dialplan
  • iax.conf a sample iax.conf which sets up an IAX2 client (for testing, really)
  • modules.conf currently unused, but an example for overriding the modules.conf from the sample files.

In the tools/ dir are some utilities I find myself using over and over:

  • asterisk-cli.sh runs the nsenter command (note: image name must contain "asterisk" for it to detect it, easy enough to modify to fit your needs)
  • clean.sh kills all containers, and removes them.
  • run.sh a suggested way to run the Docker container.

That's about it, for now!


There are a couple of key steps to getting Asterisk and Docker playing together nicely, and I have a few requirements:

  • I need to access the Asterisk CLI
  • I also need to allow wide ranges of UDP ports.

On the first bullet point, we get around this by using nsenter, which requires root or sudo privileges, but will let you connect to the CLI, which is what I'm after. I was inspired to use this solution by this article on the docker blog, and I got my method of running it from coderwall.
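For reference, connecting to the CLI that way looks roughly like this (a sketch -- the container name is just an example, and tools/asterisk-cli.sh wraps up something similar):

# Grab the PID of the running container, then enter its namespaces
# and attach to the Asterisk console.
PID=$(docker inspect --format '{{.State.Pid}}' asterisk)
sudo nsenter --target $PID --mount --uts --ipc --net --pid -- asterisk -rvvv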

On the second point... At first it seemed Docker doesn't like UDP (which is what the VoIP world runs on), at least in my tests trying to get some IAX2 VoIP over it, on port 4569. (It's an easier test to mock up than SIP!) (Correction: Actually, Docker is fine with UDP, you just have to let it know when you run a docker container, e.g. docker run -p 4569:4569/udp -t user/tag)

So, I settled on opening up the network to Docker using the --net host parameter on a docker run.
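That winds up looking something like this (a sketch, not verbatim from tools/run.sh, and the container name is just illustrative):

# Host networking: the container shares the host's network stack,
# so Asterisk can use whatever UDP ports it needs.
docker run -d --name asterisk --net host -t dougbtv/asterisk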

At first, I tried out bridged networking. And maybe not all is lost. Here are the basics on bridging here @ redhat that I followed. Make sure you have the bridge-utils package: yum install -y bridge-utils. But, I didn't get much mileage with it. Somehow I set it up, and it borked my docker containers from even getting on the net. I should maybe read the Docker advanced networking docs in more detail. Aaaargh.
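If you want to give the bridged route a shot anyway, the RHEL-style config amounts to something like this sketch (device names and addresses are just examples, and you point the Docker daemon at the bridge afterwards):

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

# ...then tell Docker to use the bridge, e.g. OPTIONS="-b br0" in /etc/sysconfig/docker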

Some things I have yet to do are:

  • Set up a secondary container for running FastAGI with xinetd.

I'm thinking I'll run xinetd in its own container and connect the asterisk image with the xinetd image for running FastAGI.
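Just to sketch that out, a FastAGI service under xinetd would look something along these lines (the service name, user, and script path are all assumptions; 4573 is the conventional FastAGI port):

# /etc/xinetd.d/fastagi
service fastagi
{
    disable     = no
    socket_type = stream
    protocol    = tcp
    port        = 4573
    type        = UNLISTED
    wait        = no
    user        = asterisk
    server      = /var/lib/asterisk/agi-bin/fastagi.agi
}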

Blog redux -- Markdown edition

I've redone my blog! It's all markdown now. Thank goodness. If for whatever reason, you find content on this blog that's out of place, or wrong... let me know! I'd appreciate it.

As a back-up, I'm keeping a copy of my previous blog @ blog.dougbtv.com -- which you can feel free to reference in the short meanwhile.

How I learned to stop worrying and love the firewall-cmd

With the advent of Centos 7, I had to face the fact that firewalld is a way of life. I guess it's probably part of the systemd controversy.

I tried to go back to vanilla iptables. But... I just felt dirty. I've been living with firewalld on my Fedora workstations for... a while now. But, I never wanted to manage it much. I basically just kept it locked down -- they're workstations anyways, and I was still using iptables on Centos 6. I tried to be lazy -- and run firewall-config over an X11-forwarded connection, but... that seemed to be proving harder than actually learning firewall-cmd.

So, I stopped worrying. I might as well use it.

Hell, for about 8 zillion years I've been having to google stuff like "cyberciti iptables drop" to remember what the hell to do with iptables anyways. I just needed a recipe every time. And, then I used firewall-cmd.

Really, I needed to read the Centos 7 page on using firewalld in detail, before I got it.

Once I figured out that I could define what the zones meant by doing an --add-source, it clicked for me. So, here's my cheatsheet of what I did to get my bearings, and I have to say, it's kind of a better world. (I'm still struggling with systemctl... I'm blinded by oldschool sysv-style init scripts.)

My first goal was really just to open things up for OpenVPN and then disable SSH, so I used two zones: "public" (for everything) and a specific source for the LAN, which I called "trusted". Here, I just play around so I could test it out and prove that it worked according to my assumptions and newly learned tid-bits about firewalld / firewall-cmd:

# Check out what it looks like...
firewall-cmd --get-active-zones
firewall-cmd --zone=public --list-all

# Try a port:
firewall-cmd --zone=public --add-port=5060-5061/udp
firewall-cmd --zone=public --list-ports

# Let's setup the trusted zone:
firewall-cmd --permanent --zone=trusted --add-source=192.168.100.0/24
firewall-cmd --permanent --zone=trusted --list-sources

# I needed to reload before I saw the changes:
firewall-cmd --reload
firewall-cmd --get-active-zones

# Now let's configure that up:
firewall-cmd --zone=trusted --add-port=80/tcp --permanent
firewall-cmd --zone=trusted --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=1194/udp --permanent
firewall-cmd --zone=public --add-port=1194/tcp --permanent

# Now list what you've got
firewall-cmd --zone=trusted --list-all
firewall-cmd --zone=public --list-all
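
# And since the original goal was to stop exposing SSH publicly, the last piece
# might be something like this -- careful not to lock yourself out before the
# trusted zone is in place!
firewall-cmd --permanent --zone=public --remove-service=ssh
firewall-cmd --reload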

Oh t3h noes! You just borked your Fedora 20 install!

So, I made a nice boo-boo with my Fedora 20 install on my laptop this morning. I accidentally rebooted before a yum update was finished. Annnd... I figured I was better off re-installing rather than trying to figure out how to recover -- doubly so since I lost network connectivity, making it really hard to get references.

If you're a MEAN stack developer on Fedora 20, you might install some of the same tools I do when I reinstall, so I took some notes for me, and... for you.

First things first, install chrome: http://www.if-not-true-then-false.com/2010/install-google-chrome-with-yum-on-fedora-red-hat-rhel/ & then I disable SELinux.
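For the SELinux bit, that boils down to something like:

# Permissive for the current session, disabled after a reboot
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config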

# Command line basic tools.
yum install nano terminator rsyslog

# install your MEAN stack developers stuff
yum install mongodb mongodb-server nodejs nginx npm git rubygem-compass

# install robomongo
# http://robomongo.org/
yum install glibc.i686 libstdc++.i686
rpm -ivh robomongo_version.rpm

# install your super sweet grunt & yeoman globally install npm packages
npm install -g grunt-cli yo
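
From there, scaffolding and serving an app goes something like this (the generator is just an example, pick your own):

# e.g. grab an angular generator, scaffold an app, and serve it up
npm install -g generator-angular
yo angular
grunt serve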