In our first post about our research architecture for automated Docker deployments, we explained a simple mechanism for automatically updating a Docker container based on a push to a code repo. That mechanism, frankly, was not very sophisticated. But we learnt a lot, and that's the main reason we're doing this: we iterate quickly to learn fast.
So for our next sprint, we set a few more goals:
- hack a second table
- update not just a single Docker container, but all containers running on all deployed tables
- create a local Docker registry and save the resulting Docker images to it
- create a flexible, extensible communication backbone
- think about and implement security
Once again, we sat together and drew a new architecture picture. This is what we came up with (ok, I created the diagram way later, but we scribbled this down in one way or another):
The big, fundamental change is that we have now placed a messaging system based on MQTT at the center of our architecture. Systems such as CircleCI or Docker Hub are connected to this central nervous system via “bridges”: we convert the webhooks they send out into MQTT messages. We've also added new components such as the “multiplexer”. The job of the multiplexer is to take a newly created Docker image, retag it and push it to our local registry. The deployment service has fundamentally changed, too. In v1 we simply ran a bash script or simple Go commands to pull the image and restart the container. Now we're speaking Docker Swarm: we need to create a “table service” and manage multiple nodes, each representing a table.
When we looked at security, it soon became clear that we had to look closer at Transport Layer Security (TLS), as it is the de-facto security standard of the web. It helps with authenticating clients and servers and encrypts all data sent between them. Once you can be sure about the server's and client's identity, you can easily go one step further and use those identities to build a custom authorisation system.
So we started to set up our own Certificate Authority (CA) and derived trusted certificates from it for each client (and for servers such as the local Docker registry). We will describe the setup of the CA, the derivation of the client/server certificates and the complete process as part of the secure MQTT series.
Setting up an MQTT broker with TLS-based mutual authentication and custom authorisation grants for each client
As my co-worker Max and I have both used the mosca MQTT broker before, we chose mosca to set up the broker for our research architecture. Mosca is written in Node.js, and we knew that we could easily control the authorizations (e.g. which client can subscribe/publish to which MQTT topic) via JavaScript callbacks. What we did NOT quite know was how hard it would be to set it up with TLS. It turned out that the documentation about it was mainly… wrong. So we had to go very deep into the code and figure it out the hard way. In the end, we managed to run mosca with TLS and used client certificates to authenticate the clients. The certificate serial number of each client certificate, manually chosen, is also the key for setting up the authorizations.
We'll dedicate a complete series of blog posts to our thoughts about TLS, creating the CA and certificates, setting up mosca in a secure way, implementing the authorisations, etc., but here are at least a few little teasers. First, the options to set up a secure mosca MQTT broker:
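The original options were embedded as a gist; the following is a sketch of a mutual-TLS mosca configuration along the lines we describe, with file paths as placeholders. The exact option names here are an assumption based on what we found in mosca's code rather than its docs, so treat them as a starting point:

```javascript
// sketch: mosca broker with TLS and client-certificate authentication
// (paths are placeholders; option names per mosca's source, not its docs)
var mosca = require('mosca');

var settings = {
  interfaces: [
    {
      type: 'mqtts',
      port: 8883,
      credentials: {
        keyPath: '/mosca/certs/server.key',
        certPath: '/mosca/certs/server.crt',
        caPaths: ['/mosca/certs/ca.crt'],
        requestCert: true,        // ask the client for its certificate
        rejectUnauthorized: true  // refuse clients our CA did not sign
      }
    }
  ]
};

var server = new mosca.Server(settings);
server.on('ready', function () {
  console.log('secure mosca broker is up');
});
```

The authorisation callbacks (authenticate, authorizePublish, authorizeSubscribe) hook into this same server object; we'll cover them in detail in the secure MQTT series.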
Another snippet of config that might be really interesting, and that I can share at this point, is the Dockerfile for creating the Docker image our MQTT broker runs in. We did not want to have the certificates stored in the container, for obvious reasons. We also thought it an excellent idea to keep the authorisation grants outside of the container. So here's our Dockerfile, which shows how we declared the two volumes: one for the certificates and one for the authorizations file:
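The Dockerfile itself was part of the original gist; a minimal sketch matching the description above could look like this (base image, paths and entry point are assumptions):

```dockerfile
# base image is an assumption; we run on arm32v6 Raspberry Pis
FROM arm32v6/node:8-alpine

WORKDIR /mosca
COPY package.json ./
RUN npm install
COPY . .

# keep secrets and grants outside the image: certificates and the
# authorizations file are mounted into these volumes at runtime
VOLUME ["/mosca/certs", "/mosca/auth"]

EXPOSE 8883
CMD ["node", "index.js"]
```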
Our authorisations are stored in a file which is pulled into the container from the Docker host system via a volume. This way we can change the authorisations, add clients, etc., and simply restart the container for the changes to come alive:
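The authorizations file itself was embedded as a gist; based on the description below (certificate serial numbers as keys, publish/subscribe grants as JSON arrays with MQTT wildcards), its shape might look like this sketch. The topic names are made up:

```json
{
  "A001": {
    "name": "shivago",
    "publish": ["tables/shivago/status"],
    "subscribe": ["tables/#", "dockerhub/+/push"]
  }
}
```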
In the above JSON snippet, A001 is the hex serial number of an MQTT client called “shivago”. Shivago is our test MQTT client, written in Go. The serial number is manually chosen when the certificate for the client is created, so we currently use a simple text document to keep track of the serial numbers. As you can also see, we store the publish and subscribe authorizations in JSON arrays; the topics may also contain the typical MQTT wildcards such as ‘#’ and ‘+’.
Creating the bridges: webhook to MQTT
To bridge systems such as CircleCI to our MQTT backbone, we created simple Go-based API servers. These are exposed via ngrok tunnels, as we run both bridges on a single Raspberry Pi (and of course they are containers, too).
To report the on_success or on_failure state from CircleCI to our MQTT broker, we use simple curl calls (at the time of writing, CircleCI v2 did not support built-in webhooks for this… really?).
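The curl calls were embedded as a gist; a sketch of what such a call can look like, run as CircleCI steps, follows. The bridge URL and the payload shape are placeholders (only CIRCLE_PROJECT_REPONAME is a real CircleCI environment variable):

```
# run as CircleCI steps with `when: on_success` / `when: on_failure`
curl -X POST "https://<bridge>.ngrok.io/circleci" \
  -H "Content-Type: application/json" \
  -d "{\"status\": \"success\", \"repo\": \"$CIRCLE_PROJECT_REPONAME\"}"
```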
For the bridges, we again chose the Gin web framework to implement the APIs. The following gist shows the almost complete program, written in Go; I've just removed the imports and some structs for the sake of brevity. It nicely shows how to set up an MQTT client in Go using the Paho MQTT client, and how to parse the incoming Docker Hub POST request into a struct before forwarding it to the MQTT broker.
Setting up a local Docker registry
As the hardware we have abundantly available is Raspberry Pis, we chose to set up the local Docker registry on one of them. This can easily be achieved using one of the public arm32v6 images available on Docker Hub. In our case, we chose budry/registry-arm and further modified the container by using an external volume for storing the images, and made it a bit more secure by adding TLS. We're not using any user authentication, though; for our local registry, that's enough for now.
This is the docker container run command that we use to start our registry:
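The actual command was embedded as a gist; a sketch of such a command follows. The host paths and certificate file names are assumptions, while the REGISTRY_HTTP_* variables are the registry image's standard configuration overrides:

```
docker run -d --restart=always --name registry \
  -p 443:443 \
  -v /home/pi/registry/data:/var/lib/registry \
  -v /home/pi/registry/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/server.key \
  budry/registry-arm
```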
As our registry uses TLS (default port 443) and runs on a Pi with hostname ‘registry’, our registry prefix for tagging images has now become ‘registry:443’. A valid image name would be registry:443/user/image:tag.
Storing the newly generated Docker images in our local registry
To store images in our new local registry, a few things need to happen:
- the system processing the new image needs to be triggered and informed about the availability of a new Docker image
- the image needs to be pulled, retagged (with our local registry:443 prefix) and finally pushed to the new registry
- for a local Docker daemon to be able to push to our registry, we need to make the root CA (self-signed) available to the local Docker daemon
We again went with Go and MQTT, and dockerized the complete solution. The following snippet shows how we get triggered and then pull/tag/push the image to the new registry.
Adding the CA certificate to the local Docker daemon was a bit tricky. The problem is that Docker for Mac runs in a virtual machine, and directories like /etc/docker/ are mainly relevant in the Linux world. So there was a bit of confusion, but we finally solved it. For Docker on a Raspberry Pi with Raspbian OS, the following directory has to be created and the relevant ca.crt (no other name is allowed) must be placed there:
```
# for Raspberry Pi, Raspbian OS
mkdir -p /etc/docker/certs.d/registry\:443/
# then place the ca.crt there
cp ca.crt /etc/docker/certs.d/registry\:443/
```
Now our Docker daemon was able to verify the authenticity of our local TLS Docker registry at registry:443.
Creating a Docker Swarm based Deployment Service
The last task was setting up a Docker Swarm service. Once we had the service running via Docker CLI commands, we figured out the update procedure and finally turned these CLI commands into calls to the Golang Docker API.
First, to use Swarm you need to set up the Swarm manager:
```
# run on swarm manager to get join tokens and init manager
docker swarm init --advertise-addr <ip> --listen-addr <ip>
```
The <ip> above refers to the static IP of the Raspberry Pi on which we run the command. It is advisable to give the Swarm manager a static IP.
Once you run the above command, Docker Swarm will show you the command to join nodes to the swarm. In our case we had two table Raspberry Pis which need to run the same container, so we ran a command similar to this on each of them:
```
# run on each node
docker swarm join --token <token> <manager ip>:2377
```
The command ‘docker node ls’ will now give you an overview of the current swarm. We need the node IDs, typically based on the hostnames, to tag each node. Tagging is useful because it can be used to define container placement. We tag each swarm node that should run the table container (based on our service definition) with ‘table’:
```
docker node update --label-add type=table table-max
docker node update --label-add type=table table-sven
```
Finally, we can start the service like this:
```
docker service create \
  --name tableservice \
  --constraint 'node.labels.type==table' \
  --publish mode=host,published=80,target=8080 \
  --mode=global \
  --no-resolve-image \
  --mount type=bind,source=/sys,target=/sys \
  user/image:tag
```
And we can update an existing service like this:
```
docker service update --image user/image:tag --no-resolve-image --force tableservice
```
The flag --no-resolve-image was required because Docker seemed to be confused by our image manifest. Apparently Docker thought that our image was not created for arm (which it was) and refused to run the update. This flag stops Docker from checking the image manifest.
Put into Golang Docker API code, the swarm service update looks like below. Yes, compared to the command line it is really horribly long. It also cost us quite some time to figure out the complex nesting of structs, but we eventually made it. The --no-resolve-image translates into ‘QueryRegistry: false’. WOW. At this point we're really not 100% sure whether using the Golang Docker API is a good choice. Maybe it's smarter to use the REST API of the Docker engine directly, as it currently feels like a double translation task from REST to Golang structs.
It's a lot of stuff to read and understand, we know. So if you have questions, please add a comment. We still hope you find the info, and especially the code snippets, useful.