Code to Container v3: the discovery of GitLab

When I take another look at Code to Container v2 (our last sprint's results), I get goose bumps 🙂 But that's OK. In v2 we created a pretty complex system of MQTT messaging and systems involving Continuous Integration (namely CircleCI) and Continuous Deployment (self-made, MQTT-triggered Docker deployers). Looking back, and taking into account the new information we've acquired, the architecture we drew seems unnecessarily complex. It shows that it is really important to reflect on your work and not be afraid to trash it and start over when there is a promising solution on the horizon.

Before I start discussing the new architecture we came up with, let's first look at the issues we discovered with v2. The key issue was testing. While we're able to compile code and build Docker containers for various architectures using QEMU hardware virtualization, we have not found a solution for testing. Testing in our case is a complex issue. It does not only involve running unit tests; these unit tests might also have hardware dependencies. For example, one unit test might involve using the GPIO pins of the Raspberry Pi. Mocking these hardware dependencies would be a real pain. On top of the "unit testing with hardware dependencies" issue, we would also like to test the created Docker images from the outside and perform some black-box testing. In our case we often expose RESTful APIs, and it's a good idea to run tests against a newly spawned Docker container exposing these APIs. Again, this requires in some cases that the Docker image runs on a system that is as close as it can get to the target device and architecture. We came to the conclusion that building and running the containers in the cloud will always be suboptimal. Hence we looked around a bit…

The promising new kid on the block, solving our “on the device” CI/CD problems (and more) is: GitLab. At the end of our third sprint, we had evaluated GitLab using a free gitlab.com (cloud) account and we drew this fancy new architecture picture:

(Architecture diagram: SAMSON MCRA 0.3)

To be honest: we have evaluated and also implemented the complete left side, but the right side of this diagram is fiction. We did not have time to implement and test it. It turns out we won't need it either, as we've once again found a few other options. So let's go step by step.

GitLab takes over

The diagram above can be split into two parts. The left side is all about CI – Continuous Integration. In our world right now, that's devs pushing (maybe locally tested and running) code to a repo, thereby kicking off the build pipeline. Our pipeline (unit-)tests the code, then builds a Docker image, tests the Docker image and finally pushes it to Docker registries. This completes the CI part of the above diagram.

Why is this better than v2 with CircleCI, you might ask.
GitLab has the concept of GitLab Runners. While you can use shared runners in the cloud, you can disable these and simply register your own custom runners. As a runner is – in the end – a Golang binary running on the target device, we're able to have a batch of target-system GitLab Runners connected to our GitLab cloud instance. The CI/CD jobs specified in the build config (we'll see this later) are run directly on the target systems, in our case on Raspberry Pis. This allows us to test, build and black-box test the Docker images in the best possible way.

As we've wired up webhook calls for each build step, we can easily trigger a CD – Continuous Deployment – system once the tested Docker image is available. That CD part is fiction, but it should well be possible to implement. We send CI webhooks to our "Webhook to MQTT bridge", which simply translates the HTTP-based webhooks into MQTT-consumable messaging events. That's mainly a leftover from the v2 architecture, but we still believe that a central MQTT messaging system will have benefits, for extensibility for example. New is a deployment Docker container that sits right on the target system. Configured via a Docker volume and YML, it subscribes to the right topics to get updates on new Docker images available for deployment. With the logic described in the YML-based configuration and the Docker socket (mounted into the deployment container), we can instruct the underlying Docker daemon to update the swarm service.
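Purely as an illustration of the idea (remember, this part was never built), the deployment container's YML configuration could look something like this; every key and the topic layout are hypothetical:

```yaml
# Hypothetical configuration for the (unimplemented) deployment container.
mqtt:
  broker: tcp://mqtt.local:1883     # made-up broker address
  topics:
    - ci/images/table               # new-image events from the CI webhooks
deploy:
  service: table                    # Docker swarm service to update
  socket: /var/run/docker.sock      # host Docker socket, mounted as a volume
```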

As explained, the CD part is "evaluated fiction", so I'll not describe it in detail. Let's take a look at the CI part in detail.

GitLab 101

GitLab is really a bit more than CI/CD. It's your Git repo, includes a Docker registry, can store your static web content and will also provide an issue system (similar to, but simpler than, Jira) if you need it. We're currently investigating the use of the code (Git) repo, the excellent CI/CD system and the built-in Docker registry.

The key config file you need to understand is the .gitlab-ci.yml file, which describes the build stages and the jobs in each build stage. The build stages, in our case, are: test, build, apitest and push.

  • test stage: here a GitLab Runner checks out the code (conveniently stored in the GitLab Git repo) and runs the Golang unit tests.
  • build stage: based on the Dockerfile that is part of our project, we instruct GitLab to build the Docker image using Docker CLI commands. As our Docker build is a multi-stage build, we're able to pull the compiled Go binary out of the build container and put it into a new, slim Docker image. The result is a Docker image of about 15 MB.
  • apitest stage: this stage takes the freshly built Docker image and runs it. It then uses Go testing (but now focusing on API tests) to test the exposed API of that running container. Once the API testing is done, we know the code compiles and the created image behaves as we expect it to.
  • push stage: finally, we push the Docker image to the GitLab Docker registry. At some point we pushed our image to Docker Hub, to a local Docker registry and also to GitLab, but we've reduced that down to just GitLab now. Still good to know…

The following sketch shows the skeleton of our .gitlab-ci.yml file (the default image and the exact GIT_PROJECT_PATH value are assumptions based on what's described in this post):
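```yaml
# Skeleton of our .gitlab-ci.yml; the four jobs themselves are shown in
# the stage sections below.
image: arm32v6/docker:latest

variables:
  # resolves to gitlab.com/samsonpe/<project>
  GIT_PROJECT_PATH: "gitlab.com/samsonpe/${CI_PROJECT_NAME}"

stages:
  - test
  - build
  - apitest
  - push
```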

As our GitLab Runners use the "docker" executor type, we specify a Docker image which is used for our build. This essentially means that each build job is run within a Docker container using the specified Docker image.

While we're currently using architecture-specific images, we'll evaluate the use of multi-arch base Docker images in the future. The pipeline described above can only be run on arm32v6 systems – e.g. Raspberry Pis. Simply switching to "docker:latest" would use the multi-arch support of that Docker image.

(Unit-)Test Stage

Here, we check out the code and run the Go test functions for unit testing. We've named our test functions (in unit_test.go) in a special way: they all contain the string 'UNIT', so we're able to exclude all other test functions.

As the arm32v6/docker image does not contain Golang, we need to install it (we also need musl-dev for Go). We then create the correct path under GOPATH, which happens to be /root/go as we're root in our Docker container. ${GIT_PROJECT_PATH} is a variable we define ourselves in the variables section; in our case it resolves to gitlab.com/samsonpe/<project>.

After moving the code to this directory, we change into it and run the test cases using Golang's test command. We limit the test functions to the ones containing "UNIT", as explained earlier.
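Put together, the test job might look like this (a sketch; the Alpine package names and the copy steps are assumptions based on the description above):

```yaml
test:
  stage: test
  script:
    # the arm32v6/docker image has no Golang, so install it first
    - apk add --no-cache go musl-dev
    # recreate the expected import path under GOPATH (/root/go)
    - mkdir -p /root/go/src/${GIT_PROJECT_PATH}
    - cp -r . /root/go/src/${GIT_PROJECT_PATH}
    - cd /root/go/src/${GIT_PROJECT_PATH}
    # run only the test functions containing "UNIT"
    - go test -v -run UNIT ./...
```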

Build Stage

Our build stage uses the Dockerfile, which is part of our project, and builds a Docker image. We export this image and make it part of the build artifacts of our build pipeline. We can automatically access these artifacts from other build stages and build jobs (which do not necessarily need to happen on the same machine!).
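A sketch of the build job (exporting the image via docker save and the exact artifact path are assumptions):

```yaml
build:
  stage: build
  script:
    # pass the project path into the multi-stage Docker build
    - docker build --build-arg GIT_PROJECT_PATH=${GIT_PROJECT_PATH} -t ${CI_PROJECT_NAME} .
    # export the image so later stages (and other runners) can load it
    - docker save ${CI_PROJECT_NAME} > ${CI_PROJECT_NAME}.tar
  artifacts:
    paths:
      - ${CI_PROJECT_NAME}.tar
```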

While this is a pretty standard docker build command, it might be worth noting that we're adding ${GIT_PROJECT_PATH} as a build argument to our Docker build. As the Docker build compiles Go code, we also need to create the same directory structure as for testing. To avoid redundancy, we pass that variable from the build system into the Docker build. The name of the Docker image, the tag, is at this point just ${CI_PROJECT_NAME}, the simple name of the GitLab project. In our case, as we build our "table" project, that name is simply "table".

(API-)Test Stage

Next up is the API testing stage, which is a bit more involved. First, we check for leftovers of previous builds: still-running Docker containers. If we find one, we clean it up. Of course we also clean up after the job, but if for whatever reason the job does not succeed, a test container might still be running.

Next, we need to figure out the hostname of the current build system we're running on. For this, we spin up a slim alpine:latest Docker container and execute the hostname command in it. As we're using the network of the host, we can figure out the hostname this way.

We now start our test container with the new Docker image. If all goes well, it will expose an API on that host, which we test using Golang testing. Here, we limit the test functions to the ones containing "REST" for our API tests.

At the very end, we try to kill and remove our test container. We then also prune the containers (removing stopped containers, if any).
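Put together, the apitest job could look roughly like this (a sketch; the container name, the exposed port and the API_HOST variable are assumptions):

```yaml
apitest:
  stage: apitest
  script:
    # clean up a test container a failed earlier run may have left behind
    - docker rm -f ${CI_PROJECT_NAME}-test || true
    # figure out the hostname of the build system via the host network
    - TESTHOST=$(docker run --rm --network host alpine:latest hostname)
    # load and start the image built in the previous stage
    - docker load < ${CI_PROJECT_NAME}.tar
    - docker run -d --name ${CI_PROJECT_NAME}-test -p 8080:8080 ${CI_PROJECT_NAME}
    # install Go and run only the "REST" test functions against the container
    # (the same GOPATH setup as in the test stage applies, omitted here)
    - apk add --no-cache go musl-dev
    - API_HOST=${TESTHOST} go test -v -run REST ./...
    # tear down and prune stopped containers
    - docker rm -f ${CI_PROJECT_NAME}-test
    - docker container prune -f
```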

Push Stage

Our final CI stage is “push” – we push to the GitLab registry.
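A sketch of that job (the VERSION variable is our own, defined in the YML; the CI_REGISTRY_* variables are provided by GitLab):

```yaml
push:
  stage: push
  script:
    - docker load < ${CI_PROJECT_NAME}.tar
    # registry credentials are available as GitLab environment variables
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
    - docker tag ${CI_PROJECT_NAME} ${CI_REGISTRY_IMAGE}:${VERSION}
    - docker push ${CI_REGISTRY_IMAGE}:${VERSION}
```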

There is not much to explain: we simply load the built image and tag it based on the version info in our GitLab YML file. As the registry user and password are available as GitLab environment variables, it's pretty simple to push to the GitLab Docker registry. We've also evaluated a local registry (local to the GitLab Runners, in our network) and the official Docker Hub. That all worked as expected, so we reduced it to a simple solution in the end.

Attaching Custom GitLab Runners to your GitLab CI/CD Pipeline

To make the above pipeline work, you need to make sure that the jobs are executed on a system that supports arm32v6 – or switch to another Docker image such as docker:latest (which also includes the Docker CLI).

How to install GitLab Runners is nicely described here, also for the Raspberry Pi (follow the Linux instructions). You next need to register these runners with your pipeline. Essentially, you use the GitLab token provided (on the CI/CD Runners page) and run a register command such as the one sketched below (the Docker image is an assumption for our arm32v6 systems):
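```sh
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <TOKEN> \
  --executor docker \
  --docker-image arm32v6/docker:latest \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
```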

It's important to choose docker as the executor type and to pass the docker.sock of the host Docker daemon as a volume into the job container.

Why is the GitLab Runner configuration saved to a /etc/gitlab-runner/config.toml file? TOML?! Why not YML… we don't know, but if you want to learn about TOML, here we go.
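For reference, this is roughly what the register step from above writes into that file (values are illustrative):

```toml
[[runners]]
  name = "rpi-runner"
  url = "https://gitlab.com/"
  token = "<TOKEN>"
  executor = "docker"
  [runners.docker]
    image = "arm32v6/docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```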

Outlook

In our next sprint, we’ll also evaluate GitLab Runners for Deployment. We also need to create our own multi-arch base images, etc. So come back soon to check for updates. And of course comment on this post if you have questions!

 
