As already introduced in the last Code to Container post, we’re moving to GitLab. We started by evaluating the GitLab CI system and are now moving other parts, such as the deployments, to GitLab, too. This blog post is a bit shorter and mainly describes how we set up deployments using a manual build step in GitLab. On top of that, I’ll discuss how we use the experimental Docker Manifest feature to build multi-arch images directly in our GitLab pipeline. It solves a very specific problem of ours: we would like to build images that run on multiple architectures. As a user, you don’t have to care: just do docker container run and Docker will pick the right image for you, regardless of your OS and architecture.
With the changes implemented during the last sprint, our new world looks like this:
At the very center I should add a big fat GITLAB box now. The notable change is that the deployment steps are now part of the GitLab pipeline, too. Before we deploy our newly built Docker image (as I’ll explain later in detail), we first create a multi-arch Docker image: that’s the Manifest stage you can see in the overview above.
Creating and using multi-arch Docker images
On an abstract level, we combine two or more images for different architectures into one and let the Docker CLI decide which one to pick. Docker running on an arm32 Raspberry Pi will choose the linux/arm image, while my Mac will choose the linux/amd64 image. The docker manifest tool has recently been merged into the official Docker CLI code base and is now available via docker manifest as an experimental feature. To enable the feature, make sure that experimental is set to enabled in .docker/config.json (yes, it’s “enabled” as a string – not true or “true” or 1 etc. :-).
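As a minimal sketch, the relevant part of ~/.docker/config.json looks like this (any other keys you already have in the file stay untouched):

```json
{
  "experimental": "enabled"
}
```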
Here is the manifest stage in our GitLab YAML file:
The steps are:
- Login to the docker registry of your choice
- Create a manifest; in our case, all these parameters resolve to <registry>/samsonpe/table:0.1, for example. Note: no architecture whatsoever in the tag name!
- Annotate the created manifest. This essentially adds a supported os/architecture combination to the created manifest. We specify the image name that supports the os and architecture.
- The above step is repeated for each os/architecture. Here we have arm(32) and arm64.
- Push the manifest.
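The steps above could be sketched in the .gitlab-ci.yml roughly like this. The image path samsonpe/table matches the example above; the REGISTRY_USER/REGISTRY_PASSWORD secret variables and the VERSION variable are illustrative names, not necessarily our exact ones:

```yaml
manifest:
  stage: manifest
  script:
    # 1. Login with a user that has sufficient registry permissions
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" "$CI_REGISTRY"
    # 2. Create the manifest list – no architecture in the tag name
    - docker manifest create "$CI_REGISTRY/samsonpe/table:$VERSION" "$CI_REGISTRY/samsonpe/table:$VERSION-arm" "$CI_REGISTRY/samsonpe/table:$VERSION-arm64"
    # 3./4. Annotate the manifest once per os/architecture combination
    - docker manifest annotate "$CI_REGISTRY/samsonpe/table:$VERSION" "$CI_REGISTRY/samsonpe/table:$VERSION-arm" --os linux --arch arm
    - docker manifest annotate "$CI_REGISTRY/samsonpe/table:$VERSION" "$CI_REGISTRY/samsonpe/table:$VERSION-arm64" --os linux --arch arm64
    # 5. Push the manifest list
    - docker manifest push "$CI_REGISTRY/samsonpe/table:$VERSION"
```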
These steps look very simple and logical, but the implementation did not come without issues. You might notice that we’re not using the default CI_REGISTRY_USER and CI_REGISTRY_PASSWORD variables that GitLab provides to log in to our (GitLab) registry. That’s simply because they only grant push permissions, while the docker manifest command currently requires * permissions. We therefore had to add a registry user via secret variables to our build pipeline. That’s an issue on the Docker side: requiring * permissions to push a manifest looks like something that should change before this feature leaves its experimental state.
Deploying to a Raspberry PI via GitLab Runner
For our next CI/CD stage, we’re deploying our Docker image to a Docker swarm. This requires a custom GitLab runner tagged with “tablemaster”. The tablemaster tag identifies the gitlab-runner running on the swarm master for our table project. It will issue the commands necessary to create or update the swarm service.
The tags: tablemaster entry locks this GitLab job down to the single gitlab-runner that is tagged with tablemaster. In addition, we’re not pulling the sources (GIT_STRATEGY: none) as we no longer need them.
The script part involves:
- Checking for a service named “tableservice” and saving the result in an environment variable.
- If the service cannot be found, creating it using the docker service create command.
- Otherwise, updating the service using docker service update.
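Put together, the deploy job could look roughly like this sketch; the service name tableservice is from above, while the image reference and the VERSION variable are illustrative:

```yaml
deploy:
  stage: deploy
  tags:
    - tablemaster          # run only on the runner on the swarm master
  variables:
    GIT_STRATEGY: none     # no need to pull the sources for deploying
  script:
    - >
      if [ -z "$(docker service ls -q --filter name=tableservice)" ]; then
        docker service create --name tableservice "$CI_REGISTRY/samsonpe/table:$VERSION";
      else
        docker service update --image "$CI_REGISTRY/samsonpe/table:$VERSION" tableservice;
      fi
```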
Using a manual build step to trigger the deployment
As we’re kicking off the CI/CD pipeline with each commit, the new tableservice got deployed every time even a minor change was made. To fix this until we have a more solid, likely tag- or release-based strategy in place, we’ve made the deploy step a manual stage. This means the GitLab pipeline stops at the deploy stage and waits for a real user to log into GitLab and hit the “run the deploy step” button.
Adding the manual step is really simple: it boils down to a when: manual entry in the job definition.
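A minimal sketch of what that looks like; everything below when: manual is illustrative, and the deploy script itself is the one described above:

```yaml
deploy:
  stage: deploy
  when: manual   # pipeline pauses here until someone clicks "play" in GitLab
  tags:
    - tablemaster
  script:
    - ./deploy.sh   # placeholder for the create/update logic shown earlier
```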
During our last sprint, we also looked into Kubernetes. After some initial trouble with the setup on Raspberry Pis, we’ve created a few clusters and learnt a lot. We’ve deployed some sample pods, created ReplicaSets and DaemonSets, used labels to filter our nodes, and more. We think that Kubernetes will be a great addition to our existing architecture. Tasks like rolling out gitlab-runners onto our devices or managing the deployments will be a natural fit.
So stay tuned, we’ll next share our updated architecture including the Kubernetes setup. Again, let us know in the comments if you have any feedback.