Continuous integration (CI) and continuous deployment (CD) have been staples of “regular” software engineering for quite some time now. As usual, the embedded world is lagging a few years behind, but CI/CD is becoming increasingly popular for embedded software as well.
I have personally set up a few Jenkins instances to build and run unit tests on a desktop PC, cross-compile for an embedded target and archive the artifacts, package Python tools, build and publish documentation, and so on. However, I have always had Jenkins installed directly on the server, using the built-in Jenkins node to do all the work. This requires all the build tools to be installed and configured correctly on the server itself, and you must manually install and configure Jenkins plugins and set up pipeline jobs. That is fine for small projects, but as a project grows, the server can become difficult to manage. And if the server ever breaks down and has to be recreated from scratch, you had better pray that the setup procedure was properly documented. In other words: the server quickly becomes a pet.
Luckily, there is a better way. The main Jenkins instance (the controller) can be set up as a Docker container and configured using plain text files, meaning that a new Jenkins instance can be spun up with a single command in the terminal – no more noodling around in the web interface. Separate Jenkins agents, which handle the actual work of executing jobs, can be set up as additional Docker containers or virtual machines and connected to the controller. For example, you could set up a Linux agent that has most of the build tools required for your project, and a Windows VM to build any code that requires Windows-only tools.
In this blog post I am going to set up one of my old desktop PCs with Linux and use it as a build server. I am going to go through how I set up a Jenkins controller in a Docker container and configured it using only text-based configuration files. In a future blog post, I will cover setting up build agents as separate Docker containers and virtual machines.
Setting up a Linux server with Docker
For my Linux server I decided to go with Debian 11. I chose a version that includes non-free firmware (e.g. drivers for wireless network adapters), but from Debian 12, which is set to release in June 2023, this will be included by default. After downloading the image I plugged in my USB drive and used `lsblk` to see that it was registered as `/dev/sdd`. I unmounted all `sdd` partitions with `umount` and then flashed the image onto the USB drive with `dd`:
$ sudo dd of=/dev/sdd if=firmware-11.7.0-amd64-DVD-1.iso bs=4M status=progress
Then I booted the server from the USB drive and went through the Debian installation wizard. I chose the lightweight Xfce desktop environment and made sure to install OpenSSH server, so I can connect to the server from my own PC after the initial setup. I added a user for myself called `klein`.
After installation was complete, I started up the server and switched to the `root` user using `su -`. The dash makes sure that a login shell is invoked when switching users, so that all the `root` user’s environment variables (including the `PATH`) are loaded.
Granting sudo permissions and opening the SSH port
To grant `sudo` permissions to my own user, I added it to the `sudo` group:
# usermod -aG sudo klein
Then, to allow incoming SSH traffic through the firewall, I used `iptables` to append a rule to the `INPUT` chain, allowing incoming TCP traffic on port 22:
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
To save the `iptables` configuration I used the `iptables-persistent` package (available through `apt`):
# netfilter-persistent save
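To double-check that the rule is present (and that it survives a reboot), you can list the `INPUT` chain:
$ sudo iptables -L INPUT -n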
After this initial setup I could shut down the server, unplug the monitor and keyboard, and move the machine out into the hallway next to the WiFi router, where it is out of sight. I connected power and Ethernet and booted it back up.
Now, to connect to the server with SSH from my own PC, I first had to get the server’s IP address, which is assigned by DHCP. I used `nmap` to scan for hosts with port 22 open and found it here:
$ sudo nmap 192.168.123.1/24 -p 22
Starting Nmap 7.80 ( https://nmap.org ) at 2023-05-31 10:10 CEST
...
Nmap scan report for 192.168.123.38
Host is up (0.097s latency).
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: xx:xx:xx:xx:xx:xx (Intel Corporate)
Now I was able to log in remotely with:
$ ssh klein@192.168.123.38
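As a small convenience, you can add an entry to `~/.ssh/config` on your own PC so that you do not have to remember the IP address (the host alias buildserver is just my choice):
Host buildserver
    HostName 192.168.123.38
    User klein
After this, `ssh buildserver` is all it takes.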
Installing Docker
To install Docker Engine and Docker Compose, I simply followed the installation guide on the Docker website. This adds the Docker repository to the `apt` sources list and installs the required packages. I copied all the commands into a shell script, with two minor changes: the distribution codename is fetched with `lsb_release`, and the distribution itself (debian or ubuntu) is derived from `/etc/os-release` so that the correct Docker repository is used. I made the script executable with `chmod +x` and executed it. It ought to work for both Debian and Ubuntu:
#!/bin/bash
set -e

sudo apt update
sudo apt install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings

# Determine the distribution ID (debian or ubuntu) and its release codename
DISTRO=$(. /etc/os-release && echo "$ID")
VERSION_CODENAME=$(lsb_release -cs)

curl -fsSL https://download.docker.com/linux/$DISTRO/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/$DISTRO \
  $VERSION_CODENAME stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
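Assuming the script is saved as install-docker.sh (the file name is arbitrary), it is run like so:
$ chmod +x install-docker.sh
$ ./install-docker.sh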
In order to use `docker` commands without root privileges, I added myself to the `docker` group:
$ sudo usermod -aG docker klein
For the changes to take effect, I logged out and back in. Then, to ensure that everything works, I ran the `hello-world` image:
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
This basically completes the server setup – everything else will be done in Docker.
Jenkins controller in a Docker container
The base image
As a starting point for our controller we are going to use the community-maintained Jenkins image on Docker Hub as a base image. This is a replacement for the official image, which is now deprecated. We are going to use the long-term support version with the tag `lts-jdk11`. Let us create a Dockerfile:
FROM jenkins/jenkins:lts-jdk11
Now, while in the same directory as the Dockerfile, we can build an image named “controller” with:
$ docker build -t controller .
And then start a container from the newly created image, exposing the web interface on port 8080:
$ docker run -p 8080:8080 controller
If we go to http://192.168.123.38:8080 (or whatever your server’s IP address is) we will be met by the Jenkins setup wizard. We obviously do not want to go through this every time we start up a new container, so let us see how we can disable this and also install a few plugins that will help us configure the Jenkins instance.
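As an aside: if you ever do want to complete the wizard, the generated initial admin password is printed to the container log, which you can view with docker logs:
$ docker ps                   # find the container ID
$ docker logs <container-id>  # the initial admin password appears in the startup log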
Disabling the setup wizard and installing plugins
To disable the setup wizard we can simply add the following environment variable in the Dockerfile:
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
Before rebuilding the image, let us also add a few plugins. There are three very useful plugins that allow us to configure Jenkins using only text files. The holy trinity of configuration plugins, if you will. These are:
- Configuration as Code (to configure the Jenkins instance itself and its plugins)
- Job DSL (to configure jobs)
- Pipeline (to configure pipelines)
We might also want a few additional plugins, such as:
- Git (to check out code from Git repositories)
- Pipeline: Stage View (to visualize the stages of a pipeline in the job view)
When adding plugins we can use the plugin manager CLI, which is built into the Jenkins image. You can either specify the plugin IDs directly when invoking the plugin manager or pass it a text file. We will opt for the latter. The ID for each plugin can be found on its respective page at https://plugins.jenkins.io. Let us create the file `plugins.txt` and list our desired plugins:
configuration-as-code
job-dsl
workflow-aggregator
git
pipeline-stage-view
Note that you can also specify a version for each plugin (e.g. `git:5.0.2`). If omitted, the newest version will be downloaded.
Now we just have to add a few commands to the Dockerfile in order to copy `plugins.txt` to the container and run the plugin manager:
COPY --chown=jenkins:jenkins plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt
Everything we put into `/usr/share/jenkins/ref` will be copied into `/var/jenkins_home` when Jenkins starts up. It is also possible to override existing files in `/var/jenkins_home` by adding a `.override` extension to the files in `/usr/share/jenkins/ref`. This is useful when `/var/jenkins_home` is on a persistent mount, which we will get to later. See the image documentation for more info on the reference folder.
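For example, to make a file from the image win over an existing copy in `/var/jenkins_home`, the `.override` suffix is appended to the destination path. A sketch, using the `config.yaml` configuration file that we will create shortly:
COPY --chown=jenkins:jenkins config.yaml /usr/share/jenkins/ref/config.yaml.override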
Now when we rebuild the image and start a container, we will see the Jenkins dashboard instead of the setup wizard when opening the web interface. If we browse to Manage Jenkins > Manage Plugins > Installed plugins, we will see the plugins that we listed in `plugins.txt` (along with their dependencies).
We also notice a few notifications and warnings in the top-right corner urging us to set up authorization, to stop building on the built-in node, and to configure the Jenkins URL. Let us fix that next.
Jenkins configuration
To configure Jenkins using the Configuration as Code plugin, we have to create a YAML file with the desired configuration and add an environment variable `CASC_JENKINS_CONFIG` containing the path to the configuration file, as described in the documentation. Below is a very basic configuration that merely takes care of the issues Jenkins complained about above.
jenkins:
  systemMessage: "Hello, World"
  numExecutors: 0
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "password"
          name: "Administrator"
unclassified:
  location:
    url: http://192.168.123.38:8080
Now we will add the following lines to the Dockerfile in order to set the environment variable and copy the configuration file to the image:
ENV CASC_JENKINS_CONFIG /var/jenkins_home/config.yaml
COPY --chown=jenkins:jenkins config.yaml /usr/share/jenkins/ref/config.yaml
After rebuilding and restarting, we will now be met with a login prompt when opening the web interface. We can log in with `admin:password` as configured above. To explore which options can be configured in the YAML file, you can take a look at the documentation under Manage Jenkins > Configuration as Code > Documentation. It can also be helpful to do the configuration manually in the web interface first and then view the generated YAML under Manage Jenkins > Configuration as Code > View Configuration. You can then copy the relevant bits to your own `config.yaml`.
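A plain-text password in the YAML file is fine for a first test, but the Configuration as Code plugin also supports variable substitution, so the password can instead be supplied through the container’s environment. A sketch, where the variable name ADMIN_PASSWORD is my own choice:
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"  # resolved from the environment at startup
The variable is then passed when starting the container:
$ docker run -p 8080:8080 --env ADMIN_PASSWORD=password controller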
Job configuration
To automate job creation and configuration we first have to describe our jobs in a Groovy file. We can then use the Job DSL plugin to process this file, with a little help from the Configuration as Code plugin (described in detail here).
Let us create a file named `jobs.groovy` and define a job that says hello:
job('Hello') {
    steps {
        shell('echo Hello, World!')
    }
}
Then in the Dockerfile, copy the file to the image:
COPY --chown=jenkins:jenkins jobs.groovy /usr/share/jenkins/ref/jobs.groovy
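With that, our Dockerfile is complete. For reference, here it is in full:
FROM jenkins/jenkins:lts-jdk11

ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
ENV CASC_JENKINS_CONFIG /var/jenkins_home/config.yaml

COPY --chown=jenkins:jenkins plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt

COPY --chown=jenkins:jenkins config.yaml /usr/share/jenkins/ref/config.yaml
COPY --chown=jenkins:jenkins jobs.groovy /usr/share/jenkins/ref/jobs.groovy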
Next, add the following lines to `config.yaml` in order to make the Configuration as Code plugin fetch the job configuration file and pass it to the Job DSL plugin for processing:
jobs:
  - file: /var/jenkins_home/jobs.groovy
Now after rebuilding and restarting the container, you should see the “Hello” job appear in the job list. You can explore all the options for configuring jobs in the Job DSL API at http://192.168.123.38:8080/plugin/job-dsl/api-viewer/index.html (substitute with your own server IP address).
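As a slightly richer sketch (the job name, schedule and shell step are made up for illustration), a job that runs on a nightly schedule could be described like this:
job('Nightly') {
    triggers {
        cron('H 2 * * *')    // run once a night, at a hash-spread time around 02:00
    }
    steps {
        shell('echo Running nightly build')
    }
}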
Pipeline
Last up we have the Pipeline plugin, which I am sure many of you are familiar with. This plugin allows us to define the stages and steps of a job in a pipeline script. The pipeline script can be committed along with your code to a version control repository and is usually named `Jenkinsfile`. When creating a pipeline job in Jenkins, you specify that the job should fetch the `Jenkinsfile` from the repository and execute the pipeline. The pipeline concept is already very well described in the Jenkins documentation and is pretty much an integral part of Jenkins, so I do not think it makes sense to describe it further here.
A simple pipeline script with a single stage that says hello would look like this:
pipeline {
    agent any
    stages {
        stage('Say hello') {
            steps {
                echo 'Hello, World!'
            }
        }
    }
}
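To tie this together with the previous section, the pipeline job itself can be declared in `jobs.groovy`, pointing to a Jenkinsfile in a repository. A sketch with a hypothetical repository URL:
pipelineJob('my-project') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://example.com/my-project.git')    // hypothetical repository
                    }
                    branch('*/main')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}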
Making the Jenkins home folder persistent
Up until now our Jenkins controller has been “reset” every time we restarted it. This is probably not what we want, since we would lose the build history every time the server went for a reboot. To fix this we can map the Jenkins home folder `/var/jenkins_home` to a named Docker volume. We do this by adding the `--volume` (or `-v`) flag to the `docker run` command, specifying both the volume name and the path in the container:
$ docker run -p 8080:8080 -v jenkins_home:/var/jenkins_home controller
Now we should see that all changes in Jenkins are persisted between restarts.
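By now the docker run command has accumulated a few flags, so it can be convenient to capture everything in a Compose file instead (a minimal sketch; the service name is my own choice):
# docker-compose.yaml
services:
  controller:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
    restart: unless-stopped

volumes:
  jenkins_home:
The container can then be built and started with `docker compose up -d --build`.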
Conclusion
In this blog post you have learned how to set up a Jenkins controller as a Docker container and configure it using just a few plugins and text files. To copy the Jenkins instance to another server, you simply install Docker on the new server, copy over the Dockerfile and configuration files, build the image and start up the container.
In a future blog post I will go through how I set up a Linux build agent in a separate Docker container as well as a Windows build agent in a virtual machine.