Imagine the following scenario: You come back from your well-deserved vacation, brew your first coffee and open up your laptop. After going through your long list of emails, you start off your day by checking out the latest changes from the development branch. You hit compile and your mood swings: 10 compile errors, a bazillion warnings! The development branch is broken, someone forgot to check in a source file, and now they are on vacation!
Instead of raising your blood pressure and wasting your time, this scenario could have been easily prevented by running regression tests in a CI Pipeline. I hate to say it, but numerous embedded projects lack CI Pipelines, which is unthinkable in other software engineering domains! Here is how you and your team can catch up and get started with CI Pipelines for regression testing, using the example of building an STM32CubeIDE Project in a Gitlab Pipeline.
Note: The concepts introduced here apply to other build systems and other Git or SVN service providers. To see how these concepts are applied in practice, we will point you to an example repository when necessary: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo. The repository contains an STM32CubeIDE Project at /STM32Project/ and additional files that are used to set up the Pipeline.
To follow along, make sure your local machine has these programs installed:
- Git, to clone the example repository
- Docker, to build the container image and push it to Gitlab
But first: What is a CI Pipeline and what is Regression Testing?
CI Pipeline stands for Continuous Integration Pipeline.
CI Pipelines are workflows that run automatically in your version control hosting service (GitHub, GitLab, SVN, etc.). The workflow is triggered by changes in your repository. Continuous Integration is all about integrating and verifying new changes continuously. This includes checking for regressions.
Today, we take a look at how to get started with the most basic regression test: Checking that our project (still) compiles!
Setting up the Pipeline Descriptor file: .gitlab-ci.yml
Every time changes are pushed to Gitlab, Gitlab looks for the file .gitlab-ci.yml at the root of your repository file tree.
The .gitlab-ci.yml describes all stages of your pipeline and the jobs for each stage. Pipeline Jobs are fully encapsulated scripts that do one specific thing: build the project, run static code analysis, run tests on a test suite, push images and so on. Pipeline Stages group Pipeline Jobs and specify when each group of jobs is executed.
At runtime, Gitlab uses available Pipeline Runners to execute the Pipeline Jobs of each Pipeline Stage in parallel. Don't worry: You can use the freely available Pipeline Runners hosted directly by Gitlab, or you can provide Pipeline Runners that run in your local server rack.
Now, let's take a look at our .gitlab-ci.yml file:
variables:

stages:
  - build

build-stm32-project:
  needs: []
  stage: build
  image: ${CI_REGISTRY}/julian_honeysuckle/stm32cubeidepipelinedemo/stm32cubeide-ci:1.16.1
  script:
    - build_project.sh -p STM32Project/STM32CubeIDEPipelineBlogPost.ioc -t Debug
    - ls STM32Project/Debug/STM32CubeIDEPipelineBlogPost.elf
  artifacts:
    when: on_success
    paths:
      - STM32Project/Debug/STM32CubeIDEPipelineBlogPost.elf
The file has one stage, the "build" stage, and in this stage there is only one job, "build-stm32-project".
The "script" field indicates which commands are executed in this job. First, the job executes a script that builds the project with STM32CubeIDE; second, it verifies the build by checking that the firmware binary is present.
But how do we upload our build system, in this case STM32CubeIDE's build system, so that the Gitlab Runners in the cloud or in our server rack have access to it?
Docker Containers and Docker Images: Providing Gitlab with an environment containing our build system
Docker is a service that allows storing and executing Linux environment snapshots. Docker Images are files that store the Linux environment. At runtime, the Docker Image is loaded and executed inside a Docker Container.
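If Docker is new to you, two standard commands are enough to see these concepts in action:

docker pull ubuntu:24.10                  # download the Ubuntu 24.10 Docker Image
docker run --rm -it ubuntu:24.10 bash     # start a Docker Container from the image and open a shell inside it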
So, to run STM32CubeIDE in the Gitlab Runners, we need to create and upload a Docker Image with STM32CubeIDE installed. Docker Images are created from Dockerfiles. For this tutorial, you should think of Dockerfiles as recipes that describe how to set up our environment step by step.
Here is the Dockerfile of the Docker Image that runs in our Demo:
FROM --platform=amd64 ubuntu:24.10

ENV LICENSE_ALREADY_ACCEPTED=1

RUN apt-get -y update && apt-get -y install unzip

ARG INSTALLER_ZIP
COPY ${INSTALLER_ZIP} /tmp/stm32cubeide.sh.zip

# extract cubeide installer script and remove download file
RUN unzip -p /tmp/stm32cubeide.sh.zip > /tmp/installer.sh && rm /tmp/stm32cubeide.sh.zip

# Flag the script as executable, execute it and remove the installer script
RUN chmod +x /tmp/installer.sh && \
    printf '/opt/stm\nn\n' | /tmp/installer.sh && \
    find /opt/stm -name "*.zip" -type f -delete && \
    rm -r /tmp/installer.sh

COPY build_project.sh /opt/stm
ENV PATH="${PATH}:/opt/stm"
RUN chmod +x /opt/stm/build_project.sh
Now, let's go through this Dockerfile step by step:
Line 1: FROM --platform=amd64 ubuntu:24.10
→ The first line states that our Docker Image is based on the publicly available Ubuntu 24.10 Docker Image.
Line 8: COPY ${INSTALLER_ZIP} /tmp/stm32cubeide.sh.zip
→ The Dockerfile copies an installer archive into the environment, extracts it (line 11) and executes the installer script (the RUN command starting in line 14).
Line 17: rm -r /tmp/installer.sh
→ Now that the IDE is installed, we remove the install script to free up disk space in the final image.
Now, when we use docker build to build our image, we get an Image that contains only the IDE without the installer, which saves us some disk space.
At the end, the Dockerfile copies the build_project.sh script into the environment, a script that invokes a headless build using the IDE's build system. It is a good idea to have such a file for each build system or IDE you use. You can find the script here: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/blob/master/PipelineFiles/ContainerFiles/build_project.sh?ref_type=heads
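We won't walk through the whole script, but its core is the headless build mechanism that STM32CubeIDE inherits from Eclipse CDT. A minimal sketch of such an invocation could look like the following; the install path and the temporary workspace handling are assumptions for illustration, see the linked script for the real details:

#!/bin/sh
# Sketch of a headless STM32CubeIDE build. The IDE is Eclipse-based, so it can
# run the Eclipse CDT headless build application. Install path is an assumption.
WORKSPACE=$(mktemp -d)   # throwaway Eclipse workspace for this one build
/opt/stm/stm32cubeide/stm32cubeide \
    --launcher.suppressErrors -nosplash \
    -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
    -data "$WORKSPACE" \
    -import STM32Project \
    -build "STM32CubeIDEPipelineBlogPost/Debug"

Here, -import registers the project in the temporary workspace and -build selects the project and build configuration, mirroring the -p and -t parameters of build_project.sh.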
It is good practice to have a script that builds your container image instead of calling docker build manually. We won't go into detail, but you can find the build script here: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/blob/master/PipelineFiles/build_container.sh?ref_type=heads
All the script does is parse command line parameters and assemble the right docker build command.
# Build the Docker image
echo "Building Docker image $GITLAB_REGISTRY/$PROJECT_NAMESPACE/$IMAGE_NAME:$VERSION from Dockerfile: $DOCKERFILE..."
docker build -t "$GITLAB_REGISTRY/$PROJECT_NAMESPACE/$IMAGE_NAME:$VERSION" -f "$DOCKERFILE" --build-arg INSTALLER_ZIP="$INSTALLER_ZIP" "$CONTAINER_ROOT_DIR"

# Check if the build was successful
if [ $? -eq 0 ]; then
    echo "Docker image $GITLAB_REGISTRY/$PROJECT_NAMESPACE/$IMAGE_NAME:$VERSION built successfully."
else
    echo "Docker image build failed."
    exit 1
fi
When executed, the script builds the Docker Image. This takes several minutes, as it extracts the IDE installer script and executes it. Afterwards, we have the Docker Image containing STM32CubeIDE on our local machine.
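You can double-check the result with a standard Docker command (shown here with gitlab.com's registry host, registry.gitlab.com):

# List local images; repository and tag should match the "image:" field in .gitlab-ci.yml:
docker images registry.gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/stm32cubeide-ci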
To push our local image into our Gitlab Project's Container Registry, we provide the push_container.sh script.
The docker image is pushed into our Container Registry using the docker push command.
For more details, take a look at the script here: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/blob/master/PipelineFiles/push_container.sh?ref_type=heads
This script requires you to log in with your Gitlab credentials, see the additional resources below.
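Stripped of its parameter handling, the push boils down to two standard Docker commands (shown here for gitlab.com's registry host, registry.gitlab.com; use a Personal Access Token with registry access as the password):

# Authenticate against the Gitlab Container Registry:
docker login registry.gitlab.com
# Push the locally built image; the path must match the "image:" field in .gitlab-ci.yml:
docker push registry.gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/stm32cubeide-ci:1.16.1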
After the push, the Docker Image appears in our Gitlab Project's Container Registry as stm32cubeide-ci:1.16.1.
Now, our Gitlab has a Docker Image with the STM32CubeIDE to build our project in the pipeline!
Utilizing the Pipeline: Block Merge Requests that break the Pipeline
From now on, every time we push changes, the pipeline triggers and compiles our project. To get the most out of the pipeline, it is good practice to set up Gitlab so that Merge Requests are blocked until all Pipelines have succeeded.
To do so, navigate to Settings → Merge Requests in your Gitlab Project, go to the section "Merge checks" and tick the box Pipelines must succeed. Now, branches that do not compile are blocked from merging into our development branch.
To showcase this, we added two Merge Requests to our Gitlab Demo.
One merge request compiles so the pipeline succeeds: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/merge_requests/3
The other merge request contains errors and the pipeline fails: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/merge_requests/1
Investigating why the pipeline fails
Let's investigate why the pipeline failed: Click on the red cross icon that indicates the pipeline failure. This takes us to an overview of all pipeline stages and their jobs. Now click on the build job to open a page with its logs.
../Core/Src/main.c:77:20: error: expected ';' before 'SystemClock_Config'
   77 |   init_my_project()
      |                    ^
      |                    ;
......
   82 |   SystemClock_Config();
There is a syntax error, the ; is missing! Now we have found the error that causes the pipeline to fail. If this were a real project, you would now fix the syntax error, check your local build and push once the local build succeeds.
Downloading the firmware from the Pipeline Build Job
Let's take a look at the other Merge Request's Pipeline again. When you navigate to the Job's logs, you can download the firmware binary with the download button on the right side: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/jobs/7956052399
The .gitlab-ci.yml describes which files are to be uploaded to the Gitlab download server, see here: https://gitlab.com/julian_honeysuckle/stm32cubeidepipelinedemo/-/blame/master/.gitlab-ci.yml?ref_type=heads#L13
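Artifacts are not limited to the web interface: Gitlab also exposes them through its REST API, which is handy for scripting deployments. A sketch (replace <project-id> and <job-id> with your values; private projects additionally need a token header):

# Download the artifact archive of a specific job as a zip file:
curl --location --output artifacts.zip \
    "https://gitlab.com/api/v4/projects/<project-id>/jobs/<job-id>/artifacts"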
It is good practice to only deploy firmware that was built by the pipeline. This way, we prevent deploying firmware that was built in an incorrect environment. Our Pipeline is the single source of truth, and only builds originating from our Pipeline are verifiably built in the correct environment!
Conclusion & Outlook
Now we have set up a Pipeline that automatically builds our STM32CubeIDE project on every change, and we added Merge checks that block branches that do not compile from being merged!
From here on out, feel free to add additional Pipeline jobs. Now it is time to brainstorm: How can we use Pipeline Automation to improve our team's productivity? Maybe we should add a job that checks our code formatting or a job for static code analysis.
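A formatting check, for example, often boils down to a single script line in a new Pipeline Job. A sketch, assuming clang-format and a .clang-format file are available in the job's image (neither is part of the demo repository):

# Fail with a non-zero exit code if any source file deviates from the configured style:
find STM32Project/Core \( -name '*.c' -o -name '*.h' \) -print0 \
    | xargs -0 clang-format --dry-run --Werror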
If you need help setting up your Pipeline or designing Pipelines for complicated projects with multiple MCUs, please contact us here: https://honeysuckle.dev/contact.html
Additional Resources
- Gitlab CI/CD: https://docs.gitlab.com/ee/ci/index.html
- .gitlab-ci.yml File Specification: https://docs.gitlab.com/ee/ci/yaml/index.html#stages
- Gitlab Container registry: https://docs.gitlab.com/ee/user/packages/container_registry/
- Docker documentation: https://docs.docker.com/
- STM32CubeIDE: https://www.st.com/en/development-tools/stm32cubeide.html