In this, my third GitLab-related article, I will show how I set up a CI/CD pipeline in GitLab CE for Maven-based applications that can produce and scan Docker images for release versions of an application. When I set out to create this pipeline, little did I know that I would also need to define a Git workflow, which I have come to realize is necessary in order to create a meaningful pipeline.
Requirements
Continuing in the spirit of the two previous GitLab articles I have written (first article here, second article here) I wanted all the parts of my setup to run in containers, including the GitLab runner that builds Docker images. In addition, I wanted to be able to rely entirely on GitLab and avoid having to use, for instance, an external repository manager like Nexus or a build server like Jenkins.
Some of the softer requirements are:
- A stable master branch that can be released at any time.
- It should not be possible for a build to pass if the code does not meet the required standards. This includes tests, code coverage, static code analysis using PMD and SpotBugs, and source-code style auditing using Checkstyle.
- It should only be possible to build a Docker image in GitLab from a release version. If a developer wants to build a Docker image containing a snapshot version, this will have to be done locally, on the developer’s computer.
Git Workflow
Let’s start with my Git workflow, since having seen it will make it easier to understand the GitLab pipeline. I am sure this is nothing new under the sun – while researching for this article I have seen several very good articles about git workflows. Another disclaimer is that this workflow is not universal; I have limited experience creating git workflows and do not aspire to create more than something to start evolving my own workflow from.
However, I believe every man, woman and child has the right to his or her own git workflow, and so here is mine.

A picture says more than a thousand words, they say, but let me describe the workflow in the above figure in words anyway.
- Working on a project, I start out with just the master branch. The example project starts at version 1.0.0-SNAPSHOT.
- As I want to add a feature to the application, I create a new branch. In the figure, the feature branch is green and will, like the master branch, be at version 1.0.0-SNAPSHOT.
- When the branch is created and, later, for every push to the remote branch (in GitLab), the common automatic build steps (blue in the above figure) will be executed. The common automatic build steps ensure that the project at all times adheres to code quality standards and that the source code is written according to the style verified by Checkstyle.
- When I am finished developing the feature I set out to add in the branch and all the steps of the CI/CD pipeline complete successfully, I create a merge request against the master branch.
- If I am not alone developing, I ask someone else to perform a code review of the code committed in the merge request.
- The feature branch (green) is merged into the master branch. This is done after the code review has been approved. If I am alone, I just merge the merge request. The feature branch is deleted as part of merging the branches.
- After the new feature has been implemented and merged into the master branch, I want to release a new version of the application. New releases should only be possible to create from the master branch. After the common automatic build steps have completed successfully, I trigger the release build step in the master branch. This causes a tag to be created with the name of the current version, 1.0.0 in this example. The version in the master branch is increased to the next snapshot version, 1.0.1-SNAPSHOT.
- In my case, a release is a tagged version that is not a snapshot version. In GitLab, it is not possible to modify the code in a tag, which is just the way I want it – once a release has been created, it should not be possible to modify it. If additions or modifications are to be made, a new release will have to be created.
- In a release tag there is an additional, automatic, build step that builds the release artifact. Assume that in this example this is a JAR file. Since I do not want to involve an additional artifact repository, like Nexus, I settle for having the release artifact retained in GitLab for a period of time, which is configurable in the GitLab CI/CD pipeline. If I want to have the artifact retained indefinitely, I can accomplish this in the GitLab GUI with the click of a button.
- In a release tag, there are three additional build steps which are all related to packaging the application in a Docker image. These build steps build a Docker image on the computer on which the GitLab runner runs, scan the Docker image for vulnerabilities using Clair and, after these two steps have both completed successfully, give you the opportunity to push the Docker image to a repository. In this article I will use Docker Hub as the Docker image repository. In this article all three of these steps are triggered manually, but any number of them can of course be made automatic.
- If the Docker image that the application/component uses as base image changes (for instance if using the “latest” tag), then with the above workflow it will be possible to create a new version of a Docker image that contains the same version of the application/component as a previous version of a Docker image. Thus one has to decide which strategy to use when tagging Docker images containing applications/components released using this workflow. Do you want to replace existing Docker images, or add a new one and preserve the previous ones (except for maybe the one with the “latest” tag)?
I leave the tagging-strategy discussion for now, but it is a very important question that should be answered before starting to produce Docker images to any larger extent.
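The branching and release steps above can be sketched with plain git commands. The following is a minimal local simulation of the workflow, in which a bare repository stands in for GitLab and the file, branch and version names are placeholders taken from the example:

```shell
# Simulate the workflow locally: a bare repository stands in for GitLab.
rm -rf /tmp/workflow-demo-origin.git /tmp/workflow-demo
git init --bare /tmp/workflow-demo-origin.git
git clone /tmp/workflow-demo-origin.git /tmp/workflow-demo
cd /tmp/workflow-demo
git config user.email "demo@example.com"
git config user.name "Demo"
# Make sure the unborn branch is named master regardless of git defaults.
git symbolic-ref HEAD refs/heads/master

# Master starts out at version 1.0.0-SNAPSHOT.
echo "1.0.0-SNAPSHOT" > version.txt
git add version.txt
git commit -m "Initial commit"
git push origin master

# Develop a feature on its own branch and merge it back into master.
git checkout -b feature/greeting
echo "hello" > greeting.txt
git add greeting.txt
git commit -m "Add greeting feature"
git checkout master
git merge --no-ff feature/greeting -m "Merge feature/greeting"
git branch -d feature/greeting

# Release: tag the release version, then move master to the next snapshot.
echo "1.0.0" > version.txt
git commit -am "Create release version"
git tag -a 1.0.0 -m "Create release version tag"
echo "1.0.1-SNAPSHOT" > version.txt
git commit -am "Create next snapshot version"
git push origin master --tags
```

In the real pipeline, the two version bumps are of course performed by Maven against the pom.xml rather than a text file, as shown later in this article.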
Prerequisites
This article assumes that you have GitLab set up as described in my previous GitLab article.
Updated GitLab Runner Configuration
After many failed attempts at building a Docker image using Maven in a GitLab CI/CD pipeline I settled on using Docker socket binding in order to make Docker available to GitLab Runner job containers. There are advantages and disadvantages to this approach, but so far it is the only approach that has worked for me.
A minor modification to the GitLab Runner configuration is required in order for the Docker socket from the host to be present in GitLab Runner job containers. The complete GitLab Runner register command is shown below, with the last line, which binds the Docker socket, being the addition.
gitlab-runner register \
    --executor="docker" \
    --custom_build_dir-enabled="true" \
    --docker-image="maven:3.6.1-jdk-11" \
    --url="http://gitlab:80" \
    --clone-url="http://gitlab:80" \
    --registration-token="vWtwwQgdPSEzTPNTGZnq" \
    --description="docker-runner" \
    --tag-list="docker" \
    --run-untagged="true" \
    --locked="false" \
    --docker-network-mode="gitlabnetwork" \
    --cache-dir="/cache" \
    --docker-disable-cache="true" \
    --docker-volumes="gitlab-runner-builds:/builds" \
    --docker-volumes="gitlab-runner-cache:/cache" \
    --docker-privileged="true" \
    --docker-volumes="/var/run/docker.sock:/var/run/docker.sock"
Looking at the GitLab Runner configuration file config.toml, located in the directory /etc/gitlab-runner/ in the GitLab Runner container, it should look like this:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-runner"
  request_concurrency = 1
  url = "http://gitlab:80"
  token = "_eKPhYvhfE4tzZqU_SwX"
  executor = "docker"
  cache_dir = "/cache"
  clone_url = "http://gitlab:80"
  [runners.custom_build_dir]
    enabled = true
  [runners.docker]
    tls_verify = false
    image = "maven:3.6.1-jdk-11"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = true
    volumes = ["gitlab-runner-builds:/builds", "gitlab-runner-cache:/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    network_mode = "gitlabnetwork"
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
The GitLab Runner configuration can be modified and will be automatically reloaded when saved. This is the approach I will use in this article to modify the configuration of the GitLab Runner. Alternatively, any existing GitLab runner(s) that you want to replace can be deleted and a new runner registered.
- Open a terminal window or re-use an already open terminal window. This window has to be opened on the Docker host on which the GitLab Runner container is located.
- If you do not know the name of the container in which the GitLab Runner is running, issue the following command to obtain the container name:
sudo docker ps -a
- Enter the container by executing the following command in the terminal window, replacing the container name if it is different:
sudo docker exec -it gitlabce_gitlab-runner_1 bash
- In the container execute the following command to edit the GitLab Runner configuration file:
vi /etc/gitlab-runner/config.toml
- Add “/var/run/docker.sock:/var/run/docker.sock”, as seen above, to the volumes entry under [runners.docker].
- Save the file using the :wq key combination.
- Exit the container by issuing the exit command.
- Examine the logs of the GitLab Runner container using the following command. Modify the name of your GitLab Runner container if it does not match.
sudo docker logs gitlabce_gitlab-runner_1
- In the log output you should see a line that looks like this:
Configuration loaded builds=0
The GitLab Runner configuration has now been updated successfully.
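One quick way to confirm that the socket binding works is a throwaway pipeline job that queries the host’s Docker daemon from inside a job container. The job below is a sketch of my own and not part of the original setup; the docker:stable image is used here since it already contains a Docker client:

```yaml
# Temporary job in a project's .gitlab-ci.yml: it succeeds only if the
# host's Docker socket is reachable from inside the job container.
verify-docker-socket:
  image: docker:stable
  script:
    - docker info
```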
Sample Project
In order to have something to build, I have put together a sample project. If you already have a Maven-based project of some kind, then feel free to use that project instead!
To obtain a copy of my sample project as it was prior to adding a GitLab CI/CD pipeline, please clone the before branch of the git-workflow-sampleapp repository found here. The master branch contains the final result with the GitLab CI/CD pipeline.
GitLab CI/CD Pipeline Prerequisites
GitLab CI/CD pipelines do not implement the different tasks associated with my git workflow. For this you need a tool for build automation; in my case I will use Apache Maven. I have chosen to have two different Maven profiles in my pom.xml file: one that contains the build plugins and their configuration used in connection with code quality measures, and another profile that contains the build plugins and their configuration used to create a Docker image containing the application.
The reason for using Maven profiles is to separate the configuration used by a CI/CD pipeline and the configuration used when building Docker images from other build plugin configuration that one may wish to include in a project.
CI/CD Maven Profile
The CI/CD Maven profile that contains the configuration of plugins associated with maintaining code quality looks like this:
<profile>
    <id>cicdprofile</id>
    <properties>
        <javadoc.opts>-Xdoclint:none</javadoc.opts>
    </properties>
    <build>
        <plugins>
            <!-- Code coverage. -->
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.4</version>
                <executions>
                    <execution>
                        <id>default-prepare-agent</id>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>default-report</id>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>default-check</id>
                        <goals>
                            <goal>check</goal>
                        </goals>
                        <configuration>
                            <rules>
                                <rule>
                                    <element>CLASS</element>
                                    <excludes>
                                    </excludes>
                                    <limits>
                                        <limit>
                                            <counter>LINE</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>70%</minimum>
                                        </limit>
                                    </limits>
                                </rule>
                                <rule>
                                    <element>BUNDLE</element>
                                    <excludes>
                                    </excludes>
                                    <limits>
                                        <limit>
                                            <counter>CLASS</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>90%</minimum>
                                        </limit>
                                    </limits>
                                </rule>
                            </rules>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Code quality: PMD -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-pmd-plugin</artifactId>
                <version>3.12.0</version>
                <configuration>
                    <targetJdk>${java.version}</targetJdk>
                    <includeTests>true</includeTests>
                    <failOnViolation>true</failOnViolation>
                    <printFailingErrors>true</printFailingErrors>
                </configuration>
            </plugin>
            <!-- Code quality: Spotbugs -->
            <plugin>
                <groupId>com.github.spotbugs</groupId>
                <artifactId>spotbugs-maven-plugin</artifactId>
                <version>3.1.12</version>
                <configuration combine.self="append">
                    <includeTests>true</includeTests>
                    <effort>Max</effort>
                    <threshold>Low</threshold>
                    <failOnError>true</failOnError>
                </configuration>
            </plugin>
            <!-- Source code style checking. -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-checkstyle-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <configLocation>ivans_checkstyle_config.xml</configLocation>
                    <includeTestSourceDirectory>true</includeTestSourceDirectory>
                    <excludes>module-info.java</excludes>
                    <failOnViolation>true</failOnViolation>
                    <failsOnError>true</failsOnError>
                    <consoleOutput>true</consoleOutput>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
We can see that:
- There are four plugins in the cicdprofile Maven profile. These are Jacoco (code coverage), PMD (static code analysis), Spotbugs (static code analysis) and Checkstyle (source-code style checking).
- The Jacoco code coverage plugin is configured to require a minimum coverage of 70% of the lines in each class and a minimum coverage of 90% of the classes in a bundle. If these coverage requirements are not met, the build will fail.
- There are empty <excludes> elements in the Jacoco coverage plugin configuration. These allow for excluding classes etc. from the code coverage requirements.
- The PMD code analysis plugin is configured with the Java JDK version used by the project, to include test code in the analysis, to fail the build if there are violations and, finally, to log the violations that cause a build to fail to the console.
- The Spotbugs code analysis plugin is also configured to analyse test code and to fail the build if any issues are discovered.
- The Checkstyle plugin uses a local configuration stored in a file named ivans_checkstyle_config.xml. This is just for the sake of this example. Normally you would have this configuration accessible over HTTP and insert a link to the configuration in the <configLocation> element in the Checkstyle plugin configuration.
- The Checkstyle plugin is configured to:
  - Scan the test source code.
  - Exclude the module-info.java file from scanning.
  - Fail on violations and errors.
  - Output violations and errors to the console when scanning.
With this profile in place, it is possible to manually execute the code quality measures using the following commands in a terminal window:
- mvn -Pcicdprofile checkstyle:check
  Verifies that the source code adheres to, in this particular case, the Ivan Coding Style or whatever coding style Checkstyle is configured to verify.
- mvn -Pcicdprofile pmd:check
  Performs source-code analysis checking for common programming flaws.
- mvn -Pcicdprofile spotbugs:check
  Performs static code analysis looking for common bug patterns.
- mvn -Pcicdprofile install
  Ensures that the minimum amount of code covered by tests is reached.
Docker Maven Profile
The Docker Maven profile that contains plugins needed to build Docker images containing the application looks like this:
<!-- Builds a Docker image containing the Spring Boot application.
     Start a container with the SYS_TIME capability, in order for time
     synchronization in the container to work properly. -->
<profile>
    <id>dockerimage</id>
    <properties>
        <!-- Name of Docker image that will be built. -->
        <docker.image.name>hello-webapp</docker.image.name>
        <!-- Directory that holds Docker file and static content necessary
             to build the Docker image. -->
        <docker.image.src.root>src/main/docker</docker.image.src.root>
        <!-- Directory to which the Docker image artifacts and the Docker file
             will be copied and which will serve as the root directory when
             building the Docker image. -->
        <docker.build.directory>${project.build.directory}/dockerimgbuild</docker.build.directory>
        <!-- URL to the Docker host used to build the Docker image. -->
        <docker.host.url>unix://var/run/docker.sock</docker.host.url>
        <!-- Name of the Dockerfile the Docker image will be built from. -->
        <docker.file.name>Dockerfile</docker.file.name>
    </properties>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-resources-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-resources</id>
                        <phase>package</phase>
                        <goals>
                            <goal>copy-resources</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${docker.build.directory}</outputDirectory>
                            <resources>
                                <resource>
                                    <directory>${docker.image.src.root}</directory>
                                    <filtering>false</filtering>
                                </resource>
                            </resources>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Copy the JAR file containing the Spring Boot application to
                 the application/lib directory. -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy</id>
                        <phase>package</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <!-- Specify groupId, artifactId, version and type for the
                                         artifact you want to package in the Docker image.
                                         In the case of a Spring Boot application these are the
                                         same as the project group id, artifact id and version. -->
                                    <groupId>${project.groupId}</groupId>
                                    <artifactId>${project.artifactId}</artifactId>
                                    <version>${project.version}</version>
                                    <type>jar</type>
                                    <overWrite>true</overWrite>
                                    <outputDirectory>${docker.build.directory}/application/lib</outputDirectory>
                                    <!-- Specify the destination name as to have one and the same
                                         name to refer to in the Dockerfile. -->
                                    <destFileName>hello-webapp.jar</destFileName>
                                </artifactItem>
                                <!-- Add additional artifacts to be packaged in the Docker image here. -->
                            </artifactItems>
                            <outputDirectory>${docker.build.directory}</outputDirectory>
                            <overWriteReleases>true</overWriteReleases>
                            <overWriteSnapshots>true</overWriteSnapshots>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Build the Docker image. -->
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.19.0</version>
                <configuration>
                    <dockerHost>${docker.host.url}</dockerHost>
                    <images>
                        <image>
                            <name>${docker.image.name}</name>
                            <build>
                                <tags>
                                    <tag>${project.version}</tag>
                                    <tag>latest</tag>
                                </tags>
                                <dockerFile>${docker.build.directory}/${docker.file.name}</dockerFile>
                            </build>
                        </image>
                    </images>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
I have written an article earlier about building Docker images with Maven and will not repeat that information here. However, there are two things to note in the profile:
- The Docker host URL is unix://var/run/docker.sock. Thus the Docker Maven plugin will communicate with the Docker service over the UNIX Docker socket. The reason for this is, of course, that the GitLab Runner exposes the host’s UNIX Docker socket in job containers.
- The version of the Docker Maven plugin is 0.19.0. As of writing, the latest version of this plugin is 0.30.0. The reason for not using the latest version yet is that there have been changes to the stage at which the plugin is executed, and this causes the Docker image built to be incomplete. I am sure that it is possible to use newer versions of the Docker Maven plugin; I just haven’t taken the time to sit down and figure it out.
GitLab Access Token
The GitLab CI/CD pipeline I have devised needs to push changes back to the repository. Regretfully, the GitLab token conveyed to GitLab pipeline jobs does not have write access, so another token, one that does have write access, needs to be supplied. In preparation for this, we here create this token:
- Log in to GitLab if you haven’t already.
- Click the current user’s icon in the upper right corner and select the Settings item.
- In the User Settings menu on the left, click the Access Tokens item.
- Enter a name for the new access token. I will call mine “gitlab-cicd-write-token”.
- Select a scope for the access token. I will select the api scope, so as to ensure that the pipeline will have complete read/write access. However, the write_repository scope may be sufficient.
- When you are finished, click the Create personal access token button.
- Copy the new personal access token and store it someplace safe. This is the only time GitLab will show you the access token.
Create A Project Variable For The Access Token
We will not insert the access token directly into the GitLab pipeline configurations but instead store it in a GitLab project variable. This variable will be passed to GitLab pipeline jobs executing the project’s pipeline.
- Go to the project’s page in GitLab and click Settings in the menu on the left.
- In the Settings, select the CI/CD item.
- Click the Expand button associated with the Variables section.
- Make sure that the Type of the new variable is Variable (and not File).
- Enter a name (key) for the variable. I will call my variable GITLAB_CICD_TOKEN.
- Paste the access token created earlier into the Value field.
- Make sure that the variable is not protected.
- Make sure that the Masked switch for the variable is enabled. This will hide the value of the variable in log output, for instance in runner job logs.
- Click the Save variables button.

There is now a variable named GITLAB_CICD_TOKEN that contains an access token that allows writing to GitLab repositories. This variable will be passed as an environment variable to all GitLab CI/CD pipeline jobs associated with the project.
Note!
In this example I have used a project variable for the access token, since I have but a single user and no groups of users. This will soon become tiresome in a GitLab installation with many projects. The recommended approach is to store the access token in a group-level environment variable.
GitLab CI/CD Pipeline
With the Maven profiles presented above in place and the access token safely stored in a GitLab variable, we can now create a GitLab CI/CD pipeline configuration that uses these profiles to automate the git workflow presented earlier.
Without further ado, I’ll show the complete .gitlab-ci.yml file and then explain its different parts.
image: maven:3.6.1-jdk-11

variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end"
  DOCKER_IMAGE_TO_SCAN: hello-webapp:latest

# Cache the Maven repository so that each job does not have to download it.
cache:
  key: mavenrepo
  paths:
    - ./.m2/repository/

stages:
  - build
  - release
  - create_docker_image
  - scan_docker_image
  - push_docker_image

# Run tests.
test:
  stage: build
  script:
    - 'mvn $MAVEN_CLI_OPTS install'

# Checkstyle source code standard review.
checkstyle:
  stage: build
  script:
    - 'mvn $MAVEN_CLI_OPTS -Pcicdprofile checkstyle:check'

# PMD code quality analysis.
pmd:
  stage: build
  script:
    - 'mvn $MAVEN_CLI_OPTS -Pcicdprofile pmd:check'

# SpotBugs code quality analysis.
spotbugs:
  stage: build
  script:
    - 'mvn $MAVEN_CLI_OPTS -Pcicdprofile spotbugs:check'

# Test code coverage analysis.
code-coverage:
  stage: build
  script:
    - 'mvn $MAVEN_CLI_OPTS -Pcicdprofile install'

# Supplies the option to perform Maven releases from the master branch.
# Releases need to be triggered manually in the GitLab CI/CD pipeline.
master-release:
  stage: release
  when: manual
  script:
    - git config --global user.email "gitlab@ivankrizsan.se"
    - git config --global user.name "GitLab CI/CD"
    # Fix the repository URL, replacing any host, localhost in my case, with gitlab.
    # Note that gitlab is the name of the container in which GitLab is running.
    # Insert GitLab access token into URL so release tag and next snapshot version
    # can be pushed to the repository.
    - export NEW_REPO_URL=$(echo $CI_REPOSITORY_URL | sed 's/@[^/]*/@gitlab/' | sed 's/\(http[s]*\):\/\/[^@]*/\1:\/\/oauth2:'$GITLAB_CICD_TOKEN'/')
    # Debug git interaction.
    - 'export GIT_TRACING=2'
    - 'export GIT_CURL_VERBOSE=1'
    # Remove the SNAPSHOT from the project's version, thus creating the release version number.
    - 'mvn $MAVEN_CLI_OPTS versions:set -DremoveSnapshot -DprocessAllModules=true'
    - 'export RELEASE_VERSION=$(mvn --batch-mode --no-transfer-progress --non-recursive help:evaluate -Dexpression=project.version | grep -v "\[.*")'
    - 'echo "Release version: $RELEASE_VERSION"'
    # Push the release version to a new tag.
    # This relies on the .m2 directory containing the Maven repository
    # in the build directory being included in the .gitignore file in the
    # project, since we do not want to commit the contents of the Maven repository.
    - 'git add $CI_PROJECT_DIR'
    - 'git commit -m "Create release version"'
    - 'git tag -a $RELEASE_VERSION -m "Create release version tag"'
    - 'git remote set-url --push origin $NEW_REPO_URL'
    - 'git push origin $RELEASE_VERSION'
    # Update master branch to next snapshot version.
    # If automatic building of the master branch is desired, remove
    # the "[ci skip]" part in the commit message.
    - 'git checkout master'
    - 'git reset --hard "origin/master"'
    - 'git remote set-url --push origin $NEW_REPO_URL'
    - 'mvn $MAVEN_CLI_OPTS versions:set -DnextSnapshot=true -DprocessAllModules=true'
    - 'git add $CI_PROJECT_DIR'
    - 'git commit -m "Create next snapshot version [ci skip]"'
    - 'git push origin master'
  only:
    - master

# Builds release version tags so as to create release artifact(s).
# Artifacts are retained 2 weeks if the Keep button in the web GUI
# is not clicked, in which case they will be retained forever.
release-build:
  stage: release
  script:
    - 'mvn $MAVEN_CLI_OPTS install'
  only:
    - /^\d+\.\d+\.\d+$/
    - tags
  artifacts:
    paths:
      - target/*.jar
    expire_in: 2 weeks

# Build a Docker image.
# Action can be manually triggered in the GitLab CI/CD pipeline
# of release tags.
create-docker-image:
  stage: create_docker_image
  when: manual
  before_script:
    # Install a Docker client in the container so as to be able to build Docker image(s).
    - wget -q https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
    - tar zxvf docker*.tgz
    - cp docker/docker /usr/local/bin/docker
  script:
    - mvn -Pdockerimage docker:build
  only:
    - /^\d+\.\d+\.\d+$/
    - tags

# Scan the (local) Docker image using Clair.
scan-docker-image:
  image: docker:stable
  stage: scan_docker_image
  when: manual
  variables:
    DOCKER_HOST: unix:///var/run/docker.sock
    CLAIR_DB_CONTAINER_NAME: clairdb_$CI_CONCURRENT_PROJECT_ID
    CLAIR_CONTAINER_NAME: clair_$CI_CONCURRENT_PROJECT_ID
  before_script:
    # Start an instance of PostgreSQL with a pre-populated Clair DB.
    - docker run -d --name $CLAIR_DB_CONTAINER_NAME --network=gitlabnetwork arminc/clair-db:latest
    # Start the Clair server.
    - docker run -p 6060:6060 --link $CLAIR_DB_CONTAINER_NAME:postgres --network=gitlabnetwork -d --name $CLAIR_CONTAINER_NAME --restart on-failure arminc/clair-local-scan:v2.0.6
    # Download Clair scanner client.
    - wget -nv -qO clair-scanner https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - chmod +x clair-scanner
  script:
    # Scan the Docker image built in the previous step using Clair.
    - ./clair-scanner --ip="$(hostname -i)" -c "http://$CLAIR_CONTAINER_NAME:6060" $DOCKER_IMAGE_TO_SCAN
  after_script:
    # Stop and remove the Clair DB container.
    - if docker stop $CLAIR_DB_CONTAINER_NAME ; then echo "Clair DB container stopped"; else echo "There is no Clair DB container to stop"; fi
    - if docker rm $CLAIR_DB_CONTAINER_NAME ; then echo "Clair DB container removed"; else echo "There is no Clair DB container to remove"; fi
    # Remove the Clair container.
    - if docker stop $CLAIR_CONTAINER_NAME ; then echo "Clair container stopped"; else echo "There is no Clair container to stop"; fi
    - if docker rm $CLAIR_CONTAINER_NAME ; then echo "Clair container removed"; else echo "There is no Clair container to remove"; fi
  only:
    - /^\d+\.\d+\.\d+$/
    - tags

# Push the Docker image to a repository.
# In this example the repository is DockerHub.
push-docker-image:
  stage: push_docker_image
  when: manual
  script:
    - docker login --username ivan --password secret
    - docker push $DOCKER_IMAGE_TO_SCAN
  only:
    - /^\d+\.\d+\.\d+$/
    - tags
Note that:
- The Docker image from which to create the containers in which GitLab pipeline jobs are executed is maven:3.6.1-jdk-11. The project is Maven-based and developed using Java 11, thus the use of a Maven Docker image with Java 11. This image needs to be selected depending on the build system and, possibly, the Java version used by your project.
- The variables MAVEN_OPTS and MAVEN_CLI_OPTS contain Maven configuration options. The MAVEN_OPTS environment variable contains parameters used when starting up the JVM in which Maven will run. These parameters are always applied when Maven is run. The MAVEN_CLI_OPTS environment variable contains common, optional, Maven command-line parameters that can be included when running builds, as we will see below.
- The Maven repository is cached with the key “mavenrepo”. This reduces the number of dependencies each job has to download, for instance when building applications with Maven. Note that the cache is not distributed; if there are multiple nodes with GitLab Runners, each node will have its own cache. Using a fixed key for the cache allows sharing the cached Maven repository between jobs building different applications.
- The build stage consists of the test, checkstyle, pmd, spotbugs and code-coverage steps. These steps are applied to all branches and tags every time a change is pushed to the repository. To optimize, one may choose to apply these steps only on the master branch.
- There are two steps in the release stage: the master-release and the release-build steps.
- The master-release step needs to be triggered manually and will only be available if the pipeline is run on the master branch.
- The master-release step creates a tag in the GitLab repository containing a release and then updates the master branch to the next snapshot version. If the master branch is at 1.0.1-SNAPSHOT, a tag 1.0.1 will be created and the new version in the master branch will be 1.0.2-SNAPSHOT.
- The release-build step is executed automatically and will only be executed on tags whose names are version numbers.
- In my release-build step, a release version of the project is built and the artifact(s) created by the build are retained so as to be downloadable from GitLab. Since the example project is a Spring Boot application, the only artifact created is a single JAR file. If you have, for instance, a Nexus repository, then this is the place where you would deploy the artifact(s) created by the build to that repository. This could be accomplished with the following Maven command:
  mvn $MAVEN_CLI_OPTS -Dusername=$NEXUS_USERNAME -Dpassword=$NEXUS_PASSWORD -DskipTests=true deploy
  Where NEXUS_USERNAME and NEXUS_PASSWORD are GitLab variables containing login credentials for a user with write access to a Nexus repository.
- There are three Docker-related stages: create_docker_image, scan_docker_image and push_docker_image. The reason for there being three separate stages is to ensure the ordering of these steps: before a Docker image can be pushed, it needs to be scanned and, of course, before a Docker image can be scanned, it must be created. This will, until someone tampers with the GitLab CI/CD configuration, ensure the quality of the Docker images being pushed to the Docker repository by not allowing images that do not pass the scan to be pushed.
- The Docker image is created using Maven. Note that a Docker client is downloaded into the container running this step. The container is created from the maven:3.6.1-jdk-11 image and does not contain Docker. Since the GitLab Runner is started sharing the UNIX Docker socket from the host in job containers, the Docker client in this container will connect to the Docker daemon on the host computer.
- The (local) Docker image is scanned using Clair. Here, I stand on the shoulders of giants and rely heavily on the work of Armin Coralic, among others. Since I do not want a standalone Clair server, which would be the most sensible option if I were doing this for a larger group of developers, I chose to start a Clair server in one container with a pre-populated PostgreSQL database for Clair in another container.
- Both the Clair container and the Clair DB container need to join the same network as the GitLab runner, since the clair-scanner client will run in a GitLab runner container, which also joins this network. The clair-scanner client will send requests to the Clair server, and this will fail if the containers in which they run are not in the same network.
- The Clair server container is started with the option --restart on-failure. This is to ensure that the Clair server will try again if it fails to connect to the Clair database the first time.
- The clair-scanner client is downloaded and made executable.
- The Docker image created earlier is scanned. Two parameters are given to the clair-scanner: the first one is the IP address of the computer/container on which the clair-scanner is executed. The second parameter is the URL of the Clair server. Note that the name of the Docker container in which Clair runs is used in this URL. This is possible since the Clair server and the GitLab runner are in the same network, the gitlabnetwork.
- The after-script in the scan_docker_image stage stops and removes the Clair and Clair DB containers. This cleans up to ensure that there are no lingering containers after the completion of the GitLab pipeline stage.
- The final Docker-related stage pushes the Docker image to a repository. In this example, the Docker image is pushed to DockerHub.
- All the Docker-related stages in the GitLab pipeline are only executed on tags whose names are version numbers.
Final Words
With the above GitLab CI/CD pipeline in place, the implementation of my git workflow is now complete.
Note that you may want to refine scanning of Docker images by ignoring vulnerabilities below a certain level and/or using a whitelist. An alternative is to allow the scan Docker image job in the pipeline to fail, using the output from Clair as reference only.
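For reference, clair-scanner supports both of these refinements: a severity threshold via its -t/--threshold option and a whitelist file via -w/--whitelist. A whitelist file could look like the sketch below; the CVE numbers and reasons are placeholders of my own, and the flag names should be checked against the clair-scanner version you use:

```yaml
# Hypothetical whitelist file, e.g. whitelist.yaml, passed to the scanner
# with something like: ./clair-scanner -w whitelist.yaml ...
generalwhitelist:
  # CVEs approved for any image - placeholder entries.
  CVE-2019-0000: Not applicable to this application
images:
  hello-webapp:
    # CVEs approved only for the hello-webapp image.
    CVE-2019-0001: Mitigated by network configuration
```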
Happy coding!
This is really good – I’m trying to adapt to work with git-flow (so we work on develop and merge releases into master) but failing!
Have you ever used a version with master and develop branches? Seems to get tricky as we need to update Pom.xml in both?
Hello!
No, so far I have not tried a workflow with both master and develop branches. I envision the possibility of multiple developers working on the same project at the same time, each having his/her own branch.
I may have to figure something out for git-flow in the future, in which case I will post a follow-up article.
Hey man, nice work.
I've tried to use the CI but I'm getting this error on the runner's docker network
ERROR: Job failed (system failure): Error response from daemon: network gitlabnetwork not found (docker.go:881:0s)
Do you know what it can be ?
I'm using the gitlab and runner docker-compose that you posted (https://www.ivankrizsan.se/2019/06/17/building-in-docker-containers-on-gitlab-ce/)
This is the best maven3 cicd guide out there. Answers all my questions and fits all my needs. Thank you very much. I have wasted a lot of time making things work with the maven flatten plugin, but this is just too elegant not to go for.
Great post and helps a lot to understand how versioning works with maven. Will this also work with an application with dependent modules and their pom files ?
I have used this pipeline with multi-module Maven applications without any issues, if that is what you are asking about.
Happy coding!