Continuous Integration
======================

GitLab CI
---------

GitLab provides a convenient framework for running commands in response to Git pushes.
We use it to test merge requests (MRs) before merging them (pre-merge testing),
as well as for post-merge testing of everything that hits ``master``
(this is necessary because we still allow commits to be pushed outside of MRs,
and even for MRs the CI runs in the forked repository, which might have been
modified and thus is unreliable).

The CI runs a number of tests, from trivial build-testing to complex GPU rendering:

- Build testing for a number of build systems, configurations and platforms
- Sanity checks (``meson test`` & ``scons check``)
- Some drivers (softpipe, llvmpipe, freedreno and panfrost) are also tested
  using `VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__
- Replay of application traces

A typical run takes between 20 and 30 minutes, although it can take much longer
if the GitLab runners are overwhelmed, which happens sometimes. When that happens,
not much can be done besides waiting it out or cancelling the run.

Due to limited resources, we currently do not run the CI automatically
on every push; instead, we only run it automatically once the MR has
been assigned to ``Marge``, our merge bot.

If you're interested in the details, the main configuration file is ``.gitlab-ci.yml``,
and it references a number of other files in ``.gitlab-ci/``.
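As a purely illustrative sketch (the file names below are hypothetical, and the
real configuration may reference scripts from within jobs rather than through
``include:``), GitLab's ``include:`` keyword is how a top-level file can pull
in others:

.. code-block:: yaml

    # Hypothetical example of splitting the pipeline definition across files
    # under .gitlab-ci/; the actual file names in Mesa differ.
    include:
      - local: '.gitlab-ci/container.yml'
      - local: '.gitlab-ci/build.yml'
      - local: '.gitlab-ci/test.yml'

Starting from the top-level file and following such references is the quickest
way to see which jobs exist and what scripts they run.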

If the GitLab CI doesn't seem to be running on your fork (or MRs, as they run
in the context of your fork), you should check the "Settings" of your fork.
Under "CI / CD" → "General pipelines", make sure "Custom CI config path" is
empty (or set to the default ``.gitlab-ci.yml``), and that the
"Public pipelines" box is checked.

If you're having issues with the GitLab CI, your best bet is to ask
about it on ``#freedesktop`` on Freenode and tag `Daniel Stone
<https://gitlab.freedesktop.org/daniels>`__ (``daniels`` on IRC) or
`Eric Anholt <https://gitlab.freedesktop.org/anholt>`__ (``anholt`` on
IRC).

The three GitLab CI systems currently integrated are:


.. toctree::
   :maxdepth: 1

   bare-metal
   LAVA
   docker

Intel CI
--------

The Intel CI is not yet integrated into the GitLab CI.
For now, special access must be manually given (file an issue in
`the Intel CI configuration repo <https://gitlab.freedesktop.org/Mesa_CI/mesa_jenkins>`__
if you think you or Mesa would benefit from you having access to the Intel CI).
Results can be seen on `mesa-ci.01.org <https://mesa-ci.01.org>`__
if you are *not* an Intel employee, but if you are you
can access a better interface on
`mesa-ci-results.jf.intel.com <http://mesa-ci-results.jf.intel.com>`__.

The Intel CI runs a much larger array of tests, on a number of generations
of Intel hardware and on multiple platforms (X11, Wayland, DRM & Android),
with the purpose of detecting regressions.
Tests include
`Crucible <https://gitlab.freedesktop.org/mesa/crucible>`__,
`VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__,
`dEQP <https://android.googlesource.com/platform/external/deqp>`__,
`Piglit <https://gitlab.freedesktop.org/mesa/piglit>`__,
`Skia <https://skia.googlesource.com/skia>`__,
`VkRunner <https://github.com/Igalia/vkrunner>`__,
`WebGL <https://github.com/KhronosGroup/WebGL>`__,
and a few other tools.
A typical run takes between 30 minutes and an hour.

If you're having issues with the Intel CI, your best bet is to ask about
it on ``#dri-devel`` on Freenode and tag `Clayton Craft
<https://gitlab.freedesktop.org/craftyguy>`__ (``craftyguy`` on IRC) or
`Nico Cortes <https://gitlab.freedesktop.org/ngcortes>`__ (``ngcortes``
on IRC).

.. _CI-farm-expectations:

CI farm expectations
--------------------

To make sure that testing of one vendor's drivers doesn't block
unrelated work by other vendors, we require that a given driver's test
farm produces a spurious failure no more than once a week.  If every
driver had CI and failed once a week, we would be seeing someone's
code getting blocked on a spurious failure daily, which is an
unacceptable cost to the project.

Additionally, the test farm needs to be able to provide a short enough
turnaround time that we can get our MRs through marge-bot without the
pipeline backing up.  As a result, we require that the test farm be
able to handle a whole pipeline's worth of jobs in less than 15 minutes
(to compare, the build stage is about 10 minutes).

If a test farm is short on the hardware needed to provide these guarantees,
consider dropping tests to reduce runtime.
``VK-GL-CTS/scripts/log/bottleneck_report.py`` can help you find what
tests were slow in a ``results.qpa`` file.  Or, you can have a job with
no ``parallel`` field set and:

.. code-block:: yaml

    variables:
      CI_NODE_INDEX: 1
      CI_NODE_TOTAL: 10

to just run 1/10th of the test list.
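As a sketch of the difference (the job and template names here are made up and
do not match Mesa's real job list):

.. code-block:: yaml

    # Full run: GitLab's "parallel:" creates 10 instances of the job and sets
    # CI_NODE_INDEX/CI_NODE_TOTAL on each one automatically.
    a630_gles31:
      extends: .a630-test
      parallel: 10

    # Reduced run: no "parallel:" field, so the sharding variables are set by
    # hand and only 1/10th of the test list gets executed.
    a630_gles31_smoke:
      extends: .a630-test
      variables:
        CI_NODE_INDEX: 1
        CI_NODE_TOTAL: 10

Either way, the aim is to keep the farm's total contribution to a pipeline
within the 15-minute budget described above.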

If a HW CI farm goes offline (network dies and all CI pipelines end up
stalled) or its runners are consistently spuriously failing (disk
full?), and the maintainer is not immediately available to fix the
issue, please push through an MR disabling that farm's jobs by adding
'.' to the front of the job names until the maintainer can bring
things back up.  If this happens, the farm maintainer should provide a
report to mesa-dev@lists.freedesktop.org after the fact explaining
what happened and what the mitigation plan is for that failure next
time.
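For example (the job name and its contents are hypothetical), renaming the job
in ``.gitlab-ci.yml`` like this turns it into a hidden job that GitLab will no
longer run in any pipeline:

.. code-block:: yaml

    # The leading '.' makes GitLab treat this as a hidden (template-only) job,
    # so it stops being scheduled until the rename is reverted.
    .arm64_a630_gles2:
      extends: .arm64-test
      variables:
        DEQP_VER: gles2

Reverting that one-character rename is all it takes to re-enable the jobs once
the farm is healthy again.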

Personal runners
----------------

Mesa's CI is currently run primarily on packet.net's m1xlarge nodes
(2.2 GHz Sandy Bridge), with each job getting 8 cores allocated.  You
can speed up your personal CI builds (and marge-bot merges) by using a
faster personal machine as a runner.  You can find the gitlab-runner
package in Debian, or use GitLab's own builds.

To do so, follow `GitLab's instructions
<https://docs.gitlab.com/ce/ci/runners/#create-a-specific-runner>`__ to
register your personal GitLab runner in your Mesa fork.  Then, tell
Mesa how many jobs it should serve (``concurrent=``) and how many
cores those jobs should use (``FDO_CI_CONCURRENT=``) by editing these
lines in ``/etc/gitlab-runner/config.toml``, for example::

  concurrent = 2

  [[runners]]
    environment = ["FDO_CI_CONCURRENT=16"]


Docker caching
--------------

The CI system uses Docker images extensively to cache
infrequently-updated build content like the CTS.  The `freedesktop.org
CI templates
<https://gitlab.freedesktop.org/freedesktop/ci-templates/>`_ help us
manage the building of the images to reduce how frequently rebuilds
happen, and trim down the images (stripping out manpages, cleaning the
apt cache, and other such common pitfalls of building Docker images).

When running a container job, the templates will look for an existing
build of that image in the container registry under
``FDO_DISTRIBUTION_TAG``.  If it's found it will be reused, and if
not, the associated ``.gitlab-ci/containers/<jobname>.sh`` will be run
to build it.  So, when developing any change to container build
scripts, you need to update the associated ``FDO_DISTRIBUTION_TAG`` to
a new unique string.  We recommend using the current date plus some
string related to your branch (so that if you rebase on someone else's
container update from the same day, you will get a Git conflict
instead of silently reusing their container).
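Putting those pieces together, a container job built on the freedesktop.org
templates might look roughly like the sketch below.  The job name, distribution
version and tag value are all illustrative, and the exact template and variable
names come from the ci-templates project rather than from this document:

.. code-block:: yaml

    # Hypothetical container-build job using the ci-templates helpers.
    debian/x86_build:
      extends: .fdo.container-build@debian
      variables:
        FDO_DISTRIBUTION_VERSION: buster-slim
        # Current date plus a branch-specific suffix, so rebasing onto someone
        # else's same-day container update causes a Git conflict instead of
        # silently reusing their image.
        FDO_DISTRIBUTION_TAG: "2020-09-15-my-branch-cts-uprev"

Whenever the container build script changes, bumping this tag is what triggers
a rebuild; forgetting to bump it means the job silently keeps using the old
image.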

When developing a given change to your Docker image, you would have to
bump the tag on each ``git commit --amend`` to your development
branch, which can get tedious.  Instead, you can navigate to the
`container registry
<https://gitlab.freedesktop.org/mesa/mesa/container_registry>`_ for
your repository and delete the tag to force a rebuild.  When your code
is eventually merged to master, a full image rebuild will occur again
(forks inherit images from the main repo, but MRs don't propagate
images from the fork into the main repo's registry).

Building locally using CI docker images
---------------------------------------

It can be frustrating to debug build failures in an environment you
don't personally have.  If you're experiencing this with the CI
builds, you can use Docker to reproduce their build environment
locally.  Go to your job log, and at the top you'll see a line like::

    Pulling docker image registry.freedesktop.org/anholt/mesa/debian/android_build:2020-09-11

We'll use a volume mount to make our current Mesa tree the one the
Docker container uses, so they'll share everything (the build will go
in ``_build``, according to ``meson-build.sh``).  We're going to be
using the image non-interactively, so we use ``run --rm $IMAGE
command`` instead of ``run -it $IMAGE bash`` (which you may also find
useful for debugging).  Extract your build setup variables from
``.gitlab-ci.yml`` and run the CI meson build script:

.. code-block:: console

    IMAGE=registry.freedesktop.org/anholt/mesa/debian/android_build:2020-09-11
    sudo docker pull $IMAGE
    sudo docker run --rm -v `pwd`:/mesa -w /mesa $IMAGE env PKG_CONFIG_PATH=/usr/local/lib/aarch64-linux-android/pkgconfig/:/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/aarch64-linux-android/pkgconfig/ GALLIUM_DRIVERS=freedreno UNWIND=disabled EXTRA_OPTION="-D android-stub=true -D llvm=disabled" DRI_LOADERS="-D glx=disabled -D gbm=disabled -D egl=enabled -D platforms=android" CROSS=aarch64-linux-android ./.gitlab-ci/meson-build.sh

All you have left over from the build is its output, and a ``_build``
directory.  You can hack on Mesa and iterate on testing the build with:

.. code-block:: console

    sudo docker run --rm -v `pwd`:/mesa $IMAGE ninja -C /mesa/_build