<a id="top"></a>
# Command line

**Contents**<br>
[Specifying which tests to run](#specifying-which-tests-to-run)<br>
[Choosing a reporter to use](#choosing-a-reporter-to-use)<br>
[Breaking into the debugger](#breaking-into-the-debugger)<br>
[Showing results for successful tests](#showing-results-for-successful-tests)<br>
[Aborting after a certain number of failures](#aborting-after-a-certain-number-of-failures)<br>
[Listing available tests, tags or reporters](#listing-available-tests-tags-or-reporters)<br>
[Sending output to a file](#sending-output-to-a-file)<br>
[Naming a test run](#naming-a-test-run)<br>
[Eliding assertions expected to throw](#eliding-assertions-expected-to-throw)<br>
[Make whitespace visible](#make-whitespace-visible)<br>
[Warnings](#warnings)<br>
[Reporting timings](#reporting-timings)<br>
[Load test names to run from a file](#load-test-names-to-run-from-a-file)<br>
[Just test names](#just-test-names)<br>
[Specify the order test cases are run](#specify-the-order-test-cases-are-run)<br>
[Specify a seed for the Random Number Generator](#specify-a-seed-for-the-random-number-generator)<br>
[Identify framework and version according to the libIdentify standard](#identify-framework-and-version-according-to-the-libidentify-standard)<br>
[Wait for key before continuing](#wait-for-key-before-continuing)<br>
[Specify multiples of clock resolution to run benchmarks for](#specify-multiples-of-clock-resolution-to-run-benchmarks-for)<br>
[Usage](#usage)<br>
[Specify the section to run](#specify-the-section-to-run)<br>
[Filenames as tags](#filenames-as-tags)<br>
[Override output colouring](#override-output-colouring)<br>

Catch works quite nicely without any command line options at all - but for those times when you want greater control the following options are available.
Click one of the following links to go straight to that option - or scroll on to browse the available options.

<a href="#specifying-which-tests-to-run">               `    <test-spec> ...`</a><br />
<a href="#usage">                                       `    -h, -?, --help`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    -l, --list-tests`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    -t, --list-tags`</a><br />
<a href="#showing-results-for-successful-tests">        `    -s, --success`</a><br />
<a href="#breaking-into-the-debugger">                  `    -b, --break`</a><br />
<a href="#eliding-assertions-expected-to-throw">        `    -e, --nothrow`</a><br />
<a href="#invisibles">                                  `    -i, --invisibles`</a><br />
<a href="#sending-output-to-a-file">                    `    -o, --out`</a><br />
<a href="#choosing-a-reporter-to-use">                  `    -r, --reporter`</a><br />
<a href="#naming-a-test-run">                           `    -n, --name`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -a, --abort`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -x, --abortx`</a><br />
<a href="#warnings">                                    `    -w, --warn`</a><br />
<a href="#reporting-timings">                           `    -d, --durations`</a><br />
<a href="#input-file">                                  `    -f, --input-file`</a><br />
<a href="#run-section">                                 `    -c, --section`</a><br />
<a href="#filenames-as-tags">                           `    -#, --filenames-as-tags`</a><br />

<br />

<a href="#list-test-names-only">                        `    --list-test-names-only`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    --list-reporters`</a><br />
<a href="#order">                                       `    --order`</a><br />
<a href="#rng-seed">                                    `    --rng-seed`</a><br />
<a href="#libidentify">                                 `    --libidentify`</a><br />
<a href="#wait-for-keypress">                           `    --wait-for-keypress`</a><br />
<a href="#benchmark-resolution-multiple">               `    --benchmark-resolution-multiple`</a><br />
<a href="#use-colour">                                  `    --use-colour`</a><br />

<br />


<a id="specifying-which-tests-to-run"></a>
## Specifying which tests to run

<pre>&lt;test-spec> ...</pre>

Test cases, wildcarded test cases, tags and tag expressions are all passed directly as arguments. Tags are distinguished by being enclosed in square brackets.

If no test specs are supplied then all test cases, except "hidden" tests, are run.
A test is hidden by giving it any tag starting with (or consisting of just) a period (```.```) - or, in the deprecated case, by tagging it ```[hide]``` or giving it a name starting with `'./'`. To select hidden tests from the command line ```[.]``` or ```[hide]``` can be used *regardless of how they were declared*.

Specs must be enclosed in quotes if they contain spaces; otherwise the quotes are optional.

Wildcards consist of the `*` character at the beginning and/or end of test case names and can substitute for any number of characters (including none).

Test specs are case insensitive.

If a spec is prefixed with `exclude:` or the `~` character then the pattern matches an exclusion. This means that tests matching the pattern are excluded from the set - even if a prior inclusion spec included them. Subsequent inclusion specs will take precedence, however.
Inclusions and exclusions are evaluated in left-to-right order.

Test case examples:

<pre>thisTestOnly            Matches the test case called 'thisTestOnly'
"this test only"        Matches the test case called 'this test only'
these*                  Matches all cases starting with 'these'
exclude:notThis         Matches all tests except 'notThis'
~notThis                Matches all tests except 'notThis'
~*private*              Matches all tests except those that contain 'private'
a* ~ab* abc             Matches all tests that start with 'a', except those that
                        start with 'ab', except 'abc', which is included
</pre>

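The wildcard and left-to-right inclusion/exclusion rules above can be modelled in a few lines of Python. This is a simplified sketch of the documented semantics only - the function names `matches` and `selected` are illustrative and not part of Catch:

```python
def matches(name, pattern):
    """Case-insensitive match; '*' is only allowed at the start and/or end."""
    name, pattern = name.lower(), pattern.lower()
    starts_wild = pattern.startswith("*")
    ends_wild = pattern.endswith("*")
    core = pattern.strip("*")
    if starts_wild and ends_wild:
        return core in name
    if starts_wild:
        return name.endswith(core)
    if ends_wild:
        return name.startswith(core)
    return name == core

def selected(name, specs):
    """Evaluate specs left to right; later specs override earlier ones."""
    # If every spec is an exclusion, tests start out included.
    included = all(s.startswith(("~", "exclude:")) for s in specs)
    for spec in specs:
        exclude = spec.startswith(("~", "exclude:"))
        pattern = spec[1:] if spec.startswith("~") else spec
        if pattern.startswith("exclude:"):
            pattern = pattern[len("exclude:"):]
        if matches(name, pattern):
            included = not exclude
    return included

# The 'a* ~ab* abc' example from the table above:
assert selected("abc", ["a*", "~ab*", "abc"])       # re-included by 'abc'
assert not selected("abde", ["a*", "~ab*", "abc"])  # excluded by '~ab*'
assert selected("axe", ["a*", "~ab*", "abc"])       # included by 'a*'
```
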
Names within square brackets are interpreted as tags.
A series of tags forms an AND expression whereas a comma-separated sequence forms an OR expression. e.g.:

<pre>[one][two],[three]</pre>
This matches all tests tagged `[one]` and `[two]`, as well as all tests tagged `[three]`.

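The AND/OR tag rules can be sketched like so - a rough Python model of the documented behaviour, not Catch's parser:

```python
import re

def tag_expr_matches(test_tags, expr):
    """True if any comma-separated group has all of its [tags] on the test."""
    tags = {t.lower() for t in test_tags}
    for group in expr.split(","):
        required = re.findall(r"\[([^\]]+)\]", group)
        if required and all(r.lower() in tags for r in required):
            return True
    return False

assert tag_expr_matches(["one", "two"], "[one][two],[three]")      # AND group
assert tag_expr_matches(["three"], "[one][two],[three]")           # OR group
assert not tag_expr_matches(["one"], "[one][two],[three]")         # missing [two]
```
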
Test names containing special characters, such as `,` or `[`, can be specified on the command line by escaping those characters with `\`.
`\` also escapes itself.

<a id="choosing-a-reporter-to-use"></a>
## Choosing a reporter to use

<pre>-r, --reporter &lt;reporter></pre>

A reporter is an object that formats and structures the output of running tests, and potentially summarises the results. By default a console reporter is used that writes IDE-friendly, textual output. Catch comes bundled with some alternative reporters, but more can be added in client code.<br />
The bundled reporters are:

<pre>-r console
-r compact
-r xml
-r junit
</pre>

The JUnit reporter is an XML format that follows the structure of the JUnit XML Report ANT task, as consumed by a number of third-party tools, including Continuous Integration servers such as Hudson. If not otherwise needed, the standard XML reporter is preferred as it is a streaming reporter, whereas the JUnit reporter needs to hold all its results until the end so it can write the overall results into attributes of the root node.

<a id="breaking-into-the-debugger"></a>
## Breaking into the debugger
<pre>-b, --break</pre>

Under most debuggers Catch2 is capable of automatically breaking on a test
failure. This allows the user to see the current state of the test during
failure.

<a id="showing-results-for-successful-tests"></a>
## Showing results for successful tests
<pre>-s, --success</pre>

Usually you only want to see reporting for failed tests. Sometimes it's useful to see *all* the output (especially when you don't trust that the test you just added worked first time!).
To see successful, as well as failing, test results just pass this option. Note that each reporter may treat this option differently. The JUnit reporter, for example, logs all results regardless.

<a id="aborting-after-a-certain-number-of-failures"></a>
## Aborting after a certain number of failures
<pre>-a, --abort
-x, --abortx [&lt;failure threshold>]
</pre>

If a ```REQUIRE``` assertion fails the test case aborts, but subsequent test cases are still run.
If a ```CHECK``` assertion fails even the current test case is not aborted.

Sometimes this results in a flood of failure messages and you'd rather just see the first few. Specifying ```-a``` or ```--abort``` on its own will abort the whole test run on the first failed assertion of any kind. Use ```-x``` or ```--abortx``` followed by a number to abort after that number of assertion failures.

<a id="listing-available-tests-tags-or-reporters"></a>
## Listing available tests, tags or reporters
<pre>-l, --list-tests
-t, --list-tags
--list-reporters
</pre>

```-l``` or ```--list-tests``` will list all registered tests, along with any tags.
If one or more test-specs have been supplied too then only the matching tests will be listed.

```-t``` or ```--list-tags``` lists all available tags, along with the number of test cases they match. Again, supplying test specs limits the tags that match.

```--list-reporters``` lists the available reporters.

<a id="sending-output-to-a-file"></a>
## Sending output to a file
<pre>-o, --out &lt;filename>
</pre>

Use this option to send all output to a file. By default output is sent to stdout (note that uses of stdout and stderr *from within test cases* are redirected and included in the report - so even stderr will effectively end up on stdout).

<a id="naming-a-test-run"></a>
## Naming a test run
<pre>-n, --name &lt;name for test run></pre>

If a name is supplied it will be used by the reporter to provide an overall name for the test run. This can be useful if you are sending to a file, for example, and need to distinguish different test runs - either from different Catch executables or runs of the same executable with different options. If not supplied the name defaults to the name of the executable.

<a id="eliding-assertions-expected-to-throw"></a>
## Eliding assertions expected to throw
<pre>-e, --nothrow</pre>

Skips all assertions that test whether an exception is thrown, e.g. ```REQUIRE_THROWS```.

These can be a nuisance in certain debugging environments that may break when exceptions are thrown (while this is usually optional for handled exceptions, it can be useful to have it enabled if you are trying to track down something unexpected).

Sometimes exceptions are expected outside of one of the assertions that test for them (perhaps thrown and caught within the code under test). The whole test case can be skipped when using ```-e``` by marking it with the ```[!throws]``` tag.

When running with this option any throw-checking assertions are skipped so as not to contribute additional noise. Be careful if this affects the behaviour of subsequent tests.

<a id="invisibles"></a>
## Make whitespace visible
<pre>-i, --invisibles</pre>

If a string comparison fails due to differences in whitespace - especially leading or trailing whitespace - it can be hard to see what's going on.
This option transforms tabs and newline characters into ```\t``` and ```\n``` respectively when printing.

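The transformation is simple enough to show as a one-liner - a Python sketch of the behaviour described above, not Catch's own printing code:

```python
def show_invisibles(s):
    """Replace tab and newline characters with visible two-character escapes."""
    return s.replace("\t", "\\t").replace("\n", "\\n")

# A trailing newline that would otherwise be invisible in a diff:
assert show_invisibles("expected\n") == "expected\\n"
assert show_invisibles("a\tb") == "a\\tb"
```
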
<a id="warnings"></a>
## Warnings
<pre>-w, --warn &lt;warning name></pre>

Enables reporting of suspicious test states. There are currently two
available warnings:

```
    NoAssertions   // Fail the test case / leaf section if no assertions
                   // (e.g. `REQUIRE`) are encountered.
    NoTests        // Return a non-zero exit code when no test cases were run.
                   // Also calls the reporter's noMatchingTestCases method.
```


<a id="reporting-timings"></a>
## Reporting timings
<pre>-d, --durations &lt;yes/no></pre>

When set to ```yes``` Catch will report the duration of each test case, in milliseconds. Note that it does this regardless of whether a test case passes or fails. Note also that certain reporters (e.g. JUnit) always report test case durations regardless of whether this option is set.

<a id="input-file"></a>
## Load test names to run from a file
<pre>-f, --input-file &lt;filename></pre>

Provide the name of a file that contains a list of test case names - one per line. Blank lines are skipped and anything after the comment character, ```#```, is ignored.

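The file format above is easy to model. Here is a hedged Python sketch of that parsing rule (the helper name `parse_test_list` is illustrative, and real test names containing a literal `#` would need escaping that this sketch ignores):

```python
def parse_test_list(text):
    """One test name per line; blank lines and '#' comments are ignored."""
    names = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            names.append(line)
    return names

sample = """
# smoke tests
Factorials are computed
Vectors can be sized and resized   # run this second
"""
assert parse_test_list(sample) == [
    "Factorials are computed",
    "Vectors can be sized and resized",
]
```
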
A useful way to generate an initial instance of this file is to use the <a href="#list-test-names-only">list-test-names-only</a> option. This can then be manually curated to specify a specific subset of tests - or a specific order.

<a id="list-test-names-only"></a>
## Just test names
<pre>--list-test-names-only</pre>

This option lists all available tests in a non-indented form, one on each line. This makes it ideal for saving to a file and feeding back into the <a href="#input-file">```-f``` or ```--input-file```</a> option.


<a id="order"></a>
## Specify the order test cases are run
<pre>--order &lt;decl|lex|rand&gt;</pre>

Test cases are ordered in one of three ways:


### decl
Declaration order. The order in which the tests were originally declared. Note that ordering between files is not guaranteed and is implementation dependent.

### lex
Lexicographic order. Tests are sorted alphanumerically by name.

### rand
Random order. Test names are shuffled using ```std::random_shuffle()```. By default the random number generator is seeded with 0 - and so the order is repeatable. To control the random seed see <a href="#rng-seed">rng-seed</a>.

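The three modes can be sketched as follows - a Python analogue of the documented behaviour (Catch itself uses ```std::random_shuffle()``` seeded via ```std::srand()```; the function name `order_tests` is illustrative):

```python
import random

def order_tests(names, order, seed=0):
    """Model of --order: 'decl' keeps declaration order, 'lex' sorts by name,
    'rand' shuffles deterministically from the given seed."""
    names = list(names)
    if order == "lex":
        names.sort()
    elif order == "rand":
        random.Random(seed).shuffle(names)
    return names

tests = ["beta", "alpha", "gamma"]
assert order_tests(tests, "decl") == ["beta", "alpha", "gamma"]
assert order_tests(tests, "lex") == ["alpha", "beta", "gamma"]
# Same seed => same order, so a failing ordering can be reproduced.
assert order_tests(tests, "rand", seed=42) == order_tests(tests, "rand", seed=42)
```
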
<a id="rng-seed"></a>
## Specify a seed for the Random Number Generator
<pre>--rng-seed &lt;'time'|number&gt;</pre>

Sets a seed for the random number generator using ```std::srand()```.
If a number is provided this is used directly as the seed so the random pattern is repeatable.
Alternatively if the keyword ```time``` is provided then the result of calling ```std::time(0)``` is used and so the pattern becomes unpredictable.

In either case the actual value for the seed is printed as part of Catch's output so if an issue is discovered that is sensitive to test ordering the ordering can be reproduced - even if it was originally seeded from ```std::time(0)```.

<a id="libidentify"></a>
## Identify framework and version according to the libIdentify standard
<pre>--libidentify</pre>

See [the LibIdentify repo](https://github.com/janwilmans/LibIdentify) for more information and examples.

<a id="wait-for-keypress"></a>
## Wait for key before continuing
<pre>--wait-for-keypress &lt;start|exit|both&gt;</pre>

Will cause the executable to print a message and wait until the return/enter key is pressed before continuing -
either before running any tests, after running all tests - or both, depending on the argument.

<a id="benchmark-resolution-multiple"></a>
## Specify multiples of clock resolution to run benchmarks for
<pre>--benchmark-resolution-multiple &lt;multiplier&gt;</pre>

When running benchmarks the clock resolution is estimated. Benchmarks are then run for exponentially increasing
numbers of iterations until some multiple of the estimated resolution is exceeded. By default that multiple is 100, but
it can be overridden here.

<a id="usage"></a>
## Usage
<pre>-h, -?, --help</pre>

Prints a summary of the command line arguments to stdout.


<a id="run-section"></a>
## Specify the section to run
<pre>-c, --section &lt;section name&gt;</pre>

To limit execution to a specific section within a test case, use this option one or more times.
To narrow down to sub-sections use multiple instances, where each subsequent instance specifies a deeper nesting level.

E.g. if you have:

<pre>
TEST_CASE( "Test" ) {
  SECTION( "sa" ) {
    SECTION( "sb" ) {
      /*...*/
    }
    SECTION( "sc" ) {
      /*...*/
    }
  }
  SECTION( "sd" ) {
    /*...*/
  }
}
</pre>

Then you can run `sb` with:
<pre>./MyExe Test -c sa -c sb</pre>

Or run just `sd` with:
<pre>./MyExe Test -c sd</pre>

To run all of `sa`, including `sb` and `sc`, use:
<pre>./MyExe Test -c sa</pre>

There are some limitations of this feature to be aware of:
- Code outside of the sections being skipped will still be executed - e.g. any set-up code in the TEST_CASE before the
start of the first section.<br />
- At the time of writing, wildcards are not supported in section names.
- If you specify a section without narrowing to a test case first then all test cases will be executed
(but only matching sections within them).


<a id="filenames-as-tags"></a>
## Filenames as tags
<pre>-#, --filenames-as-tags</pre>

When this option is used then every test is given an additional tag formed from the unqualified
filename it is found in, with any extension stripped, prefixed with the `#` character.

So, for example, tests within the file `~\Dev\MyProject\Ferrets.cpp` would be tagged `[#Ferrets]`.

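The tag derivation can be sketched in Python - a model of the rule just described (the helper name `filename_tag` is illustrative, not part of Catch):

```python
def filename_tag(path):
    """Strip directories and the extension, then prefix the stem with '#'."""
    filename = path.replace("\\", "/").rsplit("/", 1)[-1]
    stem = filename.rsplit(".", 1)[0]
    return "[#" + stem + "]"

# The Ferrets.cpp example from above:
assert filename_tag(r"~\Dev\MyProject\Ferrets.cpp") == "[#Ferrets]"
```
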
<a id="use-colour"></a>
## Override output colouring
<pre>--use-colour &lt;yes|no|auto&gt;</pre>

Catch colours output for terminals, but omits colouring when it detects that
output is being sent to a pipe. This is done to avoid interfering with automated
processing of output.

`--use-colour yes` forces coloured output, `--use-colour no` disables coloured
output. The default behaviour is `--use-colour auto`.

---

[Home](Readme.md#top)
