Lines Matching +full:method +full:- +full:complexity

3 [![build-and-test](https://github.com/google/benchmark/workflows/build-and-test/badge.svg)](https://github…
5 …st-bindings](https://github.com/google/benchmark/workflows/test-bindings/badge.svg)](https://githu…
7 [![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/g…
32 [User Guide](#user-guide) for a more comprehensive feature overview.
39 [Discussion group](https://groups.google.com/d/forum/benchmark-discuss)
59 See [Platform-Specific Build Instructions](#platform-specific-build-instructions).
63 This describes the installation process using CMake. As prerequisites, you'll
77 $ cmake -E make_directory "build"
79 $ cmake -E chdir "build" cmake -DCMAKE_BUILD_TYPE=Release ../
81 # cmake -DCMAKE_BUILD_TYPE=Release -S . -B "build"
83 $ cmake --build "build" --config Release
101 $ cmake -E chdir "build" ctest --build-config Release
107 sudo cmake --build "build" --config Release --target install
114 * Otherwise, if `-DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON` is specified during
118 If you do not wish to build and run the tests, add `-DBENCHMARK_ENABLE_GTEST_TESTS=OFF`
125 `-DCMAKE_BUILD_TYPE=Release` when generating the build system files, as shown
126 above. The use of `--config Release` in build commands is needed to properly
127 support multi-configuration tools (such as Visual Studio) and can be
130 To enable link-time optimisation, also add `-DBENCHMARK_ENABLE_LTO=true` when
187 $ g++ mybenchmark.cc -std=c++11 -isystem benchmark/include \
188 -Lbenchmark/build/src -lbenchmark -lpthread -o mybenchmark
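For context, the README's basic-usage example does not survive this line filter; a minimal `mybenchmark.cc` that the command above could compile might look like the following sketch (the `BM_StringCreation` name is illustrative):

```c++
#include <benchmark/benchmark.h>

#include <string>

// Illustrative benchmark: time default-construction of a std::string.
static void BM_StringCreation(benchmark::State& state) {
  for (auto _ : state) {
    std::string empty_string;
    benchmark::DoNotOptimize(empty_string);
  }
}
// Register the function as a benchmark.
BENCHMARK(BM_StringCreation);

// Generates a main() that runs all registered benchmarks.
BENCHMARK_MAIN();
```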
194 The compiled executable will run all benchmarks by default. Pass the `--help`
199 If using CMake, it is recommended to link against the project-provided
225 can link to pthread by adding `-pthread` to your linker command. Note, you can
226 also use `-lpthread`, but there are potential issues with ordering of command
231 The `shlwapi` library (`-lshlwapi`) is required to support a call to `CPUInfo` which reads the regi…
259 too (`-lkstat`).
265 [Output Formats](#output-formats)
267 [Output Files](#output-files)
269 [Running Benchmarks](#running-benchmarks)
271 [Running a Subset of Benchmarks](#running-a-subset-of-benchmarks)
273 [Result Comparison](#result-comparison)
277 [Runtime and Reporting Considerations](#runtime-and-reporting-considerations)
279 [Passing Arguments](#passing-arguments)
281 [Calculating Asymptotic Complexity](#asymptotic-complexity)
283 [Templated Benchmarks](#templated-benchmarks)
287 [Custom Counters](#custom-counters)
289 [Multithreaded Benchmarks](#multithreaded-benchmarks)
291 [CPU Timers](#cpu-timers)
293 [Manual Timing](#manual-timing)
295 [Setting the Time Unit](#setting-the-time-unit)
297 [Preventing Optimization](#preventing-optimization)
299 [Reporting Statistics](#reporting-statistics)
301 [Custom Statistics](#custom-statistics)
303 [Using RegisterBenchmark](#using-register-benchmark)
305 [Exiting with an Error](#exiting-with-an-error)
307 [A Faster KeepRunning Loop](#a-faster-keep-running-loop)
309 [Disabling CPU Frequency Scaling](#disabling-cpu-frequency-scaling)
312 <a name="output-formats" />
317 `--benchmark_format=<console|json|csv>` flag (or set the
327 ----------------------------------------------------------------------
342 "date": "2015/03/17-18:40:25",
377 The CSV format outputs comma-separated values. The `context` is output on stderr
387 <a name="output-files" />
391 Write benchmark results to a file with the `--benchmark_out=<filename>` option
393 `--benchmark_out_format={json|console|csv}` (or set
395 `--benchmark_out` does not suppress the console output.
397 <a name="running-benchmarks" />
404 `--option_flag=<value>` CLI switch, a corresponding environment variable
407 with the `--help` switch.
409 <a name="running-a-subset-of-benchmarks" />
413 The `--benchmark_filter=<regex>` option (or `BENCHMARK_FILTER=<regex>`
418 $ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
420 2016-06-25 19:34:24
422 ----------------------------------------------------
429 <a name="result-comparison" />
436 <a name="runtime-and-reporting-considerations" />
454 repetitions are requested using the `--benchmark_repetitions` command-line
458 As well as the per-benchmark entries, a preamble in the report will include
461 <a name="passing-arguments" />
482 BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
486 short-hand. The following invocation will pick a few appropriate arguments in
490 BENCHMARK(BM_memcpy)->Range(8, 8<<10);
498 BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
503 The preceding code shows a method of defining a sparse range. The following
504 example shows a method of defining a dense range. It is then used to benchmark
515 BENCHMARK(BM_DenseRange)->DenseRange(0, 1024, 128);
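The benchmark body driven by this registration is elided here; a minimal sketch consistent with it, assuming the measured work is `std::vector` construction as in the full README example, could be:

```c++
#include <benchmark/benchmark.h>

#include <vector>

// Construct a vector whose size is supplied by DenseRange(0, 1024, 128).
static void BM_DenseRange(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v(state.range(0), state.range(0));
    benchmark::DoNotOptimize(v.data());
    benchmark::ClobberMemory();
  }
}
BENCHMARK(BM_DenseRange)->DenseRange(0, 1024, 128);
```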
536 ->Args({1<<10, 128})
537 ->Args({2<<10, 128})
538 ->Args({4<<10, 128})
539 ->Args({8<<10, 128})
540 ->Args({1<<10, 512})
541 ->Args({2<<10, 512})
542 ->Args({4<<10, 512})
543 ->Args({8<<10, 512});
547 short-hand. The following invocation will pick a few appropriate arguments in the
552 BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
561 ->ArgsProduct({{1<<10, 3<<10, 8<<10}, {20, 40, 60, 80}})
564 ->Args({1<<10, 20})
565 ->Args({3<<10, 20})
566 ->Args({8<<10, 20})
567 ->Args({3<<10, 40})
568 ->Args({8<<10, 40})
569 ->Args({1<<10, 40})
570 ->Args({1<<10, 60})
571 ->Args({3<<10, 60})
572 ->Args({8<<10, 60})
573 ->Args({1<<10, 80})
574 ->Args({3<<10, 80})
575 ->Args({8<<10, 80});
587 b->Args({i, j});
589 BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
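Only these two lines of the custom-arguments example pass the filter; a `CustomArguments` function consistent with them might look like this (the loop bounds are illustrative):

```c++
// Generate (i, j) argument pairs programmatically and attach them to the
// benchmark being configured.
static void CustomArguments(benchmark::internal::Benchmark* b) {
  for (int i = 0; i <= 10; ++i)
    for (int j = 32; j <= 1024 * 1024; j *= 8)
      b->Args({i, j});
}
BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
```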
614 <a name="asymptotic-complexity" />
616 ### Calculating Asymptotic Complexity (Big O)
618 Asymptotic complexity can be calculated for a family of benchmarks. The
619 following code will calculate the coefficient for the high-order term in the
620 running time and the normalized root-mean-square error of string comparison.
624 std::string s1(state.range(0), '-');
625 std::string s2(state.range(0), '-');
632 ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
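The loop body of `BM_StringCompare` is not shown in this listing; a sketch that matches the surviving lines, where `state.SetComplexityN()` supplies the N that `Complexity()` fits against, could be:

```c++
#include <benchmark/benchmark.h>

#include <string>

static void BM_StringCompare(benchmark::State& state) {
  std::string s1(state.range(0), '-');
  std::string s2(state.range(0), '-');
  for (auto _ : state) {
    // Compare two equal strings of length N = state.range(0).
    benchmark::DoNotOptimize(s1.compare(s2));
  }
  // Report N so Complexity() knows what to fit the timings against.
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1 << 10, 1 << 18)->Complexity(benchmark::oN);
```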
635 As shown in the following invocation, asymptotic complexity can also be
640 ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
643 The following code will specify asymptotic complexity with a lambda function
644 that can be used to customize the high-order term calculation.
647 BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
648 ->Range(1<<10, 1<<18)->Complexity([](benchmark::IterationCount n)->double{return n; });
651 <a name="templated-benchmarks" />
663 for (int i = state.range(0); i--; )
665 for (int e = state.range(0); e--; )
672 BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
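The templated body itself is filtered out, and `WaitQueue` is a user-defined type not shown here; a self-contained sketch of the same pattern, with `std::queue<int>` standing in for `WaitQueue<int>`, might be:

```c++
#include <benchmark/benchmark.h>

#include <queue>

// Push and pop state.range(0) elements per iteration through a container of
// type Q; std::queue<int> stands in for the README's WaitQueue<int>.
template <class Q>
void BM_Sequential(benchmark::State& state) {
  Q q;
  typename Q::value_type v{};
  for (auto _ : state) {
    for (int i = state.range(0); i--; )
      q.push(v);
    for (int e = state.range(0); e--; ) {
      v = q.front();
      q.pop();
    }
  }
  state.SetItemsProcessed(
      static_cast<int64_t>(state.iterations()) * state.range(0));
}
BENCHMARK_TEMPLATE(BM_Sequential, std::queue<int>)->Range(1 << 0, 1 << 10);
```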
695 * `BENCHMARK_F(ClassName, Method)`
696 * `BENCHMARK_DEFINE_F(ClassName, Method)`
697 * `BENCHMARK_REGISTER_F(ClassName, Method)`
723 BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
731 * `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
732 * `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`
752 BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
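A sketch of how the fixture macros listed above fit together; the empty `SetUp`/`TearDown` bodies and loop contents are placeholders, not the README's exact example:

```c++
#include <benchmark/benchmark.h>

class MyFixture : public benchmark::Fixture {
 public:
  void SetUp(const ::benchmark::State& state) {}
  void TearDown(const ::benchmark::State& state) {}
};

// Define and register in one step.
BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
  for (auto _ : st) {
    // per-iteration work using fixture members goes here
  }
}

// Define now, register separately so options such as Threads() can be chained.
BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
  for (auto _ : st) {
    // per-iteration work using fixture members goes here
  }
}
BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
```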
755 <a name="custom-counters" />
759 You can add your own counters with user-defined names. The example below
775 and `Counter` values. The latter is a `double`-like class, via an implicit
777 assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.
784 ; a bit flag which allows you to show counters as rates, and/or as per-thread
785 iteration, and/or as per-thread averages, and/or iteration invariants,
786 and/or finally inverting the result; and a flag specifying the 'unit' - i.e.
804 // Set the counter as a thread-average quantity. It will
836 ------------------------------------------------------------------------------
838 ------------------------------------------------------------------------------
856 passing the flag `--benchmark_counters_tabular=true` to the benchmark
861 `--benchmark_counters_tabular=true` is passed:
864 ---------------------------------------------------------------------------------------
866 ---------------------------------------------------------------------------------------
875 --------------------------------------------------------------
877 --------------------------------------------------------------
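The full user-counter examples in this section are reduced to separator lines by the filter; pulling the surviving fragments together, setting counters might look roughly like this (the `Foo` names and flag choices are illustrative):

```c++
#include <benchmark/benchmark.h>

static void BM_UserCounters(benchmark::State& state) {
  double num_foos = 0;
  for (auto _ : state) {
    // ... do the measured work and count events of interest ...
    num_foos += 1;
  }
  // Plain value, reported as-is.
  state.counters["Foo"] = num_foos;
  // Reported as a rate: value divided by the elapsed run time.
  state.counters["FooRate"] =
      benchmark::Counter(num_foos, benchmark::Counter::kIsRate);
  // Averaged over the threads that ran the benchmark.
  state.counters["FooAvgThreads"] =
      benchmark::Counter(num_foos, benchmark::Counter::kAvgThreads);
}
BENCHMARK(BM_UserCounters);
```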
895 <a name="multithreaded-benchmarks"/>
918 BENCHMARK(BM_MultiThreaded)->Threads(2);
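The function behind this registration is elided; a minimal body consistent with it, where the same code runs concurrently on every launched thread, could be:

```c++
#include <benchmark/benchmark.h>

// The same function body is executed concurrently on each registered thread.
static void BM_MultiThreaded(benchmark::State& state) {
  for (auto _ : state) {
    // Work performed on every thread; kept trivial for the sketch.
    int x = 0;
    benchmark::DoNotOptimize(x += 1);
  }
}
BENCHMARK(BM_MultiThreaded)->Threads(2);
// Or sweep thread counts in powers of two:
BENCHMARK(BM_MultiThreaded)->ThreadRange(1, 8);
```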
922 single-threaded code, you may want to use real-time ("wallclock") measurements
926 BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
931 <a name="cpu-timers" />
956 // measure to anywhere from near-zero (the overhead spent before/after work
957 // handoff to worker thread[s]) to the whole single-thread time.
958 BENCHMARK(BM_OpenMP)->Range(8, 8<<10);
960 // Measure the user-visible time, the wall clock (literally, the time that
963 // time spent by the main thread in single-threaded case, in general decreasing
965 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->UseRealTime();
969 // time spent by the main thread in single-threaded case.
970 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime();
974 BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime()->UseRealTime();
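The `BM_OpenMP` body is not shown; a self-contained stand-in that offloads work to worker threads (plain `std::thread` here rather than the README's OpenMP pragma) illustrates why the timer choices above differ:

```c++
#include <benchmark/benchmark.h>

#include <thread>
#include <vector>

// Offload the work to worker threads so that main-thread CPU time, process
// CPU time, and wall-clock time all differ.
static void BM_OpenMP(benchmark::State& state) {
  for (auto _ : state) {
    const long n = static_cast<long>(state.range(0));
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
      workers.emplace_back([n] {
        long sum = 0;
        for (long i = 0; i < n; ++i) sum += i;  // busy work off the main thread
        benchmark::DoNotOptimize(sum);
      });
    }
    for (auto& w : workers) w.join();
  }
}
// Measure both process CPU time and wall-clock time, as in the last variant.
BENCHMARK(BM_OpenMP)->Range(8, 8 << 10)->MeasureProcessCPUTime()->UseRealTime();
```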
996 BENCHMARK(BM_SetInsert_With_Timer_Control)->Ranges({{1<<10, 8<<10}, {128, 512}});
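A sketch of what a timer-controlled benchmark like this registration implies, assuming the setup being excluded is construction of a random `std::set`:

```c++
#include <benchmark/benchmark.h>

#include <cstdlib>
#include <set>

// Exclude setup from the measurement with PauseTiming()/ResumeTiming();
// the random std::set contents are illustrative.
static void BM_SetInsert_With_Timer_Control(benchmark::State& state) {
  for (auto _ : state) {
    state.PauseTiming();  // building the initial container is not measured
    std::set<int> data;
    for (int i = 0; i < state.range(0); ++i) data.insert(std::rand());
    state.ResumeTiming();
    for (int j = 0; j < state.range(1); ++j) data.insert(std::rand());
  }
}
BENCHMARK(BM_SetInsert_With_Timer_Control)
    ->Ranges({{1 << 10, 8 << 10}, {128, 512}});
```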
999 <a name="manual-timing" />
1003 For benchmarking something for which neither CPU time nor real-time is
1013 be accurately measured using CPU time or real-time. Instead, they can be
1032 end - start);
1037 BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
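Only fragments of the manual-timing example survive the filter; a self-contained version of the same idea, timing a sleep and reporting it with `state.SetIterationTime()`, might be:

```c++
#include <benchmark/benchmark.h>

#include <chrono>
#include <thread>

// Time the work ourselves and hand the result to the library with
// SetIterationTime(); the sleep stands in for an asynchronous operation.
static void BM_ManualTiming(benchmark::State& state) {
  std::chrono::duration<double, std::micro> sleep_duration{
      static_cast<double>(state.range(0))};
  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    std::this_thread::sleep_for(sleep_duration);  // the "work" being timed
    auto end = std::chrono::high_resolution_clock::now();
    auto elapsed_seconds =
        std::chrono::duration_cast<std::chrono::duration<double>>(end - start);
    state.SetIterationTime(elapsed_seconds.count());
  }
}
BENCHMARK(BM_ManualTiming)->Range(1, 1 << 17)->UseManualTime();
```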
1040 <a name="setting-the-time-unit" />
1049 BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
1052 <a name="preventing-optimization" />
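The body of this section is filtered out entirely; it documents the library's `benchmark::DoNotOptimize()` and `benchmark::ClobberMemory()` helpers, roughly along these lines (a sketch, not the section's exact example):

```c++
#include <benchmark/benchmark.h>

#include <vector>

// DoNotOptimize() forces the pointer value to be materialized, and
// ClobberMemory() forces pending writes to reach memory, so the compiler
// cannot optimize the loop body away.
static void BM_VectorPushBack(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    benchmark::DoNotOptimize(v.data());  // allow v's storage to be observed
    v.push_back(42);
    benchmark::ClobberMemory();          // make sure the store is not elided
  }
}
BENCHMARK(BM_VectorPushBack);
```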
1113 <a name="reporting-statistics" />
1123 `--benchmark_repetitions` flag or on a per-benchmark basis by calling
1127 Additionally the `--benchmark_report_aggregates_only={true|false}`,
1128 `--benchmark_display_aggregates_only={true|false}` flags or
1132 only the aggregates (i.e. mean, median and standard deviation, maybe complexity
1134 reporters - standard output (console), and the file.
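A per-benchmark equivalent of those flags, using the library's `Repetitions()` and `ReportAggregatesOnly()` calls, might look like this sketch (the benchmark name and repetition count are illustrative):

```c++
#include <benchmark/benchmark.h>

// Illustrative benchmark to attach the per-benchmark reporting options to.
static void BM_Illustrative(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::ClobberMemory();  // trivial measured work
  }
}
// Run 10 repetitions and report only the aggregates for this benchmark;
// DisplayAggregatesOnly(true) similarly restricts only what is displayed.
BENCHMARK(BM_Illustrative)->Repetitions(10)->ReportAggregatesOnly(true);
```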
1142 <a name="custom-statistics" />
1148 observation is, e.g. because you have some real-time constraints. This is easy.
1162 ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
1165 ->Arg(512);
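Putting the surviving fragment into context, a complete registration with a custom "max" statistic could look like this sketch (the `BM_spin_empty` body and repetition count are illustrative):

```c++
#include <benchmark/benchmark.h>

#include <algorithm>
#include <vector>

// An illustrative benchmark to attach the custom statistic to.
static void BM_spin_empty(benchmark::State& state) {
  for (auto _ : state) {
    for (int x = 0; x < state.range(0); ++x) {
      benchmark::DoNotOptimize(x);
    }
  }
}
// Report the maximum of the per-repetition results as a "max" statistic.
BENCHMARK(BM_spin_empty)
    ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
      return *std::max_element(v.begin(), v.end());
    })
    ->Arg(512)
    ->Repetitions(10);
```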
1168 <a name="using-register-benchmark" />
1197 <a name="exiting-with-an-error" />
1205 `KeepRunning()` are skipped. For the ranged-for version of the benchmark loop
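The rest of that sentence and the section's example are elided by the filter; a sketch of reporting an error with `state.SkipWithError()` and exiting early (the input path is hypothetical):

```c++
#include <benchmark/benchmark.h>

#include <fstream>
#include <string>

static void BM_ReadLine(benchmark::State& state) {
  std::ifstream in("/tmp/benchmark_input.txt");  // hypothetical input file
  if (!in) {
    state.SkipWithError("failed to open input file");
    return;  // returning (or breaking out of the loop) stops the benchmark
  }
  for (auto _ : state) {
    std::string line;
    std::getline(in, line);
    benchmark::DoNotOptimize(line);
  }
}
BENCHMARK(BM_ReadLine);
```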
1249 <a name="a-faster-keep-running-loop" />
1253 In C++11 mode, a range-based for loop should be used in preference to
1265 The reason the ranged-for loop is faster than using `KeepRunning` is
1267 every iteration, whereas the ranged-for variant is able to keep the iteration count
1270 For example, an empty inner loop using the range-based for method looks like:
1279 add rbx, -1
1304 Unless C++03 compatibility is required, the ranged-for variant of writing
1307 <a name="disabling-cpu-frequency-scaling" />
1320 sudo cpupower frequency-set --governor performance
1322 sudo cpupower frequency-set --governor powersave