Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output
8. Trace file format
9. CPU idleness profiling

1.0 Overview and history
------------------------
16fio was originally written to save me the hassle of writing special test
17case programs when I wanted to test a specific workload, either for
18performance reasons or to find/reproduce a bug. The process of writing
19such a test app can be tiresome, especially if you have to do it often.
20Hence I needed a tool that would be able to simulate a given io workload
21without resorting to writing a tailored test case again and again.
22
A test workload is difficult to define, though. There can be any number
24of processes or threads involved, and they can each be using their own
25way of generating io. You could have someone dirtying large amounts of
memory in a memory-mapped file, or maybe several threads issuing
27reads using asynchronous io. fio needed to be flexible enough to
28simulate both of these cases, and many more.
29
2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical contents of the job file
are a global section defining shared parameters, and one or more job
36sections describing the jobs involved. When run, fio parses this file
37and sets everything up as described. If we break down a job from top to
38bottom, it contains the following basic parameters:
39
40	IO type		Defines the io pattern issued to the file(s).
41			We may only be reading sequentially from this
42			file(s), or we may be writing randomly. Or even
43			mixing reads and writes, sequentially or randomly.
44
45	Block size	In how large chunks are we issuing io? This may be
46			a single value, or it may describe a range of
47			block sizes.
48
49	IO size		How much data are we going to be reading/writing.
50
51	IO engine	How do we issue io? We could be memory mapping the
52			file, we could be using regular read/write, we
53			could be using splice, async io, syslet, or even
54			SG (SCSI generic sg).
55
56	IO depth	If the io engine is async, how large a queuing
57			depth do we want to maintain?
58
	IO mode		Should we be doing buffered io, or direct/raw io?
60
61	Num files	How many files are we spreading the workload over.
62
63	Num threads	How many threads or processes should we spread
64			this workload over.
65
The above are the basic parameters defined for a workload; in addition,
there's a multitude of parameters that modify other aspects of how this
job behaves.
69
70
3.0 Running fio
---------------
See the README file for command line parameters; there are only a few
of them.
75
76Running fio is normally the easiest part - you just give it the job file
77(or job files) as parameters:
78
$ fio job_file
80
81and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line; fio will serialize the running
83of those files. Internally that is the same as using the 'stonewall'
84parameter described in the parameter section.
85
86If the job file contains only one job, you may as well just give the
87parameters on the command line. The command line parameters are identical
88to the job parameters, with a few extra that control global parameters
89(see README). For example, for the job file parameter iodepth=2, the
90mirror command line option would be --iodepth 2 or --iodepth=2. You can
91also use the command line for giving more than one job entry. For each
92--name option that fio sees, it will start a new job with that name.
93Command line entries following a --name entry will apply to that job,
94until there are no more entries or a new --name entry is seen. This is
95similar to the job file options, where each option applies to the current
96job until a new [] job entry is seen.
97
98fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
100such as memory locking, io scheduler switching, and decreasing the nice value.
101
102
4.0 Job file format
-------------------
105As previously described, fio accepts one or more job files describing
106what it is supposed to do. The job file format is the classic ini file,
107where the names enclosed in [] brackets define the job name. You are free
108to use any ascii name you want, except 'global' which has special meaning.
109A global section sets defaults for the jobs described in that file. A job
110may override a global section parameter, and a job file may even have
111several global sections if so desired. A job is only affected by a global
112section residing above it. If the first character in a line is a ';' or a
113'#', the entire line is discarded as a comment.
114
115So let's look at a really simple job file that defines two processes, each
116randomly reading from a 128MB file.
117
; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --
128
129As you can see, the job file sections themselves are empty as all the
130described parameters are shared. As no filename= option is given, fio
131makes up a filename for each of the jobs as it sees fit. On the command
132line, this job would look as follows:
133
$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2
135
136
137Let's look at an example that has a number of processes writing randomly
138to files.
139
; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

; -- end job file --
151
152Here we have no global section, as we only have one job defined anyway.
153We want to use async io here, with a depth of 4 for each file. We also
increase the block size used to 32KB and set numjobs to 4 to
155fork 4 identical jobs. The result is 4 processes each randomly writing
156to their own 64MB file. Instead of using the above job file, you could
157have given the parameters on the command line. For this case, you would
158specify:
159
$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4
161
When fio is used as the basis of any reasonably large test suite, it might be
desirable to share a set of standardized settings across multiple job files.
Instead of copy/pasting such settings, any section may pull in an external
.fio file with the 'include filename' directive, as in the following example:
166
; -- start job file including.fio --
[global]
filename=/tmp/test
filesize=1m
include glob-include.fio

[test]
rw=randread
bs=4k
time_based=1
runtime=10
include test-include.fio
; -- end job file including.fio --

; -- start job file glob-include.fio --
thread=1
group_reporting=1
; -- end job file glob-include.fio --

; -- start job file test-include.fio --
ioengine=libaio
iodepth=4
; -- end job file test-include.fio --
190
Settings pulled into a section apply to that section only, except for
settings pulled into the global section, which behave like any other
global setting. Include directives may be nested, in that any included
file may contain further include directive(s). Include files may not
contain [] sections.
195
196
4.1 Environment variables
-------------------------
199
200fio also supports environment variable expansion in job files. Any
201substring of the form "${VARNAME}" as part of an option value (in other
202words, on the right of the `='), will be expanded to the value of the
203environment variable called VARNAME.  If no such environment variable
204is defined, or VARNAME is the empty string, the empty string will be
205substituted.
206
207As an example, let's look at a sample fio invocation and job file:
208
$ SIZE=64m NUMJOBS=4 fio jobfile.fio

; -- start job file --
[random-writers]
rw=randwrite
size=${SIZE}
numjobs=${NUMJOBS}
; -- end job file --

This will expand to the following equivalent job file at runtime:

; -- start job file --
[random-writers]
rw=randwrite
size=64m
numjobs=4
; -- end job file --
226
fio ships with a few example job files; you can also look there for
inspiration.
229
4.2 Reserved keywords
---------------------
232
233Additionally, fio has a set of reserved keywords that will be replaced
234internally with the appropriate value. Those keywords are:
235
$pagesize	The architecture page size of the running system
$mb_memory	Megabytes of total memory in the system
$ncpus		Number of online available CPUs
239
240These can be used on the command line or in the job file, and will be
241automatically substituted with the current system values when the job
242is run. Simple math is also supported on these keywords, so you can
243perform actions like:
244
size=8*$mb_memory
246
247and get that properly expanded to 8 times the size of memory in the
248machine.
249
250
5.0 Detailed list of parameters
-------------------------------
253
This section describes in detail each parameter associated with a job.
255Some parameters take an option of a given type, such as an integer or
256a string. Anywhere a numeric value is required, an arithmetic expression
257may be used, provided it is surrounded by parentheses. Supported operators
258are:
259
260	addition (+)
261	subtraction (-)
262	multiplication (*)
263	division (/)
264	modulus (%)
265	exponentiation (^)
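
As a purely illustrative example, a numeric option can be given as a
parenthesized expression instead of a plain number:

	size=(16*1024*1024)

which fio should evaluate to 16m before applying it.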
266
267For time values in expressions, units are microseconds by default. This is
different from time values not in expressions (not enclosed in
269parentheses). The following types are used:
270
271str	String. This is a sequence of alpha characters.
272time	Integer with possible time suffix. In seconds unless otherwise
273	specified, use eg 10m for 10 minutes. Accepts s/m/h for seconds,
274	minutes, and hours, and accepts 'ms' (or 'msec') for milliseconds,
275	and 'us' (or 'usec') for microseconds.
276int	SI integer. A whole number value, which may contain a suffix
277	describing the base of the number. Accepted suffixes are k/m/g/t/p,
278	meaning kilo, mega, giga, tera, and peta. The suffix is not case
279	sensitive, and you may also include trailing 'b' (eg 'kb' is the same
280	as 'k'). So if you want to specify 4096, you could either write
281	out '4096' or just give 4k. The suffixes signify base 2 values, so
282	1024 is 1k and 1024k is 1m and so on, unless the suffix is explicitly
283	set to a base 10 value using 'kib', 'mib', 'gib', etc. If that is the
284	case, then 1000 is used as the multiplier. This can be handy for
285	disks, since manufacturers generally use base 10 values when listing
286	the capacity of a drive. If the option accepts an upper and lower
	range, use a colon ':' or minus '-' to separate such values. May also
	include a prefix to indicate the number's base. If 0x is used, the
	number is assumed to be hexadecimal. See irange.
290bool	Boolean. Usually parsed as an integer, however only defined for
291	true and false (1 and 0).
292irange	Integer range with suffix. Allows value range to be given, such
293	as 1024-4096. A colon may also be used as the separator, eg
294	1k:4k. If the option allows two sets of ranges, they can be
295	specified with a ',' or '/' delimiter: 1k-4k/8k-32k. Also see
296	int.
297float_list	A list of floating numbers, separated by a ':' character.
298
299With the above in mind, here follows the complete list of fio job
300parameters.
301
302name=str	ASCII name of the job. This may be used to override the
303		name printed by fio for this job. Otherwise the job
304		name is used. On the command line this parameter has the
305		special purpose of also signaling the start of a new
306		job.
307
308description=str	Text description of the job. Doesn't do anything except
309		dump this text description when this job is run. It's
310		not parsed.
311
312directory=str	Prefix filenames with this directory. Used to place files
313		in a different location than "./". See the 'filename' option
314		for escaping certain characters.
315
316filename=str	Fio normally makes up a filename based on the job name,
317		thread number, and file number. If you want to share
318		files between threads in a job or several jobs, specify
319		a filename for each of them to override the default. If
320		the ioengine used is 'net', the filename is the host, port,
321		and protocol to use in the format of =host,port,protocol.
322		See ioengine=net for more. If the ioengine is file based, you
323		can specify a number of files by separating the names with a
324		':' colon. So if you wanted a job to open /dev/sda and /dev/sdb
325		as the two working files, you would use
326		filename=/dev/sda:/dev/sdb. On Windows, disk devices are
327		accessed as \\.\PhysicalDrive0 for the first device,
328		\\.\PhysicalDrive1 for the second etc. Note: Windows and
329		FreeBSD prevent write access to areas of the disk containing
330		in-use data (e.g. filesystems).
331		If the wanted filename does need to include a colon, then
332		escape that with a '\' character. For instance, if the filename
333		is "/dev/dsk/foo@3,0:c", then you would use
334		filename="/dev/dsk/foo@3,0\:c". '-' is a reserved name, meaning
335		stdin or stdout. Which of the two depends on the read/write
336		direction set.
337
338filename_format=str
339		If sharing multiple files between jobs, it is usually necessary
340		to  have fio generate the exact names that you want. By default,
341		fio will name a file based on the default file format
342		specification of jobname.jobnumber.filenumber. With this
343		option, that can be customized. Fio will recognize and replace
344		the following keywords in this string:
345
346		$jobname
347			The name of the worker thread or process.
348
349		$jobnum
350			The incremental number of the worker thread or
351			process.
352
353		$filenum
354			The incremental number of the file for that worker
355			thread or process.
356
357		To have dependent jobs share a set of files, this option can
358		be set to have fio generate filenames that are shared between
359		the two. For instance, if testfiles.$filenum is specified,
360		file number 4 for any job will be named testfiles.4. The
361		default of $jobname.$jobnum.$filenum will be used if
362		no other format specifier is given.
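
		As an illustrative sketch (file names hypothetical), two jobs
		could share the same four files by placing

			filename_format=testfiles.$filenum
			nrfiles=4

		in the [global] section, so that file number 2 is named
		testfiles.2 in both jobs.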
363
364opendir=str	Tell fio to recursively add any file it can find in this
365		directory and down the file system tree.
366
367lockfile=str	Fio defaults to not locking any files before it does
368		IO to them. If a file or file descriptor is shared, fio
369		can serialize IO to that file to make the end result
		consistent. This is useful for emulating real workloads that
371		share files. The lock modes are:
372
373			none		No locking. The default.
374			exclusive	Only one thread/process may do IO,
375					excluding all others.
376			readwrite	Read-write locking on the file. Many
377					readers may access the file at the
378					same time, but writes get exclusive
379					access.
380
381readwrite=str
382rw=str		Type of io pattern. Accepted values are:
383
384			read		Sequential reads
385			write		Sequential writes
386			randwrite	Random writes
387			randread	Random reads
388			rw,readwrite	Sequential mixed reads and writes
389			randrw		Random mixed reads and writes
390
391		For the mixed io types, the default is to split them 50/50.
392		For certain types of io the result may still be skewed a bit,
		since the speed may be different. It is possible to specify
		a number of IO's to do before getting a new offset; this is
		done by appending a ':<nr>' to the end of the string given.
396		For a random read, it would look like 'rw=randread:8' for
397		passing in an offset modifier with a value of 8. If the
398		suffix is used with a sequential IO pattern, then the value
399		specified will be added to the generated offset for each IO.
400		For instance, using rw=write:4k will skip 4k for every
401		write. It turns sequential IO into sequential IO with holes.
402		See the 'rw_sequencer' option.
403
404rw_sequencer=str If an offset modifier is given by appending a number to
405		the rw=<str> line, then this option controls how that
406		number modifies the IO offset being generated. Accepted
407		values are:
408
409			sequential	Generate sequential offset
410			identical	Generate the same offset
411
412		'sequential' is only useful for random IO, where fio would
413		normally generate a new random offset for every IO. If you
414		append eg 8 to randread, you would get a new random offset for
415		every 8 IO's. The result would be a seek for only every 8
416		IO's, instead of for every IO. Use rw=randread:8 to specify
417		that. As sequential IO is already sequential, setting
418		'sequential' for that would not result in any differences.
		'identical' behaves in a similar fashion, except it sends
		the same offset the given number of times (8 in this example)
		before generating a new offset.
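
		For illustration (values chosen arbitrarily), random reads
		that only seek to a new random offset every 8 IO's could be
		requested with:

			rw=randread:8
			rw_sequencer=sequential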
422
kb_base=int	The base unit for a kilobyte. The de facto base is 2^10, 1024.
		Storage manufacturers like to use 10^3 or 1000 as a base
		ten unit instead, for obvious reasons. Allowed values are
		1024 or 1000, with 1024 being the default.
427
428unified_rw_reporting=bool	Fio normally reports statistics on a per
429		data direction basis, meaning that read, write, and trim are
430		accounted and reported separately. If this option is set,
		fio will sum the results and report them as "mixed"
432		instead.
433
434randrepeat=bool	For random IO workloads, seed the generator in a predictable
435		way so that results are repeatable across repetitions.
436
437randseed=int	Seed the random number generators based on this seed value, to
438		be able to control what sequence of output is being generated.
439		If not set, the random sequence depends on the randrepeat
440		setting.
441
442fallocate=str	Whether pre-allocation is performed when laying down files.
443		Accepted values are:
444
445			none		Do not pre-allocate space
446			posix		Pre-allocate via posix_fallocate()
447			keep		Pre-allocate via fallocate() with
448					FALLOC_FL_KEEP_SIZE set
449			0		Backward-compatible alias for 'none'
450			1		Backward-compatible alias for 'posix'
451
452		May not be available on all supported platforms. 'keep' is only
		available on Linux. If using ZFS on Solaris this must be set to
454		'none' because ZFS doesn't support it. Default: 'posix'.
455
456fadvise_hint=bool By default, fio will use fadvise() to advise the kernel
457		on what IO patterns it is likely to issue. Sometimes you
458		want to test specific IO patterns without telling the
459		kernel about it, in which case you can disable this option.
460		If set, fio will use POSIX_FADV_SEQUENTIAL for sequential
461		IO and POSIX_FADV_RANDOM for random IO.
462
463size=int	The total size of file io for this job. Fio will run until
		this many bytes have been transferred, unless runtime is
465		limited by other options (such as 'runtime', for instance,
466		or increased/decreased by 'io_size'). Unless specific nrfiles
467		and filesize options are given, fio will divide this size
468		between the available files specified by the job. If not set,
469		fio will use the full size of the given files or devices.
470		If the files do not exist, size must be given. It is also
471		possible to give size as a percentage between 1 and 100. If
472		size=20% is given, fio will use 20% of the full size of the
473		given files or devices.
474
475io_size=int
476io_limit=int	Normally fio operates within the region set by 'size', which
477		means that the 'size' option sets both the region and size of
478		IO to be performed. Sometimes that is not what you want. With
479		this option, it is possible to define just the amount of IO
480		that fio should do. For instance, if 'size' is set to 20G and
481		'io_size' is set to 5G, fio will perform IO within the first
482		20G but exit when 5G have been done. The opposite is also
483		possible - if 'size' is set to 20G, and 'io_size' is set to
484		40G, then fio will do 40G of IO within the 0..20G region.
485
486filesize=int	Individual file sizes. May be a range, in which case fio
487		will select sizes for files at random within the given range
488		and limited to 'size' in total (if that is given). If not
489		given, each created file is the same size.
490
491file_append=bool	Perform IO after the end of the file. Normally fio will
492		operate within the size of a file. If this option is set, then
493		fio will append to the file instead. This has identical
494		behavior to setting offset to the size of a file. This option
495		is ignored on non-regular files.
496
497fill_device=bool
498fill_fs=bool	Sets size to something really large and waits for ENOSPC (no
499		space left on device) as the terminating condition. Only makes
500		sense with sequential write. For a read workload, the mount
501		point will be filled first then IO started on the result. This
502		option doesn't make sense if operating on a raw device node,
503		since the size of that is already known by the file system.
504		Additionally, writing beyond end-of-device will not return
505		ENOSPC there.
506
507blocksize=int
508bs=int		The block size used for the io units. Defaults to 4k. Values
		can be given for both reads and writes. If a single int is
		given, it will apply to both. If a second int is specified
		after a comma, it will apply to writes only. In other words,
		the format is either bs=read_and_write or bs=read,write,trim.
		bs=4k,8k will thus use 4k blocks for reads, 8k blocks for
		writes, and 8k for trims. You can terminate the list with
		a trailing comma. bs=4k,8k, would use the default value for
		trims. If you only wish to set the write size, you
		can do so by passing an empty read size - bs=,8k will set
		8k for writes and leave the read at its default value.
519
520blockalign=int
ba=int		At what boundary to align random IO offsets. Defaults to
		the same as 'blocksize', the minimum blocksize given.
		Minimum alignment is typically 512 bytes when using direct IO,
524		though it usually depends on the hardware block size. This
525		option is mutually exclusive with using a random map for
526		files, so it will turn off that option.
527
528blocksize_range=irange
529bsrange=irange	Instead of giving a single block size, specify a range
530		and fio will mix the issued io block sizes. The issued
531		io unit will always be a multiple of the minimum value
532		given (also see bs_unaligned). Applies to both reads and
533		writes, however a second range can be given after a comma.
534		See bs=.
535
536bssplit=str	Sometimes you want even finer grained control of the
537		block sizes issued, not just an even split between them.
538		This option allows you to weight various block sizes,
		so that you are able to define a specific distribution of
		block sizes issued. The format for this option is:
541
542			bssplit=blocksize/percentage:blocksize/percentage
543
544		for as many block sizes as needed. So if you want to define
545		a workload that has 50% 64k blocks, 10% 4k blocks, and
546		40% 32k blocks, you would write:
547
548			bssplit=4k/10:64k/50:32k/40
549
550		Ordering does not matter. If the percentage is left blank,
551		fio will fill in the remaining values evenly. So a bssplit
552		option like this one:
553
554			bssplit=4k/50:1k/:32k/
555
		would have 50% 4k ios, and 25% 1k and 32k ios. The percentages
		must always add up to 100; if bssplit is given a range that adds
		up to more, it will error out.
559
560		bssplit also supports giving separate splits to reads and
561		writes. The format is identical to what bs= accepts. You
562		have to separate the read and write parts with a comma. So
563		if you want a workload that has 50% 2k reads and 50% 4k reads,
564		while having 90% 4k writes and 10% 8k writes, you would
565		specify:
566
567		bssplit=2k/50:4k/50,4k/90:8k/10
568
569blocksize_unaligned
570bs_unaligned	If this option is given, any byte size value within bsrange
		may be used as a block size. This typically won't work with
572		direct IO, as that normally requires sector alignment.
573
574bs_is_seq_rand	If this option is set, fio will use the normal read,write
575		blocksize settings as sequential,random instead. Any random
576		read or write will use the WRITE blocksize settings, and any
577		sequential read or write will use the READ blocksize setting.
578
579zero_buffers	If this option is given, fio will init the IO buffers to
580		all zeroes. The default is to fill them with random data.
581		The resulting IO buffers will not be completely zeroed,
582		unless scramble_buffers is also turned off.
583
584refill_buffers	If this option is given, fio will refill the IO buffers
585		on every submit. The default is to only fill it at init
586		time and reuse that data. Only makes sense if zero_buffers
587		isn't specified, naturally. If data verification is enabled,
588		refill_buffers is also automatically enabled.
589
590scramble_buffers=bool	If refill_buffers is too costly and the target is
591		using data deduplication, then setting this option will
592		slightly modify the IO buffer contents to defeat normal
593		de-dupe attempts. This is not enough to defeat more clever
594		block compression attempts, but it will stop naive dedupe of
595		blocks. Default: true.
596
597buffer_compress_percentage=int	If this is set, then fio will attempt to
598		provide IO buffer content (on WRITEs) that compress to
599		the specified level. Fio does this by providing a mix of
600		random data and a fixed pattern. The fixed pattern is either
601		zeroes, or the pattern specified by buffer_pattern. If the
602		pattern option is used, it might skew the compression ratio
		slightly. Note that this is per block size unit; for a file/disk
		wide compression level that matches this setting, you'll also
		want to set refill_buffers.
606
607buffer_compress_chunk=int	See buffer_compress_percentage. This
608		setting allows fio to manage how big the ranges of random
		data and zeroed data are. Without this set, fio will
610		provide buffer_compress_percentage of blocksize random
611		data, followed by the remaining zeroed. With this set
612		to some chunk size smaller than the block size, fio can
613		alternate random and zeroed data throughout the IO
614		buffer.
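
		For instance (numbers purely illustrative), a write workload
		aiming for roughly 50% compressible buffers, refreshed on
		every submit, might set:

			rw=write
			bs=64k
			buffer_compress_percentage=50
			buffer_compress_chunk=4k
			refill_buffers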
615
616buffer_pattern=str	If set, fio will fill the io buffers with this
		pattern. If not set, the contents of io buffers are defined by
618		the other options related to buffer contents. The setting can
619		be any pattern of bytes, and can be prefixed with 0x for hex
620		values. It may also be a string, where the string must then
621		be wrapped with "".
622
623dedupe_percentage=int	If set, fio will generate this percentage of
624		identical buffers when writing. These buffers will be
625		naturally dedupable. The contents of the buffers depend on
626		what other buffer compression settings have been set. It's
627		possible to have the individual buffers either fully
628		compressible, or not at all. This option only controls the
629		distribution of unique buffers.
630
631nrfiles=int	Number of files to use for this job. Defaults to 1.
632
633openfiles=int	Number of files to keep open at the same time. Defaults to
		the same as nrfiles, can be set smaller to limit the number of
635		simultaneous opens.
636
637file_service_type=str  Defines how fio decides which file from a job to
638		service next. The following types are defined:
639
640			random	Just choose a file at random.
641
642			roundrobin  Round robin over open files. This
643				is the default.
644
645			sequential  Finish one file before moving on to
646				the next. Multiple files can still be
647				open depending on 'openfiles'.
648
649		The string can have a number appended, indicating how
650		often to switch to a new file. So if option random:4 is
651		given, fio will switch to a new random file after 4 ios
652		have been issued.
653
654ioengine=str	Defines how the job issues io to the file. The following
655		types are defined:
656
657			sync	Basic read(2) or write(2) io. lseek(2) is
658				used to position the io location.
659
660			psync 	Basic pread(2) or pwrite(2) io.
661
662			vsync	Basic readv(2) or writev(2) IO.
663
664			psyncv	Basic preadv(2) or pwritev(2) IO.
665
666			libaio	Linux native asynchronous io. Note that Linux
667				may only support queued behaviour with
668				non-buffered IO (set direct=1 or buffered=0).
669				This engine defines engine specific options.
670
671			posixaio glibc posix asynchronous io.
672
673			solarisaio Solaris native asynchronous io.
674
675			windowsaio Windows native asynchronous io.
676
677			mmap	File is memory mapped and data copied
678				to/from using memcpy(3).
679
680			splice	splice(2) is used to transfer the data and
681				vmsplice(2) to transfer data from user
682				space to the kernel.
683
684			syslet-rw Use the syslet system calls to make
685				regular read/write async.
686
687			sg	SCSI generic sg v3 io. May either be
688				synchronous using the SG_IO ioctl, or if
689				the target is an sg character device
690				we use read(2) and write(2) for asynchronous
691				io.
692
693			null	Doesn't transfer any data, just pretends
694				to. This is mainly used to exercise fio
695				itself and for debugging/testing purposes.
696
697			net	Transfer over the network to given host:port.
698				Depending on the protocol used, the hostname,
699				port, listen and filename options are used to
700				specify what sort of connection to make, while
701				the protocol option determines which protocol
702				will be used.
703				This engine defines engine specific options.
704
705			netsplice Like net, but uses splice/vmsplice to
706				map data and send/receive.
707				This engine defines engine specific options.
708
709			cpuio	Doesn't transfer any data, but burns CPU
710				cycles according to the cpuload= and
711				cpucycle= options. Setting cpuload=85
712				will cause that job to do nothing but burn
713				85% of the CPU. In case of SMP machines,
714				use numjobs=<no_of_cpu> to get desired CPU
715				usage, as the cpuload only loads a single
716				CPU at the desired rate.
717
718			guasi	The GUASI IO engine is the Generic Userspace
				Asynchronous Syscall Interface approach
720				to async IO. See
721
722				http://www.xmailserver.org/guasi-lib.html
723
724				for more info on GUASI.
725
726			rdma    The RDMA I/O engine  supports  both  RDMA
727				memory semantics (RDMA_WRITE/RDMA_READ) and
728				channel semantics (Send/Recv) for the
729				InfiniBand, RoCE and iWARP protocols.
730
731			falloc	IO engine that does regular fallocate to
732				simulate data transfer as fio ioengine.
733				DDIR_READ  does fallocate(,mode = keep_size,)
734				DDIR_WRITE does fallocate(,mode = 0)
735				DDIR_TRIM  does fallocate(,mode = punch_hole)
736
737			e4defrag IO engine that does regular EXT4_IOC_MOVE_EXT
738				ioctls to simulate defragment activity in
739				request to DDIR_WRITE event
740
741			rbd	IO engine supporting direct access to Ceph
742				Rados Block Devices (RBD) via librbd without
743				the need to use the kernel rbd driver. This
744				ioengine defines engine specific options.
745
746			gfapi	Using Glusterfs libgfapi sync interface to
				direct access to Glusterfs volumes without
				having to go through FUSE.
749
750			gfapi_async Using Glusterfs libgfapi async interface
751				to direct access to Glusterfs volumes without
752				having to go through FUSE. This ioengine
753				defines engine specific options.
754
755			libhdfs	Read and write through Hadoop (HDFS).
				The 'filename' option is used to specify the
				host and port of the hdfs name-node to connect
				to. This engine interprets offsets a little
				differently. In HDFS, files once created
				cannot be modified. So random writes are not
				possible. To imitate this, the libhdfs engine
				expects a bunch of small files to be created
				over HDFS, and the engine will randomly pick a
				file out of those files based on the offset
				generated by the fio backend. (See the example
				job file on how to create such files; use the
				rw=write option.) Please note, you might want to set
768				necessary environment variables to work with
769				hdfs/libhdfs properly.
770
771			external Prefix to specify loading an external
772				IO engine object file. Append the engine
773				filename, eg ioengine=external:/tmp/foo.o
774				to load ioengine foo.o in /tmp.
775
776iodepth=int	This defines how many io units to keep in flight against
777		the file. The default is 1 for each file defined in this
778		job, can be overridden with a larger value for higher
779		concurrency. Note that increasing iodepth beyond 1 will not
		affect synchronous ioengines (except for small degrees when
781		verify_async is in use). Even async engines may impose OS
782		restrictions causing the desired depth not to be achieved.
783		This may happen on Linux when using libaio and not setting
784		direct=1, since buffered IO is not async on that OS. Keep an
785		eye on the IO depth distribution in the fio output to verify
786		that the achieved depth is as expected. Default: 1.
787
788iodepth_batch_submit=int
789iodepth_batch=int This defines how many pieces of IO to submit at once.
790		It defaults to 1 which means that we submit each IO
791		as soon as it is available, but can be raised to submit
		bigger batches of IO at a time.
793
794iodepth_batch_complete=int This defines how many pieces of IO to retrieve
795		at once. It defaults to 1 which means that we'll ask
796		for a minimum of 1 IO in the retrieval process from
797		the kernel. The IO retrieval will go on until we
798		hit the limit set by iodepth_low. If this variable is
799		set to 0, then fio will always check for completed
800		events before queuing more IO. This helps reduce
801		IO latency, at the cost of more retrieval system calls.
802
803iodepth_low=int	The low water mark indicating when to start filling
804		the queue again. Defaults to the same as iodepth, meaning
805		that fio will attempt to keep the queue full at all times.
806		If iodepth is set to eg 16 and iodepth_low is set to 4, then
807		after fio has filled the queue of 16 requests, it will let
808		the depth drain down to 4 before starting to fill it again.
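
		Putting the above queue depth options together, a hypothetical
		async random read setup (all values illustrative) might use:

			ioengine=libaio
			direct=1
			rw=randread
			bs=4k
			iodepth=16
			iodepth_low=4
			iodepth_batch_submit=4
			iodepth_batch_complete=4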
809
810direct=bool	If value is true, use non-buffered io. This is usually
811		O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
812		On Windows the synchronous ioengines don't support direct io.
813
814atomic=bool	If value is true, attempt to use atomic direct IO. Atomic
815		writes are guaranteed to be stable once acknowledged by
816		the operating system. Only Linux supports O_ATOMIC right
817		now.
818
819buffered=bool	If value is true, use buffered io. This is the opposite
820		of the 'direct' option. Defaults to true.
821
822offset=int	Start io at the given offset in the file. The data before
823		the given offset will not be touched. This effectively
824		caps the file size at real_size - offset.
825
826offset_increment=int	If this is provided, then the real offset becomes
827		offset + offset_increment * thread_number, where the thread
828		number is a counter that starts at 0 and is incremented for
829		each sub-job (i.e. when numjobs option is specified). This
830		option is useful if there are several jobs which are intended
831		to operate on a file in parallel disjoint segments, with
832		even spacing between the starting points.
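
		As a sketch (the file name is a placeholder and the sizes are
		arbitrary), four clones writing disjoint 1g segments of the
		same 4g file or device might use:

			filename=/path/to/testfile
			rw=write
			size=1g
			numjobs=4
			offset_increment=1g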
833
834number_ios=int	Fio will normally perform IOs until it has exhausted the size
		of the region set by size=, or if it exhausts the allocated
836		time (or hits an error condition). With this setting, the
837		range/size can be set independently of the number of IOs to
838		perform. When fio reaches this number, it will exit normally
839		and report status. Note that this does not extend the amount
840		of IO that will be done, it will only stop fio if this
841		condition is met before other end-of-job criteria.
842
843fsync=int	If writing to a file, issue a sync of the dirty data
844		for every number of blocks given. For example, if you give
845		32 as a parameter, fio will sync the file for every 32
846		writes issued. If fio is using non-buffered io, we may
847		not sync the file. The exception is the sg io engine, which
848		synchronizes the disk cache anyway.
849
850fdatasync=int	Like fsync= but uses fdatasync() to only sync data and not
851		metadata blocks.
852		In FreeBSD and Windows there is no fdatasync(), this falls back to
853		using fsync()
854
855sync_file_range=str:val	Use sync_file_range() for every 'val' number of
856		write operations. Fio will track range of writes that
857		have happened since the last sync_file_range() call. 'str'
858		can currently be one or more of:
859
860		wait_before	SYNC_FILE_RANGE_WAIT_BEFORE
861		write		SYNC_FILE_RANGE_WRITE
862		wait_after	SYNC_FILE_RANGE_WAIT_AFTER
863
864		So if you do sync_file_range=wait_before,write:8, fio would
865		use SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE for
866		every 8 writes. Also see the sync_file_range(2) man page.
867		This option is Linux specific.
868
869overwrite=bool	If true, writes to a file will always overwrite existing
870		data. If the file doesn't already exist, it will be
871		created before the write phase begins. If the file exists
872		and is large enough for the specified write phase, nothing
873		will be done.
874
875end_fsync=bool	If true, fsync file contents when a write stage has completed.
876
877fsync_on_close=bool	If true, fio will fsync() a dirty file on close.
878		This differs from end_fsync in that it will happen on every
879		file close, not just at the end of the job.
880
881rwmixread=int	How large a percentage of the mix should be reads.
882
883rwmixwrite=int	How large a percentage of the mix should be writes. If both
884		rwmixread and rwmixwrite is given and the values do not add
885		up to 100%, the latter of the two will be used to override
886		the first. This may interfere with a given rate setting,
887		if fio is asked to limit reads or writes to a certain rate.
888		If that is the case, then the distribution may be skewed.
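
		As an illustration, a mixed random workload that is roughly
		70% reads and 30% writes (split chosen arbitrarily) could be
		expressed as:

			rw=randrw
			rwmixread=70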
889
890random_distribution=str:float	By default, fio will use a completely uniform
891		random distribution when asked to perform random IO. Sometimes
892		it is useful to skew the distribution in specific ways,
		ensuring that some parts of the data are hotter than others.
894		fio includes the following distribution models:
895
896		random		Uniform random distribution
897		zipf		Zipf distribution
898		pareto		Pareto distribution
899
900		When using a zipf or pareto distribution, an input value
901		is also needed to define the access pattern. For zipf, this
902		is the zipf theta. For pareto, it's the pareto power. Fio
		includes a test program, genzipf, that can be used to visualize
904		what the given input values will yield in terms of hit rates.
905		If you wanted to use zipf with a theta of 1.2, you would use
906		random_distribution=zipf:1.2 as the option. If a non-uniform
907		model is used, fio will disable use of the random map.
908
909percentage_random=int	For a random workload, set how big a percentage should
910		be random. This defaults to 100%, in which case the workload
		is fully random. It can be set anywhere from 0 to 100.
912		Setting it to 0 would make the workload fully sequential. Any
913		setting in between will result in a random mix of sequential
914		and random IO, at the given percentages. It is possible to
915		set different values for reads, writes, and trim. To do so,
916		simply use a comma separated list. See blocksize.
917
918norandommap	Normally fio will cover every block of the file when doing
919		random IO. If this option is given, fio will just get a
920		new random offset without looking at past io history. This
921		means that some blocks may not be read or written, and that
922		some blocks may be read/written more than once. If this option
923		is used with verify= and multiple blocksizes (via bsrange=),
924		only intact blocks are verified, i.e., partially-overwritten
925		blocks are ignored.
926
927softrandommap=bool See norandommap. If fio runs with the random block map
928		enabled and it fails to allocate the map, if this option is
929		set it will continue without a random block map. As coverage
930		will not be as complete as with random maps, this option is
931		disabled by default.
932
933random_generator=str	Fio supports the following engines for generating
934		IO offsets for random IO:
935
936		tausworthe	Strong 2^88 cycle random number generator
937		lfsr		Linear feedback shift register generator
938
939		Tausworthe is a strong random number generator, but it
940		requires tracking on the side if we want to ensure that
941		blocks are only read or written once. LFSR guarantees
942		that we never generate the same offset twice, and it's
943		also less computationally expensive. It's not a true
944		random generator, however, though for IO purposes it's
945		typically good enough. LFSR only works with single
946		block sizes, not with workloads that use multiple block
947		sizes. If used with such a workload, fio may read or write
948		some blocks multiple times.
949
950nice=int	Run the job with the given nice value. See man nice(2).
951
952prio=int	Set the io priority value of this job. Linux limits us to
953		a positive value between 0 and 7, with 0 being the highest.
954		See man ionice(1).
955
956prioclass=int	Set the io priority class. See man ionice(1).
957
958thinktime=int	Stall the job x microseconds after an io has completed before
959		issuing the next. May be used to simulate processing being
960		done by an application. See thinktime_blocks and
961		thinktime_spin.
962
963thinktime_spin=int
964		Only valid if thinktime is set - pretend to spend CPU time
965		doing something with the data received, before falling back
966		to sleeping for the rest of the period specified by
967		thinktime.
968
969thinktime_blocks=int
970		Only valid if thinktime is set - control how many blocks
971		to issue, before waiting 'thinktime' usecs. If not set,
972		defaults to 1 which will make fio wait 'thinktime' usecs
973		after every block. This effectively makes any queue depth
974		setting redundant, since no more than 1 IO will be queued
975		before we have to complete it and do our thinktime. In
976		other words, this setting effectively caps the queue depth
977		if the latter is larger.
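
		For example (values purely illustrative), to issue 16 blocks,
		then spin for 100 microseconds and sleep for the remainder of
		a 1000 microsecond pause, one could set:

			thinktime=1000
			thinktime_spin=100
			thinktime_blocks=16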
978
979rate=int	Cap the bandwidth used by this job. The number is in bytes/sec,
980		the normal suffix rules apply. You can use rate=500k to limit
981		reads and writes to 500k each, or you can specify read and
982		writes separately. Using rate=1m,500k would limit reads to
983		1MB/sec and writes to 500KB/sec. Capping only reads or
984		writes can be done with rate=,500k or rate=500k,. The former
985		will only limit writes (to 500KB/sec), the latter will only
986		limit reads.
987
988ratemin=int	Tell fio to do whatever it can to maintain at least this
		bandwidth. Failing to meet this requirement will cause
990		the job to exit. The same format as rate is used for
991		read vs write separation.
992
993rate_iops=int	Cap the bandwidth to this number of IOPS. Basically the same
994		as rate, just specified independently of bandwidth. If the
995		job is given a block size range instead of a fixed value,
996		the smallest block size is used as the metric. The same format
997		as rate is used for read vs write separation.
998
999rate_iops_min=int If fio doesn't meet this rate of IO, it will cause
1000		the job to exit. The same format as rate is used for read vs
1001		write separation.
1002
1003latency_target=int	If set, fio will attempt to find the max performance
1004		point that the given workload will run at while maintaining a
		latency below this target. The value is given in microseconds.
		See latency_window and latency_percentile.
1007
1008latency_window=int	Used with latency_target to specify the sample window
1009		that the job is run at varying queue depths to test the
1010		performance. The value is given in microseconds.
1011
1012latency_percentile=float	The percentage of IOs that must fall within the
1013		criteria specified by latency_target and latency_window. If not
1014		set, this defaults to 100.0, meaning that all IOs must be equal
		to or below the value set by latency_target.
1016
1017max_latency=int	If set, fio will exit the job if it exceeds this maximum
1018		latency. It will exit with an ETIME error.
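
		As a sketch (numbers arbitrary), to search for the highest
		performing queue depth that keeps 99.9% of IOs at or below
		10 milliseconds, sampled over 1 second windows, one might use:

			latency_target=10000
			latency_window=1000000
			latency_percentile=99.9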
1019
1020ratecycle=int	Average bandwidth for 'rate' and 'ratemin' over this number
1021		of milliseconds.
1022
1023cpumask=int	Set the CPU affinity of this job. The parameter given is a
1024		bitmask of allowed CPU's the job may run on. So if you want
1025		the allowed CPUs to be 1 and 5, you would pass the decimal
1026		value of (1 << 1 | 1 << 5), or 34. See man
1027		sched_setaffinity(2). This may not work on all supported
1028		operating systems or kernel versions. This option doesn't
1029		work well for a higher CPU count than what you can store in
1030		an integer mask, so it can only control cpus 1-32. For
1031		boxes with larger CPU counts, use cpus_allowed.
1032
1033cpus_allowed=str Controls the same options as cpumask, but it allows a text
1034		setting of the permitted CPUs instead. So to use CPUs 1 and
1035		5, you would specify cpus_allowed=1,5. This options also
1036		allows a range of CPUs. Say you wanted a binding to CPUs
1037		1, 5, and 8-15, you would set cpus_allowed=1,5,8-15.
1038
1039cpus_allowed_policy=str Set the policy of how fio distributes the CPUs
1040		specified by cpus_allowed or cpumask. Two policies are
1041		supported:
1042
1043		shared	All jobs will share the CPU set specified.
1044		split	Each job will get a unique CPU from the CPU set.
1045
1046		'shared' is the default behaviour, if the option isn't
		specified. If split is specified, then fio will assign
1048		one cpu per job. If not enough CPUs are given for the jobs
1049		listed, then fio will roundrobin the CPUs in the set.
1050
numa_cpu_nodes=str Set this job running on specified NUMA nodes' CPUs. The
		arguments allow a comma delimited list of cpu numbers,
		A-B ranges, or 'all'. Note, to enable numa options support,
1054		fio must be built on a system with libnuma-dev(el) installed.
1055
1056numa_mem_policy=str Set this job's memory policy and corresponding NUMA
		nodes. Format of the arguments:
			<mode>[:<nodelist>]
		`mode' is one of the following memory policies:
			default, prefer, bind, interleave, local
		For the `default' and `local' memory policies, no node
		needs to be specified.
		For `prefer', only one node is allowed.
		For `bind' and `interleave', a comma delimited list of
		numbers, A-B ranges, or 'all' is allowed.
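
		For instance (node numbers depend on the system and are only
		illustrative), a job could be pinned to node 0's CPUs while
		binding its memory to nodes 0 and 1 with:

			numa_cpu_nodes=0
			numa_mem_policy=bind:0-1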
1066
1067startdelay=time	Start this job the specified number of seconds after fio
1068		has started. Only useful if the job file contains several
1069		jobs, and you want to delay starting some jobs to a certain
1070		time.
1071
1072runtime=time	Tell fio to terminate processing after the specified number
1073		of seconds. It can be quite hard to determine for how long
1074		a specified job will run, so this parameter is handy to
1075		cap the total runtime to a given time.
1076
1077time_based	If set, fio will run for the duration of the runtime
1078		specified even if the file(s) are completely read or
1079		written. It will simply loop over the same workload
1080		as many times as the runtime allows.
1081
1082ramp_time=time	If set, fio will run the specified workload for this amount
1083		of time before logging any performance numbers. Useful for
1084		letting performance settle before logging results, thus
1085		minimizing the runtime required for stable results. Note
1086		that the ramp_time is considered lead in time for a job,
1087		thus it will increase the total runtime if a special timeout
1088		or runtime is specified.
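
		For example (durations arbitrary), to run a job for a fixed 60
		seconds regardless of file size and discard the first 5
		seconds of statistics, one could combine:

			time_based
			runtime=60
			ramp_time=5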
1089
1090invalidate=bool	Invalidate the buffer/page cache parts for this file prior
1091		to starting io. Defaults to true.
1092
1093sync=bool	Use sync io for buffered writes. For the majority of the
1094		io engines, this means using O_SYNC.
1095
1096iomem=str
1097mem=str		Fio can use various types of memory as the io unit buffer.
1098		The allowed values are:
1099
1100			malloc	Use memory from malloc(3) as the buffers.
1101
1102			shm	Use shared memory as the buffers. Allocated
1103				through shmget(2).
1104
1105			shmhuge	Same as shm, but use huge pages as backing.
1106
1107			mmap	Use mmap to allocate buffers. May either be
1108				anonymous memory, or can be file backed if
1109				a filename is given after the option. The
1110				format is mem=mmap:/path/to/file.
1111
1112			mmaphuge Use a memory mapped huge file as the buffer
1113				backing. Append filename after mmaphuge, ala
1114				mem=mmaphuge:/hugetlbfs/file
1115
1116		The area allocated is a function of the maximum allowed
1117		bs size for the job, multiplied by the io depth given. Note
1118		that for shmhuge and mmaphuge to work, the system must have
1119		free huge pages allocated. This can normally be checked
1120		and set by reading/writing /proc/sys/vm/nr_hugepages on a
1121		Linux system. Fio assumes a huge page is 4MB in size. So
1122		to calculate the number of huge pages you need for a given
1123		job file, add up the io depth of all jobs (normally one unless
1124		iodepth= is used) and multiply by the maximum bs set. Then
1125		divide that number by the huge page size. You can see the
1126		size of the huge pages in /proc/meminfo. If no huge pages
1127		are allocated by having a non-zero number in nr_hugepages,
1128		using mmaphuge or shmhuge will fail. Also see hugepage-size.
1129
1130		mmaphuge also needs to have hugetlbfs mounted and the file
1131		location should point there. So if it's mounted in /huge,
1132		you would use mem=mmaphuge:/huge/somefile.
1133
iomem_align=int	This indicates the memory alignment of the IO memory buffers.
		Note that the given alignment is applied to the first IO unit
		buffer; if using iodepth, the alignment of the following buffers
		is given by the bs used. In other words, if using a bs that is
		a multiple of the page size in the system, all buffers will
		be aligned to this value. If using a bs that is not page
		aligned, the alignment of subsequent IO memory buffers is the
		sum of the iomem_align and bs used.
1142
1143hugepage-size=int
1144		Defines the size of a huge page. Must at least be equal
1145		to the system setting, see /proc/meminfo. Defaults to 4MB.
1146		Should probably always be a multiple of megabytes, so using
1147		hugepage-size=Xm is the preferred way to set this to avoid
1148		setting a non-pow-2 bad value.
1149
1150exitall		When one job finishes, terminate the rest. The default is
		to wait for each job to finish; sometimes that is not the
1152		desired action.
1153
1154bwavgtime=int	Average the calculated bandwidth over the given time. Value
1155		is specified in milliseconds.
1156
1157iopsavgtime=int	Average the calculated IOPS over the given time. Value
1158		is specified in milliseconds.
1159
1160create_serialize=bool	If true, serialize the file creating for the jobs.
1161			This may be handy to avoid interleaving of data
1162			files, which may greatly depend on the filesystem
1163			used and even the number of processors in the system.
1164
1165create_fsync=bool	fsync the data file after creation. This is the
1166			default.
1167
create_on_open=bool	Don't pre-setup the files for IO, just create and
			open() them when it's time to do IO to that file.
1170
1171create_only=bool	If true, fio will only run the setup phase of the job.
1172			If files need to be laid out or updated on disk, only
1173			that will be done. The actual job contents are not
1174			executed.
1175
1176pre_read=bool	If this is given, files will be pre-read into memory before
1177		starting the given IO operation. This will also clear
1178		the 'invalidate' flag, since it is pointless to pre-read
1179		and then drop the cache. This will only work for IO engines
1180		that are seekable, since they allow you to read the same data
1181		multiple times. Thus it will not work on eg network or splice
1182		IO.
1183
1184unlink=bool	Unlink the job files when done. Not the default, as repeated
1185		runs of that job would then waste time recreating the file
1186		set again and again.
1187
1188loops=int	Run the specified number of iterations of this job. Used
1189		to repeat the same workload a given number of times. Defaults
1190		to 1.
1191
1192verify_only	Do not perform specified workload---only verify data still
1193		matches previous invocation of this workload. This option
1194		allows one to check data multiple times at a later date
1195		without overwriting it. This option makes sense only for
1196		workloads that write data, and does not support workloads
1197		with the time_based option set.
1198
1199do_verify=bool	Run the verify phase after a write phase. Only makes sense if
1200		verify is set. Defaults to 1.
1201
1202verify=str	If writing to a file, fio can verify the file contents
1203		after each iteration of the job. The allowed values are:
1204
1205			md5	Use an md5 sum of the data area and store
1206				it in the header of each block.
1207
1208			crc64	Use an experimental crc64 sum of the data
1209				area and store it in the header of each
1210				block.
1211
1212			crc32c	Use a crc32c sum of the data area and store
1213				it in the header of each block.
1214
			crc32c-intel Use hardware assisted crc32c calculation
1216				provided on SSE4.2 enabled processors. Falls
1217				back to regular software crc32c, if not
1218				supported by the system.
1219
1220			crc32	Use a crc32 sum of the data area and store
1221				it in the header of each block.
1222
1223			crc16	Use a crc16 sum of the data area and store
1224				it in the header of each block.
1225
1226			crc7	Use a crc7 sum of the data area and store
1227				it in the header of each block.
1228
1229			xxhash	Use xxhash as the checksum function. Generally
1230				the fastest software checksum that fio
1231				supports.
1232
1233			sha512	Use sha512 as the checksum function.
1234
1235			sha256	Use sha256 as the checksum function.
1236
1237			sha1	Use optimized sha1 as the checksum function.
1238
1239			meta	Write extra information about each io
1240				(timestamp, block number etc.). The block
1241				number is verified. The io sequence number is
1242				verified for workloads that write data.
1243				See also verify_pattern.
1244
1245			null	Only pretend to verify. Useful for testing
1246				internals with ioengine=null, not for much
1247				else.
1248
1249		This option can be used for repeated burn-in tests of a
1250		system to make sure that the written data is also
1251		correctly read back. If the data direction given is
1252		a read or random read, fio will assume that it should
1253		verify a previously written file. If the data direction
1254		includes any form of write, the verify will be of the
1255		newly written data.
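
		As an illustrative burn-in sketch (values arbitrary), a write
		job can be checked with crc32c and stopped on the first
		failure by combining:

			rw=randwrite
			bs=4k
			size=256m
			verify=crc32c
			verify_fatal=1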
1256
1257verifysort=bool	If set, fio will sort written verify blocks when it deems
1258		it faster to read them back in a sorted manner. This is
1259		often the case when overwriting an existing file, since
1260		the blocks are already laid out in the file system. You
1261		can ignore this option unless doing huge amounts of really
1262		fast IO where the red-black tree sorting CPU time becomes
1263		significant.
1264
1265verify_offset=int	Swap the verification header with data somewhere else
			in the block before writing. It's swapped back before
1267			verifying.
1268
1269verify_interval=int	Write the verification header at a finer granularity
1270			than the blocksize. It will be written for chunks the
			size of verify_interval. blocksize should divide this
1272			evenly.
1273
1274verify_pattern=str	If set, fio will fill the io buffers with this
1275		pattern. Fio defaults to filling with totally random
1276		bytes, but sometimes it's interesting to fill with a known
1277		pattern for io verification purposes. Depending on the
1278		width of the pattern, fio will fill 1/2/3/4 bytes of the
		buffer at a time (it can be either a decimal or a hex number).
		If the verify_pattern is larger than a 32-bit quantity, it has
		to be a hex number that starts with either "0x" or "0X". Use
1282		with verify=meta.
1283
1284verify_fatal=bool	Normally fio will keep checking the entire contents
1285		before quitting on a block verification failure. If this
1286		option is set, fio will exit the job on the first observed
1287		failure.
1288
1289verify_dump=bool	If set, dump the contents of both the original data
1290		block and the data block we read off disk to files. This
1291		allows later analysis to inspect just what kind of data
1292		corruption occurred. Off by default.
1293
1294verify_async=int	Fio will normally verify IO inline from the submitting
1295		thread. This option takes an integer describing how many
1296		async offload threads to create for IO verification instead,
1297		causing fio to offload the duty of verifying IO contents
1298		to one or more separate threads. If using this offload
1299		option, even sync IO engines can benefit from using an
1300		iodepth setting higher than 1, as it allows them to have
1301		IO in flight while verifies are running.
1302
1303verify_async_cpus=str	Tell fio to set the given CPU affinity on the
1304		async IO verification threads. See cpus_allowed for the
1305		format used.
1306
1307verify_backlog=int	Fio will normally verify the written contents of a
1308		job that utilizes verify once that job has completed. In
1309		other words, everything is written then everything is read
1310		back and verified. You may want to verify continually
1311		instead for a variety of reasons. Fio stores the meta data
1312		associated with an IO block in memory, so for large
1313		verify workloads, quite a bit of memory would be used up
1314		holding this meta data. If this option is enabled, fio
1315		will write only N blocks before verifying these blocks.
1316
1317verify_backlog_batch=int	Control how many blocks fio will verify
1318		if verify_backlog is set. If not set, will default to
1319		the value of verify_backlog (meaning the entire queue
1320		is read back and verified).  If verify_backlog_batch is
		less than verify_backlog, then not all blocks will be verified;
1322		if verify_backlog_batch is larger than verify_backlog, some
1323		blocks will be verified more than once.
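
		As a sketch, to write blocks in batches of 10,000 and verify
		each batch before writing the next (the values are arbitrary):

		verify=md5
		verify_backlog=10000
		verify_backlog_batch=10000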
1324
1325verify_state_save=bool	When a job exits during the write phase of a verify
1326		workload, save its current state. This allows fio to replay
1327		up until that point, if the verify state is loaded for the
1328		verify read phase. The format of the filename is, roughly,
1329		<type>-<jobname>-<jobindex>-verify.state. <type> is "local"
1330		for a local run, "sock" for a client/server socket connection,
1331		and "ip" (192.168.0.1, for instance) for a networked
1332		client/server connection.
1333
1334verify_state_load=bool	If a verify termination trigger was used, fio stores
1335		the current write state of each thread. This can be used at
1336		verification time so that fio knows how far it should verify.
1337		Without this information, fio will run a full verification
1338		pass, according to the settings in the job file used.
1339
1340stonewall
1341wait_for_previous Wait for preceding jobs in the job file to exit, before
1342		starting this one. Can be used to insert serialization
1343		points in the job file. A stone wall also implies starting
1344		a new reporting group.
1345
1346new_group	Start a new reporting group. See: group_reporting.
1347
1348numjobs=int	Create the specified number of clones of this job. May be
1349		used to setup a larger number of threads/processes doing
1350		the same thing. Each thread is reported separately; to see
1351		statistics for all clones as a whole, use group_reporting in
1352		conjunction with new_group.
1353
1354group_reporting	It may sometimes be interesting to display statistics for
1355		groups of jobs as a whole instead of for each individual job.
1356		This is especially true if 'numjobs' is used; looking at
1357		individual thread/process output quickly becomes unwieldy.
1358		To see the final report per-group instead of per-job, use
1359		'group_reporting'. Jobs in a file will be part of the same
		reporting group, unless separated by a stonewall, or by
1361		using 'new_group'.
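
		As a minimal sketch, four identical clones reported as a
		single group (the workload parameters are just placeholders):

		[parallel-readers]
		numjobs=4
		group_reporting
		rw=randread
		bs=4k
		size=256m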
1362
1363thread		fio defaults to forking jobs, however if this option is
1364		given, fio will use pthread_create(3) to create threads
1365		instead.
1366
1367zonesize=int	Divide a file into zones of the specified size. See zoneskip.
1368
1369zoneskip=int	Skip the specified number of bytes when zonesize data has
1370		been read. The two zone options can be used to only do
1371		io on zones of a file.
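
		As an illustrative sketch (the sizes are arbitrary), reading
		only the first 256m of every 1g of a file could be expressed
		as:

		rw=read
		zonesize=256m
		zoneskip=768m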
1372
1373write_iolog=str	Write the issued io patterns to the specified file. See
1374		read_iolog.  Specify a separate file for each job, otherwise
1375		the iologs will be interspersed and the file may be corrupt.
1376
1377read_iolog=str	Open an iolog with the specified file name and replay the
1378		io patterns it contains. This can be used to store a
1379		workload and replay it sometime later. The iolog given
1380		may also be a blktrace binary file, which allows fio
1381		to replay a workload captured by blktrace. See blktrace
1382		for how to capture such logging data. For blktrace replay,
1383		the file needs to be turned into a blkparse binary data
1384		file first (blkparse <device> -o /dev/null -d file_for_fio.bin).
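
		As a rough sketch of the capture-and-replay flow (the device
		name and file names are placeholders, and the blktrace
		invocation shown is only one way to capture the data):

		# blktrace -d /dev/sda
		# blkparse sda -o /dev/null -d sda.bin
		# fio --name=replay --read_iolog=sda.bin

		Note that replaying a trace that contains writes will issue
		those writes again, so only replay against devices whose
		contents may be destroyed.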
1385
1386replay_no_stall=int When replaying I/O with read_iolog the default behavior
1387		is to attempt to respect the time stamps within the log and
1388		replay them with the appropriate delay between IOPS.  By
1389		setting this variable fio will not respect the timestamps and
1390		attempt to replay them as fast as possible while still
1391		respecting ordering.  The result is the same I/O pattern to a
1392		given device, but different timings.
1393
1394replay_redirect=str While replaying I/O patterns using read_iolog the
1395		default behavior is to replay the IOPS onto the major/minor
1396		device that each IOP was recorded from.  This is sometimes
1397		undesirable because on a different machine those major/minor
1398		numbers can map to a different device.  Changing hardware on
1399		the same system can also result in a different major/minor
1400		mapping.  Replay_redirect causes all IOPS to be replayed onto
1401		the single specified device regardless of the device it was
1402		recorded from. i.e. replay_redirect=/dev/sdc would cause all
1403		IO in the blktrace to be replayed onto /dev/sdc.  This means
		multiple devices will be replayed onto a single device, if the trace
1405		contains multiple devices.  If you want multiple devices to be
1406		replayed concurrently to multiple redirected devices you must
1407		blkparse your trace into separate traces and replay them with
		independent fio invocations.  Unfortunately this also breaks
1409		the strict time ordering between multiple device accesses.
1410
1411write_bw_log=str If given, write a bandwidth log of the jobs in this job
1412		file. Can be used to store data of the bandwidth of the
1413		jobs in their lifetime. The included fio_generate_plots
1414		script uses gnuplot to turn these text files into nice
1415		graphs. See write_lat_log for behaviour of given
1416		filename. For this option, the suffix is _bw.x.log, where
1417		x is the index of the job (1..N, where N is the number of
1418		jobs).
1419
1420write_lat_log=str Same as write_bw_log, except that this option stores io
1421		submission, completion, and total latencies instead. If no
1422		filename is given with this option, the default filename of
1423		"jobname_type.log" is used. Even if the filename is given,
1424		fio will still append the type of log. So if one specifies
1425
1426		write_lat_log=foo
1427
1428		The actual log names will be foo_slat.x.log, foo_clat.x.log,
1429		and foo_lat.x.log, where x is the index of the job (1..N,
		where N is the number of jobs). This helps fio_generate_plots
		find the logs automatically.
1432
1433write_iops_log=str Same as write_bw_log, but writes IOPS. If no filename is
1434		given with this option, the default filename of
		"jobname_type.x.log" is used, where x is the index of the job
1436		(1..N, where N is the number of jobs). Even if the filename
1437		is given, fio will still append the type of log.
1438
1439log_avg_msec=int By default, fio will log an entry in the iops, latency,
1440		or bw log for every IO that completes. When writing to the
1441		disk log, that can quickly grow to a very large size. Setting
		this option makes fio average each log entry over the
1443		specified period of time, reducing the resolution of the log.
1444		Defaults to 0.
1445
1446log_offset=int	If this is set, the iolog options will include the byte
1447		offset for the IO entry as well as the other data values.
1448
1449log_compression=int	If this is set, fio will compress the IO logs as
1450		it goes, to keep the memory footprint lower. When a log
1451		reaches the specified size, that chunk is removed and
1452		compressed in the background. Given that IO logs are
1453		fairly highly compressible, this yields a nice memory
1454		savings for longer runs. The downside is that the
1455		compression will consume some background CPU cycles, so
1456		it may impact the run. This, however, is also true if
1457		the logging ends up consuming most of the system memory.
1458		So pick your poison. The IO logs are saved normally at the
1459		end of a run, by decompressing the chunks and storing them
1460		in the specified log file. This feature depends on the
1461		availability of zlib.
1462
1463log_store_compressed=bool	If set, and log_compression is also set,
1464		fio will store the log files in a compressed format. They
1465		can be decompressed with fio, using the --inflate-log
1466		command line parameter. The files will be stored with a
1467		.fz suffix.
1468
1469lockmem=int	Pin down the specified amount of memory with mlock(2). Can
1470		potentially be used instead of removing memory or booting
1471		with less memory to simulate a smaller amount of memory.
1472		The amount specified is per worker.
1473
1474exec_prerun=str	Before running this job, issue the command specified
		through system(3). Output is redirected to a file called
1476		jobname.prerun.txt.
1477
1478exec_postrun=str After the job completes, issue the command specified
		 through system(3). Output is redirected to a file called
1480		 jobname.postrun.txt.
1481
1482ioscheduler=str	Attempt to switch the device hosting the file to the specified
1483		io scheduler before running.
1484
1485disk_util=bool	Generate disk utilization statistics, if the platform
1486		supports it. Defaults to on.
1487
1488disable_lat=bool Disable measurements of total latency numbers. Useful
1489		only for cutting back the number of calls to gettimeofday,
1490		as that does impact performance at really high IOPS rates.
1491		Note that to really get rid of a large amount of these
1492		calls, this option must be used with disable_slat and
1493		disable_bw as well.
1494
1495disable_clat=bool Disable measurements of completion latency numbers. See
1496		disable_lat.
1497
1498disable_slat=bool Disable measurements of submission latency numbers. See
		disable_lat.
1500
1501disable_bw=bool	Disable measurements of throughput/bandwidth numbers. See
1502		disable_lat.
1503
1504clat_percentiles=bool Enable the reporting of percentiles of
1505		 completion latencies.
1506
1507percentile_list=float_list Overwrite the default list of percentiles
		for completion latencies. Each number is a floating point
1509		number in the range (0,100], and the maximum length of
1510		the list is 20. Use ':' to separate the numbers, and
1511		list the numbers in ascending order. For example,
1512		--percentile_list=99.5:99.9 will cause fio to report
1513		the values of completion latency below which 99.5% and
1514		99.9% of the observed latencies fell, respectively.
1515
1516clocksource=str	Use the given clocksource as the base of timing. The
1517		supported options are:
1518
1519			gettimeofday	gettimeofday(2)
1520
1521			clock_gettime	clock_gettime(2)
1522
1523			cpu		Internal CPU clock source
1524
1525		cpu is the preferred clocksource if it is reliable, as it
1526		is very fast (and fio is heavy on time calls). Fio will
1527		automatically use this clocksource if it's supported and
1528		considered reliable on the system it is running on, unless
1529		another clocksource is specifically set. For x86/x86-64 CPUs,
		this means the processor must support an invariant TSC.
1531
1532gtod_reduce=bool Enable all of the gettimeofday() reducing options
1533		(disable_clat, disable_slat, disable_bw) plus reduce
1534		precision of the timeout somewhat to really shrink
1535		the gettimeofday() call count. With this option enabled,
1536		we only do about 0.4% of the gtod() calls we would have
1537		done if all time keeping was enabled.
1538
1539gtod_cpu=int	Sometimes it's cheaper to dedicate a single thread of
1540		execution to just getting the current time. Fio (and
1541		databases, for instance) are very intensive on gettimeofday()
1542		calls. With this option, you can set one CPU aside for
1543		doing nothing but logging current time to a shared memory
1544		location. Then the other threads/processes that run IO
1545		workloads need only copy that segment, instead of entering
1546		the kernel with a gettimeofday() call. The CPU set aside
1547		for doing these time calls will be excluded from other
1548		uses. Fio will manually clear it from the CPU mask of other
1549		jobs.
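
		A minimal sketch, reserving CPU 0 (an arbitrary choice) for
		time keeping in the global section:

		[global]
		gtod_cpu=0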
1550
1551continue_on_error=str	Normally fio will exit the job on the first observed
1552		failure. If this option is set, fio will continue the job when
1553		there is a 'non-fatal error' (EIO or EILSEQ) until the runtime
1554		is exceeded or the I/O size specified is completed. If this
1555		option is used, there are two more stats that are appended,
1556		the total error count and the first error. The error field
1557		given in the stats is the first error that was hit during the
1558		run.
1559
1560		The allowed values are:
1561
1562			none	Exit on any IO or verify errors.
1563
1564			read	Continue on read errors, exit on all others.
1565
1566			write	Continue on write errors, exit on all others.
1567
1568			io	Continue on any IO error, exit on all others.
1569
1570			verify	Continue on verify errors, exit on all others.
1571
1572			all	Continue on all errors.
1573
1574			0		Backward-compatible alias for 'none'.
1575
1576			1		Backward-compatible alias for 'all'.
1577
ignore_error=str Sometimes you want to ignore some errors during a test,
		 in which case you can specify an error list for each error type.
		 ignore_error=READ_ERR_LIST,WRITE_ERR_LIST,VERIFY_ERR_LIST
		 Errors for a given error type are separated with ':'. An error
		 may be a symbol ('ENOSPC', 'ENOMEM') or an integer.
1583		 Example:
1584			ignore_error=EAGAIN,ENOSPC:122
1585		 This option will ignore EAGAIN from READ, and ENOSPC and
1586		 122(EDQUOT) from WRITE.
1587
error_dump=bool If set, dump every error even if it is non fatal; true
		by default. If disabled, only fatal errors will be dumped.
1590
1591cgroup=str	Add job to this control group. If it doesn't exist, it will
1592		be created. The system must have a mounted cgroup blkio
1593		mount point for this to work. If your system doesn't have it
1594		mounted, you can do so with:
1595
1596		# mount -t cgroup -o blkio none /cgroup
1597
1598cgroup_weight=int	Set the weight of the cgroup to this value. See
1599		the documentation that comes with the kernel, allowed values
1600		are in the range of 100..1000.
1601
1602cgroup_nodelete=bool Normally fio will delete the cgroups it has created after
1603		the job completion. To override this behavior and to leave
1604		cgroups around after the job completion, set cgroup_nodelete=1.
1605		This can be useful if one wants to inspect various cgroup
1606		files after job completion. Default: false
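
		Putting the cgroup options together, a job constrained by the
		blkio controller might look like this sketch (the cgroup name
		and weight are arbitrary, and the blkio cgroup must already be
		mounted as shown above):

		[cgroup-job]
		rw=randread
		bs=4k
		size=1g
		cgroup=fio-test
		cgroup_weight=500
		cgroup_nodelete=1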
1607
1608uid=int		Instead of running as the invoking user, set the user ID to
1609		this value before the thread/process does any work.
1610
1611gid=int		Set group ID, see uid.
1612
1613flow_id=int	The ID of the flow. If not specified, it defaults to being a
1614		global flow. See flow.
1615
1616flow=int	Weight in token-based flow control. If this value is used, then
1617		there is a 'flow counter' which is used to regulate the
1618		proportion of activity between two or more jobs. fio attempts
1619		to keep this flow counter near zero. The 'flow' parameter
		stands for how much should be added to or subtracted from the flow
1621		counter on each iteration of the main I/O loop. That is, if
1622		one job has flow=8 and another job has flow=-1, then there
1623		will be a roughly 1:8 ratio in how much one runs vs the other.
1624
1625flow_watermark=int	The maximum value that the absolute value of the flow
1626		counter is allowed to reach before the job must wait for a
1627		lower value of the counter.
1628
1629flow_sleep=int	The period of time, in microseconds, to wait after the flow
		watermark has been exceeded before retrying operations.
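
		As a sketch, two jobs held to a roughly 1:8 activity ratio via
		the default global flow counter (the workload details are just
		placeholders):

		[job-a]
		rw=randread
		size=1g
		flow=8

		[job-b]
		rw=randwrite
		size=1g
		flow=-1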
1631
1632In addition, there are some parameters which are only valid when a specific
1633ioengine is in use. These are used identically to normal parameters, with the
1634caveat that when used on the command line, they must come after the ioengine
1635that defines them is selected.
1636
1637[libaio] userspace_reap Normally, with the libaio engine in use, fio will use
1638		the io_getevents system call to reap newly returned events.
1639		With this flag turned on, the AIO ring will be read directly
1640		from user-space to reap events. The reaping mode is only
1641		enabled when polling for a minimum of 0 events (eg when
1642		iodepth_batch_complete=0).
1643
1644[cpu] cpuload=int Attempt to use the specified percentage of CPU cycles.
1645
1646[cpu] cpuchunks=int Split the load into cycles of the given time. In
1647		microseconds.
1648
1649[cpu] exit_on_io_done=bool Detect when IO threads are done, then exit.
1650
1651[netsplice] hostname=str
1652[net] hostname=str The host name or IP address to use for TCP or UDP based IO.
1653		If the job is a TCP listener or UDP reader, the hostname is not
1654		used and must be omitted unless it is a valid UDP multicast
1655		address.
1656
1657[netsplice] port=int
[net] port=int	The TCP or UDP port to bind to or connect to. If this is used
		with numjobs to spawn multiple instances of the same job type,
		then this will be the starting port number since fio will use
		a range of ports.
1661
1662[netsplice] interface=str
1663[net] interface=str  The IP address of the network interface used to send or
		receive UDP multicast traffic.
1665
1666[netsplice] ttl=int
1667[net] ttl=int	Time-to-live value for outgoing UDP multicast packets.
1668		Default: 1
1669
1670[netsplice] nodelay=bool
1671[net] nodelay=bool	Set TCP_NODELAY on TCP connections.
1672
1673[netsplice] protocol=str
1674[netsplice] proto=str
1675[net] protocol=str
1676[net] proto=str	The network protocol to use. Accepted values are:
1677
1678			tcp	Transmission control protocol
1679			tcpv6	Transmission control protocol V6
1680			udp	User datagram protocol
1681			udpv6	User datagram protocol V6
1682			unix	UNIX domain socket
1683
1684		When the protocol is TCP or UDP, the port must also be given,
1685		as well as the hostname if the job is a TCP listener or UDP
1686		reader. For unix sockets, the normal filename option should be
1687		used and the port is invalid.
1688
1689[net] listen	For TCP network connections, tell fio to listen for incoming
1690		connections rather than initiating an outgoing connection. The
1691		hostname must be omitted if this option is used.
1692
[net] pingpong	Normally a network writer will just continue writing data, and
		a network reader will just consume packets. If pingpong=1
1695		is set, a writer will send its normal payload to the reader,
1696		then wait for the reader to send the same payload back. This
1697		allows fio to measure network latencies. The submission
1698		and completion latencies then measure local time spent
1699		sending or receiving, and the completion latency measures
1700		how long it took for the other end to receive and send back.
1701		For UDP multicast traffic pingpong=1 should only be set for a
1702		single reader when multiple readers are listening to the same
1703		address.
1704
1705[net] window_size	Set the desired socket buffer size for the connection.
1706
1707[net] mss	Set the TCP maximum segment size (TCP_MAXSEG).
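
		As a rough sketch of a receiver and sender pair using the net
		engine (the host name and port are placeholders; the two
		sections would normally be run by separate fio invocations,
		one on each machine):

		[receiver]
		ioengine=net
		protocol=tcp
		port=8888
		listen
		rw=read
		bs=1k
		size=100m

		[sender]
		ioengine=net
		protocol=tcp
		hostname=host.example.com
		port=8888
		rw=write
		bs=1k
		size=100m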
1708
1709[e4defrag] donorname=str
	        File will be used as a block donor (swap extents between files).
[e4defrag] inplace=int
		Configure the donor file block allocation strategy:
		0 (default): Preallocate the donor file on init.
		1	  : Allocate space immediately during the defragment
			    event, and free it right after the event.
1716
1717
1718
17196.0 Interpreting the output
1720---------------------------
1721
1722fio spits out a lot of output. While running, fio will display the
1723status of the jobs created. An example of that would be:
1724
1725Threads: 1: [_r] [24.8% done] [ 13509/  8334 kb/s] [eta 00h:01m:31s]
1726
1727The characters inside the square brackets denote the current status of
1728each thread. The possible values (in typical life cycle order) are:
1729
1730Idle	Run
1731----    ---
1732P		Thread setup, but not started.
1733C		Thread created.
1734I		Thread initialized, waiting or generating necessary data.
1735	p	Thread running pre-reading file(s).
1736	R	Running, doing sequential reads.
1737	r	Running, doing random reads.
1738	W	Running, doing sequential writes.
1739	w	Running, doing random writes.
1740	M	Running, doing mixed sequential reads/writes.
1741	m	Running, doing mixed random reads/writes.
1742	F	Running, currently waiting for fsync()
1743	f	Running, finishing up (writing IO logs, etc)
1744	V	Running, doing verification of written data.
1745E		Thread exited, not reaped by main thread yet.
_		Thread reaped.
1747X		Thread reaped, exited with an error.
1748K		Thread reaped, exited due to signal.
1749
Fio will condense the thread string so as not to take up more space on the
command line than is needed. For instance, if you have 10 readers and 10
1752writers running, the output would look like this:
1753
1754Jobs: 20 (f=20): [R(10),W(10)] [4.0% done] [2103MB/0KB/0KB /s] [538K/0/0 iops] [eta 57m:36s]
1755
1756Fio will still maintain the ordering, though. So the above means that jobs
17571..10 are readers, and 11..20 are writers.
1758
1759The other values are fairly self explanatory - number of threads
1760currently running and doing io, rate of io since last check (read speed
1761listed first, then write speed), and the estimated completion percentage
1762and time for the running group. It's impossible to estimate runtime of
1763the following groups (if any). Note that the string is displayed in order,
1764so it's possible to tell which of the jobs are currently doing what. The
1765first character is the first job defined in the job file, and so forth.
1766
1767When fio is done (or interrupted by ctrl-c), it will show the data for
1768each thread, group of threads, and disks in that order. For each data
1769direction, the output looks like:
1770
1771Client1 (g=0): err= 0:
1772  write: io=    32MB, bw=   666KB/s, iops=89 , runt= 50320msec
1773    slat (msec): min=    0, max=  136, avg= 0.03, stdev= 1.92
1774    clat (msec): min=    0, max=  631, avg=48.50, stdev=86.82
1775    bw (KB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
1776  cpu        : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
1777  IO depths    : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
1778     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
1779     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
1780     issued r/w: total=0/32768, short=0/0
1781     lat (msec): 2=1.6%, 4=0.0%, 10=3.2%, 20=12.8%, 50=38.4%, 100=24.8%,
1782     lat (msec): 250=15.2%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2048=0.0%
1783
1784The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
1786they denote:
1787
1788io=		Number of megabytes io performed
1789bw=		Average bandwidth rate
1790iops=           Average IOs performed per second
1791runt=		The runtime of that thread
1792	slat=	Submission latency (avg being the average, stdev being the
1793		standard deviation). This is the time it took to submit
1794		the io. For sync io, the slat is really the completion
1795		latency, since queue/complete is one operation there. This
1796		value can be in milliseconds or microseconds, fio will choose
1797		the most appropriate base and print that. In the example
1798		above, milliseconds is the best scale. Note: in --minimal mode
1799		latencies are always expressed in microseconds.
1800	clat=	Completion latency. Same names as slat, this denotes the
1801		time from submission to completion of the io pieces. For
1802		sync io, clat will usually be equal (or very close) to 0,
1803		as the time from submit to complete is basically just
1804		CPU time (io has already been done, see slat explanation).
1805	bw=	Bandwidth. Same names as the xlat stats, but also includes
1806		an approximate percentage of total aggregate bandwidth
1807		this thread received in this group. This last value is
1808		only really useful if the threads in this group are on the
1809		same disk, since they are then competing for disk access.
1810cpu=		CPU usage. User and system time, along with the number
1811		of context switches this thread went through, usage of
1812		system and user time, and finally the number of major
1813		and minor page faults.
IO depths=	The distribution of io depths over the job lifetime. The
		numbers are divided into powers of 2, so for example the
		16= entry covers depths from that value up to those lower
		than the next entry - in other words, it covers the
		range from 16 to 31.
IO submit=	How many pieces of IO were submitted in a single submit
		call. Each entry denotes that amount and below, until
		the previous entry - e.g., 8=100% means that we submitted
		anywhere in between 5-8 ios per submit call.
1823IO complete=	Like the above submit number, but for completions instead.
1824IO issued=	The number of read/write requests issued, and how many
1825		of them were short.
1826IO latencies=	The distribution of IO completion latencies. This is the
1827		time from when IO leaves fio and when it gets completed.
1828		The numbers follow the same pattern as the IO depths,
1829		meaning that 2=1.6% means that 1.6% of the IO completed
1830		within 2 msecs, 20=12.8% means that 12.8% of the IO
1831		took more than 10 msecs, but less than (or equal to) 20 msecs.
1832
1833After each client has been listed, the group statistics are printed. They
1834will look like this:
1835
1836Run status group 0 (all jobs):
1837   READ: io=64MB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
1838  WRITE: io=64MB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
1839
1840For each data direction, it prints:
1841
1842io=		Number of megabytes io performed.
1843aggrb=		Aggregate bandwidth of threads in this group.
1844minb=		The minimum average bandwidth a thread saw.
1845maxb=		The maximum average bandwidth a thread saw.
1846mint=		The smallest runtime of the threads in that group.
1847maxt=		The longest runtime of the threads in that group.
1848
1849And finally, the disk statistics are printed. They will look like this:
1850
1851Disk stats (read/write):
1852  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%
1853
1854Each value is printed for both reads and writes, with reads first. The
1855numbers denote:
1856
1857ios=		Number of ios performed by all groups.
merge=		Number of merges performed by the io scheduler.
1859ticks=		Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
1861util=		The disk utilization. A value of 100% means we kept the disk
1862		busy constantly, 50% would be a disk idling half of the time.
1863
1864It is also possible to get fio to dump the current output while it is
1865running, without terminating the job. To do that, send fio the USR1 signal.
1866You can also get regularly timed dumps by using the --status-interval
1867parameter, or by creating a file in /tmp named fio-dump-status. If fio
1868sees this file, it will unlink it and dump the current output status.
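
As a sketch, assuming a single fio process is running:

$ kill -USR1 $(pidof fio)

or, equivalently:

$ touch /tmp/fio-dump-status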
1869
1870
18717.0 Terse output
1872----------------
1873
1874For scripted usage where you typically want to generate tables or graphs
1875of the results, fio can output the results in a semicolon separated format.
1876The format is one long line of values, such as:
1877
18782;card0;0;0;7139336;121836;60004;1;10109;27.932460;116.933948;220;126861;3495.446807;1085.368601;226;126864;3523.635629;1089.012448;24063;99944;50.275485%;59818.274627;5540.657370;7155060;122104;60004;1;8338;29.086342;117.839068;388;128077;5032.488518;1234.785715;391;128085;5061.839412;1236.909129;23436;100928;50.287926%;59964.832030;5644.844189;14.595833%;19.394167%;123706;0;7313;0.1%;0.1%;0.1%;0.1%;0.1%;0.1%;100.0%;0.00%;0.00%;0.00%;0.00%;0.00%;0.00%;0.01%;0.02%;0.05%;0.16%;6.04%;40.40%;52.68%;0.64%;0.01%;0.00%;0.01%;0.00%;0.00%;0.00%;0.00%;0.00%
1879A description of this job goes here.
1880
1881The job description (if provided) follows on a second line.
1882
1883To enable terse output, use the --minimal command line option. The first
1884value is the version of the terse output format. If the output has to
1885be changed for some reason, this number will be incremented by 1 to
1886signify that change.
1887
1888Split up, the format is as follows:
1889
1890	terse version, fio version, jobname, groupid, error
1891	READ status:
1892		Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
1893		Submission latency: min, max, mean, deviation (usec)
1894		Completion latency: min, max, mean, deviation (usec)
1895		Completion latency percentiles: 20 fields (see below)
1896		Total latency: min, max, mean, deviation (usec)
1897		Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
1898	WRITE status:
1899		Total IO (KB), bandwidth (KB/sec), IOPS, runtime (msec)
1900		Submission latency: min, max, mean, deviation (usec)
1901		Completion latency: min, max, mean, deviation (usec)
1902		Completion latency percentiles: 20 fields (see below)
1903		Total latency: min, max, mean, deviation (usec)
1904		Bw (KB/s): min, max, aggregate percentage of total, mean, deviation
1905	CPU usage: user, system, context switches, major faults, minor faults
1906	IO depths: <=1, 2, 4, 8, 16, 32, >=64
1907	IO latencies microseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000
1908	IO latencies milliseconds: <=2, 4, 10, 20, 50, 100, 250, 500, 750, 1000, 2000, >=2000
1909	Disk utilization: Disk name, Read ios, write ios,
1910			  Read merges, write merges,
1911			  Read ticks, write ticks,
1912			  Time spent in queue, disk utilization percentage
1913	Additional Info (dependent on continue_on_error, default off): total # errors, first error code
1914
1915	Additional Info (dependent on description being set): Text description
1916
1917Completion latency percentiles can be a grouping of up to 20 sets, so
1918for the terse output fio writes all of them. Each field will look like this:
1919
1920	1.00%=6112
1921
1922which is the Xth percentile, and the usec latency associated with it.
1923
1924For disk utilization, all disks used by fio are shown. So for each disk
1925there will be a disk utilization section.
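
Since the terse line is just semicolon separated values, one quick way to see
which value ends up in which field (for the fio version in use) is to split it
into numbered lines, for example:

$ fio --minimal job_file | tr ';' '\n' | nl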
1926
1927
19288.0 Trace file format
1929---------------------
There are two trace file formats that you can encounter. The older (v1) format
is unsupported since version 1.20-rc3 (March 2008). It will still be described
below in case you get an old trace and want to understand it.
1933
1934In any case the trace is a simple text file with a single action per line.
1935
1936
19378.1 Trace file format v1
1938------------------------
1939Each line represents a single io action in the following format:
1940
1941rw, offset, length
1942
1943where rw=0/1 for read/write, and the offset and length entries being in bytes.
1944
This format is not supported in Fio versions >= 1.20-rc3.
1946
1947
19488.2 Trace file format v2
1949------------------------
1950The second version of the trace file format was added in Fio version 1.17.
It allows access to more than one file per trace and has a bigger set of
1952possible file actions.
1953
1954The first line of the trace file has to be:
1955
1956fio version 2 iolog
1957
1958Following this can be lines in two different formats, which are described below.
1959
1960The file management format:
1961
1962filename action
1963
1964The filename is given as an absolute path. The action can be one of these:
1965
1966add          Add the given filename to the trace
1967open         Open the file with the given filename. The filename has to have
1968             been added with the add action before.
1969close        Close the file with the given filename. The file has to have been
1970             opened before.
1971
1972
1973The file io action format:
1974
1975filename action offset length
1976
1977The filename is given as an absolute path, and has to have been added and opened
1978before it can be used with this format. The offset and length are given in
1979bytes. The action can be one of these:
1980
1981wait       Wait for 'offset' microseconds. Everything below 100 is discarded.
1982read       Read 'length' bytes beginning from 'offset'
1983write      Write 'length' bytes beginning from 'offset'
1984sync       fsync() the file
1985datasync   fdatasync() the file
1986trim       trim the given file from the given 'offset' for 'length' bytes
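
Putting the two formats together, a small hand-written v2 trace might look
like this (the file name is just an example):

fio version 2 iolog
/tmp/testfile add
/tmp/testfile open
/tmp/testfile write 0 4096
/tmp/testfile write 4096 4096
/tmp/testfile read 0 4096
/tmp/testfile close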
1987
1988
19899.0 CPU idleness profiling
1990--------------------------
1991In some cases, we want to understand CPU overhead in a test. For example,
1992we test patches for the specific goodness of whether they reduce CPU usage.
1993fio implements a balloon approach to create a thread per CPU that runs at
1994idle priority, meaning that it only runs when nobody else needs the cpu.
1995By measuring the amount of work completed by the thread, idleness of each
1996CPU can be derived accordingly.
1997
A unit of work is defined as touching a full page of unsigned characters. The
mean and standard deviation of the time to complete a unit of work are reported
in the "unit work" section. Options can be chosen to report detailed percpu
idleness or overall system idleness by aggregating percpu stats.
2002
2003
200410.0 Verification and triggers
2005------------------------------
2006Fio is usually run in one of two ways, when data verification is done. The
2007first is a normal write job of some sort with verify enabled. When the
2008write phase has completed, fio switches to reads and verifies everything
2009it wrote. The second model is running just the write phase, and then later
2010on running the same job (but with reads instead of writes) to repeat the
2011same IO patterns and verify the contents. Both of these methods depend
2012on the write phase being completed, as fio otherwise has no idea how much
2013data was written.
2014
2015With verification triggers, fio supports dumping the current write state
2016to local files. Then a subsequent read verify workload can load this state
2017and know exactly where to stop. This is useful for testing cases where
2018power is cut to a server in a managed fashion, for instance.
2019
2020A verification trigger consists of two things:
2021
20221) Storing the write state of each job
20232) Executing a trigger command
2024
2025The write state is relatively small, on the order of hundreds of bytes
2026to single kilobytes. It contains information on the number of completions
2027done, the last X completions, etc.
2028
2029A trigger is invoked either through creation ('touch') of a specified
2030file in the system, or through a timeout setting. If fio is run with
2031--trigger-file=/tmp/trigger-file, then it will continually check for
2032the existence of /tmp/trigger-file. When it sees this file, it will
2033fire off the trigger (thus saving state, and executing the trigger
2034command).
2035
2036For client/server runs, there's both a local and remote trigger. If
2037fio is running as a server backend, it will send the job states back
2038to the client for safe storage, then execute the remote trigger, if
2039specified. If a local trigger is specified, the server will still send
2040back the write state, but the client will then execute the trigger.
2041
204210.1 Verification trigger example
2043---------------------------------
Let's say we want to run a powercut test on the remote machine 'server'.
Our write workload is in write-test.fio. We want to cut power to 'server'
at some point during the run, and we'll run this test from the safety
of our local machine, 'localbox'. On the server, we'll start the fio
2048backend normally:
2049
2050server# fio --server
2051
2052and on the client, we'll fire off the workload:
2053
localbox$ fio --client=server --trigger-file=/tmp/my-trigger --trigger-remote="bash -c \"echo b > /proc/sysrq-trigger\""
2055
2056We set /tmp/my-trigger as the trigger file, and we tell fio to execute
2057
2058echo b > /proc/sysrq-trigger
2059
2060on the server once it has received the trigger and sent us the write
2061state. This will work, but it's not _really_ cutting power to the server,
2062it's merely abruptly rebooting it. If we have a remote way of cutting
2063power to the server through IPMI or similar, we could do that through
a local trigger command instead. Let's assume we have a script that does
2065IPMI reboot of a given hostname, ipmi-reboot. On localbox, we could
2066then have run fio with a local trigger instead:
2067
2068localbox$ fio --client=server --trigger-file=/tmp/my-trigger --trigger="ipmi-reboot server"
2069
2070For this case, fio would wait for the server to send us the write state,
2071then execute 'ipmi-reboot server' when that happened.
2072
10.2 Loading verify state
-------------------------
To load stored write state, the read verification job file must contain
2076the verify_state_load option. If that is set, fio will load the previously
2077stored state. For a local fio run this is done by loading the files directly,
2078and on a client/server run, the server backend will ask the client to send
2079the files over and load them from there.
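
As a sketch, the two job files involved could look like this (file names and
sizes are placeholders). First the write-side job, run together with a trigger
as described above:

[write-phase]
filename=/data/fio-verify
rw=write
bs=4k
size=1g
verify=crc32c
verify_state_save=1
do_verify=0

and later the read-side verification job:

[verify-phase]
filename=/data/fio-verify
rw=read
bs=4k
size=1g
verify=crc32c
verify_state_load=1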
2080