Lines matching \emph:
62 one is able to \emph{replay} the IO again on the same machine or another
71 the only traces you are interested in are \emph{QUEUE} requests --
106 IOs during the sample workload. \texttt{btreplay} \emph{attempts} to
115 during replays. In addition the actual ordering of IOs \emph{between}
133 multiple sequential (in time) IOs and put them in a single \emph{bunch} of
134 IOs that will be processed as a single \emph{asynchronous IO} call to the
138 code, and thus are submitted to the block IO layer with \emph{very small}
139 time intervals between issues.}. To manage the size of the \emph{bunches},
145 for \emph{bunching.} The default time is 10 milliseconds (10,000,000
149 \item[\texttt{--max-pkts}] A \emph{bunch} size can be anywhere from
152 decrease the maximum \emph{bunch} size. Refer to section~\ref{sec:c-o-M}
158 about \emph{bunches} of IOs to be replayed. \texttt{btreplay} operates on
165 Each submitting thread simply reads the input file of \emph{bunches}
175 on the recording system, we wrap CPU IDs. This \emph{may} result in an
190 \emph{We could institute the notion of global time across threads,
198 \emph{This is the primary problem with any IO replay mechanism -- how
201 into the kernel, where you \emph{may} receive more responsive timings.}
207 \emph{The user has \emph{some} control over this (via the
208 \texttt{--max-pkts} option). One \emph{could} simply specify
218 \emph{It should be relatively trivial to add in the notion of
226 \medskip\emph{One could also add in the notion of CPU mappings as well --
318 option (section~\ref{sec:c-o-m}), smaller values \emph{may} or \emph{may not}
491 The utility \emph{does} allow for multiple \texttt{-M} options to be
514 to store information concerning each \emph{stall} and IO operation
527 As a precautionary measure, by default \texttt{btreplay} will \emph{not}
528 process \emph{write} requests. In order to enable \texttt{btreplay} to
529 actually \emph{write} to devices one must explicitly specify the
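
Several of the matches above describe how sequential IOs are gathered into a \emph{bunch} that is then handed to the kernel as a single \emph{asynchronous IO} submission, so that the member IOs reach the block IO layer with very small time intervals between issues. The block below is only an illustrative sketch of that idea using \texttt{libaio}; it is not btreplay source code, the \texttt{BUNCH\_SIZE} and \texttt{IO\_BYTES} constants are invented for the example, and it issues reads only.

\begin{verbatim}
/* Illustrative sketch only -- not btreplay source.  Shows one bunch of
 * reads being pushed to the kernel with a single asynchronous submission.
 * BUNCH_SIZE and IO_BYTES are made-up values for this example. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <libaio.h>

#define BUNCH_SIZE 16           /* a real bunch is capped by --max-pkts */
#define IO_BYTES   4096

int main(int argc, char **argv)
{
        io_context_t ctx = 0;
        struct iocb iocbs[BUNCH_SIZE], *ios[BUNCH_SIZE];
        struct io_event events[BUNCH_SIZE];
        int fd, i, ret;

        if (argc != 2) {
                fprintf(stderr, "Usage: %s <device>\n", argv[0]);
                return 1;
        }

        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        ret = io_setup(BUNCH_SIZE, &ctx);
        if (ret) {
                fprintf(stderr, "io_setup: %s\n", strerror(-ret));
                return 1;
        }

        /* Prepare one iocb per IO in the bunch (reads only here). */
        for (i = 0; i < BUNCH_SIZE; i++) {
                void *buf;

                if (posix_memalign(&buf, IO_BYTES, IO_BYTES))
                        return 1;
                io_prep_pread(&iocbs[i], fd, buf, IO_BYTES,
                              (long long)i * IO_BYTES);
                ios[i] = &iocbs[i];
        }

        /* A single io_submit() issues the whole bunch asynchronously. */
        ret = io_submit(ctx, BUNCH_SIZE, ios);
        if (ret != BUNCH_SIZE) {
                fprintf(stderr, "io_submit: %s\n",
                        ret < 0 ? strerror(-ret) : "partial submit");
                return 1;
        }

        /* Wait for every completion in the bunch. */
        ret = io_getevents(ctx, BUNCH_SIZE, BUNCH_SIZE, events, NULL);
        if (ret != BUNCH_SIZE)
                fprintf(stderr, "io_getevents returned %d\n", ret);

        io_destroy(ctx);
        close(fd);
        return 0;
}
\end{verbatim}

Build with \texttt{gcc -o bunch\_demo bunch\_demo.c -laio}. Keeping the sketch read-only mirrors the default behaviour noted above, where write requests are not processed unless writing is explicitly enabled.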