lmbench - system benchmarks
lmbench is a series of micro
benchmarks intended to measure basic operating system and hardware system
metrics. The benchmarks fall into three general classes: bandwidth, latency, and ‘‘other’’.
Most of the lmbench benchmarks use a standard timing harness
described in timing(3)
and have a few standard options: parallelism, warmup,
and repetitions. Parallelism specifies the number of benchmark processes
to run in parallel. This is primarily useful when measuring the performance
of SMP or distributed computers and can be used to evaluate the system’s
performance scalability. Warmup is the minimum number of microseconds
the benchmark should execute the benchmarked capability before it begins
measuring performance. Again this is primarily useful for SMP or distributed
systems and it is intended to give the process scheduler time to "settle"
and migrate processes to other processors. By measuring performance over
various warmup periods, users may evaluate the scheduler’s responsiveness.
Repetitions is the number of measurements that the benchmark should take.
This allows lmbench to provide greater or lesser statistical strength
to the results it reports. The default number of repetitions is 11.
Data movement is fundamental to the performance of most computer
systems. The bandwidth measurements are intended to show how the system
can move data. The results of the bandwidth metrics can be compared but
care must be taken to understand what it is that is being compared. The
bandwidth benchmarks can be reduced to two main components: operating system
overhead and memory speeds. The bandwidth benchmarks report their results
as megabytes moved per second but please note that the data moved is not
necessarily the same as the memory bandwidth used to move the data. Consult
the individual man pages for more information.
Each of the bandwidth benchmarks
is listed below with a brief overview of the intent of the benchmark.
- bw_file_rd: reading and summing of a file via the read(2) interface.
- bw_mem_rd: memory reading and summing.
- bw_mem_wr: memory writing.
- bw_mmap_rd: reading and summing of a file via the memory mapping mmap(2) interface.
- bw_pipe: reading of data via a pipe.
- bw_tcp: reading of data via a TCP/IP socket.
- bw_unix: reading data from a UNIX socket.

Control messages are also fundamental to the performance of most computer
systems. The latency measurements are intended to show how fast a system can be
told to do some operation. The results of the latency metrics can, for the most
part, be compared to each other. In particular, the pipe, rpc, tcp, and
udp transactions are all identical benchmarks carried out over different
communication channels. Latency numbers here should mostly be in microseconds
per operation.
- lat_connect: the time it takes to establish a TCP/IP connection.
- lat_ctx: context switching; the number and size of processes is varied.
- lat_fcntl: fcntl file locking.
- lat_fifo: ‘‘hot potato’’ transaction through a UNIX FIFO.
- lat_fs: creating and deleting small files.
- lat_pagefault: the time it takes to fault in a page from a file.
- lat_mem_rd: memory read latency (accurate to the ~2-5 nanosecond range, reported in nanoseconds).
- lat_mmap: time to set up a memory mapping.
- lat_ops: basic processor operations, such as integer XOR, ADD, SUB, MUL, DIV, and MOD, and float ADD, MUL, DIV, and double ADD, MUL, DIV.
- lat_pipe: ‘‘hot potato’’ transaction through a Unix pipe.
- lat_proc: process creation times (various sorts).
- lat_rpc: ‘‘hot potato’’ transaction through Sun RPC over UDP or TCP.
- lat_select: select latency.
- lat_sig: signal installation and catch latencies. Also protection fault signal latency.
- lat_syscall: non-trivial entry into the system.
- lat_tcp: ‘‘hot potato’’ transaction through TCP.
- lat_udp: ‘‘hot potato’’ transaction through UDP.
- lat_unix: ‘‘hot potato’’ transaction through UNIX sockets.
- lat_unix_connect: the time it takes to establish a UNIX socket connection.
The remaining benchmarks measure basic hardware capabilities:
- mhz: processor cycle time.
- tlb: TLB size and TLB miss latency.
- line: cache line size (in bytes).
- cache: cache statistics, such as line size, cache sizes, memory parallelism.
- stream: John McCalpin’s stream benchmark.
- par_mem: memory subsystem parallelism. How many requests can the memory subsystem service in parallel, which may depend on the location of the data in the memory hierarchy.
- par_ops: basic processor operation parallelism.

Funding for the development of these tools was provided by Sun Microsystems
Computer Corporation. A large number of people have contributed to the testing
and development of lmbench.
The benchmarking code is distributed under the GPL with
additional restrictions; see the COPYING file.
Carl Staelin and Larry McVoy.
Comments, suggestions, and bug reports are always welcome.