Things that other people seem to like about LMbench include the following:
Portable test of operating system primitives
The benchmarks are all in C and fairly portable (although they do prefer
to be compiled with GCC).  This is useful for generating apples-to-apples
comparisons across systems.
Keeping up with the Joneses
LMbench is very useful for motivating action. When
confronted with numbers that prove that BloatOS is 4 times slower than
all of the competition, resources tend to get allocated to fix the
problems.
Database of results
The database of results includes runs from nearly all of the major
workstation manufacturers.
Memory latency results
The memory latency test shows the latency of all of the system (data)
caches, i.e., level 1, 2, and 3, if present, as well as main memory and
TLB miss latency.  In addition, the sizes of the caches can be read off
a properly plotted set of results.  The hardware folks like this.
This benchmark has found bugs in operating system page coloring schemes.
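The core of such a test is a dependent chain of loads.  The following is
a minimal sketch of that pointer-chasing idea, not lmbench's actual
lat_mem_rd source; the buffer size, stride, and iteration count are
illustrative assumptions:

    /*
     * Sketch only: chain pointers through a buffer of a given size,
     * then walk the chain so each load depends on the previous one.
     * The loop time is then dominated by load-to-load latency at
     * whatever level of the hierarchy the buffer fits in.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define STRIDE     128        /* assumed to exceed a cache line */
    #define ITERATIONS 10000000

    int
    main(int argc, char **argv)
    {
        size_t size = (argc > 1) ? (size_t)atol(argv[1]) : (1 << 20);
        size_t n = size / STRIDE, i;
        char *buf = malloc(size);
        char **p;
        struct timespec start, end;
        double ns;

        /* Chain every STRIDE-th location into a cycle; a more careful
         * test would randomize the order to defeat prefetching. */
        for (i = 0; i < n; i++)
            *(char **)(buf + i * STRIDE) = buf + ((i + 1) % n) * STRIDE;

        p = (char **)buf;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < ITERATIONS; i++)    /* each load depends on the last */
            p = (char **)*p;
        clock_gettime(CLOCK_MONOTONIC, &end);

        ns = (end.tv_sec - start.tv_sec) * 1e9 +
             (end.tv_nsec - start.tv_nsec);
        /* Printing p keeps the compiler from optimizing the walk away. */
        printf("%zu bytes: %.1f ns/load (%p)\n",
               size, ns / ITERATIONS, (void *)p);
        free(buf);
        return 0;
    }

Running this at a range of sizes and plotting nanoseconds per load
against buffer size is what produces the characteristic staircase from
which the cache sizes can be read.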
Context switching results
Everybody seems to love context switching numbers. This particular
benchmark is quite careful not to just quote the ``in cache'' numbers.
It varies both the number and size of the processes and plots the
results in such a way that it is easy to see when the processes no
longer fit in the cache.  You can also see the real cost of a cold-cache
context switch; a sketch of the basic technique appears below.
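The usual technique is to bounce a token between processes over pipes,
so that every round trip forces the scheduler to switch contexts.  The
two-process sketch below illustrates the idea only; lmbench's lat_ctx
additionally varies the number of processes and the amount of data each
one touches, which this sketch does not:

    /*
     * Sketch only: two processes pass a byte back and forth over a
     * pair of pipes.  Each round trip costs two context switches plus
     * pipe overhead.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    int
    main(void)
    {
        int p1[2], p2[2], i;
        char token = 'x';
        struct timeval start, end;
        double usecs;

        if (pipe(p1) < 0 || pipe(p2) < 0) {
            perror("pipe");
            exit(1);
        }
        if (fork() == 0) {
            /* Child: read the token from p1, echo it back on p2. */
            for (i = 0; i < ROUNDS; i++) {
                read(p1[0], &token, 1);
                write(p2[1], &token, 1);
            }
            _exit(0);
        }
        gettimeofday(&start, NULL);
        for (i = 0; i < ROUNDS; i++) {
            write(p1[1], &token, 1);    /* wake the child ...      */
            read(p2[0], &token, 1);     /* ... and wait for reply. */
        }
        gettimeofday(&end, NULL);
        wait(NULL);

        usecs = (end.tv_sec - start.tv_sec) * 1e6 +
                (end.tv_usec - start.tv_usec);
        printf("%.2f usec per round trip\n", usecs / ROUNDS);
        return 0;
    }

Making each process touch a working set of data between switches is
what exposes the cold-cache cost that the plots show.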
Regression testing
Sun & SGI have used these benchmarks to find and fix performance
problems.
Intel used them during P6 development.
Linus uses them to do performance tuning of Linux.
New benchmarks
The source is small, readable, and easy to extend.  It is routinely
massaged into different forms to measure something else.  For example,
the networking metrics include libraries to handle connection
establishment, server shutdowns, etc.  A sketch of the general pattern
for a new benchmark appears below.
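As an illustration of how small a new benchmark can be, here is a
hedged sketch timing one primitive (the getppid() system call) in a
loop with gettimeofday().  lmbench's own benchmarks use its timing
library rather than this raw approach, so treat this as a standalone
example of the pattern, not a drop-in lmbench benchmark:

    /*
     * Sketch only: measure the per-call cost of a single primitive by
     * running it many times and dividing the elapsed time by the count.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define N 1000000

    int
    main(void)
    {
        struct timeval start, end;
        double usecs;
        int i;

        gettimeofday(&start, NULL);
        for (i = 0; i < N; i++)
            (void)getppid();            /* the primitive under test */
        gettimeofday(&end, NULL);

        usecs = (end.tv_sec - start.tv_sec) * 1e6 +
                (end.tv_usec - start.tv_usec);
        printf("getppid: %.3f usec per call\n", usecs / N);
        return 0;
    }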