dd is not a benchmarking tool

There is a widely held idea on the Internet that a written snippet will be universally valid, testing and producing comparable results on any machine. Said like that, the assertion is clearly false, but a piece of code that is valid in one context can travel a long way on the web and easily fool a good number of people. Benchmarks with dd are a good example. Which Unix nerd hasn't tested his brand new device with dd? The command outputs an accurate-looking value in MB/sec, what more could you ask for?

The problem is already in the benchmark's conception

If I quickly open dd's user manual or, more simply, its help text, I can read: "Copy a file, converting and formatting according to the operands." If my goal is to benchmark a device, it already appears that this tool is not the most appropriate. Firstly, I don't aim to copy anything, just to read or write. Next, I don't want to work with files but with block devices. Finally, I don't need the announced data-conversion features. These three points are really important, because they show how inappropriate the tool is. Don't get me wrong, I'm not denigrating dd. It has personally saved me tons of hours with ISOs and disk migrations. But using it as a standard benchmark tool is more of a hack than a reliable approach.

The first issue: The files

A major benchmarking misconception lies in what I want to test and how I'll do it. Here, our goal is HDD/SSD performance, and going through a filesystem can introduce a big bias into your analysis. Here is the kind of command you can find on the Internet:

dd bs=1M count=1024 if=/dev/zero of=/root/test

For those not familiar with dd, the above command creates a 1GB file containing only zeros in the root user's home: /root/test. The authors generally claim the goal is to measure the performance of the device where the file is stored, but that goal is poorly achieved. Storage performance is mainly affected by a stack of caches and buffers, from the user level down to the blocks located in the SSD. The filesystem is the main entry point for users, but as it is software, it can hide the reality of your hardware, for better or for worse. By default, dd writes to a file asynchronously, meaning that if the written file is small enough to fit in RAM, the OS won't write it to the drive immediately but will wait for the most appropriate time to do so. In this configuration, the command's output absolutely does not represent the storage's performance: as only volatile memory is involved, dd displays very good numbers.

At Cloud Mercato, as we want to reflect infrastructure performance, we bypass the filesystem as much as possible and test the device directly through its absolute path. So from our benchmarks you know your hardware's possibilities and can then boost them with the filesystem of your choice. There are only a few cases where files are involved, such as testing a root volume in write mode: you must not write to your root device directly, or you'll erase its OS.
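If you still want a quick figure out of dd, you can at least keep the page cache out of the measurement. A minimal sketch, where /dev/vdb stands for a hypothetical disposable device (the last command erases its content):

# Force a flush to the drive before dd reports its bandwidth figure:
dd bs=1M count=1024 if=/dev/zero of=/root/test conv=fdatasync
# Or bypass both the page cache and the filesystem with direct I/O:
dd bs=1M count=1024 if=/dev/vdb of=/dev/null iflag=direct
dd bs=1M count=1024 if=/dev/zero of=/dev/vdb oflag=direct

Even then, all the other limitations described below still apply.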

Second issue: A tool without data generation

dd is designed around the concept of copying, as its long name "Data Duplicator" explains quite well. Fortunately, in Unix everything is a file, and kernels provide pseudo-files that generate data. These are:

  • /dev/zero
  • /dev/random
  • /dev/urandom

Under the hood, these pseudo-files are real software and suffer from it. /dev/zero is CPU bound, and because it only produces zeros, it cannot represent a real workload. /dev/random is quite slow due to its high-quality randomness, and /dev/urandom is too intensive in terms of CPU cycles. Basically, you may not reach the storage's maximum performance if you are limited by the CPU. Moreover, dd isn't multi-threaded, so only one thread at a time can stress the device, decreasing your chances of getting the best out of it.
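You can see for yourself that the generators are a bottleneck by copying them to /dev/null, so that no storage is involved at all:

# Pure generator throughput, no disk touched:
dd bs=1M count=1024 if=/dev/zero of=/dev/null
dd bs=1M count=1024 if=/dev/urandom of=/dev/null

If the /dev/urandom figure comes out lower than your device's bandwidth, a dd "benchmark" fed by it measures the generator, not the storage.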

Third: A lack of features

As said, dd is not a benchmarking tool: if you look at the open-source catalog of storage-testing tools and their common features, dd, not being intended for this purpose, is out of the competition:

  • Single thread only
  • No optimized data generation
  • No choice of access mode: sequential or random
  • No deep control such as I/O depth
  • Only average bandwidth, no IOPS, latency or percentiles
  • No mixed patterns: read/write
  • No time control

This shortened list is eloquent: Data Duplicator doesn't provide the features necessary to be called a performance-testing tool.
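For the record, everything a dd run reports is a single summary line of this form (figures are illustrative):

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.74939 s, 390 MB/s

One averaged bandwidth figure, and nothing else.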

The solution, then

Here are real benchmark tools that you can use:

FIO is really our daily tool (if not hourly); it brings us possibilities unimaginable with dd, such as I/O depth control or random access. vdbench is also very handy: built on a similar concept to FIO, it lets you create complex scenarios, such as involving multiple files in read/write access.

In conclusion, a benchmark is not only a suite of commands run in a shell. The executed tests and the expected output really depend on context: What do you want to test? Which components should be involved? Why would this value represent something? Any snippet taken from the Internet may have its value in a certain environment and be untruthful in another. It's up to the tester to understand these factors and choose the appropriate tool for her/his purpose.
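As a starting point, here is the shape of a small FIO run. Treat it as a sketch rather than a recipe: /dev/vdb is a placeholder, point --filename at a dedicated device or a plain file (with --rw=randwrite, the target's content would be destroyed), and adjust every parameter to your own context.

# 4k random reads with direct I/O, an I/O depth of 32 and a fixed runtime;
# the report includes bandwidth, IOPS and latency percentiles.
fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based \
    --group_reporting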