Author Topic: What tools do you use to benchmark storage appliances?


  • Calf
  • *
  • Posts: 1
Hey y'all, so I'm trying to benchmark a certain new storage appliance. I've been trying to learn fio on the Linux side and have been using Iometer on Windows. I keep getting different results running the same tests, and I'm pretty sure I'm doing something wrong.

What does everyone here use to benchmark stuff?

Also, is there anyone here who is good with fio or one of the other tools who wouldn't mind spending a few minutes, here or in a direct chat, walking me through the options? I'm confused about the relationship between certain things in fio, like nrfiles versus numjobs, and about how best to structure a job in general. I keep getting varying results from similar inputs. Starting to bang my head against the wall.
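For reference, here's a minimal version of the kind of job file I've been experimenting with (paths and sizes are just placeholders). My current understanding, which may be wrong, is that numjobs spawns N independent copies of the job while nrfiles splits each copy's I/O across N files, so please correct me:

```ini
; bench.fio -- run with: fio bench.fio
[global]
ioengine=libaio
direct=1            ; bypass the page cache so numbers aren't inflated
directory=/mnt/test ; placeholder mount point
size=1g
runtime=60
time_based

[randread-test]
rw=randread
bs=4k
numjobs=4           ; as I understand it: 4 independent clones of this job...
nrfiles=8           ; ...each spreading its I/O across 8 files
group_reporting     ; aggregate the clones into one summary line
```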


  • Debian Wizard
  • El Toro
  • ****
  • Posts: 301
  • There's no problem so bad you cannot make it worse
Re: What tools do you use to benchmark storage appliances?
« Reply #1 on: Today at 09:11:25 am »
I struggle with this too.

For something like a NAS there are a number of layers, each with multiple factors, and I don't yet understand them all.

For most of the benchmarks I run, I use pv, iotop, and iftop. I often use pv to test sequential read/write on drives and within filesystems, which helps establish the maximum capability of drives, RAID arrays, filesystems, etc., even remotely over various protocols like SMB/NFS. top, iotop, and iftop let you monitor what's going on while the tests run, which provides some insight as well. I usually follow this up with a more real-world test, like moving a large number of files via rsync.
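To make the sequential test concrete, here's roughly what I do (using dd below since it's on everything; pv is the same idea with a live progress bar; the target path is a placeholder):

```shell
#!/bin/sh
# Placeholder path -- point this at the drive/share you want to test.
TARGET=/tmp/bench.dat

# Sequential write: 64 MiB of zeros; fdatasync so the flush is included
# in the reported throughput, not just the write into the page cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Sequential read. Caveat: the file we just wrote may still be in the page
# cache, so this can report RAM speed unless you drop caches first.
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -n 1

# pv equivalents (same transfers, with a progress bar):
#   pv "$TARGET" > /dev/null                 # read
#   head -c 64M /dev/zero | pv > "$TARGET"   # write

rm -f "$TARGET"
```

On GNU/Linux, the last line of dd's stderr is the throughput summary, which is the number worth recording.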

The part I struggle with is figuring out what the bottleneck actually is and how to improve it. My real-world speeds tend to be less than half the top benchmark, likely due to filesystem overhead and random vs. sequential read/write. I haven't come up with a good way to quantify exactly how those factors interact or how to improve them much.
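The closest I've gotten to putting a number on the sequential-vs-random part is a paired fio run, something like the sketch below (paths and sizes are placeholders, not a tuned recipe). The stonewall option makes the jobs run back to back instead of concurrently, so you can compare the two bandwidth figures directly:

```ini
; gap.fio -- run with: fio gap.fio
[global]
ioengine=libaio
direct=1
directory=/mnt/test   ; placeholder mount point
size=1g
runtime=30
time_based
group_reporting

[seq-read]
rw=read
bs=1M
stonewall             ; run this job to completion first

[rand-read]
rw=randread
bs=4k
stonewall             ; then this one; the ratio is your seq/rand gap
```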