RAID Levels – Spans, Stripes, IOPS, Oh My!

You’re probably familiar with the standard RAID levels (1, 5, 10, 50, 60… avoiding the odd ones like RAID 4), but maybe you’ve always wanted to see what effect changing the number of spans or the RAID level has on the same hardware.  I’ve often searched for benchmarks of hardware similar to mine, but it was always hard to find the exact same hardware configured and benchmarked in different ways.  So, I decided to do some testing myself to quantify the performance differences between configurations and help determine what I should run at the end of the day.

Whenever someone asks what type of RAID they should run, the answers are always “RAID 10 for speed, RAID 5 for space, and RAID 1 if you don’t have more than 2 disks.”  Well, is that true?  Is RAID 10 truly faster than RAID 5?

Hardware Used for Testing

For this test, I used a Dell R710 with a PERC6/i controller.  The server is equipped with two Intel Xeon E5530s and 12GB of RAM.  I also gathered 8 Dell/HP 2.5″ SAS 146GB 10K RPM disks.  The server runs ESXi 6.0.0b from a USB thumb drive, and I’ve installed OMSA within ESXi so that I can reconfigure the RAID throughout the test without having to enter the RAID controller BIOS at boot.

When configuring different RAID levels you might notice a setting called “Disks per span”.  Disks per span is the number of disks included in each subset (span) of the RAID configuration.  Since a RAID 5 configuration is just one set of disks, there is no “disks per span” to configure – if you have 8 disks and select all of them for the VD (Virtual Disk), then you have 8 disks in the array with zero hot-spares.  RAID 10, however, consists of multiple RAID 1s striped together – right?  So, if you have 8 disks in RAID 10 you have two choices: a disks per span of 2, or a disks per span of 4.  With the former you will have 4 spans of 2 disks each; with the latter, 2 spans of 4 disks each.  RAID 50 is just a striped set of RAID 5s.  With 8 disks you really only have two options: 3 disks per span or 4 disks per span – however, if you use 3 disks per span you’ll end up with 2 unused drives in the array (which you could configure as hot-spares).
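To make the span math concrete, here is a minimal Python sketch (assuming the 8 × 146GB drives used in this build) that works out how many spans and leftover hot-spare candidates each “disks per span” choice produces, plus the usable capacity of the RAID 50 layouts:

```python
# Rough sketch of span layouts for a given disk count.
# Assumes the 8 x 146 GB drives used in this article; adjust for your hardware.
TOTAL_DISKS = 8
DISK_GB = 146

def span_layout(total_disks, disks_per_span):
    """Return (number of spans, leftover disks that could become hot-spares)."""
    spans = total_disks // disks_per_span
    leftover = total_disks - spans * disks_per_span
    return spans, leftover

# RAID 10 choices discussed above: 2 or 4 disks per span
for dps in (2, 4):
    spans, spare = span_layout(TOTAL_DISKS, dps)
    print(f"RAID 10, {dps} disks/span -> {spans} spans, {spare} disks left over")

# RAID 50 choices: 3 disks per span leaves 2 drives unused, 4 leaves none
for dps in (3, 4):
    spans, spare = span_layout(TOTAL_DISKS, dps)
    usable_gb = spans * (dps - 1) * DISK_GB  # each RAID 5 span gives up one disk to parity
    print(f"RAID 50, {dps} disks/span -> {spans} spans, {spare} left over, ~{usable_gb} GB usable")
```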

To find out which of these configurations offers the best performance… you guessed it, we’re going to build each one, let it fully initialize, create a datastore on it, vMotion/svMotion a VM onto that datastore, and run the test.  For the benchmark I’ll be using CrystalDiskMark, run three times with the settings below, with the results averaged:

CrystalDiskMark settings
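For what it’s worth, the averaging itself is nothing fancy.  Here is a quick sketch of how each figure in the tables below is produced (the three run values are placeholders, not my raw data):

```python
from statistics import mean

# Hypothetical raw MB/s values from three CrystalDiskMark passes of a single test;
# each figure in the tables below is simply the mean of three passes like this.
runs_mb_s = [480.2, 495.7, 488.9]
print(f"Avg MB/s: {mean(runs_mb_s):.2f}")
```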

I will be running the tests as outlined above against four configurations: RAID 5 with all disks, RAID 10 with 2 disks per span, RAID 10 with 4 disks per span, and RAID 50 with 4 disks per span (no hot-spares).  All tests use write-back cache and a 64k stripe size.  A follow-up test will play with the stripe size to see what yields the best performance in the virtual environment.  The averaged results for each configuration are below:

RAID 5 with all disks – 64k stripe

RAID 5 – all disks                Avg MB/s    Avg IOPS
Sequential Read (Q=32, T=1)         490.41    –
Sequential Write (Q=32, T=1)        273.33    –
Random Read 4KiB (Q=32, T=1)         15.26    3,726.73
Random Write 4KiB (Q=32, T=1)         5.36    1,307.63
Sequential Read (T=1)               277.74    –
Sequential Write (T=1)              269.36    –
Random Read 4KiB (Q=1, T=1)           2.01    491.47
Random Write 4KiB (Q=1, T=1)          4.06    989.97

RAID 10 with 2 disks per span – 64k stripe

RAID 10 – 2 disks per span        Avg MB/s    Avg IOPS
Sequential Read (Q=32, T=1)         378.52    –
Sequential Write (Q=32, T=1)        284.55    –
Random Read 4KiB (Q=32, T=1)         14.43    3,523.87
Random Write 4KiB (Q=32, T=1)        12.29    3,000.27
Sequential Read (T=1)               268.94    –
Sequential Write (T=1)              287.47    –
Random Read 4KiB (Q=1, T=1)           1.63    397.53
Random Write 4KiB (Q=1, T=1)         10.80    2,637.40

RAID 10 with 4 disks per span – 64k stripe

RAID 10 – 4 disks per span        Avg MB/s    Avg IOPS
Sequential Read (Q=32, T=1)         394.92    –
Sequential Write (Q=32, T=1)        211.76    –
Random Read 4KiB (Q=32, T=1)         13.65    3,333.63
Random Write 4KiB (Q=32, T=1)        12.89    3,147.30
Sequential Read (T=1)               258.42    –
Sequential Write (T=1)              219.38    –
Random Read 4KiB (Q=1, T=1)           1.20    293.40
Random Write 4KiB (Q=1, T=1)          9.15    2,232.77

RAID 50 with 4 disks per span – 64k stripe

RAID 50 – 4 disks per span        Avg MB/s    Avg IOPS
Sequential Read (Q=32, T=1)         341.76    –
Sequential Write (Q=32, T=1)        259.94    –
Random Read 4KiB (Q=32, T=1)         12.86    3,138.43
Random Write 4KiB (Q=32, T=1)         5.80    1,415.80
Sequential Read (T=1)               156.03    –
Sequential Write (T=1)              266.63    –
Random Read 4KiB (Q=1, T=1)           1.85    450.60
Random Write 4KiB (Q=1, T=1)          4.02    982.10
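
If you want to sanity-check how the IOPS and MB/s columns in the tables above relate, it’s just throughput divided by transfer size.  A quick check in Python, using the RAID 5 random-read figure as an example:

```python
# Convert a 4 KiB random-I/O throughput figure into IOPS.
# CrystalDiskMark reports decimal megabytes per second (10^6 bytes),
# and the random tests here use 4 KiB (4096-byte) transfers.
def iops_from_mb_s(mb_s, transfer_bytes=4096):
    return mb_s * 1_000_000 / transfer_bytes

# RAID 5, Random Read 4KiB (Q=32): 15.26 MB/s in the table above
print(round(iops_from_mb_s(15.26)))  # ~3726, matching the ~3,727 IOPS column
```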

Keep the figures above in mind as you read the charts below, and understand that queue depth is an important factor when looking at both throughput (MB/s) and IOPS.  With that said, the graphs below compare each array configuration against the others for the same tests.  This is where the configurations start to stand out from one another.

Sequential Read (Q=32)

Above you’ll find that a RAID 5 with all 8 drives in it will outperform every other configuration in the sequential read test with a queue depth of 32.
Sequential Write (Q=32)

Not surprisingly, the only array configuration that struggles in the sequential write test with a queue depth of 32 is the RAID 10 with 4 disks per span.  This is because 4 disks per span results in, basically, two RAID 1 groups of 4 disks each, which hurts performance since you’re only striping across two spans and can’t reap the full benefit of the RAID 0 layer.

Random Read IOPS

Above you will find that the RAID 5 and RAID 50 win the random read test.  The RAID 5 wins because it has all 8 disks assisting with reads versus the RAID 10 configurations’ fewer striped members.  The RAID 50 does almost as well as the RAID 5 because it has 4 disks assisting with each read but is striped across two spans.  So, don’t discount RAID 5 or 50 just yet.
Random Write IOPS

Now things start to get interesting!  What you see here is more in line with the reputations these RAID configurations carry.  Note that the RAID 5 and RAID 50 configurations perform the worst in this random write test, while the RAID 10 configurations crush them.  These write tests are skewed by the controller’s write-back cache, but the data still has to leave the cache and be written to the disks, so the test is valid.  The cache on a PERC6/i is not huge, though – usually only 256MB – 512MB of DDR2.  Considering we’re doing 1,000MB tests, the cache gets flushed to disk several times during the process.
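
The gap in that last chart also lines up with the classic write-penalty rule of thumb: each host write costs roughly 2 back-end I/Os on RAID 10 (both mirrors) and 4 on RAID 5/50 (read data, read parity, write data, write parity).  Here is a back-of-the-envelope sketch, assuming a nominal ~140 IOPS per 10K SAS spindle (my assumption, not a measured number):

```python
# Back-of-the-envelope random-write ceiling using the standard write-penalty model:
# RAID 10 costs 2 back-end I/Os per host write, RAID 5/50 cost 4.
# PER_DISK_IOPS is an assumed figure for a 10K RPM SAS drive, not measured here.
PER_DISK_IOPS = 140
WRITE_PENALTY = {"RAID 5": 4, "RAID 10": 2, "RAID 50": 4}

def random_write_ceiling(disks, level):
    return disks * PER_DISK_IOPS / WRITE_PENALTY[level]

for level in ("RAID 5", "RAID 10", "RAID 50"):
    print(f"{level}: ~{random_write_ceiling(8, level):.0f} sustained host write IOPS")
```

Those raw ceilings sit well below the cached figures in the tables because the write-back cache absorbs bursts, but the roughly 2:1 advantage for RAID 10 over RAID 5/50 is exactly the shape the chart shows.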

So, what does all of this mean?  

To me, it means three things.  First, you need to know what your workload will be in order to choose a configuration.  Second, you need to know the ideal disks per span for your total number of disks, since the RAID 10 setups clearly differ in performance.  Finally, RAID 5/50 may not be dead, depending on your workload.  I can hear the gasps now, but it’s true.  If your workload relies heavily on deep-queue reads (copying large files for video editing, etc.) and you care most about peak throughput, then RAID 5 is your best choice.  It also works in your favor that RAID 5 is the “least wasteful” configuration in terms of disk space.  There are caveats to building RAID 5/50 arrays with large disks, but taken at face value it’s the best fit for this type of application.

However, if you are building storage for a VM environment or a database, there are better options.  We all know that RAID 10 wins in “database” situations, since it provides better read and write performance at the cost of halving your usable space.  RAID 10 really wins in any application where IOPS are the priority – VM storage included.  Now, an 8-disk RAID 10 is not going to win any contests in total IOPS, but it sure beats a RAID 5 or RAID 50.  And imagine if there were 8 SSDs in the array instead of spinning disks.
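
To put a purely hypothetical number on that thought, here is the same write-penalty estimate with an assumed ~20,000 random-write IOPS per SSD (an illustrative figure, not a spec for any particular drive):

```python
# Purely illustrative: the same 8-drive write-penalty model, but with an assumed
# ~20,000 random-write IOPS per SSD instead of ~140 per 10K spindle.
SSD_IOPS = 20_000
for level, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 50", 4)):
    print(f"{level} of 8 SSDs: ~{8 * SSD_IOPS / penalty:,.0f} host write IOPS")
```

In reality the controller would become the bottleneck long before those numbers, but the relative ordering between the RAID levels stays the same.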

If you’re wondering what JonKensy.com runs on, it’s not a straightforward answer.  I use 8 Western Digital 4TB Red drives in a RAID 50 on an LSI 9260-8i controller as the primary storage in my main VMware ESXi host.  The reason I went with RAID 50 is that I wanted as much storage as possible while still having some hope of rebuilding the array should a disk fail, and while still performing well.  I have two spans of 4 disks, which is still a lot of data per span, but it keeps each span small enough that I can hopefully rebuild a failed drive in one half of the RAID 50.  To complicate matters, I use SSD caching within VMware to enhance read performance of frequently used “hot” data.  VMware’s Virtual Flash Read Cache operates at a per-VM level, though.  So, most of this website is served up at SSD speeds because the database is using that flash read cache.  And, to further complicate matters, I have a dedicated SSD within the ESXi host that serves VMs that don’t need any redundancy but do need as many IOPS as possible.  An example of a server that runs off that dedicated SSD is my terminal server – I have 1 – 4 users on it at any given time, and it benefits greatly from the extra IOPS.

There are further settings that can be tweaked within a RAID configuration, like stripe size (64k was used in these tests).  There are different schools of thought on this, revolving around the block size of the underlying file system, etc.  That is a bit outside the scope of this test, but do realize that I am running all of my tests within a VM residing on a VMFS5 datastore.  You will always, always want to use write-back caching if your controller offers it, but make sure the controller has a battery backup module so that data held in the cache survives a power outage before it’s written to disk.  You can also use an SSD as cache at the RAID controller level to further enhance your IOPS.

I know this article may seem redundant, since most people know that RAID 10 outperforms RAID 5, but I wanted to do an actual test of different RAID configurations, fully initialized, on the same hardware with the same disks.  Hopefully you find it useful and realize that there is no universal “best configuration” – it is truly important that you understand your workload (including read and write frequency)!  At my current employer, we have clients with configurations spanning RAID 5, 10, and 50.  I’ve posted this on many forums and the first thing people snap back with is “RAID 5 is dead” or “RAID 50, ew!”  But if you need fast storage with as much usable space as possible, these are the way to go.  Sure, 2 – 6TB drives will take a while to rebuild (and possibly fail while doing so), but if the data is replicated and backed up, then that’s less of a concern.  Things to think about!

Thanks for reading!

Author: Jon

5 Comments

  1. How does RAID 6 figure into the mix?

  2. Very good. You could also have done it with RAID 60 :).

  3. Appreciating the hard work you put into your blog and detailed information you offer. It’s good to come across a blog every once in a while that isn’t the same out of date rehashed information. Great read! I’ve bookmarked your site and I’m adding your RSS feeds to my Google account.

  4. Jon, if you go SSD would you say to go for RAID 50, since IOPS should stay very high and you can maximize performance?
