DIY SAN/NAS – quest for fast, reliable, shared storage with a twist of ZFS! (Part 1)

Update:  I have finally updated this original post with Part 2, which covers the actual hardware, configuration, and some performance details:

DIY SAN/NAS – fast, reliable, shared storage, with FreeNAS and switchless 10 Gbps! (Part 2)

 

What’s an ESXi cluster worth without fast, shared storage?  Not much!  That’s the case in my lab.  Actually, I may be a bit cynical there, but the truth is that while I have two very capable lab hosts (Dell R710s with 24 vCPUs and 144GB of RAM each, but only local storage), I lack the ability to deploy VMs to fast shared storage, which would let me use HA and DRS more effectively.  I have a Synology DS1513+ with four 1 GbE interfaces in LACP serving iSCSI, but with only five spindles in the device the IOPS suffer a bit.

Before we get started

I’d like to thank everyone who frequents my blog and/or YouTube channel.  You probably don’t realize it, but because you guys continue to follow my ramblings, I’ve been able to purchase the server (and HBA, NIC, etc.) in this feature with AdSense earnings.  I truly appreciate all the people who subscribe, comment, and share my content because it now allows me to pick up new (old) stuff and not only learn things myself but also share them with others.  I’m not raking in huge monthly figures by any means, but every bit counts, and until now I’ve been purchasing all the hardware you see on this blog or on my YouTube channel out of pocket.  Thanks again – I truly appreciate your support and interest!

If you don’t want to read the textual version of my rambling, you can enjoy the full aural experience via YouTube:

The Problem

When you start to lean on your lab more and more, you quickly find the weak links.  While each of my two hosts has a RAID50 virtual disk made up of eight 2.5″ 146GB 15K SAS drives, with one host also having ~10TB of RAID50 storage in an MD1000 hanging off of it, running VMs from the Synology iSCSI target was not very impressive.  I recently installed Mellanox 10 GbE interfaces in the two hosts and can svMotion fairly quickly should I need to evacuate a host, but that’s still not shared storage.

To solve my issue, I could do one of a few things:

  • Buy a SAN.  Something like an EqualLogic PS4000 or PS6000, a NetApp filer, an older EMC unit, or a Dell MD3000i/MD3200i
  • Try to leverage SSD caching in the Synology by pulling two drives or getting a larger enclosure (DS1815+)
  • Buy a “storage server” and run hardware RAID presented to the hosts via NFS or iSCSI
  • Go on an adventure battling hardware, firmware, and software in order to build an affordable, high-performance DIY SAN and learn something along the way

While an EqualLogic would perform great and supports MPIO, this solution is both risky and boring.  Risky because I have twenty or so 1TB enterprise 7.2K SATA disks that are not EQL firmware-flashed, so I could spend $600–700 on an EqualLogic PS only to find my disks don’t work (people have had mixed results depending on firmware versions – blech).  This option is also boring because I use these at work all day long – we have a ton of EQL groups, and they’re so standard, typical, and run-of-the-mill that I don’t see myself learning much from them.  I’m also not sure I want to own an EqualLogic SAN without support, though they’ve been decently reliable overall.

Leveraging SSD caching in the Synology is interesting, but I have read about mediocre results, and the formula for this would involve pulling two of the five 4TB disks from the Synology I have and replacing them with a pair of ~250GB SSDs.  Alternatively, I could pick up a DS1815+, but that’s expensive (about $850, and I’d still need SSDs) and also not a sure thing.  Not a bad option overall, but not super creative – the “knowledge value” would certainly be short-lived.

Buying a storage server with many drive bays to accommodate the 3.5″ SATA disks I already have sounds more interesting to me.  Though with hardware RAID I’d essentially be limited by write-back cache size and speed (not large) and ultimately by spindle count and speed.  I could experiment with SSD caching on a hardware controller, but that’s usually a paid feature and only a marginal increase in performance.

As a sort of derivative of the last option, I have decided to buy a “storage server,” but in addition to that, I’m going on an adventure of learning about ZFS.  This will involve flashing a RAID controller to “IT mode” or “HBA mode,” whereby the controller passes raw disks straight through to the host.  I’ll have to run Ubuntu 16.04, FreeNAS, Nexenta, or something similar in order to leverage all of this, and that sounds interesting!  And so, let’s begin…

Enter the Dell R510

After recommending this machine to a colleague of mine for the storage behind his CrashPlan Enterprise endeavor, I realized it’s also a great solution for me.  While I recommended he go with a 12-bay chassis, a Dell PERC H700, and 16GB of RAM, I knew I was going to need more RAM and an HBA for my project.  Using the same reseller my colleague used resulted in a very good deal on an R510 with two E5620 CPUs, 64GB of RAM, and (my own mistake) a PERC 6/i.  Featuring 12 bays that hold 3.5″ SAS or SATA disks, the R510 makes a GREAT “DIY SAN” platform.

Dell R510

There are a few significant areas that differentiate the R510 from the R710.  First, the R510’s drive capacity is much larger.  However, be aware that the R510 came in 4-bay, 8-bay, and 12-bay chassis; the only version that makes any sense to me for a SAN/NAS is the 12-bay – otherwise, just get an R710.  In addition, the R510 has only 8 DIMM slots.  While it supports 128GB of RAM, filling it with 16GB DIMMs would be rather costly, so I opted for 8GB DIMMs as they are much more reasonable, and 64GB of RAM should be plenty.

R510 Internals

Taking a closer look inside, you’ll notice that the R510 lacks a lot of orange parts compared to an R710.  This is because many of the components inside (the fans, etc.) are not hot-swappable like they are on other models.  They’re colored blue, indicating they’re swappable, just not while the server is powered on.  The server does feature hot-swap 750W power supplies, however.

R510 Internals

The PCI Express layout in the R510 is actually very convenient.  By default, the “storage slot” is placed such that there are still three additional usable slots.  The lowest PCIe slot is an x8 slot while the middle and top slots are x4.  So, if you’re going to run an internal HBA/RAID controller and an external controller, you’ll want to populate the bottom slot at the rear for x8 performance.  The heatsinks do not latch on like the R710’s and instead use screws – honestly, this is more conventional, and I find that latching-style heatsinks slide around a lot during installation.  I ordered my R510 with iDRAC Enterprise, which I would absolutely encourage you to do, especially if you’re going with a 12-bay chassis, as you’ll have a tough time figuring out errors/faults since there is no front LCD display (or DVD drive).

R510 HBA and SSD

Many people state that you can only put Dell-branded/Dell-firmware HBA/RAID controllers in the storage slot position shown above.  I am not sure if that was true with other BIOS/firmware combinations, but on my system, running the latest firmware, I am able to run my controller flashed with LSI (now Avago) 9211-8i firmware in the storage slot without issue.  Perhaps other cards or combinations of components would change this – I don’t know, but my setup is working.  You’ll also see in the image above that there is a small carrier that holds two 2.5″ SAS/SATA disks internal to the R510.  These two positions actually populate “Slot 12” and “Slot 13” of the backplane.  There is a cable that extends from the backplane over to the 2.5″ carrier providing both power and SAS/SATA interfaces.  This is how I am going to provide my setup with SSD caching!  If you were crazy, you could actually populate the Dell R510 with 14 SSDs – imagine the IOPS!
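By the way, if you want a quick sanity check that a crossflashed card in the storage slot is really being recognized with its LSI personality, something like the following from the FreeNAS shell should do it (sas2flash and camcontrol are the LSI/FreeBSD utilities FreeNAS ships; your exact output will vary):

    sas2flash -listall      # lists the SAS2008 controller along with its firmware/BIOS versions
    dmesg | grep -i mps     # the mps(4) driver reports the firmware revision when it attaches
    camcontrol devlist      # every raw disk behind the HBA should show up individually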

What I will say is: don’t order a 12-bay R510 with a PERC 6/i controller.  Not only will it not help you run ZFS (which I knew), it’s plainly not compatible with the backplane.  I had only three disks installed, yet slots 1–3, 5, 8, and 10 were flashing amber…sometimes even without drives in them!  I had all sorts of warnings about firmware compatibility in OpenManage and disk health alerts scrolling on the FreeNAS console.  After browsing all of the R510 documentation I could find (along with scouring the internet), it turns out the 12-bay R510 never shipped or sold with a PERC 6/i.  They are just not compatible.  Oops – my fault!

R510 Backplane

Speaking of backplanes, above is the backplane removed from the R510.  This is the board that runs perpendicular to the direction the drives are inserted at the front of the machine.  Essentially, one side of this board is cabled up to the rest of the server while the other side provides the SAS/SATA and power connectors that interface with the drives in the trays up front.  While it may look difficult to remove based on the images above, it’s actually really simple – you just unplug the SFF-8087 connectors on the board along with every other power and accessory cable and lift it up and out.  There are only two tricky cables – one is a flat ribbon cable and the other is a sort of laptop LCD panel connector.  Both awkward connectors are at the very edges of the board, however, which makes them easy to deal with.

You may notice that there is a board sort of piggy-backed onto the main backplane board.  That’s the SAS expander.  Perhaps you’ve researched how your 9211-8i (or other controller) can support 256 SATA/SAS devices – it’s that little guy with the SFF-8087 connectors that does all the magic.  You basically chain SAS expanders together until you’ve got enough SAS/SATA ports for your devices.  Unfortunately, you’re usually limited by the chassis/backplane your device has, but HBAs/expanders don’t have to be internal!
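If you’re curious what that expander looks like from the OS side, FreeNAS bundles LSI’s sas2ircu utility, and something along these lines should enumerate the enclosure and every drive hanging off of it (controller numbering may differ on your box):

    sas2ircu LIST           # list LSI controllers and their index numbers
    sas2ircu 0 DISPLAY      # dump enclosure/expander details plus each attached drive and slot
    camcontrol devlist      # the expander’s SES processor typically shows up as a ses(4) device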

It’s key that the backplane comes out easily because, believe it or not, removal is almost mandatory in order to use the two internal USB ports that happen to be on it.  I personally could not plug a USB flash device into the ports with the backplane installed in the chassis.  The R710 is so much easier since its internal USB port is front and dead-center; however, it has only a single internal USB port (plus support for an SD module).  I am using the USB ports for a mirrored installation of my base OS.
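For reference, the FreeNAS 9.10 installer will happily mirror the boot environment across both USB sticks if you select them both during installation – it lands in a ZFS pool named freenas-boot, so verifying the mirror afterward is as simple as:

    zpool status freenas-boot   # should show a mirror-0 vdev with both USB (da) devices ONLINE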

R510 Backplane USB

That’s pretty much it!  Other than the specifications above, I’ve added an Intel PRO/1000 VT, which is my go-to cheap quad-port 1 GbE NIC.

For now, I have installed FreeNAS 9.10-STABLE to a pair of 32GB SanDisk Cruzer Fit USB flash drives.  I am testing and playing with different pool configurations in order to decide where I want to land.  In addition to dabbling in FreeNAS, I am spending an awful lot of time on forums, manuals, and guides learning about ZFS itself.  I have experimented with ZFS in Ubuntu 16.04 in a VM with a dozen or two virtual disks, so I am somewhat familiar with the layout, but FreeNAS is proving that I have a lot more to learn about ZFS itself.  While FreeNAS itself is interesting, I am really interested in leveraging the ARC, L2ARC/SLOG, snapshots, compression, and everything else ZFS has to offer.  This is going to be fun!  But remember, the goal isn’t just to generate heat and spin rusty platters – the goal is to create a reliable, redundant, high-performance storage platform that I can use as the base for my vSphere 6 cluster.
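To give you a taste of where my head is at, here’s one candidate layout sketched out as raw zpool/zfs commands – the FreeNAS GUI builds the equivalent for you, and the pool, device, and dataset names below are placeholders rather than my final design:

    # Striped mirrors of the 1TB spinners for IOPS, one internal SSD as a SLOG for
    # sync iSCSI/NFS writes, the other as L2ARC.  Device names are placeholders.
    zpool create tank \
      mirror da0 da1  mirror da2 da3  mirror da4 da5  mirror da6 da7
    zpool add tank log da12            # 2.5" SSD behind backplane slot 12 as the SLOG
    zpool add tank cache da13          # 2.5" SSD behind backplane slot 13 as L2ARC
    zfs set compression=lz4 tank       # cheap on CPU and almost always a win
    zfs create -o volblocksize=16K -V 500G tank/vmware   # a zvol to present over iSCSI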

Stay tuned for the next post(s) where I’ll discuss flashing the Dell PERC H200 to LSI 9211-8i, installing and configuring FreeNAS (and ZFS), and making blocks fly through the wire!
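In the meantime, for the impatient, the crossflash itself usually boils down to something like the sketch below (taken from the common community guides, so treat it as orientation rather than instructions – the exact steps vary by guide and boot environment, and on 11th-gen Dells you may need the EFI version of the flasher; I’ll cover what actually worked for me in Part 2):

    sas2flsh -listall                          # note the controller’s current SAS address first
    sas2flsh -o -e 6                           # erase the existing (Dell) flash
    sas2flsh -o -f 2118it.bin -b mptsas2.rom   # write the LSI 9211-8i IT firmware and boot ROM
    sas2flsh -o -sasadd 500605bxxxxxxxxx       # restore the SAS address you noted earlier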


Author: Jon


Comments

  1. Hey, thanks for sharing. I have a question about the reseller – can you share the company name/website?

    Thanks

    PS: I want to buy an R720 and eBay works, but I’d like to have more options
  2. This is a great article. I am working on a similar project: building a home lab for mass automation testing and MCSE/CCNA studies.

    I am running a 24-bay Supermicro 4U chassis with 2x Xeon E5-2630 CPUs, 128GB DDR3, and 24 HGST enterprise drives (refurbished). I use this for my NAS.

    Next I will be building a SAN using a 12-bay Supermicro 2U case. I have 14 120GB Samsung 950 EVO SSDs that I planned to use in some sort of RAID array.

    Have you ever thought about using StarWind for your SAN’s operating system?
  3. Nice article! I look forward to part two 😉
    • One piece of advice: do not buy a Dell PERC H310 for an R510 because it doesn’t fit 😛
      The Dell PERC H310 has the Mini-SAS SFF-8087 connectors on the back of the card. You need a Dell PERC H200 with the Mini-SAS SFF-8087 connectors on the side of the card :D.
      • Or longer cables! I ended up just buying longer cables.

        • Also, with longer cables I cannot fit it into the bracket.

          How did you do this? Do you have any pictures of it?
  4. Thanks for the write-up. This is right up my alley. I would love to hear the second part of this. Thank you and keep up the good work!
  5. Did you ever get around to writing part 2? I’m just about to do pretty much exactly the same, but I’m having a hard time getting my head around FreeNAS!
  6. Hello Jon,
    many thanks for sharing – I’m very inspired by your blog and am trying to do the same. I hope it will succeed.
  7. Any word on when Part 2 will be released? I was embarking on this exact same path when I realized I can’t use the PERC H700.
    • Hi Justin – yeah, right now I am testing both FreeNAS and Nexenta. I’ll be providing more information soon!
      • Did you finish setting up FreeNAS on your 12-bay R510? I’m curious what storage configuration you went with.
        • I did! It’s actually using 10 GbE in a switchless configuration w/ two R710 ESXi hosts. I have been meaning to write up a Part II on that and share the info. I’ll put that on the priority list – thanks for asking!
          • Can you direct me to a link, or can you assist (willing to pay) with getting a SAN loaded on my R510?

            Dick
