Dell VRTX officially supports ESXi 6.0!

vSphere 6 on the Dell VRTX is here!

It’s been a long time coming!  I tried using ESXi 6.0 on a Dell VRTX chassis several months ago only to end in failure and frustration.  Only after I had ESXi 6.0 installed on an M520 VRTX blade did I read on Dell’s site that the VRTX was not supported.  Shucks!  The reason it wasn’t supported was the lack of a driver for the Shared PERC8 RAID controller within ESXi 6.0, which was a bummer.  This meant that you could install ESXi 6.0 on the blades, but you’d only have access to the internal (on the blade, 2-drive) PERC and external storage (NFS, iSCSI, etc.), not the shared storage within the chassis.

The VRTX also suffered from an issue where you had to choose between high availability and performance on the RAID controller, but couldn’t have both.  I covered this issue earlier: if you opted for a VRTX with a redundant Shared PERC 8 controller, you got horrible write performance, because Dell hadn’t accounted for wanting write-back caching enabled across the two (active/standby) controllers while keeping them highly available.  They fixed that with a later firmware release (which I covered in this blog post), but applying it involves shutting off all of the blades in the chassis and doing a CMC/mainboard/PERC firmware update.  I had updated the firmware to support write-back cache on the VRTX I am using and expected ESXi 6.0 to start seeing the shared storage, but alas it did not…

With Dell’s release of the new firmware/driver, VMware updated their Hardware Compatibility List (HCL) to list the specific version.  You can check out the HCL update here.  It would seem you just need to flash that firmware version to meet the HCL requirements, grab the latest Dell Customized ESXi 6.0.0b image, and you’re ready to rock… but not so fast.  You also need the following supporting firmware versions, which are called out in a supporting Dell PDF (listed here on page 4); a quick way to sanity-check these is sketched just after the list:

  • Shared PERC8 Firmware version 23.12.56-0086
  • Shared PERC8 OS Driver version 6.804.60.00
  • CMC Firmware version 2.04
  • Chassis Infrastructure Firmware version 2.1
  • Expander Firmware version 2.0
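
Before flashing anything, it can help to line up what the chassis actually reports against that list.  Here’s a minimal Python sketch that does nothing more than print the two side by side; the component names and minimums are the ones from the list above, and the reported values in the example call are made up:

    # Hypothetical helper: lines up the versions a chassis reports against
    # the minimums from the Dell PDF.  Dell's version schemes differ between
    # components (compare "2.04" and "2.1"), so the comparison itself is
    # deliberately left to the human reading the output.
    REQUIRED = {
        "Shared PERC8 Firmware": "23.12.56-0086",
        "Shared PERC8 OS Driver": "6.804.60.00",
        "CMC Firmware": "2.04",
        "Chassis Infrastructure Firmware": "2.1",
        "Expander Firmware": "2.0",
    }

    def report(reported):
        """reported: component -> version string as read from the CMC."""
        for component, minimum in sorted(REQUIRED.items()):
            have = reported.get(component, "NOT REPORTED")
            print("%-33s needs %-15s found %s" % (component, minimum, have))

    # Example with made-up reported values:
    report({"CMC Firmware": "2.04", "Shared PERC8 Firmware": "23.8.10-0059"})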

I had updated all of these components just a few weeks earlier (including the switch module) except for the Shared PERC8, of course, which had just become available.  So, with all of my components already up to snuff, I applied the Shared PERC8 firmware (which requires that you shut all of the blades down) alone:

[Screenshot: CMC and Infrastructure Firmware current]

[Screenshot: Shared PERC8 firmware version 23.12.56-0086]

When you upgrade the CMC, etc., it is also recommended that you bring the blade itself up to date.  You can download a bootable ISO built by Dell from this link (change the model in the URL for other blades/servers) that will upgrade all of the firmware within the server.  Dell supposedly updates these ISOs relatively often, but you can create your own using Dell Repository Manager if you need to.  So, after you’ve met all of the firmware version requirements, pop in the Dell Customized ESXi 6.0.0b ISO, kick off the installer, and configure the host as needed:

[Screenshot: Let’s try this again…]

And…wait for it…

[Screenshot: vSphere showing Shared PERC8 storage]
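
If you’d rather verify from the ESXi shell than from the vSphere client, here’s a minimal sketch (ESXi ships with a Python interpreter and esxcli; matching on “PERC” in the adapter listing is my assumption about how the controller shows up, so adjust the pattern to whatever your host reports):

    # Minimal post-install sanity check, run from the ESXi shell.
    # Assumes stock ESXi (Python + esxcli present); the "PERC" match
    # below is a guess at how the Shared PERC8 appears in the output.
    import subprocess

    def esxcli(*args):
        """Run an esxcli command and return its text output."""
        return subprocess.check_output(("esxcli",) + args).decode()

    adapters = esxcli("storage", "core", "adapter", "list")
    found = any("PERC" in line for line in adapters.splitlines())
    print("Shared PERC8 visible: %s" % found)

    # List VMFS extents to see which devices back each datastore.
    print(esxcli("storage", "vmfs", "extent", "list"))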

Finally, the Dell VRTX runs vSphere 6 and can access the storage on the Shared PERC8!  Not only that, we also have fully functional write-back cache between the active and passive controllers.  This particular VRTX chassis has two M520 blades and one M620 blade, all using the shared storage within the chassis.  What you end up with is a potent virtualization platform that can provide small to medium businesses with more than enough horsepower and storage to support their entire organization on vSphere 6, delivered in a highly available fashion (multiple blades, redundant PERC controllers).  Or, if you’re looking for a solid VDI setup, look no further!  The Dell VRTX can be loaded up with up to four half-height blades chock full of memory, connected with fast 10GbE internal networking, all running on storage backed by up to 25 2.5″ SAS drives or SSDs.  Twenty-five SAS drives can pull decent IOPS alone (see the rough math below), but add some SSD caching on top and it will rip!  Now that it’s fully supported both from a caching standpoint and a vSphere standpoint, this thing is a pretty darned powerful solution.
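
To put a rough number on “decent IOPS,” here’s a back-of-the-envelope sketch.  The per-drive figure, the RAID 10 write penalty, and the 70/30 read/write mix are generic planning assumptions, not measurements from this chassis:

    # Back-of-the-envelope array IOPS estimate; every input is an assumption.
    DRIVES = 25
    IOPS_PER_DRIVE = 180    # common planning figure for a 10K SAS drive
    READ_FRACTION = 0.7     # assumed 70/30 read/write mix
    WRITE_PENALTY = 2       # RAID 10: two backend writes per frontend write

    raw_backend = DRIVES * IOPS_PER_DRIVE
    # Each frontend write consumes WRITE_PENALTY backend IOs, so the
    # frontend rate the array can sustain is reduced accordingly:
    frontend = raw_backend / (READ_FRACTION + (1 - READ_FRACTION) * WRITE_PENALTY)
    print("Raw backend IOPS: %d" % raw_backend)                 # 4500
    print("Frontend IOPS (RAID 10, 70/30 mix): %d" % frontend)  # ~3461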

My only gripe with the VRTX is that even though there is an option for an active/passive PERC8 controller pair, you still have to shut every server off to update the firmware.  I like being able to update firmware in other storage arrays with no downtime by failing back and forth between the controllers.  Dell should work on this, and then I think the VRTX would be near perfect.

Thanks for reading!  I will be bringing the other M520 and M620 in this chassis up to date and upgrading them to ESXi 6.0 so that I can cluster them for HA in a development environment.  Stay tuned!

Author: Jon


12 Comments

  1. I wonder if I’m missing something here. I’ve tried every possible solution for my VRTX with an M620 installed, but nothing works. Do I need to have vCenter installed to have access to the PERC 8?

    I tried ESXi 6.0.0b and all the other options up to 6.5, but it simply refuses to show the PERC 8 on the system.

    PERC Firmware 23.14.06.0013
    CMC firmware 3.00.200.201708033700
    Main board firmware 2.21.A00.201510302495
    M620 Firmware 2.50.50.50 (33)

    I tried with Windows 2012 R2, but it’s the same thing: I only see the H710, and the PERC8 is missing.

    Any advice?

    Thank you

  2. Can you share your thoughts on SSD I/O performance? I am thinking of pulling the trigger on a VRTX with 24 SSD drives for a VMware datastore, but I have read some articles saying the performance is terrible. Any insights and/or suggestions you can give me would be great.

    Thank you in advance.

    • Gary – I don’t see why performance would be terrible. I’d imagine that if you had write-back cache turned on and 8+ SSDs in a mirrored span, it should fly. I have a VRTX chassis with the Shared PERC, but unfortunately I don’t have any SSDs in it to test with.

      • Have you run sqlio to see what I/O you are getting from the array?

        • No, I haven’t, but I use VMware I/O Analyzer. I can grab sqlio and test for you tomorrow. Perhaps the performance issues you have heard about are from people hit by the cache issue pre-update?

  3. Can you please post the link to the Dell ESXi 6.0.0b image? For some reason I can’t find any VMware 6 stuff on the Dell support site.

  4. This is great news, John!
    I had been waiting for this upgrade to ESXi 6 for a long time.
    On Reddit, someone from Dell promised a release back in June…

    • I know, I remember reading that and was excited, but then June came and went. They had fixed the Shared PERC8 cache issue though, so that was something. But yeah, it’s a shame they didn’t release support with the rest of the PowerEdge lineup.

  5. I must have missed a step. I thought I got all the firmware up to date and installed the Dell ISO for ESXi 6.0, but there’s still no PERC8 listed. Any thoughts? The one thing you listed that I wasn’t sure about was the Shared PERC8 OS Driver version 6.804.60.00. Where is that installed? I thought it would be part of the Dell ESXi ISO?

    • Jamie, did you notice that Dell released a newer version of their customized ESXi image? The one here: http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=HJFY8 was released 11 Jun 2015. That is the image I applied, though I can’t remember if I also updated to 6.0.0b after the fact; I can check tomorrow. Did you make sure that you applied the CMC, infrastructure, and PERC8 firmware as mentioned in the HCL/Dell PDF? Make sure all of those components are up to date.

