I’ve been talking about this one for a while but finally picked up a pair of Xeon E5649 CPUs (2.53GHz, 6 cores). I kept going back and forth between the E5649 and the X5670 – the main difference is 2.53 GHz vs. 2.93 GHz, respectively. The other difference is TDP: 80W (the same rating as the E5530’s) vs. 95W. So, in theory, the X5670’s could draw more power. Would they? That depends on the load, really. But with the E5649 being the more power-conservative option, I figured 2.53GHz is plenty fast for a lab while still adding 24 vCPUs to my ESXi host.
Another option for low-power 6-core computing is the Intel Xeon L5640, but at 2.26 GHz it’s a little too low on horsepower for me. I’ve been running 2.4GHz E5530’s and they’ve performed fine, so I figured 2.53GHz 6-core Westmere-EP units should perform fine as well. The E5649 is actually a more recently released model than the X5670 (Q1 2011 vs Q1 2010), so it has a little more development under its belt – which is probably how Intel got a 6-core 2.53GHz package down to an 80W TDP.
This upgrade wasn’t due to running the E5530’s out of steam or anything – that setup still gave me 16 vCPUs in the host. But now that I have 144GB of RAM in the system, I figure I should back it up with as many cores as I can. The most RAM I usually allocate to a VM is 4GB, which means I could feasibly support about 30 – 34 VMs at 4GB each so long as I have enough cores to go around – and now I should!
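For anyone curious where that 30 – 34 figure comes from, here’s the back-of-the-napkin math (the hypervisor/overhead reserve is an assumption on my part, not a measured number):

```python
# Rough VM capacity estimate for the upgraded host.
# The reserve and per-VM figures are assumptions, not measurements.
TOTAL_RAM_GB = 144     # installed RAM
RESERVED_GB = 8        # assumed hold-back for ESXi itself and per-VM overhead
VM_RAM_GB = 4          # typical allocation per VM in this lab
LOGICAL_CPUS = 24      # 2 sockets x 6 cores x 2 threads (Hyper-Threading)

vm_count = (TOTAL_RAM_GB - RESERVED_GB) // VM_RAM_GB
print(f"Roughly {vm_count} VMs at {VM_RAM_GB}GB each")          # ~34 VMs
print(f"{vm_count / LOGICAL_CPUS:.1f} VMs per logical CPU")     # ~1.4, fine for mostly idle lab VMs
```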
Some other cool features of the Westmere-EP architecture are the shrink to a 32nm process (down from Nehalem’s 45nm) as well as the AES-NI instructions for hardware-accelerated encryption, which should come in handy for OpenVPN tunneling.
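As a quick aside, if you want to confirm a Linux guest actually sees AES-NI (the feature has to be exposed through the virtual CPU – an EVC baseline or CPU masking can hide it), a little sketch like this works; it just looks for the “aes” flag in /proc/cpuinfo:

```python
# Check whether a Linux guest sees the AES-NI instruction set.
# AES-NI shows up as the "aes" flag in /proc/cpuinfo on x86 CPUs.
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AES-NI available" if has_aes_ni() else "AES-NI not visible to this guest")
```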
Please note that all work involving the handling of the CPUs is performed on an anti-static mat – use caution when handling static sensitive devices.
Here’s how the upgrade went – the new CPUs arrived brand new unused out of an Intel tray:
Next up, I pull both power cables from the rear of the server. If you’re having a NOC do this work remotely, it’s always a good idea to turn off the PDU ports if you have them configured (a lot of people don’t bother with this step). Once the cables are out, press and hold the power button for 10 – 15 seconds to drain “flea” power (residual voltage stored in the capacitors on the mainboard):
Just an image showing the system with the PERC6/E (which hooks up to an MD1000 with 15 × 1TB disks), the Intel Pro/1000 VT quad-port 1GbE NIC, and the PERC6/i hiding down there. I may upgrade to an H700 controller some day:
Next up, lift off the memory duct cover, then unclip the heatsinks and pop them off. You can see that these E5530’s were never removed/replaced – the original thermal compound is in place:
Plop the E5530’s back on the tray – we’ll clean them later:
Squirt some Goo-Gone on the original heatsink compound and let it sit for 30 seconds or so and it’ll start to dissolve – wipe it off with clean rags/towels:
After I clean the gross gray Goo-Gone residue off, I wipe the heatsink down with a microfiber rag dampened with acetone:
I wiped down the new E5649 with a towel dampened with acetone just to remove any oils. I then applied thermal compound (Arctic Silver 5) in a small line at the center of the CPU – the pressure of the heatsink will spread it adequately:
Clean off the original CPUs with the same method we used on the heatsinks earlier so we can put them up for sale or store them for later:
It’s always a good idea to boot into the BIOS to make sure that the new CPUs are detected. Please note that you must upgrade your BIOS to a version that supports the Westmere-EP (5600-series) CPUs! If you are running an old or original BIOS and your system shipped with 5500-series CPUs, then your system will not accept the new 5600-series CPUs. I am running the latest available BIOS for the R710 (6.4.0):
Then you just boot up into ESXi and check to make sure you’ve got all your logical CPUs available – in this case that should be 24 (2 sockets × 6 cores × 2 threads with Hyper-Threading):
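I just eyeballed it in the vSphere Client, but if you’d rather verify over the API, a rough pyVmomi sketch along these lines would report the socket/core/thread counts and the running BIOS version (the hostname and credentials below are placeholders, not my actual setup):

```python
# Minimal pyVmomi sketch: report CPU topology and BIOS version for an ESXi host.
# Hostname/credentials are placeholders; run from any machine that can reach the host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab host with a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.hardware
        print(f"{host.name}: {hw.cpuInfo.numCpuPackages} sockets, "
              f"{hw.cpuInfo.numCpuCores} cores, "
              f"{hw.cpuInfo.numCpuThreads} logical CPUs")   # expecting 24 after the swap
        print(f"BIOS version: {hw.biosInfo.biosVersion}")
finally:
    Disconnect(si)
```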
And the real secret to a great-performing ESXi host is Pure Storage:
Now my R710 host is about maxed out in terms of CPU (at least in terms of the power I’m willing to consume!) and memory. This host will be hooked up to an MD1000 with ~11.8TB usable and iSCSI/NFS to a Synology DS1513+ with ~16TB usable. That should keep the setup running for a long time to come! I don’t think I’ll ever put 288GB of RAM in this thing unless it gets real cheap… which doesn’t seem likely, but you never know!
Once I get more hours on this setup I’ll check the temperatures and power draw compared to the original CPUs. The E5530s used about 220-240W while VMs were running but mostly idle. I’ll let the Arctic Silver 5 compound get situated and then pull some power figures.
Thanks for reading!