Building a new lab host – Intel E5-2670 and 128GB of RAM

Hi all!  I apologize for not posting more often!  I have been working on a lot of new, interesting things but haven’t had much time to write about them.

You might remember my previous ESXi host build: Lenovo TS140 as ESXi and NAS box – with a twist!  The transplanted Lenovo TS140 has served me well – it still performs great and didn’t break the bank.  The specifications of that server are:

  • Lenovo TS140 mainboard
  • Intel E3-1246v3 CPU (3.5 GHz, 4c/8T)
  • 32GB DDR3 ECC memory
  • LSI 9260-8i RAID Controller w/ BBU
  • 8 Western Digital 4TB Red drives in RAID50 (32TB raw)
  • ESXi 6.0 Update 2

If you read my blog you might recall I have another lab with two Dell R710s (each with dual Xeon X5670 CPUs and 144GB of RAM) and a Dell R510 serving up 10 GbE FreeNAS/ZFS storage over NFS and iSCSI.  The lab I am upgrading in this post is not that one – that one performs great and has ample capacity.

The specs above have been perfect with the exception of one area – memory.  Anyone who runs a hypervisor for testing knows that memory is everything.  While 32GB of RAM may seem like a lot – and it is a ton for a workstation or desktop – it’s not much for an ESXi host.  The CPU has never been an issue: the E3-1246v3 runs at 3.5 GHz, and even though I over-commit CPU fairly heavily, it has never been a problem in this lab.  Memory is what I always run out of, forcing me to size VMs extra small and suffer the consequences.

Because I am testing with vCenter, vSphere Replication, a domain controller, various web servers, database servers, etc., 32GB of allocation goes quickly.  In fact, for the last year or more I have had only ~2GB of RAM free on the host.  And because the E3-1246v3 Xeons can only address 32GB of RAM, I am out to build a new host as cheaply as possible while providing the most RAM capacity within reason.  Enter the Intel Xeon E5-2670.
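
To put that in perspective, here is a quick back-of-the-napkin tally in Python.  The VM names and sizes below are hypothetical round numbers for a lab like this one, not my exact inventory:

    # Rough, hypothetical VM memory tally -- illustrative sizes only.
    vms = {
        "vCenter Server Appliance": 8,
        "vSphere Replication": 4,
        "Domain controller": 2,
        "Web servers (3 x 2GB)": 6,
        "Database servers (2 x 3GB)": 6,
        "Misc. test VMs": 2,
    }

    host_ram_gb = 32  # the E3-1246v3 platform ceiling
    overhead_gb = 2   # ballpark for the hypervisor itself

    allocated = sum(vms.values())
    print(f"Allocated to VMs: {allocated} GB")                              # 28 GB
    print(f"Free for new VMs: {host_ram_gb - overhead_gb - allocated} GB")  # ~2 GB

With numbers like these, every new VM means shrinking or powering off something else.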

Why use the E5-2670?

Supposedly Facebook.com upgraded their servers in late 2015/early 2016, and a ton of the components ended up on eBay by way of various wholesalers.  It seems that Facebook.com was mostly using Intel Xeon E5-2670 CPUs, which originally carried a $1,550 MSRP.  This CPU offers 8 cores and 16 threads while supporting a maximum of 384GB of RAM per socket – perfect for my solution!  Since they’re a few years old and the used market is flooded with them, they were available for anywhere between $60 and $70 apiece about 8 months ago (early 2016).  But since many people caught on and started buying them up on eBay, the price climbed.  Right now eBay shows them at $190–$210 for a pair.

When I started my build plans I knew I wanted to use the E5-2670s, and I also knew I wanted a motherboard with at least 16 DIMM slots so I could use 8GB DIMMs, since they’re more affordable.  I considered picking up an R510 (like in my DIY SAN/NAS build post) with E5649s (6 cores/12 threads), but that would mean only 8 DIMM slots, limiting me to 64GB of RAM (or having to find 16GB DIMMs, which are too expensive).  Additionally, a dual-CPU R510 with 64GB of RAM would likely cost upwards of $550–$600 even with a good deal, and I figured I could build something newer with more capacity for similar money or less – and likely a little quieter too, since I will go with a 4U chassis.  I managed to find a pair of Intel E5-2670s on eBay for $150:

Intel E5-2670 CPUs

With the CPUs purchased, I could home in on the rest of the hardware.  After researching affordable dual-socket 2011 boards, I came across the Intel S2600CP2J sold by a company called Natex.us.  The board looked promising, so I decided to pick one up.  There are a couple of quirks – mainly, there is a firmware data/configuration package called the FRU/SDR (Field Replaceable Unit/Sensor Data Record) that needs to be tweaked for the fans to behave properly in a non-Intel chassis.  Thankfully, our friends over at www.ServeTheHome.com had some information on that.  The board showed up very quickly and was packed very well – thanks Natex.us!
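
For the curious, the FRU/SDR tweak is done from an EFI shell using Intel’s FRUSDR update utility that ships in the board’s firmware package.  I haven’t run it yet at this point in the build, so treat the session below as a rough sketch: the folder and config file names are placeholders, and the exact switches should be confirmed against the package README and the ServeTheHome threads.

    Shell> fs0:                            # the USB stick holding the FRUSDR package
    fs0:\> cd S2600CP_FRUSDR               # hypothetical folder name from the download
    fs0:\S2600CP_FRUSDR\> frusdr.efi -cfg master.cfg
    # The utility walks through chassis/fan questions; answering them for a
    # non-Intel chassis is the tweak that keeps the fans from running full tilt.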

Intel S2600CP2J motherboard

You’ll notice in the image above that there are a bunch of memory modules installed.  I also picked up (16) 8GB Hynix PC3-10600R ECC registered DIMMs for a total of 128GB of memory.  I considered (8) 16GB DIMMs instead, so I could expand to 256GB later, but I think 128GB will be more than enough, and 16GB DIMMs are still too expensive for lab use.

128GB PC3-10600R Memory

Keeping it cool

So far my plans had been going well, but this is where it started to get a little tedious.  Again, trying to keep everything as cheap as possible, I wanted CPU coolers that would be decent yet economical.  I have used the Cooler Master Hyper 212+ (and the Hyper 212 EVO) on another workstation build and it was more than adequate.  However, I was worried it wouldn’t fit this setup.  The socket/motherboard combination would work, but because I want to put this server in a 4U chassis, I need to be mindful of overall height – and the Hyper 212+ is just too tall.  A real shame, because at $29.99 each it would be perfect for this build.

The issue is that with socket 2011 server and workstation motherboards you need to be careful about the mount: there is a square ILM and a narrow ILM (independent loading mechanism), and coolers are built for one or the other.  My board has the square type, so I can use most coolers (so long as they’re not too tall).  After researching further, it looked like I’d need to spend a few bucks to do this right.  Most 2U and 3U coolers sit in chassis with ducting and use relatively high-RPM fans; in fact, many server coolers are passive, relying on ducted airflow alone (common in 1U and 2U chassis).  I wanted something quiet that would still fit in a generic 4U.  The only sure bet was the Noctua NH-U9DX i4.  There are cheaper coolers, but Noctua is great quality and quiet – sure to keep things cool.

The Noctua NH-U9DX i4s arrived well packed from Amazon.  They weren’t cheap – at $55 each, the pair cost almost 3/4 the price of the CPUs.  But they support both socket 2011 and 2011-v3 mounting, so I can reuse them in a future build; they’ll last me at least one more generation of ESXi host.

Noctua NH-U9DX i4

Once they were out of their boxes, I remembered why Noctua costs a few bucks more than other brands.  They’re designed in Austria, and you can tell a lot of care goes into the product.  Here are some images highlighting just how nice these coolers are:

Noctua NH-U9DX i4

Noctua NH-U9DX i4

Noctua NH-U9DX i4

In the image above, you’ll notice that the clear silicone adhesive-backed strip sticks up above the heat sink itself.  This strip dampens vibration from the fans attached to the unit while running.  Because the strips are adhesive-backed, dust and lint tend to stick to the exposed edges over time.  Do yourself a favor: use a straight-edged razor and trim the excess from the top and bottom.  It will not only look better but will also keep dust from building up.

Once installed (very simple with socket 2011/2011-v3, just apply thermal compound and tighten the screws until they stop), they look awesome on the board:

Noctua NH-U9DX i4

Even if you’re not into computer hardware, you have to admit the two large heat sinks above look neat.  The only steps remaining are to install the fans in the configuration you prefer and wire them to the motherboard.  Here is an image with just two fans installed:

Noctua NH-U9DX i4

The above would work, but I’ve decided to install all four fans.  I’d like to keep the CPUs as cool as possible, since I’m going to use as few chassis fans as I can while letting the motherboard control fan speed.  Here are all four fans installed:

Noctua NH-U9DX i4

Noctua NH-U9DX i4

Noctua NH-U9DX i4

If you’re very observant, you’ll notice that a splitter is being used at the fan headers on the motherboard.  I went this route because the motherboard most likely controls CPU cooler fan speed only through the “CPU Fan” headers.  Alternatively, you could power the extra fans from some of the “SYS Fan” headers, but then they might run at a different RPM than the main CPU fans, since they wouldn’t track CPU load.  As mentioned earlier, there is some work to be done in the BIOS and the FRU/SDR file for fan control/speed, but I’ll touch on that later once I actually get the thing powered up and running.

What’s next?

Next, I need to acquire a chassis to put all of this in.  My current Lenovo TS140 system is transplanted into a Rosewill RSV-4000 4U chassis with internal storage.  The reason I can’t reuse it is that it doesn’t support 12″ x 13″ EEB/E-ATX boards.  Drats.  So, I am considering keeping it cheap with a Rosewill RSV-L4000, or potentially picking up a Norco RPC-4224.  We’ll see!  My next post will involve picking up a power supply and firing this thing up – I need a decent, reasonably priced PSU that supports the dual EPS12V/ATX12V connectors on this board.

Thanks for reading guys!  Stay tuned for more cool stuff!

Author: Jon

15 Comments

  1. Hello Jon,
How did you get around CPU1’s fan blowing hot air into CPU2? Can the Noctua U9DX fans be set up to push on the outside and pull on the inside? I found that with both coolers in a pull configuration (which is what you seem to have), CPU2 gets very hot (100 degrees). Hope you can find some time to answer my question. Otherwise, great article. Thanks,
    -bmo

    • Hi bmo – no issues at all with the fan configuration. I did experience a motherboard failure with the Intel S2600CP2J board from Natex (totally dead, sad face). They weren’t interested in replacing it, which is disappointing. I replaced it with a Supermicro X9DRI-LN4F+ board, which has the same CPU layout, so the fan layout is very similar – and again, no issues. The nice part is that with the Supermicro I can see actual temperatures – CPU1 is 44 and CPU2 is 54 degrees C. So yes, CPU2 is warmer, but both are well within spec, and this machine is running in a rather warm/small closet.

      • Hi. Thank you for sharing your experience with Natex. Could you expand on why they were not inclined to refund/replace the mobo? Their site states a 90-day return policy.
        I am asking before I commit to buying from them. It would be a great help!

  2. Hey there,
    Thanks for this tutorial – it was very helpful.
    I’m interested in using it to build a new workstation server, but before doing so I would like some further input on the hardware I should purchase, since this article was written last year.

    Is the E5-2670 processor still a good choice (in regards to pricing), or has a better option come along?

    I want to build the best workstation server I can on a budget, and since this article is a year old I would like to know if I should update anything before purchasing hardware.

    Please let me know! Thank you 🙂

    • Hi Brian – thanks for the comment. The E5-2670 is still a solid choice; it’s hard to beat from a price perspective, with a pair coming in around $175–$215 USD on eBay. There are faster models, such as the E5-2680, which offers a few hundred MHz over the E5-2670, but the price goes up to about $320 for a pair. If you can scrape together a few more bucks, and if your applications can use it, the E5-2670v2 is a 10-core model which can be had for about $200/socket on eBay, though the frequency is slightly lower. I did have the Intel S2600CP2J motherboard fail, which was a bummer.

  3. Did you manage to get temperatures and sensor values into vSphere/vCenter?

    • Unfortunately no – I think the temperature sensors on the Intel S2600CP2J are “non-standard,” and there’s no prebuilt ISO from VMware or Intel with a driver or configuration for its IPMI implementation.

  4. Interesting build. I’m also currently looking at this setup.

    Could you try both ESXi 6.5 and SmartOS and verify compatibility, please?

    • Hi K – I am running vSphere 6.5 on this without issue. I don’t mess with containers much, so no experience with SmartOS, unfortunately – I’d be more likely to try VMware Photon OS. vSphere 6.5 is working great on this platform though!

  5. Hi Jon,

    Have you managed to finish building this monster based on the S2600CP2J? I am also interested in buying this mobo from Natex, but I read that getting the CPU and case fans configured properly isn’t an easy task in a non-Intel case.

    • Actually, I JUST “finished it” this past weekend – I transplanted the whole ordeal into a Supermicro SC846! I am going to post about it shortly. It can be a PITA to get the fans to play nicely with the S2600CP2J, but the alternative is a $500+ motherboard from another manufacturer. I’ve created a hybrid Intel/Supermicro box that performs great – look for a new post soon!

  6. First-time caller, longtime listener. Loved following the RAID 50 ESXi build, but this really gets me thinking differently about my future build plans!

    Curious whether your previous ESXi/TS140 build enjoys a particular “sweet spot” where the TS140 BIOS version plays well with the LSI 9260-8i in either the PCIe x16 or x4 slot? I’m on my second new 9260-8i and simply cannot get a backplane to appear with healthy, HCL-approved SATA HDs. The controllers took the latest firmware like champs and appear normally in MSM, with the Lenovo BIOS at the latest version as well. Any experience with systematically downflashing the TS140 BIOS? My kingdom for a backplane – flummoxed.

    Thanks. Very interested in your PSU choice, too.

    • Awesome, Dan – glad you follow and have posted!

      Actually, I never regarded the TS140 as problematic – in fact, after removing it from ESXi duty, it now serves as my wife’s desktop. I know, I know, a bit of overkill/waste, but the truth is I would have built her an i7 machine anyhow – so it soldiers on as an E3-1246v3 w/ 32GB of RAM running Windows 10 Pro with no issues at all. Though I never had a backplane hooked up to the 9260-8i while it was in the TS140, I now have a SAS2 backplane hooked up to the 9260-8i in this Intel S2600CP2J system inside a Supermicro SC846, and all is well – MSM works, etc. I suspect the backplane you’re using is just not happy. If you’re interested in experimenting, I have a Supermicro SAS1 backplane that is basically useless to me; if you’d like, I can mail it to you so you can see whether it’s a backplane issue vs. a controller issue. Let me know! Sorry for the super delay in replying!

  7. Sweet build! I am adding to my server collection as well and ended up choosing many of the same parts. Besides the cost, is there a reason you didn’t stick with a Supermicro board?

    I picked up a Norco 450 case for another build and it’s pretty good. I’ve been using the SS-500 5-in-3 hard drive dock with it and it’s worked well. If you can swing it, get a 4224 or a 4220. Overkill? YES!

    Look forward to the next post.
    Dave

    • Thanks for the comment, Dave! I would have liked to use a Supermicro board, but at $175 from Natex.us the Intel S2600CP2J was more attainable right now. Perhaps I’ll upgrade or replace it in the future, or in the event that I have an issue. There’s a dedicated IPMI/remote console NIC module you can purchase for this board, so I will be picking that up in the future should everything work out properly.

      The Norco 450 looks similar to the Rosewill I have – good choice. I need something deep for this board. I think if I buy another case it’ll be the 4224, since I already have 12 disks in my other system and the count is only going up. The PSU came in today! I am anxious to get home, plug it in, and see some fans spin.
