HYEPYC - 2022 Server build-log


HYEPYC - 2022 Server build-log

Sun Jun 19, 2022 6:41 pm

Image


Today I'm starting a build-log for a new server that will be going in our home. I'm calling it HYEPYC (pronounced high-epic); the name will be explained below.

Before getting into the specifications and build photos I want to detail what the server will be doing.

1. It will be a Hypervisor, which means it will run lots of virtual machines.
2. It will be a storage server with a lot of storage (125TB+).
3. It will be used for machine learning.
4. It will be used for Home Automation (lights, blinds, heating, people recognition etc).
5. It will run a lot of programs with high CPU usage, network needs and storage requirements.

So with all those tasks it needs some beefy components. I'm going to be using a 4U rackmount chassis with 24 drive bays at the front. This gives plenty of room for storage expansion, a server-sized motherboard and large-diameter fans, which allows for higher-performance parts.

Here is the complete spec list:
  • AMD EPYC Milan 7443P 24 Core / 48 Thread 2.85 GHz to 4 GHz CPU
  • Samsung 256GB (8x32GB) DDR4 RDIMM ECC 3200MHz CL22 Memory
  • Asrockrack ROMED8-T2 EPYC Motherboard
  • Corsair HX1200 Platinum 1200 Watt PSU
  • Broadcom 9500-8i HBA (PCIe 4.0 NVMe/SAS/SATA Disk Controller)
  • Samsung PCIe 3.0 2TB 970 Evo Plus NVMe Drive
  • Western Digital PCIe 4.0 2TB SN850 NVMe Drive
  • Intel PCIe 2.1 X540-T2 2x10Gb/s Network Card
  • Toshiba 3x18TB SATA Hard Drives (Enterprise Editions)
  • Seagate 7x10TB SATA Hard Drives (NAS Editions)
  • Gooxi 24-bay Chassis with 12Gb/s SAS Expander by Areca
Edit made on the 15th of July:
Some of the specs from the above have since changed for a few reasons, below is what changed and why.
  • Motherboard: The Asrockrack ROMED8-T2 was dropped in favour of the Supermicro H12SSL-NT. This board performs better, has more features, fewer bugs and better support, and is actually available to purchase while only costing a little more.
  • NVMe SSDs: The 1 x 2TB SN850 has been changed to 2 x 2TB SN850s. I'm still likely to use the 2TB 970 Evo Plus, but I'm not sure what for. This change happened thanks to very good Amazon Prime Day deals which essentially halved the cost of the SSDs.
  • SATA HDDs: The 3 x 18TB Toshiba drives were switched out for 4 x 18TB WD drives. This was also due to the Prime Day deals; I was able to get 4 x 18TB WDs for a little under the price of 3 x 18TB Toshibas.
In addition to those core specs there are some other auxiliary parts, including the EPYC screwdriver with the exact torque setting, custom cables and a Supermicro CPU cooler. There are also various Noctua fans which will be attached to the chassis internally (3x120mm and 2x60mm).

This build has very few compromises. I've gone with the second-fastest EPYC CPU from AMD when it comes to single-threaded performance (the only faster one has just 8 cores while this one has 24). And it's no slouch when it comes to multithreaded loads either, with 24 cores and 48 threads. It will make a great CPU for virtualisation.

This CPU, like all Milan-era (Zen3) EPYCs, features 128 PCIe 4.0 lanes and 8 memory channels capable of up to 3200MHz. I've paired it with exactly that kind of memory, the fastest ECC Registered, JEDEC-certified memory you can get right now.

The motherboard I've chosen is unique in that it gives almost total access to the 128 PCIe 4.0 lanes provided by the CPU: it features 7 full-length x16 PCIe 4.0 slots for a total of 112 accessible lanes, and each slot supports bifurcation to x8x8 and x4x4x4x4, so you could theoretically connect 28 separate PCIe devices, each with x4 PCIe 4.0 lanes. This is really great for expanding later with more SSDs on an adapter card, which are quite affordable.
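To make the lane arithmetic concrete, here's a quick back-of-the-envelope check (plain Python; the slot count and lane widths are just the board specs quoted above):

```python
# Back-of-the-envelope check of the PCIe lane maths above.
SLOTS = 7            # full-length PCIe 4.0 x16 slots on the board
LANES_PER_SLOT = 16

total_slot_lanes = SLOTS * LANES_PER_SLOT      # lanes exposed via the slots
devices_at_x4 = total_slot_lanes // 4          # with x4x4x4x4 bifurcation on every slot

print(f"Lanes exposed via slots : {total_slot_lanes} of the CPU's 128")   # 112
print(f"Max x4 devices (bifurc.): {devices_at_x4}")                       # 28
```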

Unfortunately that motherboard is in very high demand, so although I placed my order this week I'm not expecting to receive it before August 1st. It's a similar situation with the chassis: the reseller I'm using was expecting to receive stock from China on June 15th, but as yet there has been no delivery.

Like all the builds I do, I'm often waiting on various parts. Even before the war in Ukraine and the pandemic, things were difficult to get, and something as simple as a cable can hold up the whole build being usable.

So far I have received the Power Supply, Rack and CPU cooler. Some parts like the 10TB Hard Drives, 970 Evo Plus and Noctua Fans I already have and will be moving from my old server.

You may recall the build log of PROMETHEUS featured here; I built that server in 2014 and have been using it ever since. This new HYEPYC server will replace it after 8 years of 24/7 usage, which is an excellent run in my opinion. The new EPYC chip provides 5.4x the multithreaded benchmark performance of PROMETHEUS while using less than half the energy.

Looking past the hardware, I will be using Unraid as the operating system for this new system instead of Windows, which I used on PROMETHEUS. The main reason is that Unraid is a great hypervisor (hence the HY in HYEPYC) and it supports a really flexible, unconventional RAID system which allows you to use differently sized disks and add/remove disks over time at your discretion, something I've not been able to do with PROMETHEUS due to its hardware RAID card.

I'm quite excited to finish the build but I'm pretty sure it won't be complete until August, mostly due to the chassis and motherboard. Until the next post I'll leave you with a couple of part images for the case, processor and motherboard.

Image

Re: HYEPYC - 2022 Server build-log

Tue Jun 21, 2022 7:34 am

While I wait for more parts to arrive I thought it would be interesting to explain why I've chosen to go AMD for this server after only using Intel for a very long time. My last at-home AMD based server was built in 2007 and lasted until it was replaced in 2009 by an Intel system.

Since then I've only used Intel and the main reason has been that Intel simply had the faster and more capable processors on the market every time I needed a new server, desktop or laptop.

But in 2017 AMD launched Zen, a brand new architecture for them, and it has been highly successful, offering not only feature parity with Intel but often surpassing them. For instance, AMD were first to market with PCIe 4.0, they beat Intel to releasing both 32-core and 64-core processors, and they were first to support high-speed ECC memory (2933MHz initially and later 3200MHz).

I've used AMD EPYC and AMD RYZEN based systems as remote servers for my business needs over the past couple of years and in all instances I've been impressed by their performance and stability. About 6 months ago I began renting 4 x RYZEN 9 5950X based systems, which are AMD's current fastest 16-core chips on their latest Zen architecture (generation 3).

This new system dubbed HYEPYC that I'm building for my home is also based on Zen generation 3, but in the EPYC processor family, which is more tailored towards server use: 128 PCIe 4.0 lanes instead of the 24 the RYZEN chips have, 8 memory channels instead of 2, and a 2TB memory ceiling instead of 128GB.

When it comes to Intel, I do still evaluate their products on a regular basis and right now they are not competitive with AMD. They have lower clock speeds, lower core counts, higher power consumption, fewer PCIe lanes and higher pricing. In fact the situation with Intel has become so dire that the major motherboard manufacturers (for server and workstation motherboards) have stopped releasing motherboards into the retail supply chain.

Right now availability of AMD server processors is poor; the demand far outstrips AMD's ability to supply. Similarly, motherboards for AMD EPYC are sold out almost everywhere and many stores are projecting 1-2 month lead times. Contrast this with Intel, where you can easily purchase any server processor you want, as demand is much lower for what are essentially inferior processors.

Personally I'm not beholden to any brand; I simply base my purchase decisions on the merits of each individual product. If Intel were to release a more capable product than AMD when I'm in the market to buy, then I would buy it. But since 2017 there hasn't been much reason to go Intel for servers and workstations.

I mentioned earlier that I've been impressed by the AMD processors I've used recently. Those 5950X based servers I've been renting have really blown me away with their performance and stability. They were a sort of test run to verify AMD's technology before I made what is a very large investment in my own home server, and I'm now pretty confident in their products. I hope this was interesting; hopefully the next post will contain some photos of actual parts!

Re: HYEPYC - 2022 Server build-log

Thu Jun 23, 2022 9:01 am

My memory arrived today. This is 256GB (8x32GB) of Samsung DDR4 3200MHz CL22 ECC RDIMM Dual-Rank 3DS RAM. Good luck saying that five times fast, quite a mouthful.

These are the fastest ECC memory modules available right now (without overclocking). And for EPYC it's quite important to use fast memory, as the Infinity Fabric clock is tied to the memory clock, which determines its maximum throughput. Put simply, the slower the memory you use, the slower your core chiplets can communicate with each other and with the I/O die inside the processor.

Image

The relationship between memory speed and Infinity Fabric can be seen in the table I've included above. Please note this only applies to EPYC Milan based processors (Zen3). With EPYC Rome (Zen2) the maximum Infinity Fabric speed is reached with 2933MHz memory modules, which means that if you use 3200MHz memory on an EPYC Rome system you won't see any Infinity Fabric benefit; in fact the latency increases dramatically because the memory and Infinity Fabric are no longer in a 1:1 ratio.

So for peak Infinity Fabric performance you want to pair your memory speed as follows:
EPYC2 = 2933MHz
EPYC3 = 3200MHz
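To make the 1:1 pairing concrete, here's a small sketch (plain Python; DDR4 is double data rate, so a "3200MHz" module actually runs a 1600MHz memory clock, and the sweet-spot figures below are simply the ones quoted above):

```python
# Rough illustration of the memory clock / Infinity Fabric (FCLK) pairing.
# DDR4 transfers twice per clock, so a DDR4-3200 module has a 1600MHz
# memory clock (MCLK); for lowest latency you want FCLK to match MCLK 1:1.

def mclk_mhz(ddr_rate: int) -> float:
    """Memory clock in MHz for a given DDR4 data rate in MT/s."""
    return ddr_rate / 2

# Memory speed at which each generation reaches its 1:1 fabric sweet spot.
sweet_spot = {"EPYC Rome (Zen2)": 2933, "EPYC Milan (Zen3)": 3200}

for generation, rate in sweet_spot.items():
    print(f"{generation}: DDR4-{rate} -> MCLK/FCLK ~{mclk_mhz(rate):.0f} MHz (1:1)")
```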

And now some photos!

Image
Image

I'm still waiting on the case and motherboard but things are progressing.

Re: HYEPYC - 2022 Server build-log

Sun Jun 26, 2022 10:50 am

Thought I'd post an update today on some of the other parts I've already received. They're not really that interesting on their own, so I'm posting them all together.

So let's start with perhaps the most ridiculous component of the whole build: the "special screwdriver" needed for the SP3 socket that both EPYC and Threadripper share. This tool isn't required if you already own a screwdriver or ratchet that lets you set a specific torque, but as I don't have one of those I decided to buy the AMD-approved tool.

Image

Next let's look at the Supermicro SP3 cooler, which features a 92mm fan and a dense fin arrangement with 5 doubled-over heatpipes.

This cooler is actually very hard to get: some retailers are listing it as End of Life (EOL) and others simply state it's out of stock. When it comes to horizontally mounted heatsinks for SP3 there's very limited choice. I would have loved to get a Noctua cooler, but they only make ones that blow air vertically rather than horizontally, which would have resulted in bad airflow within the chassis.

Luckily I was able to procure one for a fair price. If I find the fan is too loud I may switch it for a Noctua 92mm fan (or two, one on either side), but that depends on how warm the CPU gets and whether changing to quieter fans will hurt performance too much.

Image
Image

The final thing to share is the power supply. I went with the Corsair HX1200 for five main reasons.

1. Corsair make very high quality power supplies; I have no concerns about their power supply products.
2. Corsair offers a generous 10 year warranty, which shows they trust their products to last.
3. This particular model is 1200 Watts, so there's more than enough power for whatever I use the server for.
4. It's Platinum rated, reaching just under 95% efficiency between 360 Watt and 480 Watt loads (a quick wall-draw check follows this list).
5. It is very quiet, with completely silent operation up to 480 Watts thanks to its ZeroRPM fan mode.
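As a rough sanity check on what that ~95% figure means at the wall (a minimal sketch; the loads below are just illustrative points within the 360-480 Watt band):

```python
# What ~95% efficiency means in practice: wall draw and waste heat for a few
# illustrative DC loads on this 1200W Platinum unit.
EFFICIENCY = 0.95   # approximate efficiency in the 360-480W band

for dc_load_w in (360, 420, 480):
    wall_w = dc_load_w / EFFICIENCY
    print(f"{dc_load_w}W load -> ~{wall_w:.0f}W from the wall "
          f"(~{wall_w - dc_load_w:.0f}W lost as heat)")
```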

I also did make sure to verify the claims regarding the efficiency and noise level of this power supply with reviews. I did seriously consider the Seasonic Titanium 1000 Watt model but ultimately decided on the Corsair.

That decision was driven by the Corsair having 200 extra watts of capacity, its wide ZeroRPM fan mode, near-identical efficiency (the Corsair is 2% worse at 50% load but the same at 30-40%), a more standardised pin-out for its modular cables (as I'll be using custom cables, the ability to carry those forward to another unit in the future was a consideration) and finally the price: the Corsair unit cost £201.88 vs the Seasonic at £300.

Had Corsair offered a 1200 Watt PSU with Titanium efficiency I might have been swayed, but they don't. In fact there's only a handful of Titanium-rated power supplies on the market right now, and most of those are in the super-high-capacity range of 1600 Watts and above, with prices double that of the HX1200.

Image
Image
Image

Re: HYEPYC - 2022 Server build-log

Tue Jun 28, 2022 12:35 pm

Today I wanted to answer a few questions I've had about the build and share the boot drive, which is by far the smallest component of the build.

The two main questions I've received from users of the Renegades chat and IRC have been:

1. Why are there two NVMe SSDs in the build, and why are they different models?
2. Why am I using Unraid and not another operating system?

First Question:
So, to answer the first question: I'm using a Samsung 970 EVO Plus and a Western Digital SN850. Both are 2TB M.2 NVMe SSDs. The SN850 is a PCIe 4.0 SSD while the 970 Evo Plus is PCIe 3.0.

The reason I'm using both is that I already have the 970 Evo Plus in my current server; it's still very fast and has a lot of life left (over 1PB of writes remaining by its rating). So I'll be putting it in the new server as a cache drive, meaning all writes to the server land on that cache drive before being stored long term on the built-in storage array. I'll go into more detail below as to why this is important.

The SN850 I don't currently have and will be purchasing new. I already have one in my desktop and I know it's a very fast and capable SSD; I believe it has only just been dethroned as the fastest PCIe 4.0 NVMe M.2 drive on the market, so it's very fast (7GB/s read, 5GB/s write, 1.25 million IOPS). I'll be using this one for virtual machines and Docker containers.

So the Samsung is for cache and the Western Digital is for virtual machines and Docker containers. That's why there are two. In an ideal world I'd use two SN850s, but as I already have the 970 Evo Plus there's really no reason to replace it. It can still do 3.5GB/s reads and 3GB/s writes, which is more than enough for its use as a cache drive.

Second Question:
The second question I've been asked is why I'm using Unraid and not another operating system. For instance, why not Windows, a generic Linux distro or TrueNAS?

Well, Unraid is actually a Linux distribution designed specifically for use as a NAS operating system. It features a unique type of array that, unlike traditional RAID, lets you use differently sized disks while maintaining protection against disk failure through sector-based single or dual parity.

So in my system I'm going to be using 3 x 18TB hard drives, which I'll be buying brand new, combined with the 7 x 10TB drives from my previous server. Thanks to Unraid I can make 100% use of all of those drives. In a traditional RAID system I'd only be able to use the first 10TB of each 18TB drive, with the remaining 3 x 8TB going unused.
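As a rough illustration of that difference, here's the capacity maths for this exact drive mix (a sketch only; it assumes single parity on the largest disk for the Unraid case, which is my assumption rather than something stated here):

```python
# Rough capacity comparison for the drive mix in this post: 3x18TB + 7x10TB.
drives_tb = [18, 18, 18] + [10] * 7

# Unraid: every data disk is fully usable; the largest disk is assumed to be
# given over to single parity (assumption for illustration).
unraid_usable = sum(drives_tb) - max(drives_tb)

# Traditional RAID (RAID5-style): every member is truncated to the smallest
# disk, and one disk's worth of capacity goes to parity.
smallest = min(drives_tb)
raid_usable = smallest * (len(drives_tb) - 1)

print(f"Raw capacity          : {sum(drives_tb)} TB")   # 124 TB
print(f"Unraid (single parity): {unraid_usable} TB")    # 106 TB
print(f"RAID5-style           : {raid_usable} TB")      # 90 TB (18TB drives capped at 10TB)
```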

Unraid also lets you increase or decrease the size of your storage array by adding or removing disks at your discretion. Traditional RAID, like the hardware RAID card I've been using, does allow you to expand the array by adding more drives, but they have to be the same size as your existing disks and you cannot remove disks to shrink the array later. If a drive becomes faulty and you don't want to replace it, you're stuck; you can't just pull it out and reduce the array size.

So Unraid offers a lot of flexibility for home users who want to start with a smaller number of drives and add more over time. And as the storage industry increases capacities you'll likely want to buy ever-larger disks, which Unraid is perfectly set up to handle.

But Unraid does have some disadvantages of its own. For example, because of the way its disk array is set up, it doesn't write to every disk simultaneously when you save a file; it only writes to one disk at a time. So the performance of the array is always limited by the single disk you're reading a file from or writing a file to.

This is where the cache disk I mentioned above (the 970 Evo Plus) comes in: Unraid can be set to direct all writes to a specific drive (called a cache disk) or set of drives (called a cache pool). The benefit is that you can use an extremely fast SSD to hold all your incoming writes and then, at a pre-set interval, move those files from the cache to your storage array.

So with a 2TB cache I can keep all my incoming downloads and other writes in a very large temporary area on a very fast SSD and, based on settings I decide (percentage used or a specific timetable), have them moved to the storage array later.
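To illustrate the cache-then-move idea, here's a conceptual sketch (this is not Unraid's actual mover; the /mnt/cache and /mnt/user0 mount points and the threshold are assumptions used purely for illustration):

```python
# Conceptual sketch of a "mover": once the fast cache drive crosses a usage
# threshold, flush its files to the slower disk array. Paths are assumptions.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # fast NVMe cache drive (assumed mount point)
ARRAY = Path("/mnt/user0")   # array-only view of the shares (assumed mount point)
THRESHOLD = 0.80             # start moving once the cache is 80% full

def cache_usage() -> float:
    usage = shutil.disk_usage(CACHE)
    return usage.used / usage.total

def move_cache_to_array() -> None:
    for src in list(CACHE.rglob("*")):
        if src.is_file():
            dest = ARRAY / src.relative_to(CACHE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))   # lands on the array, keeps the share path

if cache_usage() > THRESHOLD:
    move_cache_to_array()
```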

Unraid makes the cache very easy to use because it's well supported and transparent: when I mount my storage shares on my computers they see the 100TB+ available from my entire disk-based array, but when I save files to any folder on those shares they're actually written to the fast SSD first. My applications aren't aware that this multi-level caching system is in play, which makes it very seamless.

Unraid also lets you pin certain things to certain drives and specify how you want your storage disks to balance data. For example, if you have 10 storage drives in your array you likely don't want them all powered up every time you write files to the array, and Unraid can be set to fill them one at a time instead of all at once like traditional RAID. This means the server uses less energy, creates less noise and your drives last longer, as most of them can stay powered down.
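As a sketch of what a "fill one disk at a time" allocation policy looks like (this is only an illustration of the concept, not Unraid's implementation; the disk names, free-space figures and reserve are made up):

```python
# Sketch of a fill-up allocation policy: write to the first data disk that
# still has room, so the remaining disks can stay spun down.
def pick_disk(free_gb_by_disk: dict[str, int], file_size_gb: int,
              min_free_gb: int = 100) -> str | None:
    """Return the first disk that can take the file while keeping a reserve."""
    for name, free_gb in free_gb_by_disk.items():
        if free_gb - file_size_gb >= min_free_gb:
            return name
    return None   # array is effectively full

free_space = {"disk1": 120, "disk2": 9000, "disk3": 18000}   # illustrative figures
print(pick_disk(free_space, file_size_gb=40))   # -> "disk2"; disk1 would dip below the reserve
```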

There are also plugins for Unraid that cache things like directory structures and file metadata in memory, so your hard drives don't need to spin up just to check whether a file exists in a folder (something the media server Plex would otherwise cause when scanning folders for new files to index).

So let's get to the photo which is the point of this post: the smallest part of the build, the USB stick where Unraid will live. This is a 32GB Samsung BAR v2 drive, specifically chosen for its excellent reliability. When an Unraid system boots, the contents of the memory stick are copied into system memory and all interactions with the Unraid operating system then happen in RAM.

This greatly extends the life of the USB stick. So that's today's update; if anyone has any questions, feel free to hit me up on Discord or WinMX or wherever else I'm chitchatting :)

Image

Re: HYEPYC - 2022 Server build-log

Fri Jul 01, 2022 6:30 pm

Bit of a great update today. The chassis I wanted finally came into stock, which meant I was able to place my order; I should have it within a few days. Since the case is now locked in I also placed an order for two Noctua 60mm fans, the NF-A6x25 PWM. These are quite high quality, long-life fans from Noctua.

Their main attributes are PWM control, very quiet operation, a substantial amount of airflow for their noise level, a 17.1-year mean time between failures (MTBF) and a 6-year manufacturer warranty.

Image
Image
Image

There is an awful lot of engineering that has gone into these fans. I'm also planning to use 3 x 120mm Noctua fans which are presently in my current server but will be moved over once the new server is finished.

In unrelated news I've also purchased a new UPS, as the one I have is coming up on three years old and it's very difficult to get CyberPower batteries in the UK. I decided on an APC unit, the BR1600SI, a 1600 VA / 960 Watt model that is very efficient, outputs a pure sine wave and is passively cooled, making it completely silent under all load conditions.

This will be replacing my CyberPower 1500 VA / 900 Watt model.

The next post will contain photos of the chassis and/or the UPS. Hopefully the motherboard will come into stock before July 29th because that is the final piece of the puzzle; everything else, like the CPU and hard drives/SSDs, is easy to get, being readily available at multiple stores for next-day delivery.

Re: HYEPYC - 2022 Server build-log

Sat Jul 02, 2022 3:40 pm

The UPS arrived today and I have unboxed it and put it into service. It works quite well but there is one caveat: the official APC software for the unit does not support Windows Server, and I'm currently using Windows Server 2012 R2.

Normally this would be a problem, but I have pfSense, which supports the NUT utility for handling this UPS, and there is a NUT client for Windows, so I can make it work that way. I'm also moving to Linux soon with Unraid, which can use NUT and apcupsd to communicate with APC UPS models, so I'm all good there.
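For reference, this is roughly how a NUT server can be polled from a script using the standard `upsc` client (a minimal sketch; the UPS name and address are placeholders and it assumes `upsc` is installed on the machine running it):

```python
# Minimal sketch: ask a NUT server for the UPS battery charge via `upsc`.
# The UPS name and host below are placeholders, not my actual setup.
import subprocess

def battery_charge(ups: str = "apc@192.168.1.10") -> int:
    result = subprocess.run(["upsc", ups, "battery.charge"],
                            capture_output=True, text=True, check=True)
    return int(float(result.stdout.strip()))

if __name__ == "__main__":
    print(f"UPS battery at {battery_charge()}%")
```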

In the meantime I'm just using the built-in Windows UPS driver, which also works fine with this model and allows for automatic shutdown and all the other functionality one would expect. So without more rambling, let's get to the photos.

Image
Image
Image

This unit has a lot of power connectors and a very large LCD on the front. It's considerably heavy but around the same size as my older unit, so it fit right into the previous unit's place.

So far it's working great and is completely silent like my old unit. Hopefully in three years time acquiring new batteries for this will be a lot easier than it has been with my CyberPower model.

Re: HYEPYC - 2022 Server build-log

Mon Jul 04, 2022 11:28 am

Today I received the CPU (AMD EPYC 7443P) and HBA (Broadcom 9500-8i) for the server. Tomorrow the Chassis arrives!

Image
Image
Image

This processor features 4094 pins, which it needs to support its extensive I/O: 8 channels of DDR4 memory and 128 PCIe 4.0 lanes. That kind of connectivity requires a lot of pins, and it's also the reason for the special torque screwdriver, which guarantees that all the pins make proper contact in the socket.

I mentioned that I also received the HBA (Host Bus Adapter); this is the storage controller card for the server. I went with the Broadcom 9500-8i, their current-generation Tri-Mode adapter supporting SATA, SAS and NVMe drives. I'll only be using SATA drives with mine.

Image
Image

I included that second picture of the edge connector because it's quite an unusual connector. It's called SFF-8654 8i (8 lanes) and it is capable of 96Gbps (12GB/s) data transfers. That's extremely high, and the backplane on the server chassis I've chosen is capable of the same speed.

Normally SATA drives are limited to 6Gbps, but as both my chassis backplane and this HBA support a technology called Databolt, they can negotiate with each other at the full 12Gbps speed of SAS, and because the cable load-balances across 8 of these links, you get an aggregate speed of 96Gbps.

In my server this means each individual drive slot (I have 24) is capable of 500MB/s simultaneously. That isn't a speed I expect to hit on a single hard drive, but if I installed a SATA SSD in every one of the 24 slots I could utilise all of them at their maximum speed at the same time.
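Here's that bandwidth maths written out (plain Python; it follows the simple figures above and ignores encoding/protocol overhead):

```python
# Aggregate backplane bandwidth and per-slot share, per the figures above.
LANES = 8            # the SFF-8654 8i connector carries 8 SAS links
GBPS_PER_LANE = 12   # SAS3 speed negotiated end-to-end thanks to Databolt
DRIVE_SLOTS = 24

aggregate_gbps = LANES * GBPS_PER_LANE          # 96 Gb/s
aggregate_gbyte_s = aggregate_gbps / 8          # ~12 GB/s
per_slot_mbyte_s = aggregate_gbyte_s * 1000 / DRIVE_SLOTS

print(f"Aggregate: {aggregate_gbps} Gb/s (~{aggregate_gbyte_s:.0f} GB/s)")
print(f"Per drive slot with all {DRIVE_SLOTS} active: ~{per_slot_mbyte_s:.0f} MB/s")
```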

The main benefits of this specific HBA are its very low power draw (5.95 Watts under typical usage), the fact that it's a current-generation card and therefore easy to buy brand new, its PCIe 4.0 interface which pairs perfectly with my CPU, and performance high enough that I shouldn't have any concerns when performing parity checks or rebuilds, which run all the drives in the server as fast as they will go.

So that's the update for today. The next post should be about the case which as mentioned above is arriving tomorrow.

Re: HYEPYC - 2022 Server build-log

Tue Jul 05, 2022 7:01 pm

The chassis arrived today and there is a problem. The ATX bracket for the power supply was not in the box with the chassis as advertised. This appears to be a screwup by the store I bought from: either they didn't put the bracket in the box or they miscommunicated with the manufacturer.

In any event I can still use the case if I make my own bracket, which I'll probably do out of acrylic if the store can't send me the appropriate one; I'm certainly not sending the entire case back over a small piece of missing metal.

I'm including only two photos at this time as I'm understandably annoyed by the situation.

Image

The photo above depicts the rear of the backplane within the case. This is a Gooxi-made backplane containing an Areca SAS3 expander with Databolt and SGPIO functionality. Surprisingly it features four PWM fan headers, but I probably won't use them, instead controlling the fans via my motherboard... whenever that arrives.

Image

Also look how thick these fans are! I measured them and they are just a hair under 40mm thick, which is... insane. And that is the actual fan thickness; it doesn't include the plastic shroud the fan sits in. By contrast, a normal PC fan is 25mm thick.

So that's the update for today. I also ordered my custom Molex cables from CableMod today; I'm not expecting those for 15 days, but I waited until now to order them so I could get the length right. I wanted to put the PSU in the case and measure between its connectors and the backplane's connectors, and I ended up going with 40cm cables, which should give me about 12cm more than I need for cable tidying.

Re: HYEPYC - 2022 Server build-log

Fri Jul 08, 2022 3:26 pm

Got a good update today. I was able to contact the company that shipped me the case and they did in fact have the missing bracket, so they were able to send me one really quickly. It arrived today, so I fitted the power supply and the rear Noctua 60mm fans.

I would have liked this case to have 80mm rear fans like my current server does but for whatever reason the manufacturer went with smaller ones. Doesn't really make sense to me logically but perhaps it's so more airflow reaches the expansion cards. On to the photos!

Image
Image
Image

Looking quite good. So I now have almost every item for the build. The main thing I'm now waiting for is the motherboard.
