
[UPDATED] Building a DIY NAS for Video, Photography, Filmmaking, and Editing: UnRaid Server Setup - Work in Progress :)

In this documentation, we're going to share everything we have learned from building a DIY NAS for filmmaking. This is ongoing and we learn new things frequently, but this guide is a summary of everything we have done so far. We encourage you to use it as you embark on the same journey.


And if you're here because your performance is SLOW, we have very relevant information for you too. The following guide helps you build the server, calls out what you need to look for, and explains why you need to understand how it all fits together.


Now ... you might say, "I'm just a retired weekend photographer who knows nothing about computers" or "I'm a dad, I know weekend baseball, but I have no clue what a server is" -- and to that, I say: you can do this. It's easier than it looks, and UnRaid is very forgiving; you can't break much going through this process.


Your Problem

Oftentimes on the forums, there are posts complaining that parity-check speeds are slow, or that transfer speeds are even slower. Many replies state that the end user should simply expect those speeds, and the poster accepts the performance. I'll be honest: at some point, I was there. However, it was an eBay conversation with the YouTuber The Art of Server that changed my view on that.


Your performance is a function of how your hardware is configured. If you don't have the right cards in the right slots, or your motherboard doesn't have the right speed ratings, or your disks are not set up for your needs, you're not going to fix your problem.


I realize that I am jumping ahead for the new users, but all that said, this is why it's important to understand how your server is built and how everything is connected. I call this the "Chain of Throughput." You can buy a faster processor and more RAM and change configuration file settings in UnRaid all day long, but if you fail to consider the limitations of your hardware -- you will never overcome them.


Let's start from the top and build this in a linear format. I will start by talking about building a server with consumer hardware in an enterprise chassis (a case decommissioned from a datacenter). If you already have your server built and are trying to figure out why it might be slow, we'll get there right after the hardware section; bear with us.


The Storage Conundrum

It's no mystery that storing video and photography files is a problem. The files are large and require more and more hard drives as storage needs grow, which means you often end up with multiple hard drives daisy-chained together as each one runs out of space.


There is a better solution. In a previous post, we talked about how to build an AFFORDABLE FILE STORAGE! UNRAID+Used SuperMicro Server: Filmmakers Video/Photo Storage NAS Solution. Since this post, we've advanced into the world of ZFS pools and increased our speeds quite a bit.


However, building the machine is only half the battle, so we're going to go into detail about how to create an environment where you can store your files, edit them from the server, and grow to upwards of 580TB if needed.


Key Elements of a Server that Will Allow Network Editing

In order to do this, you're going to need to follow along, watch some YouTube troubleshooting, and read the forums when you hit the random error that makes you want to pull your hair out. But start with the basics; here is what we're going to talk about:

  1. Server Hardware

  2. Hard Drives

  3. The Chain of Performance Throughput

  4. Connecting the Chain Together

  5. Software & UnRaid

  6. Networking & the Interwebs


In fact, we have set up this environment in two places and are successfully able to edit 4K video over the network using Adobe Premiere Pro (with scratch disks set to local NVMe/solid state drives). So embark on the journey; now is the time to consolidate your large video and photography files to a single source on a server and eliminate the need to have them spread across multiple drives.


What is UnRaid and Why the DIY NAS?

Let's be honest: not all of us are running enterprise-grade hardware with files in the cloud, gigabit backbones, and 99.999% uptime SLAs on our equipment. Those of us who are weekend warriors, prosumers, or hobbyist filmmakers need manageable space at an affordable price point.

  • UnRaid allows you to build a server from pretty much any computer on the planet that has a minimum of 2GB of RAM, a relatively recent CPU, and hard drives of any size or brand.

  • Unfortunately, those minimum specs will get you a file server that is not built for much more than holding files; in the case of video and photography, you certainly cannot edit 4K video on that.

  • DIY NAS solutions are meant for those of us who need something in the house that can store our content at an affordable price.


I've written about UnRaid before and use it today because it's easy to set up, easy to manage, and relatively cheap compared to the alternatives. The software doesn't need a fast processor or the latest and greatest (my current Intel Xeons were released in 2015); it runs on solid and relatively affordable hardware. So, let's get started.


For this series, we are going to build a SuperMicro UnRaid server.


1 | Server Hardware

The first question you need to ask yourself is: how much storage do I need? And then you might ask, why not just buy a QNAP or Synology NAS server? To me, these are both great questions and come with fairly straightforward answers. Building your own does not marry you to a fixed product with no ability to change it. You can replace the hardware as you see fit. It's an ATX case and fits both consumer and enterprise hardware.


Related to size, server chassis are measured in rack units ("U"). SuperMicro chassis vary, and most come from decommissioned datacenter servers. You can buy new, but those are just a little more expensive.

  • 825: 8-bay 2U chassis

  • 826: 12-bay 2U chassis

  • 836: 16-bay 3U chassis

  • 846: 24-bay 4U chassis (although many of these are listed at 36-bay)

  • 847: 36-bay 4U chassis


The 846 24-bay is the unicorn of the SuperMicro servers; they sell quickly when the decommissioners get them in stock. It's a 4U server, so it's tall and holds 24 drives. The 36-bay server (the 847) has 12 drives in the back, so the motherboard space is shallow.


I run an 836 as my primary server and an 846 as the backup for my data, kept offsite as secondary storage over a VPN. So why the 836 and 846?

  • The 836 is a 3U, so it allows me to put in full-length PCIe cards, and it's big enough for sufficient airflow to keep things cooler.

  • The 846 is a 4U, so it takes full-length cards too, and this one actually has retrofit kits for replacing the fans. It also allows us to expand the drives as the video storage grows.

  • And that 2U with the 12-bays is my lab and test server.

My Configuration: Production, Backup, and Lab Servers.

Why Not a Branded NAS Solution?

You could go out and buy a QNAP or Synology server and do just fine; however, there are limitations.

  • Branded solutions have a set number of drives and can be extremely costly.

  • They cannot scale with hardware and have limitations for what you can do with them.

  • They run on proprietary software platforms.

  • It's a plug-in-and-power-on solution: no changes, no upgrades. And that might be good for you.


Why Custom Chassis and UnRaid for me?

  • The platform is built on Linux and receives frequent enough updates.

  • You can use drives of any brand, any size, in any order - my backup server runs 20TB, 18TB, 16TB, and 14TB drives in no particular order, added as I built out the boxes.

  • Parity ... yes, parity. UnRaid offers the option of dedicated parity drives in its array. They must be the largest disks in the array and provide fault tolerance. If a drive fails, once it is replaced, the software will rebuild the data onto the replacement, and your data will be back to normal.

  • For my setup, I have 2 disk configurations. In the 846 (24-bay) I have (16) drives in a cold-storage format holding all the files that are static; it backs up the 836 (which is why I am running only RAIDZ1 on the 836). In the 836 I have (6) 16TB drives configured as ZFS RAIDZ1.


What do I Need to Build the Server?

Your case will depend on the number of drives that you need.

  • ARRAY Build: If you are editing photos or infrequent video, the UnRaid ARRAY should be fast enough. I operated this way for about 6 months with no issues and fairly consistent performance from Windows. I would strongly discourage it for Apple/Mac. It's not fast, but it's not that slow.

  • ZFS Build: For those editing video who need the performance ... you'll want to go with a ZFS pool. ZFS allows for significantly faster read performance; it essentially uses a group of drives together to speed things up.


The ARRAY is sturdy, tried and true. ZFS, while it's been around for probably 20 years, is a little trickier in my opinion. BUT, with ZFS, the more drives you have configured in what are called VDEVs, the faster it can read from the drives.


With ZFS, reading is faster because the files are spread across multiple drives, so reads use the speed of multiple drives at once. Imagine putting a small part of 1 file on 6 disks: when you call it, you get the aggregate speed of the 6 disks reading it back to you. On the ARRAY, a file goes on one drive, and your read is only as fast as that one drive.

Parts: Consumer (ARRAY) - My Backup Server vs. Enterprise (ZFS) - My Production Editing Server

Motherboard

  • Consumer: Buy a board with the most PCIe lanes on each slot; the ASUS Crosshair VIII Hero is frequently available on eBay for cheap. Other boards with high PCIe lane counts per slot are the ASRock X570/B550 Taichi, the Aorus Master X570, and the MSI MEG X570 (but remember, you only have 20-24 PCIe lanes on the processor if you go consumer grade, so make sure you calculate your capacity needs).

  • Consumer Motherboard Power Adapter: 8-pin 12V ATX EPS power extension cable, male to female, 8 inch / 20 cm (the cable on the SuperMicro power supply is not long enough).

  • Consumer Control Panel/Power Button Adapter: Supermicro 6-inch 16-pin front control split cable (CBL-0084L), which adapts the enterprise chassis to a consumer motherboard.

  • Enterprise: For us, we are running Intel CPUs, so we went with a SuperMicro X10DRI-T4+, but you can get away with a SuperMicro X9DRD-7LN4F. There are a variety of other motherboards out there that handle Xeon processors; we're just familiar with the X10DRI-T4+. If you really wanted to get fancy, you could get a SuperMicro or ASRock board with an AMD Epyc processor, but that's a little expensive for us. We'll probably get there. NOTE: I've heard stories where the AMD Epyc runs at 40 watts idle while the Intel Xeons idle at roughly 150 watts.

CPU

  • Consumer: AMD Ryzen 5 3600 Matisse 3.6GHz 6-core AM4 boxed processor with Wraith Stealth cooler, from Micro Center (if you want more power, look at the 5700X, which is only 65 watts too; if you need more than that, I get it - I am running 5900X and 5800X CPUs on our 2 backup servers).

  • Enterprise: For our build with the X10DRI-T4+, we're running Intel E5-2690 V4s, which consume A LOT of power. If you build with the X9DRD-7LN4F, you'll need compatible processors, which I believe are the E5-2600 V2 series.

Fans

  • Consumer (846 24-bay server): Supermicro SC846/CSE846 120mm fan array bracket (3D-printed, from Etsy) with Arctic PWM Max fans, 80mm or 120mm (500-5000RPM). NOTE: if you retrofit with a consumer motherboard, you can set the BIOS to dial the stock server fans down from roughly 8,000rpm to closer to 2,000rpm.

  • Enterprise: Arctic PWM Max 120mm fans in the front; in the back of the case, Arctic PWM Max 80mm fans.

RAM

  • Consumer: TeamGroup T-FORCE VULCAN Z 16GB (2 x 8GB) DDR4-3200 PC4-25600 CL16 dual-channel desktop memory kit (TLZGD416G3200H), from Micro Center. (If you are running VMs, you might want a faster processor and significantly more RAM.)

  • Enterprise: Requires Registered ECC memory, which typical consumer CPUs and motherboards do not support. For our build, we are using DDR4 PC4-19200 2400MHz Registered ECC SuperMicro or Micron chips. ZFS is a memory monster, so you'll want to stock up; for us, it's 256GB.

SAS Card

  • Both builds use the same card; see the full parts list below (Supermicro AOC-S2308L-L8i from The Art of Server).

GPU

  • Consumer: PNY NVS 310 (see the parts list below).

  • Enterprise: Graphics are built into the SuperMicro motherboard.

Power

  • Both: Built into the server; you will want to swap the stock units for the SQ version to reduce the noise (search "supermicro sq power supplies" on eBay).

Other parts that you might need:

  1. Dummy Monitor Adapter: DisplayPort Headless Ghost Display Emulator for PC, 4K DP dummy plug (fits headless 3840x2160), 3-pack.

  2. NVME: TEAMGROUP T-Force CARDEA Zero Z44Q 2TB, if you decide that you need a cache drive to speed up writes to the array.

  3. Keyboard: Find a cheap one.


Here is the simplified parts list for the servers that I am running (minus the lab and testing unit).

Parts: BACKUP SERVER (ARRAY) - Mirror of ALL Files vs. EDITING SERVER (ZFS) - Video Editing Server

Case

  • Backup: SuperMicro 846 24-Bay 4U Case

  • Editing: SuperMicro 836 16-Bay 3U Case

Motherboard

  • Backup: MSI B550 VC Pro

  • Editing: SuperMicro X10DRI-T4+

CPU

  • Backup: AMD Ryzen 9 5900X

  • Editing: Dual Intel Xeon E5-2690 V4

CPU Fan

  • Backup: DeepCool AK620, retrofit with Arctic 120mm PWM Max fans (spin up to 3300rpm)

  • Editing: Noctua NH-D9DX i4 3U (this fan is designed for 3U case height)

Case Fans

  • Backup: Front of case with the 120mm 846 retrofit kit (from Etsy) holding (3) Arctic 120mm PWM Max (up to 3300rpm); back with (2) Arctic 80mm PWM Max (up to 5000rpm)

  • Editing: Front of case self-retrofit with foam for (3) Arctic 120mm PWM Max (up to 3300rpm); back with (2) Arctic 80mm PWM Max (up to 5000rpm)

RAM

  • Backup: 128GB total, TEAMGROUP T-Create Classic 10L DDR4 64GB kits (2 x 32GB) 3200MHz (PC4-25600)

  • Editing: 512GB total, SuperMicro/Micron ECC Registered MTA72ASS8G72LZ-2G3A1 64GB DDR4 PC4-19200 (DDR4-2400) 288-pin LRDIMM, 4Rx4 quad rank, 1.2V

SAS Card

  • Both: Supermicro AOC-S2308L-L8i PCIe 3.0 6Gbps SAS, purchased from The Art of Server on eBay

Network Card

  • Backup: Intel X540-T2 10GbE dual-port converged network adapter

  • Editing: On-board NICs (Intel X540-T4)

GPU

  • Backup: PNY NVIDIA NVS 310 (PCIe)

  • Editing: Built into the motherboard/CPU

Power

  • Both: (2) Supermicro PWS-1K28P-SQ 1280W 1U 80 Plus Platinum switching power supplies


So I'm just a weekend filmmaker, what is a CPU, GPU, XYZ'er?

  • The case - where you put the stuff.

  • Motherboard - the foundation of the system, the board that everything plugs into. Find a board that has enough PCIe lanes per slot; this is important and will require you to read the manufacturer's specifications.

  • Adapter for Power - needed to adapt the enterprise chassis plugs to a consumer motherboard.

  • CPU - the processor that does the thinking.

  • RAM - the memory where the UnRaid software is loaded to do its thing.

  • Power - well, you know... but make sure that you buy the SQ version, they are virtually silent.

  • GPU - it's what allows you to see what's on the screen, it's the graphics card.

  • NVME - a solid state drive that reads and writes at nearly 10 times the speed of a magnetic hard drive (the big chunky metal ones).

  • Keyboard - the server won't boot unless a keyboard is plugged in (some of the time).


What Have You Learned?

There are a few lessons learned from building PCs. You could use the old computer sitting by your side table and skip this step, but if you're going to build from scratch with the cheapest option in every category -- here is the rationale.

Throughout this post, you will see conversions between Mbps or Gbps (megabits or gigabits per second) and MB/s or GB/s (megabytes or gigabytes per second). These are two different measurements, like Fahrenheit and Celsius for temperature. Hardware vendors don't keep to one consistent unit, and the two differ by a factor of 8.


[AND when this post becomes outdated because of hardware changes, the same principles follow.]


Buying a faster CPU or more RAM is really not going to help performance. Buy the cheapest 'good' stuff that is compatible. Trust me: I have an AMD Ryzen 9 5900X running 12 cores and 24 threads with 128GB of RAM in a SuperMicro chassis, and it performs no differently than the Ryzen 5 3600X with 32GB of RAM in the backup server (unless virtual machines are running; then I found you need the bigger processor).


Whether or not that makes sense to you, the point is: in my experience, there is no difference.

  • Right Case: I chose the SuperMicro chassis because they are easy to find on eBay and are usually pulled from enterprise datacenter environments. These boxes are built for 24/7 operation, with dual power supplies for fault tolerance, and the hard drive trays make it easy to add more drives.

  • Expandable Motherboard: The motherboard has 8 SATA connectors on board. That means you can connect 8 hard drives without having to buy an expansion card.

  • SAS Card/Expander: This is where it gets a little tricky, but you'll need a SAS card (and expander) that allows you to get the most throughput to the hard drives.

  • Enough RAM: UnRaid only needs a minimal amount of RAM to run. It loads the software into RAM at boot and doesn't tax the processor much.

  • AMD vs. Intel: If you buy an Intel motherboard and an Intel CPU with integrated graphics, you don't need a separate GPU (graphics card).

  • GPU and Slots on the Motherboard: The drawback of putting a GPU on the motherboard is that you lose a PCIe slot you could use to expand to more hard drives - though UnRaid can only accept 30 drives anyway (an expander's 16 plus the 8 on-board ports gets you to 24). If you have more than 24 hard drives, you need to move to a server chassis.

  • Power Consumption: If you're going to buy a consumer board, find a CPU with low power consumption. The Ryzen 5 3600X consumes 65 watts vs. the Ryzen 9 5900X at upwards of 150 watts. The 5800X and 5700X also run a bit lower on the watts side and are affordable upgrades. If you get Intel Xeons, forget about power; all bets are off. They are power hogs. Just know your electric bill is going to bump up a little bit.

  • Editing over the Network: Buy a 2.5Gbe or 10Gbe switch with CAT6a cables; it's worth it.


NOTE: If you use a consumer motherboard, you do not need to replace the stock fans; you can use the BIOS to decrease their speed. If you do retrofit the fans, you will sometimes need to cut the tabs off the fan housings.

Fan tabs that need to be cut off with a Dremel to retrofit PWM Arctic fans.

2 | Hard Drives

There are a few places we get our hard drives from, and I buy refurbished - partly because I like to live dangerously, but mainly because they are pretty much half the cost.


Many of the forums and Facebook groups suggest eBay, but I have yet to find a deal I trusted on eBay for hard drives. Don't rule it out, though. The consensus is that 14TB drives are the most bang for your buck at the time of writing; they are currently around $120 refurbished (or what's classified as "renewed").


Hard drive counts bring us to a discussion about throughput. Below, we'll talk about "the Chain of Performance Throughput."


  • ARRAY: Reads one drive at a time. With parity turned on to protect your data, I was seeing 75-150MB/s reads and 40MB/s writes on magnetic disks. That seems fast enough to edit video - or at least it was for me.

  • ZFS: Reads scale with the number of drives in your server and how they are configured into VDEVs. I am running 2 VDEVs of 3 disks each, using 16TB drives. This is where it gets complex: you can only expand by adding the same number of disks of the same size. If I expand, I have to add 3 more 16TB drives at a time. The recommended VDEV size is 6-10 disks, but I don't need that much space, and the more drives you add, the more power you consume. My reads are upwards of 200-400MB/s and writes are 100-200MB/s. (A quick capacity sketch follows below.)
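To make that expansion math concrete, here is a back-of-the-envelope capacity sketch in Python. It's a minimal sketch assuming the layout described above (RAIDZ1, where each VDEV gives up one disk's worth of space to parity); the function name is my own, not anything from UnRaid or ZFS.

```python
# Rough RAIDZ1 capacity math for the layout above (hypothetical helper).

def raidz1_usable_tb(vdevs: int, disks_per_vdev: int, disk_tb: int) -> int:
    # Each RAIDZ1 vdev loses one disk's worth of space to parity.
    return vdevs * (disks_per_vdev - 1) * disk_tb

print(raidz1_usable_tb(2, 3, 16))  # 64 TB usable out of 96 TB raw
# Expansion happens a whole matching vdev at a time: 3 more 16TB drives.
print(raidz1_usable_tb(3, 3, 16))  # 96 TB usable out of 144 TB raw
```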


3 | The Chain of Performance Throughput

In building a home lab to host my files for video and photography, I was concerned about performance.


On my chassis when it was built with an ARRAY, I was running 11 drives.

  • In the array, or Cold Storage: (3) 16TB drives - 1 parity drive that serves as the fault-tolerance aspect of the data protection, plus 2 data drives. Meaning if a drive in the "array" (the set of hard drives with the data) fails, the "parity" drive ensures that I don't lose any data. The 2 drives housing all of my data hold less-accessed files.

  • In the pool, or Active: (8) 4TB SATA SSD drives. Be careful to check whether SSDs are SLC, MLC, or TLC; that has implications for speed.


Today, it's changed to ZFS.

  • The array has (2) 128GB SSDs with nothing on them; UnRaid requires at least something in the ARRAY.

  • The ZFS pool is built with (6) 16TB drives in 2 groups of 3 - that's (3) 16TB drives in each VDEV. In UnRaid, they are labeled as "Groups."


Drive performance is dependent on the throughput. This is where the "Chain of Throughput" comes into consideration.


Getting from your computer to your data on the server, there are probably no fewer than 20 transaction points: the connection to the network, the CPU calling information from the PCIe SAS card and the hard drives, network switches, and the like. It's mind-numbing. I have spent many hours trying to figure out the bottlenecks in accessing my information and have pieced together an explanation of how it all connects - pay close attention. This is the difference between 30MB/s and 200MB/s. And if that's foreign, think of it this way: it's the difference between a go-kart and an F1 car.


How is Performance Rated?

You will see two types of ratings: MB/s or GB/s, and Gbps or Mbps. They are not the same. It's like when you get a hard drive rated at 16TB on the box, but your operating system measures it in TiB and reports less; the drive is technically 16TB, it's just a difference in units.

  • Gbps/Mbps is Gigabits or Megabits per second

  • GB/s and MB/s is Gigabytes or Megabytes per second


To convert from Gbps to GB/s, just multiply by 1000 to get Mbps, then divide by 8. It will get you close to the right number. A 6Gbps connection is 6000 megabits per second divided by 8 = 750MB/s. And 12Gbps is 12000 megabits per second divided by 8 = 1500 megabytes per second, or 1.5GB/s.
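If you'd rather let a script do the arithmetic, here is a minimal sketch of the same conversions (line rates only; real-world numbers come in a bit lower due to protocol overhead, and the function name is my own):

```python
# Gbps (gigaBITS/s) to MB/s (megaBYTES/s): multiply by 1000, divide by 8.
def gbps_to_mb_s(gbps: float) -> float:
    return gbps * 1000 / 8

for label, gbps in [("SATA III", 6.0), ("SAS3", 12.0), ("1Gbps LAN", 1.0), ("10Gbps LAN", 10.0)]:
    print(f"{label}: {gbps_to_mb_s(gbps):.0f} MB/s")  # 750, 1500, 125, 1250

# Same flavor of confusion as TB vs TiB on hard drive boxes (decimal vs binary):
print(f'A "16TB" drive is {16e12 / 2**40:.2f} TiB')  # ~14.55 TiB
```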


Yes, it's complicated. I am sure that someone, somewhere has a logical explanation for this.


How Does This Connect Together?


Here are the bottlenecks in building this server and how we are going to work around them:


CPU Lanes

Every central processing unit, or processor, has what are called PCIe lanes. Think of it as the highway between the motherboard and the processor: it determines how much information can pass. The CPU dedicates lanes to the slots on the motherboard where you plug in cards, to solid state drives in what are called M.2 slots, to the network, to your graphics processor, and everything in between.

  • Most consumer processors have 20 to 24-ish PCIe lanes; the AMD Ryzen 5 3600 above has 24 lanes. That means we have to be very strategic about WHERE we place the cards on the motherboard.

  • The Intel Xeon E5-2690 V4s have 40 lanes each - that's 80 PCIe lanes on a dual-socket board. On the Epyc chips, you can get 128 PCIe lanes.


PCIe Lanes

Your motherboard has long slots on it called PCIe slots - the ones that face the back of the computer, not the RAM slots. Each of these slots is rated for speed. Some of us (myself included) just put cards in the motherboard and ignore the slot ratings, but they're important to consider.


NOT ALL SLOTS ARE CREATED EQUAL:

  • On your motherboard, each slot is rated X1, X4, X8, or X16, which is the number of lane connections to the board (it's also how long the slot is). But there is a catch: some X16 slots are not electrically rated at X16; some are actually rated lower, maybe even X1.

  • At PCIe 3.0 ... a rating of X16 means the slot will move 16GB/s, X8 will move 8GB/s, X4 will move 4GB/s, and X1 will move 1GB/s. This is important. If you have 10 drives connected to a card sitting in an X16 slot that is only rated at 1GB/s, then 1GB/s divided by 10 drives is roughly 100MB/s. No drive will operate any faster than 100MB/s, well below what the drives can deliver. You are creating a bottleneck right at the card slot before you even get started. In comparison, if you put the PCIe card in an X16 slot with a true 16 lanes, each of those 10 drives gets 1.6GB/s of headroom. (A quick calculator follows this list.)

  • So where do I find the PCIe lane ratings for my motherboard? This is a good question. Go to the manufacturer's website and look at the technical specifications for each PCI_E/PCIe slot. Most of the time, you will see that 1 of the X16 slots runs at 16 lanes while the other X16 slots are rated at 4 lanes or 1 lane.

  • You have 3 cards that are going to plug into the server: the SAS card connecting to the expander on the SuperMicro, which must be placed in the X16-by-16-lanes slot; the 10Gbe network card (that's 10 gigabits), which plugs into the X16-by-4 slot; and the GPU or graphics card, which goes into the X16-by-1-lane slot. Since we're not running Plex or anything that needs media transcoding, we put the GPU in the slot with the fewest lanes.
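Here is a quick sketch of the per-drive math from the list above: divide the slot's usable bandwidth by the number of drives hanging off the card. It assumes PCIe 3.0 at roughly 1GB/s per lane and an even split across drives; the function name is hypothetical.

```python
# Per-drive ceiling when the SAS card's PCIe slot is the bottleneck.
def per_drive_mb_s(slot_lanes: int, drives: int, gb_s_per_lane: float = 1.0) -> float:
    return slot_lanes * gb_s_per_lane * 1000 / drives  # MB/s per drive

print(per_drive_mb_s(1, 10))   # 100.0  - an "X16" slot wired for only 1 lane
print(per_drive_mb_s(16, 10))  # 1600.0 - a true X16 slot; no bottleneck here
```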



Example of Motherboard PCI Lanes

You will see that in this specific example, there are specifications listed on the manufacturer's website, but they are complex.


[Screenshot: the slot specifications straight off the manufacturer's website.]


If you go over to NewEgg, you'll see a better view of the actual slots. In this case, you can see that the board, when using a 5000- or 3000-series CPU, will have:

  • (2) X16 PCIe 4.0 slots, (1) X16 PCIe 3.0 slot, and (1) X1 PCIe 4.0 slot

  • If this is just a storage server, you should be able to fit a SAS card, Network Card, and Video Card with the right speeds.


Just make sure the motherboard has enough capacity for the cards you plan to run.


SAS Card

When you are building a server, you want to make sure that you can grow. This is where the SAS card comes into the equation: it lets you expand your hard drive capacity through 1 or 2 cables. There is a catch here too.

  • PCIe has 2 primary versions in use in the home-lab market: PCIe 2.0 and PCIe 3.0. That's how it is today - you will mainly see 2.0 and 3.0 for now - but that's not to say it won't change.

  • For PCIe 2.0, you get 500MB/s per lane, and PCIe 3.0 handles roughly 1GB/s per lane. So a PCIe 3.0 card in a true X16 slot can flow about 16GB/s to the CPU.

  • On the actual SAS card, there are multiple options. In our case we are using the Supermicro AOC-S2308L-L8i, which has 2 connectors with 4 ports each, every port rated at 6Gbps (converting Gbps to GB/s is the number divided by 8, so 750MB/s per port). So 2 connectors times 4 ports is 8, and 8 multiplied by 750MB/s is 6GB/s of aggregate throughput (a short sketch follows below).
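As a sketch of that card math, assuming an "-8i" HBA (2 connectors with 4 ports each, where a port's Gbps rating divides by 8 to get MB/s); the helper is hypothetical, not a real library call:

```python
# Aggregate HBA throughput: connectors x ports-per-connector x per-port rate.
def hba_gb_s(connectors: int, ports_per_connector: int, gbps_per_port: float) -> float:
    return connectors * ports_per_connector * (gbps_per_port / 8)

print(hba_gb_s(2, 4, 6))   # 6.0 GB/s for a SAS2 (6Gbps) card like ours
print(hba_gb_s(2, 4, 12))  # 12.0 GB/s for a SAS3 (12Gbps) card
```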


SAS cards come in 6Gbps and 12Gbps flavors and must be flashed to IT mode. Be wary of fake SAS cards on eBay; The Art of Server has a great video on spotting fakes. In fact, just buy from The Art of Server on eBay. He's fast to respond to questions and posts plenty of videos to educate you on these topics.


So the motherboard can handle 16GB/s through the PCIe slot to the CPU, and the card that plugs into the hard drives can handle about 6GB/s. If you plug the SAS card into a slot that is X16 physically but rated at 1 lane, the SAS card can deliver its full throughput but you'll be unable to use it, because the lane will only pass 1GB/s. This is why it's important to know where to plug in the SAS card.


SAS Expander/Backplane

Wait, there's another SAS something-or-other? Yes. On the SuperMicro server, there is an expander built into the backplane where the hard drives slide in and out; it's what lets you slot drives in and out easily. If you were not using a server chassis, you would have to get a SAS expander card and wire the drives up with SATA cables. We are not doing that. There are plenty of YouTube videos that talk about SAS expanders.

Sample Backplanes.

For SuperMicro, there are a number of variations of SAS Expanders:

  • BPN-SAS2-836-EL1 - the one we're using; this is 6Gbps on the backplane


However, you need to be careful which of the following you select or purchase with the chassis:

  • BPN-SAS-826TQ - a pass-through SATA backplane; each hard drive gets its own SATA connector

  • BPN-SAS3-826-EL1 - this is a 12Gbps ... if you can get one of these, even better!

  • BPN-SAS3-826-EL2 - 12Gbps with 2 expander chips built in ... you're really not going to get the benefit of EL2 unless you are running SAS hard drives

  • BPN-SAS-836A - no, do not get this one, or plan to replace the backplane. It requires a SAS connector for each set of 4 hard drives; a 24-bay backplane would require 6 SAS connections


IMPORTANT: Connect 2 cables to the SAS backplane; The Art of Server discusses this a little in his videos, and it allows for more throughput. I have 2 connected on the 846 backplane (there are 3 connectors available, but 1 is for expanding to a JBOD).

Backplane for 826.

The main things to look at with the SAS backplanes are SAS vs. SAS2 and EL1 vs. EL2. We're building a server with SAS2-EL1. The good news is that you can plug 2 cables into the EL1 and get better throughput. I have read that the only benefit of SAS3 is if you are running SAS drives fast enough to need that throughput, which I would assume includes SSDs.


Hard Drive Speeds

This is important with UnRaid: there are very specific speed differences between platter HDDs and solid state SSDs. I can edit video from the server when it's configured correctly over a 10Gbe network, and I have tweaked the HDD vs. SSD mix to move more toward a pool for striping and faster speeds.

  • HDD (platter or magnetic) hard drives are rated around 250MB/s most of the time - go to a server parts website and get Seagate Exos or Western Digital drives - good for cold storage.

  • SSDs (solid state drives) have no moving parts and are mostly rated at 550MB/s - about twice the speed and several times the cost per terabyte at the time of writing. You can get a 4TB SSD for about $200 or a 16TB HDD for about $160. That's a big difference, especially when you are working with large amounts of data and limited slots on the server chassis.


If you decide to go straight SSD, there are 24-bay SuperMicro chassis configurations that will take SSDs. Just note that UnRaid does not like SSDs in the array at the moment because of an issue with TRIM and how the data is organized.

  • SSDs should only be used in pools according to my research.

  • And when you build a software RAID pool, be careful choosing between BTRFS and RAIDZ (ZFS), because when it comes time to expand, most of the time the data needs to be pulled off and recopied to get the full benefit of the software RAID configuration.

  • ZFS needs ECC RAM, so if you go the ZFS route, you need to make sure that you have the Error Correcting RAM.


So let's recap:

  • CPU - look at how many lanes are available on your processor. Consumer CPUs are around 20-24 lanes; server-grade processors are around 40 and above. If you were to buy an AMD Epyc or Intel Xeon used on eBay for $200, you'd get ~40-100 lanes if it's a dual-CPU board, but you'd also need to buy a different motherboard.

  • Motherboard PCIe Lanes - not all motherboards are created equal. Our build has an MSI B550-VC Pro. If you wanted to spend a bit more on a motherboard, you could look at an ASRock X570 Taichi, an ASUS Pro WS X570-ACE, or the boards listed above; each is about $300 but offers 3 PCIe card slots that run X16 at 16 lanes. The SuperMicro boards have quite a bit of headroom on the server side and should not be an issue. Just know that you are limited by the number of lanes that go to the CPU sitting on the motherboard.

  • SAS Card PCIe Version - whether the card is 2.0 or 3.0 dictates the throughput from the card through the motherboard to the CPU for processing.

  • SAS Card Ports - the number of ports offered, each rated at 6Gbps or 12Gbps. Go to The Art of Server and make sure you buy a card that is flashed to IT mode and rated at least 6Gbps, especially for a 12-bay server. (A bottleneck sketch follows this list.)
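Pulling the recap together, the whole "Chain of Throughput" idea boils down to taking the minimum over the links. The numbers below are hypothetical, loosely based on this build, just to show the method:

```python
# The slowest link in the chain sets your real-world ceiling (all in MB/s).
chain_mb_s = {
    "10Gbe network":        1250,
    "PCIe 3.0 x8 slot":     8000,
    "SAS card (8 x 6Gbps)": 6000,
    "6 HDDs in aggregate":  6 * 250,
}

bottleneck = min(chain_mb_s, key=chain_mb_s.get)
print(f"Ceiling: {chain_mb_s[bottleneck]} MB/s, set by the {bottleneck}")
# Ceiling: 1250 MB/s, set by the 10Gbe network
```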


IMPORTANT: On UnRaid, there is a community app called DiskSpeed. Run its controller benchmark to make sure you're getting full throughput.


4 | Connecting the Chain Together

The tricky part with this equation is ensuring that each of your PCIe cards is in the right slot so you are not creating a bottleneck. For example, if you stick a 10Gbe network card that needs 1.25GB/s into a PCIe 2.0 X1 slot, you are only giving that card 500MB/s of PCIe lane capacity.


This is where you have to read the technical specifications for your motherboard and match them to the cards you are using. If you have a PCIe 3.0 SAS card rated at 12Gbps with 2 connectors and 8 ports, you need to make sure it's plugged into a PCIe 3.0 X16 slot that has the full X16 to the CPU.


Fair warning: some X16 slots are rated at X1 or X4. Some PCIe slots on the motherboard will not even work if you have M.2 drives plugged in, or they share lanes with another X16 slot. Some PCIe slots run through an onboard chipset rather than the CPU. And on and on. Read the technical specifications for the motherboard, and sometimes go to sites like NewEgg to really understand what the manufacturer confusingly publishes on their own website.


5 | Software - UnRaid, a Linux Platform

UnRaid is fantastic. Load it on a USB stick, plug it in, and boot the server. There are a few call-outs for UnRaid.

  • When you boot up with the hard drives in place, you will be able to build what's called an "Array." The software uses FUSE to link the drives together and present them as 1 single drive.

  • You have the option to create "Pools," which can be SSDs and are used to write files to the server faster. You can actually use "Pools" for both reads and writes.


There are things that you need to consider when building:

  • You are limited to 30 drives in the Array. Choose the sizes wisely.

  • Use "Pools" to your advantage for speed - read and write for editing because of stripping or writing a portion of the data on each drive and then using speed from multiple drives during read and write to increase performance.


NOTE: If you get an error while the server boots, it's probably related to UEFI and boot order. The boot order tells the machine which device to look at for the operating system. When your computer turns on, the BIOS prompts you to hit a key (usually DEL or F2); this is where you would edit it.


How Do I Connect to the Server? Google this.

  • SMB for Windows and Mac.

  • For Mac users, there are some more tweaks that need to be made on the server. macOS does not like SMB out of the box, and UnRaid is tricky with Macs.


6 | Networking

None of this works unless you have the right backbone. Most networking today is rated at 1Gbps; remember the equations: 1Gbps is 1000 megabits per second, divided by 8 = 125MB/s. Not enough to edit from the server. If you're just storing data, it's probably fine. Just know that in your workflow, data will transfer at a maximum of 125MB/s and most likely closer to 75MB/s - that's just personal observation from early on, before I moved to 10Gbe.


If you want the most throughput, you need to move to a setup that allows for 10Gbe end to end: the server network card, your computer, and the networking equipment all need to be 10Gbe and connected together. (10Gbe is just shorthand for 10Gbps.) Again, 10Gbps is 10,000 megabits per second, divided by 8 = 1.25GB/s. Unless you're using SSDs or NVMe drives, you won't actually hit 1.25GB/s, but you will no longer be bottlenecked below what the HDDs can deliver.


Alternatively, you could go with 2.5Gbe, which would save you money but won't let you maximize SSDs or NVMe drives if you choose to use them. 2.5Gbe is 2,500 megabits per second, divided by 8 = roughly 312MB/s. That's enough for HDDs, which are typically rated at 250MB/s.
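Here is a tiny sketch of that tier decision, using the line-rate numbers above and the typical drive ratings from earlier in this post (all figures are the rough line rates, not real-world throughput):

```python
# Which network tier stops being the bottleneck for a given drive type?
tiers = {"1Gbe": 125.0, "2.5Gbe": 312.5, "10Gbe": 1250.0}  # MB/s line rates
drives = {"HDD": 250.0, "SATA SSD": 550.0}                 # typical MB/s

for name, speed in drives.items():
    enough = next((t for t, cap in tiers.items() if cap >= speed), "none")
    print(f"{name} ({speed:.0f} MB/s): needs {enough} or better")
# HDD (250 MB/s): needs 2.5Gbe or better
# SATA SSD (550 MB/s): needs 10Gbe or better
```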


So what do you need? This is what I am currently running:

  • Network Cards: 10Gbe network cards on both the server and the workstation I edit from: Intel X540-T2. Get them on eBay for about $50-75 each.

  • Network Switch: There are 5-port or 8-port switches available on Amazon for $200-500. TP-Link is my go-to. A few YouTube videos call out the differences between the other brands, which appear to be just white-labeled versions of each other.

  • Cabling: If you choose to run fiber, you are my hero. I run CAT8. The RJ-45 connections on CAT8 plug straight into the network switches and don't require SFP+ transceivers. If you want to get crazy, some network switches will run 40Gbps over fiber.


We'll keep editing and adding as we learn, but this is where the build stands today.
