Building a Homelab

This thread was originally posted in April 2018 on sysnative.com/blogs by @Tekno Venus. It has been moved over to the main forum as part of a software migration.

What is a Homelab and why build one?
Good question! A homelab is a general term for a sandbox environment that can be used for experimentation and learning, where it doesn’t matter if it all breaks and goes down. Most people build their homelabs from used enterprise equipment, as it can be bought relatively cheaply and is still reasonably powerful. Different people use their labs for different things: for example, working towards a certification (e.g. CCNA, MCSA) or testing products before deploying them in a production environment. Homelabs can vary in size from a single machine to a whole rack of servers, storage arrays and network switches.

Personally, as I do mostly software development work, I am looking for an environment where I can create isolated programming environments, servers and databases, as well as host some services at home.

Hardware Hunting
As a university student, I don’t have a large amount of spare money to spend on hardware and power bills for this lab. I am also planning on keeping the lab in my bedroom, so I need something I can sleep next to. The main host for my homelab therefore needed to be quiet, low-power and relatively cheap. A machine that wasn’t huge would also be an advantage, as I might be moving it to and from university occasionally. I set myself a £350 budget for the host, with £50 set aside for any spare parts or accessories I might need.

Dell R710
The common homelab recommendation of the Dell R710 was quickly ruled out due to the noise. Whilst it is relatively quiet for a 2U rack server (especially compared to the DDR2-era PowerEdge 1950 machines, which sound like a jet engine and eat power like it’s going out of fashion), it’s still not exactly silent. The Westmere/Nehalem-era CPUs are also getting a little old for my tastes.

The R210 II is another common recommendation, and this seemed like a more promising choice. The R210 II (not the original R210) is known for being pretty quiet, and since it uses an E3 v2 Ivy Bridge Xeon it has a low idle power consumption of around 20-30W, compared to the ~100W of the R710. It is also a half-depth 1U chassis, so it is much more compact than the R710. However, from reading discussions online, the R210 II isn’t quite as quiet as I wanted. It’s almost silent at idle, but under load it can start to get loud, especially if the ambient temperature isn’t below 21°C. It is still a 1U server, after all. The R220 uses the same chassis as the R210 II, so it is unlikely to be any quieter either.

HP’s offerings include the DL360 and DL380 G7, at 1U and 2U respectively. The G6 and G7 HP servers use the same socket CPUs, but the G7 is known for being slightly quieter and more energy efficient. HP servers do have a reputation for being fussy about what hardware is installed in them, and it is well documented that they will spin the fans up to full speed if a non-HP expansion card is installed.

Since I couldn’t find a rack server that fit my needs, I moved on to looking at tower servers. Dell and HP towers of the same era (e.g. T110 II, T310) are much harder to come by at a reasonable price here in the UK, and actually take up as much space as the R210 II, if not more. The Intel NUC was a good contender though: it’s very low-powered (~10-15W), small and very quiet. 4th/5th generation NUCs can be found around my £350 price point, and I was very close to buying one. However, those generations only have dual-core CPUs and max out at 16GB of RAM. The RAM limit was probably fine for my requirements for now, but the dual-core CPU felt very limiting.

Eventually I went down the desktop workstation route and found two models that seemed promising: the Lenovo C30 and the Dell T1700. Both are SFF workstation machines around my price point, but they are quite different internally. The Lenovo C30 had two Xeon E5-2609 CPUs, which could be cheaply upgraded to a pair of E5-2640s within my budget. That would give 12C/24T, more than enough for my needs. These are Sandy Bridge-E CPUs on Socket 2011 and common homelab choices (especially the eight-core E5-2670 variants). The performance comes at a cost though: a dual E5 system idles at around 150-200W, which quickly adds up on the power bill.

On the other hand, the T1700 SFF from Dell features a single E3-1240 v3, a Haswell part with 4C/8T. This is one generation newer than the CPUs in the popular R210 II mentioned earlier and is great in terms of power consumption. Looking at the Passmark scores (not a particularly accurate means of comparison, but it does provide a quantitative value), a single E5-2650 is only just ahead of the E3-1240 v3: PassMark - CPU Comparison Intel Xeon E3-1240 v3 vs Intel Xeon E5-2650.
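
To put the power figures in context, here’s a rough idle running-cost comparison. This is a back-of-the-envelope sketch: the ~14p/kWh tariff and the T1700’s ~35W idle draw are my own assumptions, while the other wattages are the rough figures quoted above.

```python
# Back-of-the-envelope 24/7 idle running costs. The 14p/kWh tariff and
# the T1700's ~35W idle figure are assumptions; the rest are the rough
# estimates quoted in the text above.
def annual_cost_gbp(idle_watts, pence_per_kwh=14):
    kwh_per_year = idle_watts * 24 * 365 / 1000
    return kwh_per_year * pence_per_kwh / 100

for name, watts in [("Dual E5 C30", 175), ("T1700 E3-1240 v3", 35),
                    ("Intel NUC", 12)]:
    print(f"{name}: ~{watts}W idle = ~£{annual_cost_gbp(watts):.0f}/year")
```

At those rates the dual E5 C30 would cost roughly £170 a year more than the T1700 just sitting idle, which would wipe out any up-front saving fairly quickly.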

It all got a bit overwhelming, so I found 4 eBay listings and stuck them all in Excel:

[Spreadsheet comparing the four eBay listings]

After a lot of deliberation, I settled on the T1700. Whilst it wasn’t the best in any single area, it seemed like the best compromise between power, size and noise. Internally it is very similar to the R210 II, but in an SFF tower format. I picked it up for £332 from PCBitz with the following specification:

  • Intel Xeon E3-1240 v3 @ 3.4GHz
  • 16GB Non-ECC DDR3 UDIMMs (4x4GB)
  • NVIDIA Quadro K600
  • 2TB Hitachi Ultrastar HDD
  • 255W PSU
Dell Precision T1700 SFF
Dell Precision T1700 SFF Internals



Whilst I would ideally have preferred 2x8GB sticks of RAM (and preferably ECC), this specification should be enough for now, with scope to upgrade in the future. Unfortunately, this machine takes ECC UDIMMs rather than the ECC RDIMMs commonly found in servers, and unbuffered ECC memory is much more expensive.

The only real limiting factor of this SFF machine is storage: there is only one 3.5″ HDD bay and one ODD bay. The 2TB drive currently in the machine passes all SMART health checks and has no bad sectors, and should be fairly reliable as it’s an Ultrastar enterprise drive, but it’s really loud! Even after adjusting the AAM (Automatic Acoustic Management) settings, it’s still far louder than the 2TB Seagate Barracuda in my main PC. Performance was clearly prioritised over noise.
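
Checking that sort of thing is easy to script with smartmontools. Below is a minimal sketch of the health check I mean, assuming smartctl is installed and the drive sits at /dev/sda (the device path and the attribute parsing are illustrative):

```python
# Minimal SMART health check using smartmontools. Assumes smartctl is
# installed and the drive is /dev/sda; adjust the device path as needed.
import subprocess

def smart_health_passed(device="/dev/sda"):
    """True if the drive reports PASSED for its overall health assessment."""
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

def reallocated_sector_count(device="/dev/sda"):
    """Raw value of Reallocated_Sector_Ct, a good bad-sector indicator."""
    result = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None

print("Overall health passed:", smart_health_passed())
print("Reallocated sectors:", reallocated_sector_count())
```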

Software
The purpose of this machine is to run virtual machines so I can experiment with various OSes and move some of my programming environments off my main PC. I’m also hoping to set up a full Windows AD domain to get some more experience in that area. There are three main hypervisors I am considering.

  • Proxmox. Proxmox is a KVM hypervisor built on Debian. It supports LXC containers and full VMs and is completely free and open-source
  • ESXi. Probably the most popular choice in the enterprise, ESXi is a free(mium) hypervisor from VMware
  • Hyper-V. Another popular enterprise option for Microsoft environments
I am currently planning on using Proxmox for a variety of reasons.

  1. Free and Open-Source. Proxmox uses the KVM (Kernel-based Virtual Machine) hypervisor on Debian, and all of its features are completely free. ESXi is free in itself but does have some limitations. It no longer has any RAM limits and has fairly generous CPU limits (max 2 physical CPUs and 8 vCPUs per VM). However, it doesn’t support vCenter management or backups (among other things) without paying for a full licence.
  2. Web UI. I prefer to manage things through a web UI rather than having to download another program to manage my VMs. Hyper-V doesn’t support this, and ESXi’s web client is only just starting to mature. Proxmox has a feature-rich web UI that’s pretty decent.
  3. Containers. Whilst I am planning on running a few Windows Server VMs, a lot of my VMs are going to be Linux. To save resources and make my hardware go much further, Proxmox supports LXC containers, which share the host kernel and eliminate the need to virtualise the whole OS (see the sketch below).
The disadvantage of Proxmox is that it’s not well known in the enterprise. If I get a job as a sysadmin or similar, the chances are I will be working with ESXi or Hyper-V. Despite that, I am planning on starting with Proxmox and seeing how I get on. I can always play with ESXi or Hyper-V in the future.
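
As a taste of point 3, containers on Proxmox can be driven entirely from the host shell with the pct tool, which makes spinning up disposable Linux environments scriptable. Here’s a minimal sketch wrapping it from Python; the VMID, template and storage names are illustrative and depend on what a given install actually has available:

```python
# Sketch: create and start an LXC container via Proxmox's pct CLI.
# The VMID, template, and "local-lvm" storage name are placeholders.
import subprocess

def create_container(vmid, template, hostname):
    subprocess.run([
        "pct", "create", str(vmid), template,
        "--hostname", hostname,
        "--cores", "2",
        "--memory", "1024",                          # MB of RAM
        "--rootfs", "local-lvm:8",                   # 8GB root disk
        "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",  # bridged NIC with DHCP
    ], check=True)
    subprocess.run(["pct", "start", str(vmid)], check=True)

create_container(101, "local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz",
                 "dev-env")
```

Full VMs have an equivalent CLI (qm), and the same operations are exposed over Proxmox’s REST API, so everything stays scriptable later on.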

Next Steps
Time to get started setting everything up. Here are the main tasks I need to get done before I can start setting up my VMs.

  1. Storage. The single 2TB HDD is loud and too slow to run lots of VMs at once. The plan is to get a 3.5″ to 2x 2.5″ HDD converter and replace the existing drive with a 250GB Samsung 840 EVO and a 750GB 2.5″ HDD, both from my spare parts box. I can then put the 2TB drive in a USB 3 enclosure and use it for backups (see the sketch after this list).
  2. Cooling. Currently the CPU hits 95°C under full load. This is less than ideal, and the fans do make a lot of noise at full speed. Dell’s proprietary 5-pin fan header makes replacing the intake fan (a Foxconn PVA080F12H 80mm fan) a bit difficult. I do plan on replacing the old stock TIM with some better-performing paste to hopefully lower the temperatures by a few degrees. Fortunately the machine is in good condition and doesn’t require any other cleaning or dust removal.
  3. Networking. Depending on where I put the machine, I will need to decide how I want to connect it to the network.
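
For the backup half of step 1, Proxmox ships with vzdump, so once the 2TB drive is in its enclosure the whole job is a short script. A rough sketch, assuming the drive mounts at /mnt/usb-backup and using placeholder guest IDs:

```python
# Sketch of a backup job: vzdump each guest to the old 2TB drive in its
# USB 3 enclosure. The mount point and guest IDs are placeholders.
import subprocess

BACKUP_DIR = "/mnt/usb-backup"  # assumed mount point of the USB drive
GUEST_IDS = [100, 101, 102]     # illustrative VM/container IDs

for vmid in GUEST_IDS:
    subprocess.run([
        "vzdump", str(vmid),
        "--dumpdir", BACKUP_DIR,
        "--mode", "snapshot",   # online backup using a storage snapshot
        "--compress", "lzo",
    ], check=True)
```
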
Resources
Some recommended resources and sites I’ve found so far:

  • The /r/homelab subreddit – a community of other homelabbers. Be sure to check out their wiki
  • Lab Gopher – a tool that searches eBay for various servers and ranks them for you
  • Bargain Hardware UK – a good place to look instead of eBay, selling various used servers
  • PCBitz – where I ended up buying my machine from
  • Intel ARK – the best place to find out the specs of every CPU Intel has made
 
