Building an Enterprise-Level Home Lab (Without the Enterprise Rack Gear)

Updated: Dec 1, 2020

(In Progress Post as of 11/23/2020)


Over the decades, enthusiastic and motivated IT professionals have gone to great lengths to recreate at home the same features they're surrounded by at work. Until recently, though, that meant you'd either have to:

Search Craigslist or other secondhand sources to find old, full-size rack gear for cheap

-This does get you true enterprise functionality, but you're doing it on aged gear with WAY more capacity than a home network needs (e.g. multi-socket servers, 48+ port PoE switches, etc.). With that age comes out-of-date software and firmware that's either nearly impossible to find updates for or is completely unsupported by (and incompatible with) the rest of the industry by now. Then, worst of all, there are the size, space, power, and heat issues: rack gear is heavy, power hungry, hot running, and will make all but the most sound-proofed homes sound like there's a 747 jumbo jet roaring to life 24/7.

OR

Jury-rig together the systems you already have at home, like repurposing old laptops or desktops as virtual hosts, running nested hypervisors *within* hypervisors for clustering, and otherwise achieving the concept of 'enterprise level' without the 'enterprise deployment' steps.

-This is fun, but the moment you want to try some cool new stuff and your desktop's out of RAM, or you want to game on your main rig and have to shut down your VMs to free up CPU and memory (or just shut the PC down entirely), you can say goodbye to that whole virtual enterprise environment. You'll also run into compatibility issues with some software and features when your hardware isn't up to snuff on enterprise functionality (things like TPMs, Secure Boot options, out-of-band management, chipset support, and in really niche but relevant cases ECC memory or instruction/feature set support that your consumer CPU might not have).


But now, in 2020, there's finally a better way! We can get the best of all worlds, and I'm being serious: this isn't some cop-out 'we can spin stuff up in the cloud!' approach. This is all physical, affordable, new equipment available for use right now.


HARDWARE:


Dell T140 Tower Server:

CPU: Xeon 6-core/12-thread E-2246G

RAM: 48GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB m.2 SATA (over Dell BOSS card)

(vSAN storage tier): Crucial 500GB m.2 NVMe

Reasoning: Traditionally you'd put the NVMe drive in the cache tier, but on a home network I won't see the increased throughput even over 2.5GbE connections (2.5GbE tops out around ~312MB/s, which even the SATA drives can saturate), so I utilized the extra capacity of the 500GB drives in the storage tier instead (a quick back-of-the-envelope check on that follows the hardware list below).

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: On-board motherboard video output, driven by the E-2246G's integrated graphics


HPE ML30 Gen10 Tower Server:

CPU: Xeon 6-core/12-thread E-2246G

RAM: 48GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB m.2 SATA

(vSAN storage tier): Crucial 500GB m.2 NVMe

Reasoning: Traditionally you'd put the NVMe drive in the cache tier, but on a home network I won't see the increased throughput even over 2.5GbE connections, so I utilized the extra capacity of the 500GB drives in the storage tier instead.

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: On-board motherboard video output, driven by the E-2246G's integrated graphics


Asus Custom C246-based Tower Server (built from spare/unused hardware in the other 2 servers):

CPU: Xeon 4-core/4-thread E-2224

RAM: 32GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB m.2 SATA

(vSAN storage tier): Crucial 500GB m.2 NVMe

Reasoning: Traditionally you'd put the NVMe drive in the cache tier, but on a home network I won't see the increased throughput even over 2.5GbE connections, so I utilized the extra capacity of the 500GB drives in the storage tier instead.

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: Cheap old spare ATI PCIe GPU with VGA output (the E-2224 has no integrated graphics, and the motherboard has video outputs but no onboard video chip to drive them)
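
To put a number on the cache-tier reasoning above: 2.5GbE works out to roughly 312MB/s of raw line rate, which is below what even the SATA cache SSDs can sustain, so an NVMe cache tier would sit behind the network bottleneck anyway. Here's a minimal back-of-the-envelope sketch in Python (the drive throughput figures are rough assumptions, not benchmarks of these exact models):

    # Rough sanity check: is the network or the SSD the bottleneck for vSAN traffic?
    GBE_2_5_MBPS = 2.5e9 / 8 / 1e6   # 2.5GbE line rate in MB/s (~312 MB/s, ignoring protocol overhead)
    SATA_SSD_MBPS = 500              # assumed sequential throughput for a SATA SSD
    NVME_SSD_MBPS = 2000             # assumed sequential throughput for a consumer NVMe SSD

    for name, drive_mbps in [("SATA cache drive", SATA_SSD_MBPS), ("NVMe cache drive", NVME_SSD_MBPS)]:
        effective = min(GBE_2_5_MBPS, drive_mbps)
        limited_by = "network" if effective == GBE_2_5_MBPS else "drive"
        print(f"{name}: capped at ~{effective:.0f} MB/s ({limited_by}-limited)")

Either way the wire is the limit, so the NVMe drives are better spent on capacity.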



DESIGN:

(Architecture diagrams):



Out of Band Management:

-Dell iDRAC9: x.x.1.201

-HPE iLO5: x.x.1.202

-Lifecycle Managers and review on each

-theoretical iKVM
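
Both the iDRAC9 and iLO5 expose a Redfish REST API, so the same kind of status polling works against either box. Here's a minimal sketch in Python, assuming a read-only account (the credentials are placeholders, and the x.x addresses are the redacted management IPs from the list above):

    import urllib3
    import requests

    urllib3.disable_warnings()  # both controllers use self-signed certs in a home lab

    # Redfish system resource paths differ slightly between the two vendors
    systems = {
        "iDRAC9 (Dell T140)": "https://x.x.1.201/redfish/v1/Systems/System.Embedded.1",
        "iLO5 (HPE ML30)":    "https://x.x.1.202/redfish/v1/Systems/1/",
    }

    for name, url in systems.items():
        resp = requests.get(url, auth=("monitor", "changeme"), verify=False, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        print(name, "-", data.get("PowerState"), "-", data.get("Status", {}).get("Health"))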


Virtualization (VMware vSphere 7):

-Clusters: Main Tower cluster spec'd above

8GB RPi 4s running the ESXi-on-ARM Fling, with PoE HATs for a single-cable connection per host

-vSAN enabled and licensed

-vMotion and DRS enabled (a quick scripted check of the cluster config follows this list)

-Redundant 1GbE management connections

-Redundant 2.5GbE vSAN/vMotion connections

-SRM enabled as a proof of concept with the RPi 4s as the secondary site (>coming soon<)

-vRealize Operations (>coming soon<)
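
With the cluster stood up, it's easy to script a quick check that DRS and vSAN actually report as enabled. A minimal pyVmomi sketch, assuming a lab vCenter with a self-signed certificate (the hostname and credentials below are placeholders):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab vCenter with a self-signed cert, so skip verification
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every cluster and report DRS/vSAN state plus host count
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        cfg = cluster.configurationEx
        print(cluster.name,
              "| DRS enabled:", cfg.drsConfig.enabled,
              "| vSAN enabled:", cfg.vsanConfigInfo.enabled,
              "| hosts:", len(cluster.host))

    Disconnect(si)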


Domain Controller:

Server 2019 Intel NUC


DNS:

DHCP:


Firewall:


SQL:

On the Server 2019 NUC; static TCP connections are allowed through the firewall for remote access from domain servers such as the external Veeam ONE server
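
A quick way to confirm that firewall rule is working is to test the SQL listener's static TCP port from one of the remote domain servers. A minimal sketch, assuming SQL Server is listening on its default static port 1433 and using a hypothetical hostname for the NUC:

    import socket

    SQL_HOST = "nuc-dc.lab.local"   # hypothetical name for the Server 2019 NUC
    SQL_PORT = 1433                 # SQL Server's default static TCP port (adjust if remapped)

    # A successful plain TCP connection means the firewall rule and the listener are both in place
    try:
        with socket.create_connection((SQL_HOST, SQL_PORT), timeout=5):
            print(f"TCP {SQL_PORT} on {SQL_HOST} is reachable")
    except OSError as exc:
        print(f"TCP {SQL_PORT} on {SQL_HOST} is blocked or not listening: {exc}")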


Networking:

-VLANs:

-Subnets:

-Switches: (PoE, Cisco SG350, managed TP-Link 8-port, 2x TP-Link 5-port 2.5GbE)


Backup:

Veeam Availability Suite 10a

Veeam ONE


Certificate Authority:

NAS:

Server 2019 Intel NUC running File and Storage Services, with deduplication enabled on an NFS-based 1TB M.2 NVMe share served over a 2.5GbE connection


Secure Boot Functions:

On-board TPMs on all hosts


Physical/DR Redundancy:

-1-host fault tolerance (compute, storage, and memory)

-Redundant networking on all hosts, with each of the 2 connections going to a separate physical switch (making the hosts easier to physically move and tolerant to losing power to one switch)

-1500VA AVR UPS from CyberPower


Network Management:

SolarWinds Orion


Monitoring:

vRealize Operations

Veeam ONE


Azure Service Integrations:


Azure Host/Network Integrations:





