HLB part 3: DAS enclosures and capacity SSDs for your all-flash vSAN cluster

In my “Part 2” homelab build post I covered selecting an HBA (host bus adapter) for my lab, then installing the new HBAs and flashing their firmware. With the HBAs installed, both of my servers were equipped to connect to external storage. Now it was time to figure out which combination of storage enclosure and storage devices would be a good choice for my hosts’ vSAN capacity disk groups.
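For anyone following along, a quick way to confirm that ESXi actually sees the reflashed HBAs is to list the host’s storage adapters from the ESXi shell. This is just a sketch of the check I’d run; the vmhba numbering (and driver name, e.g. mpt2sas vs. lsi_msgpt2 depending on ESXi version) will vary per host:

    # List all storage adapters on the host; the reflashed SAS-2 HBA
    # should show up here (typically under the mpt2sas or lsi_msgpt2
    # driver, depending on ESXi version).
    esxcli storage core adapter list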

Before beginning the lab upgrade project, I didn’t have a good sense of the cost and availability of used enterprise-grade SSDs. I’d assumed that SSDs in the quantity and capacity I had in mind for my hosts’ capacity tier would be out of my price range, so I was tentatively planning to build what VMware refers to as a hybrid vSAN, with SSDs as cache devices and HDDs as capacity devices. But after spending some time searching eBay for part numbers from the VMware HCL (hardware compatibility list), it looked like several models of enterprise-grade SAS-2 SSDs could be had for $0.11–$0.20/GB, which was well within my budget.

From VMware’s vSAN Design Guide:

As a recommended practice, VMware recommends deploying ESXi hosts with similar or identical configurations across all cluster members, including similar or identical storage configurations.

This is especially important to consider when adding EOL/previous-generation components to servers that will become members of a cluster. While I only had two hosts at the time, I planned on adding at least one more host to my lab in the next year. So before committing to one particular SSD model, I looked through the last month or so of completed eBay listings to get an idea of which models were most commonly available, and therefore most likely to reappear in new listings months from now.

Reasonably priced Samsung SM1625 SSDs seemed to make regular appearances on eBay, and according to the datasheet, they perform well in terms of IOPS and write endurance. Within a week I was able to collect twelve 400GB SM1625s from two listings (at $0.11–$0.20/GB, twelve 400GB drives work out to roughly $530–$960 total) and was all set for my hosts’ capacity devices.

Samsung SM1625 SSDs

Because I had committed to 2.5″ SAS-2 SSDs—and based on my past experience with, and research into, storage enclosures—the Dell PowerVault MD1220 stood out as the ideal candidate for the final piece of storage gear for my lab build.

The MD1220 is apparently compatible with just about any 2.5″ SAS storage device and doesn’t require any kind of proprietary firmware on installed HDDs or SSDs. The MD1220 also supports a “split mode” configuration, which trades SAS multipath/HA across all 24 SSDs/HDDs for the ability to “split” the enclosure between two hosts at 12 SSDs/HDDs each. Another benefit of going with an MD1220 (or an MD1200 for 3.5″ devices) is that they can be reliably found on the used/second-hand market for a couple hundred dollars each (or less).
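Once an enclosure is cabled up in split mode, the “split” is easy to eyeball from the ESXi shell: each host’s HBA should see only the 12 slots wired to its EMM, not all 24. A rough sketch of that check (adapter and device names will differ per host):

    # Print the mapping of each SCSI device to the adapter it is
    # reached through; in split mode, each host should see only its
    # own 12 enclosure slots behind the HBA rather than all 24.
    esxcfg-scsidevs -A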

After a couple days of refreshing eBay searches, I found a pair of used MD1220s (with no drives, dual SAS-2 EMMs, and dual power supplies) for ~$155/ea, shipped. I would also need drive caddies for all twelve SM1625 SSDs, as well as a pair of SAS cables to connect the MD1220s to the HBAs in my Superservers. The following week—after my MD1220s, drive caddies, and cables showed up on my front porch—I was able to install and connect my vSAN capacity storage to both of my lab hosts. The final storage requirements for vSAN could now be checked off the list (verified with a quick check just after the list):

  • Storage for the hypervisor OS (ESXi)
  • Storage for the vSAN caching tier
  • Storage for the vSAN capacity tier
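Before calling the storage work done, I wanted to confirm from each host that the SM1625s were visible and eligible for vSAN. A minimal sketch of that check from the ESXi shell (device names are per-host):

    # Compact listing of SCSI devices; the SM1625s should appear here
    # as naa.* devices sitting behind the MD1220.
    esxcfg-scsidevs -c

    # Query each device's eligibility for vSAN; eligible, unused
    # devices are reported accordingly.
    vdq -q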

Now that all of the required hardware was present in the lab, I could begin working through VMware’s two-node vSAN cluster setup guide.
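The setup guide drives most of this through the vSphere client, but for reference, the disk-group side of it can also be sketched with esxcli. In an all-flash design, the capacity SSDs first have to be tagged as capacity-tier flash; the NAA identifiers below are placeholders, not my actual devices:

    # Tag a capacity SSD for the capacity tier (all-flash designs only).
    esxcli vsan storage tag add -d naa.5002538a00000001 -t capacityFlash

    # Create a disk group from one cache device (-s) and one or more
    # capacity devices (-d).
    esxcli vsan storage add -s naa.50015178f0000001 \
        -d naa.5002538a00000001 -d naa.5002538a00000002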

Samsung SSDs, disk caddies

Pair of PowerVault MD1220 DAS enclosures

In my next lab build post, I’ll be switching gears from storage to getting all of my homelab components organized and into a rack. Thanks for reading!