StorSimple 8100 – reuse

I bought a Microsoft StorSimple 8100 unit. The only catch: it did not contain its SSDs, and the password for it was unknown. Fair enough.

It's quite an interesting unit, hardware-wise. It's a 2U chassis with 2× 750 W PSUs, 12× 3.5″ SAS bays, and two compute nodes. Each compute node is in fact a Xyratex CS-6000-AB containing:

  • 1× Intel Xeon E5-2648L, 1.8 GHz, 8-core, 70 W CPU
  • 4× 8 GB DDR3 memory
  • 1× LSI/Avago/Broadcom SAS2308 HBA (1× SFF-8088 and 1× internal link)
  • 1× Mellanox ConnectX-3 10/40/56 Gbit dual QSFP
  • 1× 128 GB SSD for the OS

The two compute nodes share a Xyratex HB-1235 enclosure with the twelve 3.5″ drive bays. The same enclosure is used by many other storage vendors, for example in HPE 3PAR and Dell Compellent SANs.

Enabling IPMI/BMC

With no video output to connect a screen to, it is hard to see what is going on, which makes this a very proprietary piece of hardware. But once you have access to the IPMI, it suddenly becomes easy to reuse the hardware for something other than the StorSimple software.

This is how to enable the IPMI/BMC hardware.

  1. Reseat one of the controllers or power cycle the appliance with a console cable connected
  2. Press Esc to enter the boot options
  3. Select “Setup Utility” from the list
  4. It will prompt for a password (E1aD8wAbMxB3XcpjwVKD)
  5. Go to the Advanced tab
  6. Go to IPMI BMC Configuration
  7. Go to BMC Configuration
  8. Scroll down to the bottom, where you will see the network configuration
  9. Select LAN Channel number 1 and static IP source
  10. Enter the IP, subnet, and gateway
  11. Press F10 to save and exit
  12. Log in to the BMC with a web browser and access the console from there (username: admin, password: admin)
The IPMI/BMC interface of a Xyratex CS-6000 node

Now you can open a Java-based KVM tool to get the display from the node and do what you want. Awesome!
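Before launching the KVM viewer, it can be worth verifying that the BMC actually answers on the network; a minimal PowerShell sketch, with a placeholder IP:

    # Check that the BMC web interface responds (replace with your BMC's IP)
    Test-NetConnection 192.168.1.120 -Port 443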

Java…

But there is a small catch: you can't just open and run the IPMI KVM. The firmware is old and uses signing/encryption algorithms that are not allowed anymore. So you need to change the security properties of your Java install and run the IPMI in Internet Explorer in compatibility mode.

This is quite an easy fix. What I did was open Notepad as administrator and edit the following file:

C:\Program Files\Java\jre1.8.0_131\lib\security\java.security

Find and comment out the line that starts with “jdk.jar.disabledAlgorithms” by prefixing it with a #. Note that this will allow JAR files signed with any algorithm to run, which is to be considered insecure! But for us it is a necessary measure to get access to the IPMI.
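If you prefer to script the edit, here is a minimal PowerShell sketch of the same change, using the JRE path from above (run from an elevated prompt, and adjust the path to your Java version):

    # Comment out the jdk.jar.disabledAlgorithms property in java.security
    # (if the property spans several lines ending in '\', comment those too)
    $file = 'C:\Program Files\Java\jre1.8.0_131\lib\security\java.security'
    (Get-Content $file) -replace '^(jdk\.jar\.disabledAlgorithms)', '#$1' |
        Set-Content $file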

StorSimple software

Each compute node uses VHDX native boot. The SSD has a boot loader, and each VHDX is an entry in that boot loader. That means a newer version can be deployed, or a factory reset done, simply by switching over to another VHDX disk. I was actually not aware of VHDX native boot, but it's a very nice feature; I'm for sure going to use it on my Windows-based laptop in the future. So much easier than having to do a native OS install.
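For reference, this is roughly how a native-boot VHDX entry is created with bcdedit; the description and VHDX path below are just examples, not taken from the StorSimple box:

    # Copy the current boot entry and note the {GUID} that bcdedit prints
    bcdedit /copy '{current}' /d 'Windows from VHDX'
    # Point the new entry at the VHDX file (replace {GUID} and the path)
    bcdedit /set '{GUID}' device 'vhd=[C:]\images\boot.vhdx'
    bcdedit /set '{GUID}' osdevice 'vhd=[C:]\images\boot.vhdx'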

The StorSimple software is based on Windows Server 2012 R2. You are normally only able to use the serial console connection for direct management, but as described above, each compute node also has an IPMI/BMC through which you can look deeper into the system.

Since I did not have the device password, the StorSimple software could do nothing for me. So I got my fingers on PCUnlocker, a password reset tool. I booted it through the IPMI, attached the VHDX file, and had it reset the administrator password. The account was also disabled, but PCUnlocker took care of that part as well.

Booting back into the StorSimple software, I could now choose the administrator account, type in my new password, and get access to a cmd prompt. It was a Server Core install, so no GUI, but that's OK, because I now had access to all the HCS PowerShell cmdlets.
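From the cmd prompt you can start PowerShell and look around; a small discovery sketch (the *-Hcs* wildcard is my assumption about the naming, based on the cmdlets mentioned below):

    # Start PowerShell from the cmd prompt, then list the HCS commands
    powershell.exe
    Get-Command *-Hcs* | Select-Object Name, CommandType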

Unfortunately, the former owner had also tried to mess around with it, so the factory default VHDX images and the compute node signatures did not match, and therefore “Reset-HcsFactoryDefault” could not validate the factory default images. Bummer.

Many of the HCS cmdlets were compiled cmdlets referring to a DLL, so there was no way to see what was going on. But “Test-HcsFactoryImage” and the reset/initialize scripts were full-blown PowerShell, so from there I could see what was checked to validate the VHDX images. I actually bypassed the validation and ran the reset command; each node generated a new VHDX from the factory VHDX files, but after booting I was stuck in the boot state of the HCS software.
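To tell the two kinds apart, a sketch along these lines works (using the cmdlet names mentioned in this post):

    # DLL-backed commands show up as CommandType 'Cmdlet'; the readable ones
    # show up as 'Function' or as external .ps1 scripts
    Get-Command *-Hcs* | Format-Table Name, CommandType, Source
    # For a function, dump the implementation directly
    (Get-Command Test-HcsFactoryImage).Definition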

I felt an eagerness to find a way to fix it, but then again the time spent would not pay off. You need an Azure subscription to actually manage StorSimple, since there is no local GUI, only the serial console. So I decided to install Windows Server 2019 on it instead. 🙂

Conclusion

It's a nice piece of hardware, and StorSimple would have been nice to use if it did not depend on Azure. I now have the possibility to run a 2-node HCI cluster with Storage Spaces and failover clustering, with CSV volumes presented to each node. I could run Hyper-V and have a 2U box with full redundancy. I still feel the eagerness to fix the StorSimple software, but not for now 🙂
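For the record, a minimal sketch of what that 2-node Storage Spaces cluster could look like; the cluster name, node names, and IP address are examples:

    # Build the failover cluster across the two compute nodes
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    New-Cluster -Name SS8100 -Node NODE1, NODE2 -StaticAddress 192.168.1.50
    # Pool the shared SAS disks in the HB-1235 enclosure for Storage Spaces
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName Pool1 `
        -StorageSubSystemFriendlyName 'Clustered*' -PhysicalDisks $disks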

Homelab – v1

This will be a short blog series about the new setup and how you can start your own homelab.

The basic idea of a homelab

I have always had a homelab: small, but enough to learn with, and the more you learn, the bigger your needs grow. The first homelab consisted of 2× Apple Mac mini. The Mac mini is very power efficient and very quiet; not the beefiest hardware, but just enough to be able to run a vCenter and have vSAN running.

  • Apple Mac mini 5,1 (mid-2011, A1347)
  • 2.3 GHz Core i5 (i5-2415M)
  • 16 GB DDR3 memory
  • Dual Drive Kit
  • 256 GB cache disk
  • 600 GB capacity disk

They were mounted in a Sonnet MacRack mini 1U enclosure, which has been perfect for many years. In my small setup I have been running my pfSense firewall and all sorts of small VMs; due to the small amount of memory it was primarily FreeBSD VMs, with services such as Zabbix, WeeWX, openHAB, a UniFi controller, Tor, and things like that. All stuff to play around with, besides VMware of course.

“Homelabbing” is where I see people learn and have fun, without breaking too much.

The idea of the new homelab

I have always had a way higher power bill than “normal” people. Servers, NAS boxes, and home automation gear standing around are not good for your power bill. And that's also why my first homelab was made of Mac minis.

So instead of having huge servers in the garage or basement, I have always tried to keep the footprint down. The WAF (Wife Approval Factor) also has an impact here 🙂

I have a wall-mounted 19″ rack with 12U and 600 mm depth, placed in the garage where noise is no longer a problem.

I want to run an all-flash VMware vSAN cluster with three nodes. I don't want only two hosts and a witness appliance, even though that works and is a fully supported concept for small or branch offices; I want real scale. Each server should have one cache device and at least one SSD for the capacity tier; I went all-in and decided to go with two SSDs for capacity. All servers have to be connected with 10 Gbit SFP+ for vSAN and vMotion.
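In PowerCLI terms, the intended per-host disk-group layout looks roughly like this sketch; the vCenter name and the device canonical names are placeholders:

    # One cache device plus two capacity SSDs per host
    Connect-VIServer vcenter.lab.local
    Get-VMHost | ForEach-Object {
        New-VsanDiskGroup -VMHost $_ `
            -SsdCanonicalName 'naa.cache0' `
            -DataDiskCanonicalName 'naa.capacity0', 'naa.capacity1'
    }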

Conclusion for the upcoming homelab

  • Small footprint, in both power and space
  • 3-node all-flash vSAN cluster
  • 10 Gbit SFP+ networking
  • Form factor must be rackmount

The new hardware

I decided to go with Supermicro hardware. It has IPMI, and some of the E300 series models are actually on the VMware compatibility list now.

Supermicro kits such as the E300 are a very popular choice in the VMware community. With a powerful Xeon-based CPU and support for up to 128 GB of memory, it is perfect for running a killer vSphere/vSAN setup while still keeping cost, noise, and the power bill down.

BOM

Here is a list of what the hardware consists of. This gives a hell of a lot of horsepower for a homelab, and plenty of memory and CPU for running nested environments, e.g. to test our NSX-V to NSX-T migrations.

Number  Item
3       Supermicro E300-8D, 4-core Intel Xeon D-1518, 2 GHz
3       Intel 790p 128 GB disk for cache tier
3       Supermicro riser card
3       Supermicro SATA-DOM 32 GB
12      Samsung 32 GB DDR4 memory modules
3       Intel 600 GB SSD for capacity tier
3       Supermicro rackmount kits

Now the hardware is documented. Next up will be rack and stack, and how the environment will be designed 🙂