Posted on Feb 4, 2013 in Fun
I’m sure we’ve all dabbled with VMs a time or two (and if you haven’t, you should). However, I had a specific goal of creating a robust, purely virtual environment for training, labs, and testing. While a plain old Linux VM running on top of Windows, Linux, or Mac would have been fine, it doesn’t have the flexibility to get all the power out of my hardware. At best I’d have a couple VMs running at once, because at some point I’d run out of RAM and my host OS wouldn’t be able to handle it.
So, enter the vSphere Hypervisor. Using this software, I was able to create my own ‘whitebox’ out of commodity equipment from Amazon. The whole thing came to about $840 (though price increases make it about $900 now). Check out the stats:
- AMD A10 Trinity 3.8GHz Quad Core Processor
- 32GB DDR3 1600 RAM
- 64GB SATA3 SSD (OS and NAS)
- 4TB of 7200RPM SATA3 Disk (2× 2TB)
- VMware vSphere Hypervisor Free 5.1
An exact hardware list is at the bottom of this post. The high level process is like this:
- Put the server together. That was easy, right?
- Burn the vSphere Hypervisor software to a USB thumb drive and boot up with it. Pause to enjoy the new UEFI (the replacement for BIOS) with mouse support. Important: Make sure you use a USB 2.0 port for the thumb drive. The UEFI doesn’t seem to play nice with booting from a USB 3.0 port.
- Install ESXi to the 64GB SSD and use it as your boot volume (a decent guide can be found here).
- Once ESXi is installed and loaded up (minus the boot drive), you can use another computer to go to the link displayed on the ESXi server. From there you can download the vSphere Client and connect to your ESXi server to create virtual machines.
- Once you’re in the client and connected, the first step is to create a FreeNAS Virtual Machine in the default “datastore1” datastore. Here is a good guide for that.
- Using FreeNAS, you can create a ZFS volume out of the 2x2TB drives (I used RAID0) and configure them for iSCSI. Another guide!
- Lastly, you need to setup ESXi for iSCSI and create a datastore out of your new 4TB volume. And here’s a guide for that.
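The last step above can also be sketched from the ESXi shell with esxcli. This is a hedged sketch rather than the exact commands from the guide: the adapter name (vmhba33) and the FreeNAS target address (192.168.1.50:3260) are assumptions you’ll need to swap for your own, and with DRY_RUN=1 (the default) the script only prints what it would run.

```shell
#!/bin/sh
# Sketch of the ESXi side of the iSCSI setup (run in the ESXi shell).
# Assumptions: the software iSCSI adapter shows up as vmhba33, and the
# FreeNAS VM exports its target at 192.168.1.50:3260 -- check yours.
# DRY_RUN=1 (default) just prints the commands so you can review them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Enable the software iSCSI initiator
run esxcli iscsi software set --enabled=true
# Point dynamic discovery at the FreeNAS target
run esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
# Rescan so the new LUN shows up
run esxcli storage core adapter rescan --adapter=vmhba33
```

After the rescan, the 4TB LUN should appear under Configuration > Storage in the vSphere Client, where you can add it as a new VMFS datastore.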
Once you’re done with all these steps, you basically have the ability to create as many virtual machines as you like, each with any combination of the four CPU cores (1×4, 2×2, or 4×1) and up to 32GB RAM, drawing on 4TB of disk space in your new datastore. On mine, I have Oracle 11gR2 on OEL6, a three-node MySQL Cluster on OEL6, MarkLogic 6 on CentOS 6, OpenVPN, Postgres, and two NoSQL/Apache/PHP boxes. With that, I’m at about 30% capacity and barely a whimper on the CPU monitor.
There are a ton of extra tricks and things you can do with this setup. Since the motherboard/CPU support IOMMU (AMD’s virtualization passthrough), you can enable it in the UEFI and it will auto-enable in ESXi. This gives you the ability to pass through PCI, PCIe, and USB devices straight to a VM for a lot of awesome possibilities. You could also look at something like Plex to make a media server, set up a Minecraft server for the kids (sure, the kids), anything you want. You’ll have tons of room.
- I’d recommend setting up a dedicated IP range in your home network for this. That way you can keep it all organized and assign static IPs to all the VMs.
- One really good VM is a VPN server. I used OpenVPN which is pretty decent and easy to set up. Forward a port on your router to it for an instant VPN into your home network.
- One downside of running FreeNAS as a VM inside your ESXi server is that when the server boots, none of the VMs in your iSCSI datastore will be able to start by default. You will need to start the FreeNAS VM and re-scan the iSCSI interface in ESXi after each reboot. Thankfully, you shouldn’t have to reboot the actual server very often. If you know a way around this, let me know!
- From what I understand, passthrough of a PCIe video card to a VM is possible with IOMMU enabled but is a pain in the butt. With Windows it’s nearly impossible, and a bug requires that the Windows VM use no more than 2GB of RAM. I think it’s quite a bit easier with Linux. Also note that while the AMD A10 processor is an APU (a CPU with a built-in GPU), the integrated GPU can’t be passed through.
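For the OpenVPN tip above, a minimal server.conf looks something like this. The file paths, the 10.8.0.0/24 client pool, and the 192.168.1.0/24 lab subnet are assumptions; generate the certs and keys with easy-rsa first and adjust to your own network:

```
# Minimal OpenVPN server.conf sketch -- paths and subnets are assumptions
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
# Hand out VPN addresses from this pool
server 10.8.0.0 255.255.255.0
# Push a route so clients can reach the lab VMs (assumes your lab subnet)
push "route 192.168.1.0 255.255.255.0"
keepalive 10 120
persist-key
persist-tun
```

With this in place, forwarding UDP 1194 on your router to the VM is the only other piece you need.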
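As a partial workaround for the FreeNAS boot-ordering issue above, the manual routine can at least be scripted from the ESXi shell. A hedged sketch, assuming the FreeNAS VM’s id is 1 (check yours with `vim-cmd vmsvc/getallvms`); with DRY_RUN=1 (the default) it only prints the commands:

```shell
#!/bin/sh
# Post-reboot routine (run in the ESXi shell): power on the FreeNAS VM,
# wait for it to export the iSCSI target, then rescan all adapters.
# Assumption: the FreeNAS VM's id is 1 -- find yours with
# `vim-cmd vmsvc/getallvms`. DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

run vim-cmd vmsvc/power.on 1
run sleep 120    # give FreeNAS time to boot and bring up the target
run esxcli storage core adapter rescan --all
```

Once the rescan finishes, the iSCSI datastore comes back and you can power on the rest of the VMs.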
Here’s the final hardware list: