The following is part of a series of posts called "Building a data center at home".
Living in the SF Bay Area, you can acquire second-hand data center equipment relatively cheaply. This series details my deep dive into building a cluster with data-center equipment at home (often called a homelab): 48 CPU cores, 576GB of RAM, and 33.6TB of storage across 60 x 6Gb/s HDDs, with a combined weight of just over 380lb/170kg, all within a budget of $3,000.
Now that I have my home data center project up and running, I thought I’d put a quick post together as a summary of the project and a few insights that I’ve picked up along the way.
When building such a system for home use, I’ve found that the main concerns for most people are the power usage of the system and keeping noise to a minimum. While I haven’t yet got a good feel for how much power the entire system uses, getting the noise down to an acceptable level took quite a bit of my time but was worth it.
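Until I put a meter on the rack, a back-of-the-envelope estimate is the best I can do. The sketch below works through the arithmetic; both the average draw and the electricity rate are assumptions I've picked purely for illustration, not measurements.

```python
# Rough monthly electricity cost for the rack.
# Both inputs are assumptions for illustration, not measurements.
ASSUMED_AVG_DRAW_W = 800     # hypothetical average draw for the whole stack
ASSUMED_RATE_PER_KWH = 0.30  # hypothetical residential rate in $/kWh

hours_per_month = 24 * 30
kwh_per_month = ASSUMED_AVG_DRAW_W / 1000 * hours_per_month
monthly_cost = kwh_per_month * ASSUMED_RATE_PER_KWH

print(f"{kwh_per_month:.0f} kWh/month, about ${monthly_cost:.2f}/month")
# -> 576 kWh/month, about $172.80/month
```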
While it’s difficult to convey volume over video, the following two clips give an indication of the relative volume and sound of the entire system. It would have been helpful to include some reference sounds other than just my footsteps, but I took these videos at the time more for my own reference and didn’t think they’d end up on the blog.
This first video is prior to the fan changes I made on the X400 and is everything starting up, so basically the most noise the stack could make.
This second video is the sound the system makes after all the servers have started and reduced their fan speeds. This is also with the X400 turned off, so it’s basically a representation of how the system sounds today, given the X400 is almost inaudible compared to the other servers. It’s still quite audible, but it’s at a level which means I can actually run it in the house somewhere.
At first, when purchasing everything, I was hesitant to put money into proper server rails, as a set typically costs about $80, which for some of these servers is more than I paid for the actual hardware.
I bought one pair off eBay and, after installing them, immediately knew that I needed to purchase them for every server. Rails make working on this hardware possible and very quick, whereas skimping and just stacking the units makes it near impossible.
I managed to find a seller on eBay who was only asking $20 per set because the rails were missing the hooks at the front. This means that one or two units don’t lock back into the rack when you slide them in. For the cost this isn’t much of an issue; it just means those servers can roll forward out of the rack when I’m plugging cables into the back, so I have to hold them in place while doing so.
This project has been a lot of fun, and for the cost I’ve ended up with a very powerful system that can be used for any number of applications, especially more contemporary workloads that run on Kubernetes.
I’m looking forward to using this stack to test the performance of certain aspects of Kubernetes and to answer scalability questions I have about some of the microservice applications I’ve built in the past.
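As a taste of the kind of experiment I have in mind, here's a minimal sketch using the official Kubernetes Python client to step up a deployment's replica count and wait for each step to settle. The deployment name, namespace, and step sizes are all hypothetical placeholders, not anything from this cluster.

```python
# Minimal scaling experiment: step up a deployment's replica count and
# wait for readiness at each step. Requires `pip install kubernetes`.
# The workload name, namespace, and step sizes are hypothetical.
import time
from kubernetes import client, config

config.load_kube_config()  # picks up the local ~/.kube/config
apps = client.AppsV1Api()

NAME, NAMESPACE = "demo-api", "default"  # hypothetical workload

for replicas in (2, 4, 8, 16):
    apps.patch_namespaced_deployment_scale(
        NAME, NAMESPACE, body={"spec": {"replicas": replicas}}
    )
    # Poll until every replica reports ready before the next step.
    while True:
        dep = apps.read_namespaced_deployment(NAME, NAMESPACE)
        if (dep.status.ready_replicas or 0) >= replicas:
            break
        time.sleep(2)
    print(f"{replicas} replicas ready")
```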
Using this system for offline computational tasks is definitely another use case. There’s a lot of data manipulation and processing that we do at Deckee, and there might be a chance to run those jobs on this system instead of on cloud provider hardware. Doing these kinds of tasks on hired hardware typically doesn’t make as much sense from a cost point of view, since the service level of hired machines is typically much higher than is required.
I’m also looking forward to learning how I can use this hardware in relation to my current Master of Data Science studies, and to exploring what distributed-computation frameworks exist in the data science world.
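Dask is one obvious candidate from the Python ecosystem. Purely as a hedged sketch of what an offline batch job fanned out across the rack might look like (the scheduler address and data path below are made up):

```python
# Sketch of a batch job distributed across the homelab with Dask.
# Requires `pip install "dask[distributed]" pandas`; the scheduler
# address and data path are hypothetical placeholders.
import dask.dataframe as dd
from dask.distributed import Client

# Connect to a dask-scheduler running on one rack node, with
# dask-worker processes started on the others.
client = Client("tcp://rack-node-1:8786")  # hypothetical address

# Lazily read a directory of CSVs, partitioned across the workers.
df = dd.read_csv("/data/events-*.csv")  # hypothetical path

# A simple aggregation; compute() triggers distributed execution.
counts = df.groupby("user_id").size().compute()
print(counts.head())
```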
Hopefully I’ll get the chance to write up some further posts on these investigations in the near future!