Setting up a home server

It's fun, Carnage. The only time I have had issues is with formatting. There's a trick to it if you have a disk that isn't in a RAW state: for some reason it can't delete the existing partitions automatically when you go to configure the disk for use in the machine, so you have to delete them manually and then set up the disk image.
 

Here is the 22U HP server rack I picked up. It is on casters, so mobility is great should there ever be a need to move it. I am not in a HUGE rush, but a future project will be to cover the bottom space, likely with a thin sheet of plywood, or something like that, then to incorporate some type of filter media to keep the equipment inside clean.

20150322_155118.jpg

20150322_155100.jpg




Here is the pretty much completed hard drive caddy. It has the two OCZ Agility 60 gig drives on the bottom. One will be the OS drive, and the other I am thinking I will keep as a copy of the OS drive, NOT IN RAID 1, just a copy that I can move to should anything happen to the main drive. The middle shelf holds dual Toshiba Q Series HDTS225XZSTA 256 gig drives. These will both have an 8 gig partition, mirrored for the ZIL, and the remainder of one of the drives will be used for L2ARC. I am not quite sure what to do with the other remaining ~240 gig partition, but I am sure I will figure out something.
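
If it helps anyone following along, here is a rough sketch of how I expect that to get attached to the pool once ZFS is up. The pool name and device names below are just placeholders; I won't know the real partition names until the drives are actually partitioned:

# mirrored SLOG (ZIL) on the two 8 gig partitions
sudo zpool add tank log mirror /dev/sdc1 /dev/sdd1
# L2ARC cache on the leftover partition of one of the SSDs
sudo zpool add tank cache /dev/sdc2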

20150326_090955.jpg


Here is the fully trimmed hard drive caddy inside of the server. There is PLENTY of room for it inside of the server. It will be a bit tight with drives on the top shelf, but it will be manageable.
NOTE - The RAID card still makes me nervous with the bend radius of the cable. I may end up finding a new home for this one and picking up one that has the SFF connectors on the end of the card facing INTO the chassis rather than facing up, but I have to look into this further. I honestly haven't done anything with any of the hot swap drives yet to see if the cable bend shown in the picture has any effect on the connectivity of the drives.

20150326_154917.jpg



Here are the CPU heat sinks. They don't have fans on them, but they are just about 2U tall; they go from the CPU to just under the plastic cover that keeps airflow within the CPU lane. There are 3x 80 mm PWM fans, which you can see in the lower left portion of the frame.

20150326_154549.jpg



Here is the memory that the server came with. I will be looking into increasing this amount of memory, as ZFS recommends 1 gig of RAM for each TB of storage space. I am not sure if that is for TOTAL capacity or USED capacity, but I figure it would be best to just max out what I can in the beginning so I don't have to take the server offline later to add more. Plus I will need some RAM for the OS and any virtual machines running. For now, I am thinking of doubling to 48 gigs for under $180 shipped. Not quite ready to pull the trigger on the memory yet, but I have the funds.

Memory:
Nanya - NT4GC72B4NA1NL-CG
Speed - DDR3-1333 PC3-10600 667MHz (1.5ns @ CL = 9)
Organization - 512Mx72
Power - 1.5V
Contacts - Gold

20150330_202520.jpg



I have Ubuntu Server 14.04 installed at this point, but not much else. I have only run a few commands to update the server, but I still don't know a whole lot about what I am doing. I have LOTS of reading left to do.

apt-get update
apt-get upgrade

I can't recall the other few commands I have run so far, but I also installed and updated eBox. My understanding is that it allows for web-based administration of the Ubuntu server, but perhaps I am wrong. For the life of me I can't figure out how to get it running, but again, more reading to do.
 
Oh, dang, guess it was the angle of the older picture that made it look like those were under-sized. Cooling shouldn't be an issue at all with those suckers till they get clogged up.

If you are looking into filtration of any form, you're going to need to seal the doors up and do some fancy duct work... or just get a really good-sized HEPA filter for the whole area...

As for RAM, it's best to try and max it out within the next year since it's using DDR3 ECC... DDR4 ECC will become standard soon enough, and prices of DDR3 will skyrocket just like they have for consumer systems.

Can't ever have too much RAM when it comes to building a host. :)
 
Interestingly I got all of my current desktop components with the aim of running it as an ESXi box (more as a hobby project than for any practical use, given I only have four cores and one would be taken up by ESXi). The CPU and mobo both support VT-d, and the NICs are all supported by ESXi out of the box.

As a side note, IKEA's Lack side table can make an affordable rack if you're looking to save. Plus, you can get it in the color of your choice.

Kudos on the new toys.
I have that exact sidetable. I will now ensure I keep it in case I want to convert it :D
 
Even with just a quad core processor, ESXi is very capable; it doesn't exactly use a whole core for itself IMO.

A 965BE with ESXi can run Server 2k8 with Plex installed streaming multiple 1080p files, plus two pfSense 2.1.5 installs (granted, the routers don't use much power). I always see the full 10GHz assigned to my Server 2k8 being used, with about another 1.1ish GHz remaining for assignment, and it never registers. Pretty sure your i5-4440 will run far more. I have in the past had 6 VMs running on this.

Never will I truly understand all the resource management of ESXi; they have a 100+ page document just on how to set up and use system resources properly.
 
Oh, I was under the impression that you had to dedicate cores to VMs under ESXi. Perhaps it was only true of older versions. It did seem to slightly defeat the point of virtualising if you had to manage the hardware at all.

Anyway, I'm now waiting for my next upgrade to turn my desktop into an ESXi box. I'll just shunt all of it over to my old case and it'll just be running server duty.
 
Well, that's the tricky part. The cores are virtualized to the OS, and ESXi 5.x will spread the load evenly among all the physical cores. The trick is how many virtual cores you want to give to each OS, as well as how many shares. No single VM can have more virtual cores than you have physical cores, and you never want to assign more virtual cores than you have physical if you plan on having each of those virtual cores peaked frequently.

Basically, in my case I have four physical cores, but with the way my load is, I have two virtual cores assigned to one router, two assigned to another router, and two assigned to Server 2k8. That would be 6 cores total, but provided not all three run at their peak assigned frequency at once, it works properly, and not a single VM has more than the total physical count of four. If they DO happen to hit peak frequency, that is where shares come into play: one VM has a higher share than another, which means it gets priority. At least that's my limited understanding and what I have seen. Plus, it helps that I didn't tell every VM it can use 10GHz worth of processing power; two are capped at 1500MHz, and 2k8 gets 10GHz. If all three had equal shares, things would lock up and bog down very quickly.

Basically, ESXi will use some processing power, yes, but it doesn't require a core fully dedicated to it. Hyper-Threading makes things really confusing though, because the same rules apply... Most people I speak with just leave HT disabled, as it has no real benefit in a host.

I really need to find that PDF document I had... I have a whole collection for ESXi, as well as a massive 2500 page PDF for ESX.

Can-do's: You can assign more total virtual cores than you have physical across multiple VMs; doing this requires having proper shares and/or resource pools set up, otherwise the system comes to a screeching halt.
Can't-do's: No single VM can have more virtual cores than you have physical. Basically you can't assign six cores to a single VM if you only have four.
 
I have made some significant progress with the server so far this week.

DONE list:
  • Verified that all 12 of the hot swap bays on the backplane work. Haven't moved data to drives in each slot yet, but verified that the drives are at least seen by the OS in each slot.
  • Flashed the IBM M1015 to run in IT mode with no BIOS for quick boot; verified that it sees all 8 of the drives connected to it in JBOD.
  • Fresh install of Ubuntu Server 14.04.1 LTS Trusty Tahr
  • openssh-server package installed and updated
  • Verified I am able to SSH to the server from a local network machine via PuTTY (see the sketch below this list)
  • Stumbled my way through getting the ISPConfig 3 web admin panel set up (https://www.howtoforge.com/perfect...p-mysql-pureftpd-bind-dovecot-ispconfig-3-p3)
  • ISPConfig web admin panel up and working, able to reach it locally and from the internet; changed the outside port on the router so it isn't using the default port, but it is easy enough to disable if I need to, it won't be enabled for outside access normally, I just wanted to test
  • Squirrelmail setup and able to log in
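
For reference, the SSH bit boiled down to roughly this; the IP and username below are just placeholders for my LAN setup:

# on the server
sudo apt-get install openssh-server
sudo service ssh status
# from another machine on the LAN (PuTTY on Windows, or plain ssh)
ssh admin@192.168.1.50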

TO DO list:
  • Get front bezel and get it installed with filter to keep dust and junk out of chassis
  • Original install of the OS is sitting on sdb; need to figure out how to WIPE that drive, or delete the install from the command line. Need to then make a copy of the OS residing on sda onto sdb (see the sketch below this list)
  • Get Squirrelmail WORKING. Not getting emails yet. Not sure what is up here; have to do more reading. I WAS getting bounce backs, but I am no longer getting those from my Gmail, and nothing is coming into the Squirrelmail web interface. Oh well, not imperative, but I want to get it working eventually. Not sure if I should leave this on the host, or put it on a guest machine.
  • Set up ZFS
  • Figure out how to divide the drives up into vdevs: a single raidz2 with 6 drives, or two raidz1 vdevs with 3 drives each (see the sketch below this list)
  • Figure out how to create a virtual host; Plex Media Server will be running in the first one, with access only to the media directory, not the whole zpool. Will be tinkering with a Minecraft server on the other. Not sure what else I will look into for guest machines.
  • Get HP server rack into the house and in the basement where it will be living.
  • Get rack mount rails for the chassis, and install it in the rack
  • Get a battery backup for the server to be able to gracefully power down during power loss. (rack mount, most likely an APC unit)
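
Two rough sketches for the items above, just so I don't forget; the drive letters and pool name are placeholders, so I will double-check before running anything:

# wipe the stale OS install off sdb (sgdisk is in the gdisk package)
sudo apt-get install gdisk
sudo sgdisk --zap-all /dev/sdb   # clears both GPT and MBR partition structures

# vdev layout option 1: one raidz2 vdev of 6 drives (any two drives can fail)
sudo zpool create tank raidz2 sde sdf sdg sdh sdi sdj
# vdev layout option 2: two raidz1 vdevs of 3 drives each (one failure per vdev, better IOPS)
sudo zpool create tank raidz sde sdf sdg raidz sdh sdi sdj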


I have been TRYING to do research to get a front bezel for the server, but it has been a PITA. If anyone HAS one, and can enlighten me on the details of how exactly it mounts and whatnot, I would appreciate it. I sent emails to just about every seller on eBay with them, asking if they come with keys; not a SINGLE one knew, they all punted to Supermicro. Other pages where I tried to find info (picture below) were TOTALLY fudged and would show completely different part numbers, so there is no way to know what is ACTUALLY being ordered. I even went as far as to send a detailed email to Supermicro directly explaining I am trying to get more info on part MCP-210-82601-0B to find out if it comes WITH keys, how it mounts, etc. I got a single sentence response saying it is "only a bezel itself and nothing else." I sent a response asking if there is a part number for the mounting bolt and keys for the lock, and was told it has no mounting bolt, and that the keys are generic. SOOoooo... can anyone tell me how the bezel attaches to the chassis (if additional parts are needed), and where I can find a set of keys for it?? I would be most appreciative.

whichone_1.jpg



Got the new SAS breakout cables installed, and they are PERFECT! The sheathing pulled up pretty easily from the SFF-8087 connector, so the bend radius isn't an issue. On one of the short sets of cables I had, the heat shrink sheathing was pretty much fused to the connector, so it didn't bend very well at ALL. There is also PLENTY of clearance for the last set of hard drives on the top of the drive sled, even with the SAS breakout cables. Power cables are all set as well for 2 additional drives if/when they get installed.

20150401_163250.jpg



The cable management on the backplane needs some work. The SATA cables are good, but the power cables are a PITA, and I want to try and get them tucked more out of the way so as not to disrupt airflow, although I guess with the backplane being there it probably doesn't much matter.

20150401_163305.jpg


Another few shots of the drive sled and cable management. I also have a free PCI Express slot between the IPMI card and the IBM M1015 card. The ONLY thing I didn't take into account is the possibility of adding in a network card for future expansion. There ARE two 1 gig ports built onto the motherboard, which I can (down the road, when I get a switch that supports it) team up for a 2 gig link. Not totally urgent, just something to think about.
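
When I do get a switch that supports it, my understanding is the teaming would just be an LACP bond in /etc/network/interfaces. A rough sketch for Ubuntu 14.04, assuming the onboard ports show up as eth0/eth1, the switch can do 802.3ad, and the address details are placeholders:

sudo apt-get install ifenslave

# /etc/network/interfaces (excerpt)
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves none

Worth remembering that LACP balances connections across the two links rather than making any single transfer run at 2 gig.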

20150401_163313.jpg


20150401_163326.jpg



I messed with the order of the cables on the motherboard connectors until I got the drive arrangement that I wanted, so that sda is boot, sdb will be boot backup, and sdc and sdd will be the LOG and L2ARC drives. The good news is that all 8 of the hot swap drives in place are seen as well.
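
A quick way to double-check which physical drive ended up on which sdX name (and worth noting for later: for the ZFS pool it is probably safer to reference drives by their persistent /dev/disk/by-id names, since sdX ordering can shift if cabling changes):

lsblk -o NAME,SIZE,MODEL
ls -l /dev/disk/by-id/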

20150403_220049.jpg
 
You might want to call up Supermicro... but I am pretty sure when you order the front bezel it will come with everything needed, usually two keys and any additional components needed to attach it to the case. In most servers, the left side slides into the handle, the right side just barely slides under the other handle, then you push the panel down and can lock it.

The keys depend on the manufacturer of the system... Dell has this habit of not using different keys for their locks... Any DELL branded key will usually open the case lock on almost any DELL server. Chances are, Supermicro is the same. It's not a good form of security; I would suggest removing and replacing the lock cylinder with one that's guaranteed not to have a common keying. Manufacturers stick with common keying for two reasons: it's cheap, and in all honesty, if you have 100 servers, do you want 100 keys, or one single key? It's really meant to keep things from accidentally being removed.

I would just order the faceplate from Amazon or Newegg... Skip eBay.
 
The server I have set up was originally a PC tower. I changed the insides and made it able not only to store invoices and customer details, but also big enough to play the online games that I enjoy getting involved with. I think the server is very useful.
 