ymboc's server build
Page 1 of 1

Author:  ymboc [ Tue Nov 29, 2011 9:17 pm ]
Post subject:  ymboc's server build

My EX47x was frustrating me. I'll probably be looking to sell it once I confirm it will still pass stress testing.

I had finally got an alternative OS up and running on it when I started running into trouble while migrating the data. It's a long story, but I think bad sectors are the culprit.

Anyway, all the bits and pieces have arrived and the build is underway:
Nothing that hasn't been done before, but here we are nonetheless.

As is typical for these types of builds, drive trays and controllers cost significantly more than the rest of the hardware that ties it together.

Provided there are no more rude & unwelcome surprises, the target software is ZFSGuru (FreeBSD) with a WHS 2011 VM.

The storage pool will consist of two 7-drive RAIDZ2 vdevs (think RAID 6) with one bay left over for a hot spare.

The first vdev will be all 2 TB drives of varied makes and models, while the second vdev will be a mix of 1 TB and 2 TB drives (until I'm able to source more 2 TB drives at reasonable prices).
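A layout like that comes down to a single zpool command. A minimal sketch, assuming a pool name of "tank" and FreeBSD-style ada device names (both hypothetical; use whatever your controller actually exposes):

```shell
# Two 7-disk raidz2 vdevs plus one hot spare -- 15 bays total.
zpool create tank \
  raidz2 ada0 ada1 ada2 ada3 ada4 ada5 ada6 \
  raidz2 ada7 ada8 ada9 ada10 ada11 ada12 ada13 \
  spare ada14
```

Note that ZFS sizes a vdev by its smallest member, so the mixed 1/2 TB vdev will behave as seven 1 TB disks until the last 1 TB drive is replaced.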


Author:  yakuza [ Wed Nov 30, 2011 10:14 am ]
Post subject:  Re: ymboc's server build

That looks awesome, I'd like to see the specific equipment you chose, case/mobo/etc. That case looks a bit like a humidifier. :D

Author:  ymboc [ Sun Dec 04, 2011 5:44 pm ]
Post subject:  Re: ymboc's server build

yakuza wrote:
That looks awesome, I'd like to see the specific equipment you chose, case/mobo/etc. That case looks a bit like a humidifier. :D
Thanks & NP. I wanted to finish the build and reply earlier but I didn't get it all up and running properly until early this morning.

So my server build consists of...
Build-wise, everything has pretty much worked out the way I had hoped. The only thing that bothers me is that the fan holders on the SuperMicro 5-in-3s interfere a little too much with the motherboard; specifically, the middle one is practically crushing half of the wires on the ATX power connector.

Also, I discovered a little late that the second 'x16' slot on the motherboard I chose is only an 'x4' slot electrically. Asus' M4A89GTD (or any other 890GX-based board) probably would have been a better choice, but it's not really available in my neck of the woods anymore. Meh. Too late now. :(

Software-wise, I actually ran into the same problem I did running ZFSGuru on the MediaSmart: a panicless hard lock/crash when copying files from the WHS/NTFS disks to the array. Fortunately the problem seems to have gone away after switching to the latest ZFSGuru experimental release. Not so fortunate, however, because that same confounding problem on the EX47x was the whole justification for building the server from scratch. Sigh.

Edit: Great, I jinxed it. As it turns out, I'm still experiencing the panicless hard lock/crash. The lack of log entries makes these things exceedingly difficult to troubleshoot.

Author:  ymboc [ Wed Dec 21, 2011 7:59 am ]
Post subject:  Re: ymboc's server build

I think a belated follow-up is in order now that I've managed to work out the software kinks... well, mostly anyway... it's a long story but I'll try to keep it as short as I can.

So FreeBSD is out (which means no ZFSGuru or FreeNAS). While I can't prove it, I think the SAS controller driver (mps) is the culprit for the log-less hard locks (under heavy I/O) I was experiencing. That said, the performance of a 7-disk RAIDZ2 ZFS array under FreeBSD (even with compression enabled) is staggering; I'll have to try ZFSGuru again when FreeBSD gets a new mps driver.

After double-checking hardware stability under Windows/OCCT (12 hrs) and memtest86 (12 hrs), I tried Ubuntu Server 11.10 with the native ZFS PPA packages. I was able to import the zpool I created under FreeNAS, but I ended up setting Ubuntu aside in favor of OpenMediaVault (which is also Debian-based) and its slick web interface.

Unfortunately, the Debian 'squeeze' that OMV runs on top of has a problem with the onboard NIC (it loads the wrong Realtek driver), which I corrected by upgrading the underlying system to 'wheezy' (thereby moving the kernel from 2.6 to 3.1).

The side effect of moving from Ubuntu to OMV/Debian is that no precompiled native ZFS packages exist, so I downloaded, compiled & installed the native ZFS sources (from the zfsonlinux git repositories). The zpool was mounted automatically, and while it works, ZFS volumes won't appear in the OMV interface because the pool is mounted & managed outside the typical fstab configuration approach, which means my Samba/CIFS shares had to be set up manually.
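The manual share setup amounts to a few stanzas in smb.conf; a minimal sketch, with the share name, path, and user all illustrative rather than my actual config:

```ini
# /etc/samba/smb.conf -- hypothetical share for a ZFS dataset
[media]
   path = /tank/media
   read only = no
   guest ok = no
   valid users = ymboc
```

Since the pool isn't in fstab, the only thing OMV-specific you lose is the web UI's awareness of the share; smbd itself doesn't care how the path got mounted.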

So after all that work, ditching Ubuntu and its more 'just works' environment in favor of OMV's whizzbang interface, all I'm really able to do with the OMV interface is user management (bah humbug!). Oh well, it was a good 'nix refresher I suppose (the first/last time I used 'real' Debian was in 2000).

Next I installed VirtualBox (yet another Sun/Oracle technology) & phpVirtualBox (a complete web management interface). After much fiddling about (I compiled from scratch a couple of times), I learned that the trick to getting it working nicely is to not use the official Debian packages but rather Oracle's packages, which is a bit counterintuitive seeing as they're only available/intended for the previous Debian release. Fortunately the 'squeeze' packages work well on 'wheezy', and I've got a fully functional headless WHS 2011 VM running happily on the same hardware. The nice thing about VirtualBox is that even the 'physical' console is available using MS's RDP client (or via the web interface).
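For reference, a headless VM with the RDP console looks something like this from the shell (the VM name and port are illustrative, not my actual setup):

```shell
# Enable the VRDE (RDP) server on the VM, then boot it without a GUI.
VBoxManage modifyvm "WHS2011" --vrde on --vrdeport 3389
VBoxHeadless --startvm "WHS2011" &
# The 'physical' console is now reachable with any RDP client on port 3389.
```

phpVirtualBox drives the same VBoxManage machinery through its web interface, so either path ends up at the same VM.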

The only negative so far in the FreeBSD-to-Linux transition is that ZFS performance just isn't as good on Linux as it is on FreeBSD. Under FreeBSD I was seeing hundreds of MB/s, but on Linux, before tuning, I was seeing slower-than-single-drive speeds. Now, after some preliminary tuning, performance is still only just matching gigabit speeds. Hopefully I'll be able to tune it a little further.


Author:  yakuza [ Wed Dec 21, 2011 9:40 pm ]
Post subject:  Re: ymboc's server build

Wow, quite a journey you've taken there. I was just talking with my sysadmin at work, he recently set up our largest storage server with the Ubuntu ZFS PPA packages and is using compression and deduplication and it's working well. I am considering giving it a shot on an HP Micro Server, so good timing to hear about your experience.

I have to say I rarely hear good things about Realtek NICs, they seem to cause more trouble than they are worth.

Do you think you'll stay with this last config, or do you have more things to try? :D

Author:  ymboc [ Wed Dec 21, 2011 10:21 pm ]
Post subject:  Re: ymboc's server build

While I'll stay with this build simply for driver stability reasons, I would have preferred to stick with ZFSGuru (FreeBSD).

ZFSGuru makes the critical ZFS portions of the work easier and offers the same VirtualBox VM setup. ZFSGuru also installs right onto the pool instead of requiring a dedicated system disk.

Also, the state of ZFS development on FreeBSD is much more stable than on Linux. The team behind the zfsonlinux project (and the ZFS PPAs) still appears to be making major revisions to the code fairly regularly; I think it will be a while before things settle down in that regard.

Re: any 'nix + ZFS: make sure you have plenty of RAM. While ZFS will run with less, it won't be happy until you have at least 4 GB of RAM, preferably 8 GB.

Even if you do end up selecting Ubuntu over FreeBSD, I'd still recommend creating your pool with ZFSGuru's LiveCD, because it makes creating the pool and aligning the filesystem to 4k sectors as simple as ticking a checkbox. Keep in mind that if you have a non-AF or mixed pool but expect to upgrade to AF drives later, you can only optimize for 4k sectors at pool-creation time.
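For anyone doing it by hand instead of via the checkbox: on FreeBSD this is commonly done with the gnop(8) shim trick, while zfsonlinux exposes it directly as the ashift pool property. A sketch with hypothetical pool and device names:

```shell
# FreeBSD: create a 4k-sector shim for the first disk, build the pool on it,
# then export, drop the shim, and re-import on the raw devices.
gnop create -S 4096 ada0
zpool create tank raidz2 ada0.nop ada1 ada2
zpool export tank
gnop destroy ada0.nop
zpool import tank

# zfsonlinux: set the sector-size exponent at creation (2^12 = 4096 bytes).
zpool create -o ashift=12 tank raidz2 sda sdb sdc
```

Either way the alignment is baked into the vdev at creation, which is why it can't be changed later when AF drives are swapped in.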

Author:  ymboc [ Sun Jan 01, 2012 10:07 am ]
Post subject:  Re: ymboc's server build

Yet another followup:

After much fiddling with different Samba & ZFS tuning settings, I've finally managed to get file transfers consistently into the 800-900 megabit range (100+ MB/s).

Although I tried many more settings, on the ZFS side of things it didn't take much more than setting the min & max ARC sizes and making sure ZFS prefetch is enabled (these are kernel module options, e.g. in /etc/modprobe.d/zfs.conf):

options zfs zfs_prefetch_disable=0
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=8589934592
I probably could have left the ARC sizes at half or 3/4 of that without impacting performance significantly, but I had since upgraded the system to 16 GB thanks to a Newegg special, so there's lots of memory to go around.
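Those byte values are just powers-of-two gigabytes; a quick sanity check:

```shell
# 4 GiB and 8 GiB expressed in bytes, matching the module options above.
echo $((4 * 1024 * 1024 * 1024))   # zfs_arc_min
echo $((8 * 1024 * 1024 * 1024))   # zfs_arc_max
```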

Most of the fiddling & work was on the Samba side of things. I'm not sure these are 'good' settings, but they seem to work for me.

# Enable experimental SMB2 support
max protocol = smb2

# Oplock settings for smooth Offline Files operation
oplocks = yes
kernel oplocks = no
level2 oplocks = yes

# Fix permissions
create mask = 777
directory mask = 777
delete readonly = yes
inherit permissions = yes

# Fix DOS attributes
store dos attributes = yes
map acl inherit = yes
map archive = no
map system = no
map hidden = no
map readonly = no

max xmit = 131072
use sendfile = yes
min receivefile size = 16384
read size = 65536
aio read size = 65536
aio write size = 65536

The above buffer & read/write sizes may (again) be overkill, but meh.

Cheers & Happy New Year!

Author:  ymboc [ Thu Feb 02, 2012 10:14 pm ]
Post subject:  Re: ymboc's server build

Self follow-up #3... The next version of ZFSGuru, based on FreeBSD 9, is in beta... and FreeBSD 9 still doesn't have a stable-enough driver for my SAS2008 controllers.

... and I've ditched my wheezy'd OMV setup in favour of Ubuntu ('precise') Server for the following reasons:
  • I kept bumping into issues with unescaped symbols in the config files (&quot;, &gt;, &lt; instead of ", >, <) every time there was an update to the OMV packages.
    (This is likely due to upgrading OMV's Debian squeeze underpinnings to wheezy, which corrected the NIC driver issue but evidently caused other problems.)
  • I've been configuring my server manually anyway, since I can hardly use any of OMV's interface because it doesn't recognize my ZFS volumes (they're not mounted in the usual Linux way).
  • Running Ubuntu, I can use the zfsonlinux PPA instead of having to follow the git repositories and compile updates (quasi-)manually.
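For reference, the PPA route is just a few commands (PPA and package names as they existed at the time; adjust if the project has since moved or been renamed):

```shell
# Add the zfs-native PPA and install the prebuilt modules and tools.
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs
```

After that, module updates arrive through apt like any other package instead of requiring a manual rebuild.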

All times are UTC - 7 hours [ DST ]
Powered by phpBB® Forum Software © phpBB Group