System Donation (usage for a couple weeks)

Posted: 24 Jun 2013 17:08
by cozmok
Summary:
I'm building a VMware ESXi system for multiple uses, one being a NAS (either NAS4Free or the one we do not mention). I'm interested in trying out several options, all virtualized, and could run scenarios for you guys if you want comparisons for reference or development.

Details:
I've used FreeNAS from before the split for several years (the oldest CD I found has 0.671, whenever that was): initially 4x 250gb in software RAID5, then upgraded to 4x 500gb in software RAID5. But after an upgrade messed up the config, losing my volume and forcing me to recover the software RAID5 with RStudio, I switched to 8x 1.5tb hardware RAID6 serving AFP natively on my hackintosh for the last couple of years.

Now I am building a brand-new system based around an E3-1230V3 CPU, an X10SLM+-F mobo with 32gb of RAM, an i350-T4 NIC, and an LSI 9286CV-8e with 8x 4tb WD RE hard drives in RAID6 (probably partitioned into a 20tb chunk to pass through to the NAS VM, plus the remaining ~1.8tb for the other VMs). I have a server at work running a similar setup with smaller drives and an older card, and it works great. For the new build, I'd like to make sure it's the "best" solution, as it would be very difficult (and costly) to fix this setup later (backing up 20tb of data) if I find issues or need improvements. So I will be running tests over the coming weeks to figure out what resources to allocate, which devices to pass through, and the general hardware/software configuration.
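As a back-of-the-envelope check on that split (a sketch only; the drive count and size are the ones mentioned above, and the TB-to-TiB conversion explains where the "missing" space goes):

```python
# Rough capacity math for the 8x 4TB RAID6 array described above.
# RAID6 spends two drives' worth of space on parity.
drives = 8
drive_tb = 4.0                        # marketing terabytes (10^12 bytes)

usable_tb = (drives - 2) * drive_tb   # 24 TB raw usable
usable_tib = usable_tb * 1e12 / 2**40 # ~21.8 TiB as the OS reports it

print(f"usable: {usable_tb:.0f} TB = {usable_tib:.1f} TiB")
```

A ~20 TiB NAS partition would then leave roughly 1.8 TiB for the other VMs, consistent with the split described here.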

I personally need to serve AFP locally to several machines in my home and externally (mostly to my MacBook, but also to watch video on an iPad running XBMC). I also need to serve SMB, FTP, and maybe NFS if I ever get my MythTV system going again. I have not used and do not wish to use ZFS, but only due to my lack of experience/knowledge of ZFS usage, tuning, and recovery. That could change if it's recommended...

In planning my tests and researching options, I thought I'd reach out to you folks and see if you had 1) recommendations, or 2) requests for me to try certain scenarios, since this is a completely dev system for the next couple of weeks. Also, if everyone recommends ZFS, I'd try it if pointed in the right direction for best practices. My primary concern once the system is up and running is reliability and stability: I have numerous people counting on the data stored on it. While it's not enterprise-worthy, it is important stuff (family photos, videos, documents, wedding photography, etc.). I need to feel comfortable that I can resolve any issue without regretting that I set things up the wrong way. (I do and will make backups of a small portion of the uber-important things, but can't back up everything.)

So, ideas? Requests? Need more info?

Re: System Donation (usage for a couple weeks)

Posted: 25 Jun 2013 01:15
by RedAntz
Hi cozmok,
A long question requires a long answer :D

I have a similar, but older, setup compared to yours.

My hard disk setup :-
1. 1x 1TB hard disk for the VMs (you can do RAID-1 if you care a lot about uptime and cannot afford to lose these VMs)
2. 6x 4TB hard disks for NAS4Free via an IBM ServeRAID M1015 controller (LSI 9240-8i, flashed to IT mode)
3. 1x 8GB USB flash drive for ESXi
4. A separate external hard disk for backups that stays offsite*

Software :-
1. ESXi 5.1 (booted from the USB flash drive)
2. VM datastore on the 1TB hard disk (configured via ESXi)
3. The M1015 controller passed through to the NAS4Free VM (presenting the 6x 4TB hard disks to the NAS)

NAS Setup :-
1. The NAS is presented as a Samba share plus SSH/SFTP (I don't need iSCSI at home; keeping it out keeps things simple and removes a NAS dependency)
2. No other services are enabled on the NAS VM; I use separate VMs for those (e.g. ownCloud in one box, the BitTorrent client of your choice in another)
3. My media PC (XBMC) connects to the NAS via SMB
4. NFS for other *nix VMs (planned)

Advantages :-
1. Power savings and less wear - I don't have to spin up my NAS hard disks all the time, since the VMs sit on a separate disk
2. No dependency - other VMs don't have to wait for NAS4Free to boot before they can start. This is important to me, as I update NAS4Free very regularly when I test patches
3. Flexibility and security - my NAS serves one purpose only: being a NAS. I use other VMs for file sharing / transcoding, which lets me update and choose services independently instead of being restricted to what's offered in the bundle / FreeBSD
4. Cost savings - fewer physical boxes = lower hardware/electricity costs
5. Centralised administration - the only time I need access to the physical box is when I update ESXi; otherwise I can do everything via IPMI / vSphere client


Disadvantages :-
1. Complexity. A VM setup can be overkill for average users and can be intimidating. It also means you *can* potentially lose data by accident
2. Reliability. Some don't believe hypervisors can provide reliability as solid as bare metal. Make your own judgement :)
3. The VM hard disk can fail. But I don't worry much about losing my VMs: anything deemed important inside a VM is periodically backed up to my ZFS NAS, and a few hours of downtime is OK with me. If the VM hard disk dies, I swap in a new one, copy back the VM templates I keep backed up on the NAS, and turn them on again. (This can be improved, but I'm OK with it for now)


There are many different setups possible with an all-in-one box like this. I'm just choosing the one that suits me and that I'm most comfortable with (i.e. when things go wrong, I know what and where to look). While I'm very comfortable with this setup, I don't usually recommend it to others, as support and troubleshooting can be a nightmare. Choose a setup that matches your technical skills. If you are worried about ZFS setup and configuration, you can test it out in VMs until you are comfortable using it in 'production'.

The whole point of using a FreeBSD-based NAS distribution is to enjoy the advantages of the ZFS filesystem. Otherwise, you can just make use of LSI's hardware RAID 6 and not worry about any of this.

* More importantly, have a backup plan. RAID is NOT backup. The ZFS filesystem is NOT backup. It is NOT a backup if it resides at the same site as the original copy. Imagine the following scenarios: breaking and entering, house fire, flood *touch wood*

Recommended reading for ZFS :-
ZFS_Best_Practices_Guide

Re: System Donation (usage for a couple weeks)

Posted: 25 Jun 2013 15:55
by cozmok
Wow, that is QUITE similar. Thanks for the write up.

I am very familiar with ESXi, and agree with your advantages/disadvantages 100%. I have this setup running at work, and have wiped it out and recreated it several times, and think it could work perfectly for what I need at home.

The big differences (issues) I have at home are that the data is 1) more important (haha) and 2) cannot be backed up entirely. I won't have the luxury of backing up all the data and redoing things with some slight change. I'll back up what I REALLY need, but I don't have 20tb for everything, so I'd like to make sure things are as stable as they can be. I'm comfortable with the stability/risks/usability of the RAID card and ESXi; I just want to make sure the NAS4Free config and setup are "right" and to have some comfort with adjustments/upgrades/backups of it.

ZFS is new to me, and frankly the "pools" and whatnot worry me, but it's probably not as complicated as it seems. I will read up on it, but I would need some reassurance before feeling comfortable with it. Even if I don't use ZFS, I'll still use NAS4Free as a clean and simple file server. Like you, I'll put any other services in other VMs, so it will just serve one big 20tb share over a gigabit network, primarily via AFP and SMB, but also FTP/TFTP/RSYNC and maybe NFS.

Questions:
1) What resources did you give (should I give) the NAS4Free VM? 1 or 2 cores (3.3GHz)? How much RAM with ZFS? Without ZFS?
2) Which NIC type did you use? I usually use e1000 because it matches the real NIC, but then saw VMXNET3 recommended. Or do you pass through the real NIC? And is LAGG working/worthwhile?
3) LiveCD, Embedded, or Full? What size virtual disk do I need for the system?
4) Did you do any resource pooling to prioritize inside ESXi?
5) For the backups, do you plug the external drive into the host, and pass it through, or use another client machine on the network?
6) What advantages does ZFS have if I'm giving it a 20tb chunk of hardware RAID6? And is it worth learning/messing with it?
7) NAS4Free vs FreeNAS. I'm partial to NAS4Free out of old-school camaraderie from years back when I used it, but I haven't really looked into the "new" one.
8) Are there any optional new ways I could do it that I should plan on testing before I commit? Tuning stuff? I don't mind trying things out during this testing phase, or even making system changes afterwards, as long as my data remains intact.
9) Is it worth documenting any of my testing? The hardware list I found on the site seemed pretty light; is it old/outdated/unused, or just unpopular and in need of some sprucing up?

Thanks!

Re: System Donation (usage for a couple weeks)

Posted: 28 Jun 2013 01:00
by RedAntz
Please note that I use my IBM M1015 as an HBA and pass it through (VMDirectPath, not RDM), so the NAS4Free VM manages all attached hard disks directly. I don't yet use advanced features such as dedup, L2ARC, or a dedicated log device for the ZIL, as I have not hit a performance bottleneck.

Most of the answers below will be: it depends. This use case suits me because my usage is rather low (usually only 2, sometimes 3, concurrent connections). This is merely an example of what can be done, not a guideline.

1) I allocate 2 cores to it, but it works well with 1 (you'll find people running NAS4Free on old PCs with far lower specs). I assign 2 cores "just in case". I'm not worried about over-allocation, as ESXi handles resource scheduling quite well (I rarely max out my Xeon X3440 except when compiling a kernel).

e1000. From what I've read on the VMware forums, VMXNET3 is fast but can be unstable compared to e1000. I didn't test VMXNET3, as e1000 is good enough for me.
As for LAGG: unless your application can be split into multiple simultaneous connections, you'll get 1 Gbit/s per connection, not 2 Gbit/s. LACP works best with one-to-many connections.
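The LAGG point above is just arithmetic (a sketch using nominal link rates, not measured numbers):

```python
# LACP/LAGG distributes *connections* across member links (by hashing
# addresses/ports), so any single flow rides exactly one link.
links = 2
gbit_per_link = 1.0

aggregate_gbit = links * gbit_per_link   # ceiling across many concurrent flows
single_flow_gbit = gbit_per_link         # ceiling for any one connection

print(f"aggregate: {aggregate_gbit:.0f} Gbit/s, single flow: {single_flow_gbit:.0f} Gbit/s")
```

So a lone client copying one large file still sees ~1 Gbit/s no matter how many links are in the LAG.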

3) Please refer to http://www.nas4free.org/general_information.html . I install Full via the LiveCD for convenience, as it's straightforward to install in a VM. You may encounter an IRQ storm in the VM, but that's normal (a FreeBSD + ESXi thing); look on the NAS4Free forum for the fix. I don't add additional packages or use jails, but the Full install lets me test patches that survive restarts.

4) Yes. I assign 24GB of RAM to NAS4Free (it only gobbles up about 10GB) and leave the rest for the other VMs. I could lower that and allocate more to other VMs, since I don't use dedup (I think dedup is pointless in a home setup). I did, however, cap CPU resources on the non-*nix VMs, although my NAS4Free VM only uses around 22MHz to 300-400MHz most of the time.

5) I back up from my desktop via SMB, verifying with file checksums; I already had that set up before I started using NAS4Free. I have not tried USB passthrough.

6) Are you trying to present a 20tb logical disk for NAS4Free to manage? That is not recommended. The recommendation is to present each hard disk individually to NAS4Free (with your expensive RAID card acting as a dumb HBA). That means, to preserve your existing 20tb of data, you would need a 'new' set of hard disks, set them up with ZFS, and copy your files over.
Beyond that, it depends on how 'paranoid' you are :-) You can Google 'ZFS advantages' to read up on it.

7) Well, what can I say :D From a codebase and WebGUI usability point of view, NAS4Free is the continuation of FreeNAS 7, while FreeNAS 8 is essentially a rewrite that shares little with FreeNAS 7. You can upgrade from FreeNAS 7 to NAS4Free, but not from FreeNAS 7 to FreeNAS 8.

8) To start out, it's easiest to test in a VM that you can build and destroy quickly. If you have a spare machine and disks, install on them to check for hardware compatibility issues, and stress test until you're happy. The NAS4Free forum also has lots of excellent user-contributed material for your bedtime reading :-)

I don't fiddle much with default installations, as that makes it easier for me to develop/troubleshoot/test patches. You can ask others who use ZFS about kernel tuning. Beyond that, FreeBSD provides excellent documentation; you should look it up. For a start :-
https://wiki.freebsd.org/ZFSTuningGuide

9) Yes! The more the merrier :)
Since NAS4Free is based on FreeBSD 9.1, you can refer to the FreeBSD 9.1 hardware compatibility list. You may also refer to (and contribute to) NAS4Free's wiki :-
http://wiki.nas4free.org/doku.php?id=na ... s_hardware

However, if you use ESXi and/or want to pass through your controllers, it's best to check via the VMware forums / the internet whether anyone else has done it with your hardware and whether ESXi supports it (I was caught out on this).

Re: System Donation (usage for a couple weeks)

Posted: 28 Jun 2013 17:28
by armandh
I'm a KISS proponent: the simpler the better.
So, no virtual server.

I have had drive failures and near hardware failures.
The more copies, and the less stuff between me and the data, the better I sleep.

The drive that failed was 4 years into its 5-year warranty.
I built a new N4F box.
I transferred the data out of the degraded FN7 pool.
I later rebuilt with new drives and upgraded to N4F.

If you have constructed your storage planning for the worst,
your recoveries should be the easiest.

A large, complex system doing simple storage is a risk,
an unnecessary risk.
Copies in several smaller systems cut into that risk.

Re: System Donation (usage for a couple weeks)

Posted: 28 Jun 2013 18:45
by cozmok
Thanks for the responses, guys. I appreciate the input and suggestions. However... (haha) I am pretty set on using one ESXi server to host several VMs, one of which is NAS4Free. My scenario is (apparently) far from normal, so anyone reading this for advice will probably want to do it another way.

Currently I have two machines (one NAS, one ESXi), and I need to upgrade both (the NAS needs more space and ESXi needs more beef). I am selling them both whole to a coworker, so I cannot reuse any parts. Also, I need several spindles on the ESXi host to get the throughput some of the VMs need, so I need an 8-port RAID card that ESXi supports. Between RAID cards and drives, on top of two nearly identical systems, the costs were adding up...

Option 1 (normal N4F plus ESXi)
$5220

NAS Box (N4F using ZFS-RAIDz2)
$50 - Case
$220 - CPU (cheapest, new, intel, xeon)
$190 - Mobo (server class that supports the new intel xeons)
$140 - RAM (2x 8gb ecc)
$2720 - 8x 4tb drives (enterprise)
$3320 - Total

VM Host
$50 - Case
$260 - CPU (cheapest, new, intel, xeon, w/ hyperthreading)
$190 - Mobo (server class that supports the new intel xeons)
$280 - RAM (4x 8gb ecc)
$400 - RAID Card (cheapest 8 port that supports ESXi, dual-core RoC, cache, w/ bbu or cachevault)
$720 - 8x 500gb drives (enterprise)
$1900 - Total

Option 2 (what I am doing, probably)
$3900

$50 - Case
$260 - CPU (cheapest, new, intel, xeon, w/ hyperthreading)
$190 - Mobo (server class that supports the new intel xeons)
$280 - RAM (4x 8gb ecc)
$400 - RAID Card (cheapest 8 port that supports ESXi, dual-core RoC, cache, w/ bbu or cachevault)
$2720 - 8x 4tb drives (enterprise)
$3900 - Total

Option 1 has the NAS box configured the way everyone suggests. The disadvantage is that I need the ESXi host to have high disk throughput, forcing me to get a pretty nice RAID card even though I don't need the space (~2tb). Also, the NAS box doesn't really need much CPU (or RAM, if I don't use ZFS), so it could easily fit within the resources of the ESXi host. In my scenario, many of the parts overlap and could be combined, saving money, space, and power, at the cost of added complication and not using ZFS (but I like and am familiar with hardware RAID anyway).

In short, the big downsides of hardware RAID (vs ZFS) are that it's costly and that if the card fails you need another. Well, this plan (as shown above) isn't more costly, and we use similar cards at work, so I could borrow one to recover my data if needed. As for "complicating the setup": I use ESXi all the time at work and have been virtualizing N4F for a while now on several machines, using hardware RAID (not ZFS). That's the main reason I knew combining the resources (CPU/RAM/power/case) could work. My only real concern is the stability of the N4F config (me making the right choices in the setup).

So, for me and my quirky scenario, Option 2 just seems like a no-brainer. My posting was originally just for help on setting it up this way, though I am interested in the why or why not; in the end, I'm pretty set on this plan unless someone has a deal-breaker I'm not thinking of.

I honestly am interested in your view on this plan now that you know more of what I need. Thanks for the discussion so far.

Re: System Donation (usage for a couple weeks)

Posted: 01 Jul 2013 00:40
by RedAntz
It depends on how serious you are about your data. I'm not sure whether the latest RAID controllers protect you from bit rot, but I moved from hardware RAID (an HP P410) to ZFS because of bit rot :-
http://wiki.nas4free.org/doku.php?id=faq:0144
http://storagemojo.com/2007/09/19/cerns ... -research/
http://www.exlibrisgroup.com/files/Cust ... arke-1.ppt

However, since you're constrained by the number of hard disks you have on hand and don't have spares to build a ZFS pool, hardware RAID probably suits you best. Besides, when you mention high disk throughput, what sort of throughput are we talking about? Do you have ballpark numbers? Your current setup (Option 1?) is more likely capped by your GbE network than by hard disk throughput.

Either way, I still suggest that you keep the VM-host hard disks separate from the NAS hard disks, for optimal performance and easier performance troubleshooting.

Re: System Donation (usage for a couple weeks)

Posted: 01 Jul 2013 16:07
by cozmok
Bit rot does worry me, but I don't see a ZFS solution without spending a lot more (Option 1 above), right? That's the sole reason I'm using enterprise hard drives: to at least attempt some resistance.

The high throughput is for the other VMs on the ESXi server, as local storage for video work, not over the NICs. I formerly used SSDs, as I need something with 400+ MB/s read and write. My existing RAID6 of 8x 1.5tb drives manages 450-500 MB/s, so I'm hopeful. I don't need tons of space for these VMs, but more than I'd like to pay for in SSDs, and that keeps leading me back to splitting up that RAID6 and using it both for the NAS and as the datastore for those VMs.

Your "optimal performance" comment is precisely why I'm trying to get this all figured out in the next week or two, before I migrate. I think I can do it and have everything work nicely, but I need a proof of concept. I just got all the parts over the weekend, so I'll set it up now and see.

Any other thoughts or bit rot solutions?

Re: System Donation (usage for a couple weeks)

Posted: 08 Jul 2013 03:28
by RedAntz
For your video work, hardware RAID is probably the best fit. For top performance, what you need is direct-attached storage, not network-attached storage, so the OS used for video work can read/write the storage directly. With any NAS you will hit a network bottleneck, whether it's in the same physical box (virtualised) or not. And you should probably set up RAID 10 instead of RAID 6: RAID 6 has no read penalty, but writes pay for the overhead of the parity calculations.

Note that you still haven't specified how many hard disks / RAID controllers / what budget constraints you have (or maybe you did, but I am drowning in a sea of words :-p), so I'm going to take a stab here: use hardware RAID 6/10 for the VM host, and pass hard disks connected to the onboard SATA controller through to NAS4Free for backup and storage, all in one box :-

1. ESXi via USB
2. Hard disks on hardware RAID (the VM datastore, including VMs with virtual disks for video work)*
3. Hard disks passed through via the onboard SATA controller (NAS, for slower backup duty)

* I doubt you'll get a consistent 400+ MB/s from that when your VMs are busy reading and writing to the same RAID concurrently

This setup will save you some $, but it adds complexity and dependency. If funds permit and time is not a worry, it may work. Otherwise, your initial Option 1 is still the recommended option. What I'm suggesting may give you more headache than it's worth.

Re: System Donation (usage for a couple weeks)

Posted: 08 Jul 2013 22:36
by cozmok
Right, for my video VMs I am using direct-attached storage (via the external RAID6 array). I've done this before, and also with an SSD holding the datastore for those VMs. The NAS is entirely for other uses, and I completely understand the bottleneck of my GigE network from there out; that's not a problem. The challenge is combining the video VMs and the NAS VM in one physical machine.

I am thinking about doing what I'm guessing you're saying, sort of. You suggest using even more drives with another controller. But I might just use my small SSD as the primary datastore and not put too many "other video" VMs on this machine. That frees up the big HDDs (and controller) for direct pass-through to the NAS VM and lets ZFS handle them. This means some reconfiguration, with some other issues.

1) I have 8x 4tb disks in a nice 8-bay external enclosure, fed by my nice 8-port RAID card. ZFS RAIDZ2 (which I would use) seems to like 4, 6, or 10 disks. So I either abandon my last two bays, or use up my two spares to form a 10-disk array, which sounds awesome, except that in addition to eating my spares... I also couldn't use my RAID card or my enclosure...

Is an 8x 4tb RAIDZ2 really that bad? Performance-wise, it just has to saturate a gigabit connection. How does the disk count affect performance? Will it make I/O slower, or just require more CPU to maintain the same speeds?
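For context, the usual 4/6/10-disk preference for RAIDZ2 comes from keeping the number of data disks a power of two, so a default 128 KiB record divides evenly into whole sectors per disk (a sketch of that rule of thumb; the recordsize and 4 KiB sector values are common defaults, not settings from this thread):

```python
# Why 4-, 6-, and 10-disk RAIDZ2 vdevs are often recommended: with 2 parity
# disks they leave 2, 4, or 8 data disks (powers of two), so a 128 KiB
# record splits into whole 4 KiB sectors on each disk.
RECORD_KIB = 128   # default ZFS recordsize
SECTOR_KIB = 4     # ashift=12 (4K-sector) drives

def raidz2_alignment(total_disks):
    data_disks = total_disks - 2                 # RAIDZ2: two parity disks
    per_disk_kib = RECORD_KIB / data_disks
    aligned = per_disk_kib % SECTOR_KIB == 0
    return data_disks, per_disk_kib, aligned

for n in (4, 6, 8, 10):
    d, kib, ok = raidz2_alignment(n)
    print(f"{n} disks: {d} data, {kib:.2f} KiB/disk, aligned={ok}")
```

An 8-disk RAIDZ2 leaves 6 data disks (21.33 KiB per disk per record), which is not sector-aligned; that costs some space efficiency but is not fatal.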

After a quick look around, it seems my only 10-drive solution is to return my external gear and get a big honkin' case with 12 hot-swap bays and a 16-port HBA card.


2) Also, how much CPU can I expect this ZFS option to use while serving a solid gigabit stream of data? My CPU is a quad-core Xeon E3-1230V3. I was planning to give it 2 cores, but even using N4F without ZFS, just serving files off a hardware RAID6 uses half of each core solidly when pushing a fully saturated gigabit stream. What can I expect once the ZFS/RAIDZ2 overhead is added? This doesn't leave much for my other VMs...

Thoughts?

Re: System Donation (usage for a couple weeks)

Posted: 09 Jul 2013 13:56
by RedAntz
All my suggestions in the previous post were mainly based on the assumption that you require 400+ MB/s for video editing work. You didn't mention in your first post that you have an SSD to use.
cozmok wrote: I might just try it with my small SSD as the primary datastore and not put too many "other video" VMs on this machine. This way it frees up the big HDDs (and controller) for direct pass-through to the NAS VM, and let ZFS handle it. This means some reconfiguration, with some other issues.
Yes, this can work.

Any single modern hard disk can nearly saturate your gigabit ethernet, although you can potentially set up a 10GbE virtual NIC within the VM to lift that bottleneck.

sub.mesa did a benchmark a long time ago, but it still applies :-
http://hardforum.com/showpost.php?p=103 ... stcount=26

Since you already have all the hardware in place, you have to decide which way best suits you going forward.

I personally prefer and use 6-disk RAIDZ2. But bear in mind that ZFS performance scales with how many vdevs you have, not with the number of disks in one vdev.

NAS4Free is lightweight. My NAS4Free VM only uses around 12MHz to 300-400MHz most of the time. A single core is sufficient; it only uses both assigned cores during a ZFS scrub. With virtualisation, you can afford to over-provision vCPUs so a scrub can take advantage of the extra vCPUs when needed (during off-peak time).

Re: System Donation (usage for a couple weeks)

Posted: 09 Jul 2013 17:17
by cozmok
Yes, I have an SSD, but it's not large enough to run what I would like to run. That's why I stated the ~2tb requirement initially (which would mean 4x 512gb SSDs, WAY too much to pay right now). I'm conceding to running just a couple "other video" VMs on this SSD, in trade for bit-rot protection.

I don't need 10GbE drivers or anything like that. I think you're conflating my two separate needs:

1) Fast throughput (400+ MB/s) for the "other video" VMs, meaning the datastore those VMs are hosted on must be capable of that throughput (a fast RAID with more than 4 drives, or an SSD)

2) A big NAS as a VM, using 8x 4tb in a RAID6(-ish) configuration yielding ~24tb of storage for use over a gigabit network. It MUST be able to saturate a full gigabit (for backups and generic file serving via AFP & SMB)
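For what it's worth, the gigabit requirement in 2) translates into modest disk numbers (simple arithmetic; the 6% protocol-overhead allowance is a rough assumption, not a measurement):

```python
# What "saturating a gigabit link" means in disk terms.
line_rate_mbps = 1000                      # 1 Gbit/s
raw_mb_per_s = line_rate_mbps / 8          # 125 MB/s before overhead
overhead = 0.06                            # rough Ethernet/IP/TCP/SMB framing allowance
practical_mb_per_s = raw_mb_per_s * (1 - overhead)

print(f"raw: {raw_mb_per_s:.0f} MB/s, practical: ~{practical_mb_per_s:.0f} MB/s")
```

Sequential throughput of a single modern spinning disk (~100-150 MB/s) is already in this range, so an 8-disk array has plenty of headroom for the network side; the real question is CPU and protocol overhead, discussed below.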

My goal is to combine those two needs into one box, preferably using only those 8x 4tb drives. As a second (not as good) option, I could use an SSD for the VM datastore (fulfilling need #1) and throw the whole RAID controller (with the 8x 4tb drives) at the N4F VM, but then I'm back to my two questions:

1) I have 8 drives, yet 6 or 10 is recommended for a vdev. Will a single-vdev ZFS pool even be able to saturate the gig network? Official specs say so, but what about in reality? If I transfer a large file over AFP or SMB, will it copy at 100+ MB/s? If not, that's a deal-breaker for ZFS for me.

2) CPU usage of ZFS under N4F. Your 12MHz is almost certainly at idle. I'm interested in the usage under load (at full gigabit speeds). My current non-ZFS N4F VM uses half of each 3.3GHz core I gave it while transferring at gigabit speeds; it might even be using both cores fully (and displaying 50% due to hyperthreading). See attached pic. My worry is that ZFS can only push this higher, right? Can you check your CPU usage (via the N4F GUI or ESXi) while transferring a large file over a gig connection?

Re: System Donation (usage for a couple weeks)

Posted: 11 Jul 2013 04:00
by RedAntz
cozmok wrote: 1) I have 8 drives, yet 6 or 10 is recommended for a vdev. Will a single vdev ZFS even be able to saturate the gig network? Official specs say so, but what about in reality? If I transfer a large file over AFP or SMB, will it copy at 100+MB/s? If not, this is a deal-breaker for ZFS for me.
Yes.
I did a benchmark last night and created a new blog to host the findings (too lengthy to post in the forum):
http://nas4freeinternals.blogspot.com.a ... -cifs.html

In summary, this is my result:-

[image: benchmark results chart]
cozmok wrote: 2) CPU usage of this ZFS N4F. Your 12MHz is almost definitely while at idle. I'm interested in the usage while being used (at full gigabit speeds). With my non-ZFS N4F VM that I currently have, it uses half of each 3.3ghz core I gave it while transferring at gigabit speeds. It might even be using both cores fully (and displaying 50% due to hyperthreading). See attached pic. My worry is that with ZFS, this can only be higher, right? Can you check your CPU usage (either via the N4F gui or esxi) while transferring a large file over a gig connection?
See chart above.
What filesystem are you using on top of your hardware RAID 6?
You should look at the 'Performance' tab, not the 'Consumed Host CPU' field on the 'Summary' tab; the latter does not refresh in real time. If the usage holds up on the Performance tab too, I'd say your CPU usage is not normal.

Re: System Donation (usage for a couple weeks)

Posted: 11 Jul 2013 05:16
by cozmok
AWESOME, thanks for the work. This is exactly what I was wondering.

I think in my case the N4F WebGUI may be correct (showing two cores each at ~50%) and the ESXi GUI is just being weird. We can also probably assume that adding ZFS and RAIDZ2 will not grossly affect my CPU usage. Those assumptions, along with your findings, give me enough information to go ahead and give ZFS a shot, which means I have to return some stuff (just a couple of days before the return period ends). Now back to details...

I was using the default N4F UFS for my tests (on top of the hardware RAID6).

My new plan (starting from your template):


Hardware :-
1. Quad-core Xeon E3-1230V3 (3.3GHz), 2 cores going to the N4F VM
2. 32GB of RAM, 20GB going to the N4F VM (hopefully removing any need for an L2ARC)
3. 1x 256GB SSD for the ESXi datastore (not redundant, oh well)
4. 10x 4TB hard disks for NAS4Free via an Adaptec 71605H controller (16-port internal SATA HBA)
5. 1x 8GB USB flash drive for ESXi
6. Offsite N4F system with a 2x 3TB ZFS mirror (can only back up the "really" important stuff)

Software :-
1. ESXi 5.1 (booted from the USB flash drive)
2. VM datastore on the 256GB SSD (configured via ESXi)
3. DirectPath I/O of the 71605H controller to the NAS4Free VM (presenting the 10x 4TB hard disks to the NAS)

NAS Setup :-
1. ZFS using a single 10-drive RAIDZ2 vdev, single pool, single dataset, no separate ZIL device, L2ARC, or dedup
2. The NAS serves only SSH, AFP, SMB, FTP, TFTP, RSYNC (I will not use iSCSI; ESXi VMs will just use the SSD datastore)
3. No other services enabled on the NAS VM; I use other VMs for that
4. SMB for my media PC (XBMC), along with other VMs
5. AFP for the Macs in the house
6. RSYNC to the offsite N4F system

Advantages (blue = compared to hardware RAID6) :-
1. Power savings and less wear - I don't have to spin up my NAS hard disks all the time, since the VMs sit on a separate disk
2. No dependency - no waiting for NAS4Free to boot before other VMs can start. I don't test NAS4Free, but could, meh
3. Flexibility and security - my NAS serves one purpose only: being a NAS. I use other VMs to do anything else.
4. Cost savings - fewer physical boxes = lower hardware/electricity costs
5. Centralized administration - the only time I need access to the physical box is to install ESXi; everything else goes via IPMI / vSphere client
6. Protection from bit rot.

Disadvantages (blue = compared to hardware RAID6) :-
1. Complexity. A VM setup can be overkill for average users and can be intimidating. It also means you *can* potentially lose data by accident
2. Reliability. Some don't believe hypervisors can provide reliability as solid as bare metal. I do, but make your own judgement
3. The VM SSD can fail. But I don't worry much about losing my VMs: anything deemed important inside a VM is periodically backed up to the ZFS NAS, and a few hours of downtime is OK with me. If the SSD dies, I swap in a new one, copy back the VM templates backed up on the NAS, and turn them on again. (This can be improved, but I'm OK with it for now.) I use this exact method at work for our ESXi dev boxes and it works great.
4. Throughput? Probably lower, but as long as it can saturate a gigabit network, I'm OK with that. If the system can pump out higher numbers, I might set up a LAG to my switch and play around, but that's not required.
5. Familiarity. I'm counting on you guys to help me if things go south and I have to type zfs command lines... I'm OK with a Linux/FreeBSD CLI, but I REALLY don't understand the inner workings of ZFS, and it scares me a little. Vdevs, pools, datasets, oh my.
6. Gobbles up my RAM (and maybe more CPU), which limits the number of other VMs I can run on this machine. Oh well... I can build another ESXi machine if I need it later.



So, go ahead, shoot holes in this plan now.

Re: System Donation (usage for a couple weeks)

Posted: 11 Jul 2013 07:14
by RedAntz
cozmok wrote: Hardware :-
1. Quad-core Xeon E3-1230V3 (3.3ghz). 2 cores going to N4F VM
2. 32 GB of RAM, 20gb going to the N4F VM (hopefully avoiding the need for an L2ARC).
3. 1x 256GB SSD for ESXi datastore (not redundant, oh well)
4. 10x 4TB hard disks for NAS4Free via Adaptec 71605H controller (16-port internal SATA HBA)
5. 1x8GB USB Flash drive for ESXi
6. Offsite N4F system with 2x3TB ZFS mirror (can only backup "really" important stuff)
Can we swap hardware ? ;)
cozmok wrote: 3. DirectPath I/O the 71605H controller to NAS4Free VM (to present 10x4TB hardisk for NAS)
Make sure you test this out as ESXi can be picky.
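One quick sanity check once DirectPath is enabled: boot the guest and confirm the controller actually shows up on its PCI bus. A sketch, assuming a FreeBSD-based N4F guest (exact device names will vary by card):

```shell
# Inside the NAS4Free VM: list PCI devices and look for the passed-through HBA
pciconf -lv | grep -B3 -i adaptec

# If the driver attached, the disks behind it should show up here as da*/ada*
camcontrol devlist
```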

cozmok wrote: NAS Setup :-
4. ZFS using 10 drive RAIDz2 vdev, single pool, single dataset, no separate ZIL device, L2ARC, or Dedupe.
This is not a black/white suggestion, but I'll suggest smaller vdevs (say 6 hard disks per vdev), even though the recommendation is <=10 disks per vdev.
a) 1 vdev means write IOPS performance will be that of a single disk. More, smaller vdevs mean greater throughput. See https://blogs.oracle.com/relling/entry/ ... erformance
b) Cheaper to upgrade in future, e.g. if you choose to upgrade by replacing an existing vdev of 6x4TB with a larger one (say 6x8TB), compared to replacing 10x4TB with 10x8TB.
c) Resilver time will be very long because of (a)
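To put rough numbers on (a) and (b): raidz2 gives up two disks per vdev to parity, so two 6-disk vdevs land at the same usable space as one 10-disk vdev while doubling write IOPS -- the price is two extra drives. Back-of-envelope math, assuming 4TB disks and ignoring ZFS overhead:

```shell
# raidz2 usable capacity = (disks_per_vdev - 2) * disk_TB * vdev_count
single_vdev=$(( (10 - 2) * 4 * 1 ))   # one 10-disk raidz2: 8 data disks
dual_vdev=$((  ( 6 - 2) * 4 * 2 ))    # two 6-disk raidz2: 12 disks total, 4 data each
echo "single=${single_vdev}TB dual=${dual_vdev}TB"
```

Both layouts come out to 32TB usable; the 12-disk layout just buys you twice the write IOPS and cheaper per-vdev upgrades.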


cozmok wrote: 5. Familiarity. I'm counting on you guys to help me if things go south and I have to type zfs command lines..... I'm ok with a Linux/FreeBSD CLI, but I REALLY don't understand the inner workings of ZFS and it scares me a little. Vdevs, pools, datasets, oh my.
Looks like you find yourself some bedtime reading materials :lol:

Re: System Donation (usage for a couple weeks)

Posted: 11 Jul 2013 15:03
by RedAntz
cozmok wrote:6. Gobbles up my RAM (and maybe more CPU) which limits the number of other VMs I can run on that machine. oh well... I can build another esxi machine if I need it later.
By the way, you don't really need that much RAM if you do not use dedup. Looking at my box now, 9.81TB data of RAID-Z2 only uses 6.5GB RAM.
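If you want to see the equivalent number on your own box, most of ZFS's RAM appetite (with dedup off) is the ARC, and FreeBSD exposes its current size as a sysctl. A sketch, assuming a FreeBSD-based NAS4Free shell:

```shell
# Current ARC size in bytes -- this is the bulk of ZFS's RAM usage without dedup
sysctl kstat.zfs.misc.arcstats.size

# Friendlier summary, if the zfs-stats port happens to be installed
zfs-stats -A
```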

Re: System Donation (usage for a couple weeks)

Posted: 11 Jul 2013 15:50
by cozmok
Can we swap hardware ? ;)
No way! It took months to get her to approve this.... Luckily I spent quite a lot on my last server (before we were married), so I just petitioned for the same budget. And with the new changes (new case, SSD, different HBA card) I think I'm over budget now... uh oh
Make sure you test this out as ESXi can be picky.
Yeah, Adaptec and vmware both say it should pass-through fine. The RAID version says it'll work with FreeBSD, so I'd guess we'll be ok.
Stuff about two vdevs
Yeah, I know, but I really can't spend another $800 on two more drives, especially when the extra capacity would basically be wasted just to gain performance/flexibility. And it would use up every drive bay in my soon-to-be-new chassis (leaving no room for the SSD). Hopefully I should be able to serve gigabit throughput on one vdev, or at least close. Your test is at least one proof of concept. Regarding upgrades, I'd better not run out of ~32tb anytime soon, and if I do, it'll require way more hardware, so I'll just deal with that when it comes (in at least 15-25 years). Resilvering, yeah, it'll take a while. I can turn things off if/when/while it's going. Nothing on there is mission critical.
Looks like you find yourself some bedtime reading materials
I don't have much more free time. I can't start anything til my 2yr old goes to bed (at like 9:30), then I begin by pulling my hair out planning this build, then writing these ridiculously long posts on here, and then studying for the CCNA til I can't keep my eyes open.
ram
Yeah, we'll see. The whitepapers say that once you get up to this scale, you should figure 1GB of RAM per 1TB of storage, which would mean 32, soooooo, I can't do that. 20gb would leave me enough to put a couple VMs on there (probably not the video ones, ha... good thing I spent so much time trying to make that work...)
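If the 20gb allocation ends up feeling tight, the ARC ceiling can also be pinned down explicitly so ZFS doesn't crowd out everything else in the VM. A config sketch using the FreeBSD loader tunable (the 16G figure is just an example, not a recommendation):

```shell
# /boot/loader.conf inside the N4F VM -- cap the ARC below the VM's RAM allocation
vfs.zfs.arc_max="16G"
```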

Re: System Donation (usage for a couple weeks)

Posted: 25 Jul 2013 22:46
by cozmok
So, I got my new HBA, but nas4free hangs on boot when I set the HBA to pass-through. It works fine when I remove it, though. Seems like this is a common issue, and I tried a couple of fixes written up for LSI cards, but they didn't work. There are a couple more things I need to do (along with getting logging working in ESXi), but I was wondering if you had ideas, or things I could try. Thanks.

Re: System Donation (usage for a couple weeks)

Posted: 29 Jul 2013 00:29
by RedAntz
cozmok wrote:So, I got my new HBA, but nas4free hangs on boot when I set the HBA to pass-through. It works fine when I remove it, though. Seems like this is a common issue, and I tried a couple of fixes written up for LSI cards, but they didn't work. There are a couple more things I need to do (along with getting logging working in ESXi), but I was wondering if you had ideas, or things I could try. Thanks.
What is the error message ? At which point did it hang ?

Re: System Donation (usage for a couple weeks)

Posted: 02 Aug 2013 19:20
by cozmok
Well, it was right at the initial boot, before any OS feedback. I tried installing N4F natively and still had issues with the disks even showing up. I reached out to Adaptec and they came back saying FreeBSD was unsupported (even though the same model RAID version is....). So I gave up once more and bought card #3 (LSI 9201-16i) and it all works fine...

But I have another question in this post:

viewtopic.php?f=66&t=4738