This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/carry over important posts from here into the new main forum!
It is not possible for us to export from here and import into the main forum!

Help me understand ZFS

Derek Zeanah
NewUser
Posts: 7
Joined: 24 Nov 2013 22:56
Status: Offline

Help me understand ZFS

Post by Derek Zeanah »

I've been a hardware RAID fan for a long time. I'm looking at using the L2ARC caching functions of ZFS in my environment, plus the HAST capabilities of NAS4Free, and now I'm troubled. I hope you can help me understand things better.

I'm running a bunch of VMs on an EqualLogic box now. It works, but I can see a point where I'll have outgrown it and I'd much rather move to something clustered and less expensive, and (hopefully) more performant. I think I can accomplish this, but I need a reality check.

To start, I have an R710 that's been running NAS4Free for a year or so. Now I'm thinking about reconfiguring it. My goal is to increase overall storage capacity while increasing the number of IOPS I can sustain versus my current 14-spindle RAID array.

What I think I want to do is this:

1) Install 4 2TB drives as JBOD so NAS4Free can see them.
2) Install a pair of 250G SSD drives as a mirror, so NAS4Free sees them as a single unit
3) Configure a ZFS volume using the 2TB drives for storage, and the mirror for both L2ARC cache and ZIL cache

That's pretty straightforward.
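
Roughly, I think that plan translates to something like this at the command line (device names are made up; as I understand it the SSDs would need to be partitioned, since ZFS treats the log and the cache as separate devices, and cache devices can't be mirrored):

# pool from the four 2TB drives (plain stripe here; see question 2 below)
zpool create tank da0 da1 da2 da3
# mirrored ZIL/SLOG on one partition of each SSD
zpool add tank log mirror ada4p1 ada5p1
# L2ARC on the remaining partitions (cache vdevs are striped, never mirrored)
zpool add tank cache ada4p2 ada5p2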

Now, the questions:

1) Am I configuring things properly?
2) Is it possible to do something like RAID10 in this situation, or am I stuck using 2 parity drives?
3) Is the performance in this situation likely to be reasonable? The last time I used software RAID was ~ 12 years ago and I became a hardware RAID advocate after my first drive failure...
4) What should I be aware of before I buy some drives to start testing with? I'm limited to 6 drive bays for now, and with the size of modern drives I think that's probably sufficient.

And here's the tough question:

5) Would anyone recommend I spend the money on a CacheCade RAID card and do it in hardware instead? Other than the money involved, why or why not?

Thanks.

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Help me understand ZFS

Post by raulfg3 »

Please read this PowerPoint to clarify some items: http://forums.freenas.org/threads/slide ... oobs.7775/
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

Derek Zeanah
NewUser
Posts: 7
Joined: 24 Nov 2013 22:56
Status: Offline

Re: Help me understand ZFS

Post by Derek Zeanah »

raulfg3 wrote: Please read this PowerPoint to clarify some items. http://forums.freenas.org/threads/slide ... oobs.7775/
Thanks, that updated my thinking slightly.

After reading the linked PowerPoint file, I'm inclined to forgo the SSD for caching and bump the RAM on the server to 128G instead. That would allow me to configure a 6-drive RAIDZ2 array of 2TB drives and get more longevity out of this server as a storage solution. Throughput isn't much of a concern for me, but stability and being able to keep up with high IOPS for database use and backup services *is* important, and I'm thinking a large amount of RAM along with big, slow, parity RAID would probably work just fine (until resilvering time, anyway).

Reading this thread (https://bugs.freenas.org/issues/1531) is worrying me though. It looks like excess RAM may lead to configurations where timeouts are likely to occur, at least a year ago in FreeNAS. Is that going to be a concern in recent NAS4Free versions? Would setting up a Zvol of 3 vdev mirrors (did I get the terminology right?) rather than using parity RAID alleviate this problem?
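
To make sure I'm picturing the two layouts correctly, I think the difference comes down to something like this (hypothetical device names):

# one 6-drive RAIDZ2 vdev: two drives' worth of parity, one vdev's worth of IOPS
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# versus a pool of three 2-way mirrors, the RAID10-like layout
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5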

My intended use is to host virtual machines for a XenServer cluster using iSCSI or (preferably) NFS. I've been doing testing using iSCSI and it runs fine with my current setup, but that's exporting a UFS drive that is defined in the hardware RAID, not using ZFS. (Sorry if I mixed terminology again.)

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Help me understand ZFS

Post by raulfg3 »

NAS4Free with iSCSI gives the best possible performance, so please test it.

You can see other hardware & performance info in this thread: http://forums.freenas.org/forums/performance.37/

There is no similar post for NAS4Free, but the numbers you see for FreeNAS should be better on NAS4Free (not always, but most of the time).


Some interesting posts:

http://forums.freenas.org/threads/some- ... nas.13633/
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

Derek Zeanah
NewUser
Posts: 7
Joined: 24 Nov 2013 22:56
Status: Offline

Re: Help me understand ZFS

Post by Derek Zeanah »

Wow. It's amazing how my estimate of what's appropriate in this role changes the more I read.

So, how's this for a reasonable guess at an NFS server for a virtual machine cluster:

1) More memory is better. 32GB minimum, 64GB might be more appropriate. With enough RAM, a dedicated SSD for L2ARC is not necessary.
2) Lots of memory causes problems without a really fast SLOG, and that SLOG needs to be mirrored.
3) So, we're talking about a system with two SSD drives in a mirrored VDEV to use as a SLOG.
4) That needs to be connected to a non-RAID controller so the system can poll SMART status and send alerts as appropriate.
5) From there I can happily add 2 drives at a time, creating each pair as a vdev and expanding the existing ZVol to incorporate the new drives. This way I'm getting something approximating RAID-10, and we don't need to deal with parity calculations when things break.
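
If I'm turning that list into commands correctly, it would look roughly like this (hypothetical device names):

# items 2-3: a mirrored pair of SSDs as the SLOG
zpool add tank log mirror ada6 ada7
# item 5: grow the pool two drives at a time; ZFS stripes writes across all mirror vdevs
zpool add tank mirror da6 da7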

What else am I missing? At this point I'm already thinking about new hardware with more hot-swap drive space available.

Is it possible to designate a hot spare, so ZFS will automatically replace a failed drive in a VDEV without user intervention?
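
From what I've read so far, designating the spare itself would be something like this (hypothetical device name; whether it activates without any intervention seems to depend on the platform's fault-management support):

# mark da8 as a hot spare available to the pool
zpool add tank spare da8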

siftu
Moderator
Posts: 71
Joined: 17 Oct 2012 06:36
Status: Offline

Re: Help me understand ZFS

Post by siftu »

If you want IOPS, don't even consider RAIDZx. Use mirrors.

Start by reading:

http://nex7.blogspot.com/2013/03/readme1st.html
System specs: NAS4Free amd64-embedded on ASUSTeK M5A78L-M LX PLUS - AMD Phenom(tm) II X3 720 Processor - 8GB ECC RAM, Storage: 2x ZFS mirrors with 4x Western Digital Green (WDC WD10EADS)
My NAS4Free related blog - http://n4f.siftusystems.com/
