Hi guys,
I am fairly new to the land of large storage volumes on a NAS platform, and new to ZFS itself.
What I would like to do - and sorry if this post is long - is have a duplicated, fast-recoverable, high-capacity, expandable (set of) filesystem(s).
I plan to do something like this;
Have a front-end with SSH-based access (perhaps I will settle on SFTP in a chroot'd environment initially). (This front-end may or may not be NAS4Free directly - yet to determine that.)
The NAS4Free portion will be 13 x 4TB disks and this is where the ZFS config query comes into play.
These disks will be 5 x 4TB on the onboard SATA3 controller (JBOD)
+ (most likely) 8 x 4TB on a M1015 / LSI9211-8i IT mode SAS/SATA3 (4 on each channel - JBOD)
The boot and ZIL etc will be on a 120GB SSD.
System will have 16GB DDR3 1600MHz RAM, probably with an i3-4130 CPU.
ZFS dedupe is not required. (Most data will likely be encrypted and compressed - e.g. AES-256 etc.)
The duplication/redundancy will be a second system with the same configuration, rsync'd on a regular basis, with configuration (users, permissions, custom config etc.) duplicated to enable fast changeover in case of outage/failure/maintenance on the primary server (hopefully as simple as changing the hostname/IP address to become active/primary, or even just pointing the front-end system at the secondary back-end).
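To make the idea concrete, the regular sync job could look something like the sketch below. Hostnames, dataset paths and snapshot names are placeholders of my own, and the zfs send variant is just an alternative worth considering, not something already decided:

```shell
# Hypothetical replication from the primary to the standby node.
# Hostnames (nas-standby) and paths (/mnt/tank/data) are placeholders.

# Option 1: rsync over SSH, preserving hardlinks, ACLs and xattrs:
rsync -aHAX --delete /mnt/tank/data/ backup@nas-standby:/mnt/tank/data/

# Option 2 (ZFS-native, usually faster for large datasets): incremental
# snapshot replication with zfs send/receive. "prev" is the last snapshot
# already present on the standby.
zfs snapshot tank/data@$(date +%Y%m%d)
zfs send -i tank/data@prev tank/data@$(date +%Y%m%d) | \
    ssh backup@nas-standby zfs receive -F tank/data
```

The zfs send route also carries dataset properties across, which rsync alone will not.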
So, if using a front-end system, I expect the ZFS volume(s) would be mounted via NFS (from the active NAS4Free server to the front-end server).
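The NFS mount from the front-end could be sketched like this (export path, network and hostnames are placeholders):

```shell
# On the NAS4Free back-end: export the dataset, e.g. via the WebGUI or an
# /etc/exports line such as (path and network are placeholders):
#   /mnt/tank/data -network 192.168.1.0/24

# On the front-end server, mount the export from the active node:
mount -t nfs nas-primary:/mnt/tank/data /mnt/storage

# Failover to the standby is then just a remount against the other host:
umount /mnt/storage
mount -t nfs nas-standby:/mnt/tank/data /mnt/storage
```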
I am considering whether ZFS RAID-Z (the ZFS equivalent of software RAID 5) is useful in a setup like this to reduce the chances of a whole-of-system outage - will this run into trouble with 13 disks spread across two different controllers?
Is 13 disks in a ZFS volume too many?
Are there better suggestions for having so much capacity with a high(ish) degree of availability?
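Something like the following is roughly what I have in mind (device names are placeholders, and splitting the 13 disks into two RAID-Z2 vdevs plus a spare, rather than one 13-wide vdev, is just one possible layout - ZFS does not care that the disks sit on two different controllers):

```shell
# Two 6-disk RAID-Z2 vdevs plus a hot spare, instead of a single
# 13-disk-wide vdev (wide single vdevs hurt resilver times and IOPS).
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11 \
    spare da12

# Optionally put the ZIL (SLOG) on a partition of the boot SSD:
zpool add tank log ada0p3
```

Capacity can later be grown by adding another whole vdev to the pool, which fits the "add storage nodes easily" goal.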
If I can get this right I'd like to have the option of adding new storage nodes easily (hence the architecture of probably having the front-end (auth etc) split from the back-end storage volumes).
Is NFS a bad idea in this sort of configuration?
From a big-picture perspective, am I possibly over-complicating N4F and trying to make it fit into a box best suited for something like OpenStack? (OpenStack requires more infrastructure and no doubt more work to set up, and I'm not sure all the storage services are available as in N4F...)
Thanks for reading and thanks for any advice; hopefully I can return the favour some day.
Cheers,
Linz
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!
Help me decide best approach / ZFS config
Linz (NewUser, 3 posts, joined 05 Nov 2013)
kenZ71 (Advanced User, 379 posts, joined 27 Jun 2012, Northeast, USA)
Re: Help me decide best approach / ZFS config
I suggest reading this: https://wiki.freebsd.org/HAST. Sounds like it will do what you're looking for.
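For reference, a HAST setup boils down to a small hast.conf shared by both nodes, roughly like this (resource name, hostnames, devices and addresses below are placeholders, not a tested config):

```
resource shared0 {
        on nas-primary {
                local /dev/da0
                remote 192.168.1.2
        }
        on nas-standby {
                local /dev/da0
                remote 192.168.1.1
        }
}
```

hastd then exposes the replicated device as /dev/hast/shared0 on whichever node is primary, and you build the ZFS pool on top of that.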
11.2-RELEASE-p3 | ZFS Mirror - 2 x 8TB WD Red | 28GB ECC Ram
HP ML10v2 x64-embedded on Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz
Extra memory so I can host a couple VMs
1) Unifi Controller on Ubuntu
2) Librenms on Ubuntu
Linz (NewUser, 3 posts, joined 05 Nov 2013)
Re: Help me decide best approach / ZFS config
Thanks for the HAST suggestion. I will look into that in more detail.
I believe there are a number of pros and cons to running clusters.
Some of the pros:
- Almost instant changeover to up-to-date data in a proven cluster scenario
- Automatic data syncing
- If configured correctly, very little work to switch back to the primary node after the secondary has taken over
Some of the cons:
- Usually requires a higher skill level to support, due to the complicated nature of clusters
- Split-brain scenarios can end in data loss if not identified and managed correctly
- Typically requires more infrastructure than non-clustered setups (although I haven't checked HAST's requirements beyond a VIP)
- Outages appear more likely in a cluster environment from an operational perspective (a contentious point perhaps, but plenty of discussion can be had on that topic; perhaps another time...)
So thanks for the suggestion, I'm still interested in other feedback also.
Cheers