
RFC on system re-architecture

MarSOnEarth
NewUser
Posts: 5
Joined: 15 Nov 2014 22:30
Status: Offline

RFC on system re-architecture

#1

Post by MarSOnEarth »

Currently I have two z1 3-drive pools: one consists of 2TB HDs, the other of 3TB HDs. One pool is for backups of all the other systems in our house, the other for archival storage of personal files.
Question: would there be any advantage in creating a single z2 pool of all those HDs and using datasets for the different uses?
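For concreteness, the layout I have in mind would be something like this (pool and dataset names are made up, and the ada device numbering is assumed):

Code: Select all

# One raidz2 vdev across all six disks; raidz sizes every member
# to the smallest disk, so the 3TB drives would only contribute
# 2TB each: (6 - 2) * 2T = ~8T usable
zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6

# One dataset per use, each with its own properties and snapshots
zfs create tank/backups
zfs create tank/archive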

Relatedly, the system has 8GB of memory and XigmaNAS (XN) is running embedded. In addition to the two pools, I also have a single 1TB disk that's used (hardly ever) as swap.
Question: would there be an advantage to installing XN on that disk?

Thanks
Case: Node-804 Antec NSK2400 + Mediasonic 4HD ProBox
MB: Asus-m3a78-em w/PMP eSATA + 8GB DDR2-800 CRC memory
CPU: AMD Athlon(tm) II X4 610e Processor
Controllers: 2 port ASMedia ASM1062 + 4 port Marvell 88SE9215 SATA cards
ZFS: 2*2TB + 2*3TB mirrors 3*2TB RaidZ + 3*3TB RaidZ

MarSOnEarth
NewUser
Posts: 5
Joined: 15 Nov 2014 22:30
Status: Offline

Re: RFC on system re-architecture

#2

Post by MarSOnEarth »

No one commented on my request, maybe because there's already plenty of information about ZFS in general, and on this very issue in particular. So, lots of reading later, it was this article by Jim Salter that convinced me of the advantages of using mirrored vdevs, and then guided me through my undertaking.

A little background first: after a catastrophic failure of a 2nd-gen, 4-HD Drobo (RAID-whatever, which lost its "brains" (the indexes) and with them access to the corresponding data), I built the initial ZFS (NAS4Free) storage appliance with a single RaidZ pool of three 1TB drives. Six years later, my XigmaNAS consists of two RaidZ pools (3 * 2TB and 3 * 3TB HDs) with anywhere from two to nearly four years on their HDs.

Expecting that the time is coming when those HDs will start failing, I bought three 4TB drives as replacements (my planned strategy for dealing with failure was a common one: replace a failed HD with the current best-deal larger drive), but then I looked at how I use the space I already have:

Code: Select all

            capacity
pool        alloc   free
----------  -----  -----
Backups     2.41T  5.72T
Store       2.05T  3.38T
Lots of free space! Or, to look at it from the other side: I use little of the space there is. Long story short, this realization led me to the following back-of-the-napkin consideration:

Code: Select all

Original RaidZ configuration                       Totals
Two RaidZ1 vdevs of 3 HDs each:
In use: 6 HDs -> 3 * 2T + 3 * 3T       10T usable (15T raw)
Spare:  3 HDs -> 3 * 4T                          12T (raw)

Actual RaidZ capacity:                        ~9.6T
    currently used:                           ~3.4T
    potential need for:                       ~5.0T

Possible mirrored configuration:
To use: 4 HDs -> 2 * 2T + 2 * 3T        5T usable (10T raw)
Spare:  5 HDs -> 1 * 2T + 1 * 3T + 3 * 4T        17T (raw)
   mirror total capacity:                     ~4.8T
After cleaning out some debris, I convinced myself I would have enough space with just two mirrors (and, if not, enough spares to expand), so I took the plunge. Using a mirror of two of the 4TB HDs as a staging pool, I siphoned off all my data from both pools, created a BIG pool of two mirrored vdevs, and moved all the data to the new pool. Easy-peasy in principle, but it took me three tries and a week, because this was more than just shoveling data: I also re-thought how I structured and configured my dataset hierarchies, and that tripped me up on copying. I ended up with `rsync` as the best tool not only to copy but also to verify the integrity of the copied data.
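Roughly, the shape of it (the staging pool name, its device names, and the /mnt paths are from memory or made up, so treat this as a sketch rather than a transcript; the ada2/ada3/ada6/ada7 pairs are the old 2TB and 3TB drives, as in the listings below):

Code: Select all

# Staging pool: a mirror of two of the new 4TB drives
zpool create scratch mirror ada8 ada9

# Copy off both old pools, preserving hardlinks, ACLs and xattrs
rsync -aHAX /mnt/Backups/ /mnt/scratch/Backups/
rsync -aHAX /mnt/Store/   /mnt/scratch/Store/

# Verify: a dry run with --checksum re-reads both sides and
# itemizes any file whose contents differ
rsync -aHAXn --checksum --itemize-changes /mnt/Backups/ /mnt/scratch/Backups/

# Old pools out, one BIG pool of two mirrored vdevs in
zpool destroy Backups
zpool destroy Store
zpool create BIG mirror ada2 ada3 mirror ada6 ada7

# Same copy-and-verify dance back onto BIG
rsync -aHAX /mnt/scratch/ /mnt/BIG/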

BIG pool right after creation:

Code: Select all

            capacity
pool        alloc   free
----------  -----  -----
BIG         1.77M  4.53T
  mirror     800K  1.81T
    ada2        -      -
    ada3        -      -
  mirror    1012K  2.72T
    ada6        -      -
    ada7        -      -
...and after having been seeded with data:

Code: Select all

            capacity
pool        alloc   free
----------  -----  -----
BIG         2.86T  1.67T
  mirror    1.17T   657G
    ada2        -      -
    ada3        -      -
  mirror    1.69T  1.02T
    ada6        -      -
    ada7        -      -
Looking at the end result, one might ask, "What's with all that free space still?", and the answer is: compression! It turns out LZ4 compression does a pretty good job even on data that is already compressed (and I have lots of such), and I didn't have it turned on before.
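For anyone following along at home, the knobs involved are just these (a sketch against my pool name; note that compression only applies to blocks written after it's enabled, which is why setting it before the big copy mattered):

Code: Select all

# LZ4 on the pool's root dataset is inherited by every dataset below it
zfs set compression=lz4 BIG

# See what it actually buys you
zfs get compression,compressratio BIG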

So, thank you ZFS, thank you Jim for your article, and thank you XigmaNAS team for all your work. Couldn't have done it without you.

Brahiewahiewa
Starter
Posts: 38
Joined: 01 Aug 2012 15:54
Status: Offline

Re: RFC on system re-architecture

#3

Post by Brahiewahiewa »

I hate to disappoint you, but like any other file system, ZFS gets less happy as utilization rises.
The rule of thumb is 50% for heavily used systems and 66% for normal usage.
On a home system, you might get away with 75%.
Currently you're just below 66%; that figure is easy to keep an eye on, as shown below.
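For example, plain zpool list shows it; no extra flags needed:

Code: Select all

# The CAP column is the percentage of pool capacity in use
zpool list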
On the bright side: nowadays the sweet spot for hard drives is at the 8TB models, something you might want to take into consideration when planning replacements.

MarSOnEarth
NewUser
Posts: 5
Joined: 15 Nov 2014 22:30
Status: Offline

Re: RFC on system re-architecture

#4

Post by MarSOnEarth »

Brahiewahiewa wrote:
03 Nov 2020 03:43
I hate to disappoint you, but like any other file system, ZFS gets less happy as utilization rises.
The rule of thumb is 50% for heavily used systems and 66% for normal usage.
On a home system, you might get away with 75%.
Currently you're just below 66%.
True, but I project the utilization will stay between those last two numbers.

Mine is a low-end home system where performance is secondary to resiliency (and ease of maintenance). BUT (and this I didn't expect), with the two RaidZ pools the CPU hardly ever exceeded 25% utilization and network throughput mostly hovered around 40MB/s. After the re-organization, CPU utilization is hitting 70+% and network throughput is maxing out. I'm also seeing IOPS upward of 3K/s, where before they were in the low hundreds. I observed all this during a Windows backup (I'm using Veeam Agent), so it's all new writes (and yes, the pool is not fragmented; but still, these are huge increases).
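For the curious, I was watching the pool with something like the following while the backup ran; the attached screenshot shows the graphs.

Code: Select all

# Per-vdev IOPS and bandwidth, refreshed every 5 seconds
zpool iostat -v BIG 5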
[Attachment: BIG_IO.png]
Brahiewahiewa wrote:
03 Nov 2020 03:43
On the bright side: nowadays the sweet spot for hard drives is at the 8TB models, something you might want to take into consideration when planning replacements.
Well, I got lucky with a great deal on three new WD40EZRZ HDs at $55 each, and, as I mentioned in my write-up, I still have one 2TB and one 3TB spare for the mirrors. Yes, those older drives may have a limited number of hours left on them (no errors so far), but that's where the new 4TB HDs come in (while I look for a good deal on the next set of stand-by HDs). It's all looking good from my point of view.
