This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

raidz2 and raidz3 disk layout

Forum rules
Set-Up Guide | FAQs | Forum Rules
coiner
NewUser
Posts: 5
Joined: 22 Jun 2015 03:41
Status: Offline

raidz2 and raidz3 disk layout

Post by coiner »

I know that the number of data disks should be a power of 2: totals of 4, 6, or 10 disks suit raidz2, and 7, 11, or 19 disks are good for raidz3.

I have one system that holds twelve 4TB disks, and I want a 32TB pool (~28TB usable). So I see three options with this system:
  1. 10-disk raidz2 with two hot spares.
  2. 11-disk raidz3 with one hot spare.
  3. Two 6-disk raidz2 vdevs.
What I know:
  - All three options offer a 32TB pool.
  - All three options offer perfectly good redundancy for the number of disks, though option 3 has the best active redundancy.
What I am wondering is: which will perform best? I will be in a position to test this out soon, but for now I'd like to decide theoretically.
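The capacities of the three options can be sanity-checked with a quick sketch (hypothetical numbers: 4TB drives, ignoring the ZFS metadata/slop overhead that eats the gap down to ~28TB usable):

```python
# Rough data-capacity check for the three 12-bay layouts.
# Assumes 4 TB drives; ignores ZFS metadata/slop overhead,
# so real usable space will be somewhat lower.

DRIVE_TB = 4

def raidz_capacity(disks, parity, vdevs=1):
    """Raw data capacity of `vdevs` raidz vdevs of `disks` drives each."""
    return vdevs * (disks - parity) * DRIVE_TB

opt1 = raidz_capacity(10, parity=2)            # 10-disk raidz2 (+2 spares)
opt2 = raidz_capacity(11, parity=3)            # 11-disk raidz3 (+1 spare)
opt3 = raidz_capacity(6, parity=2, vdevs=2)    # two 6-disk raidz2 vdevs

print(opt1, opt2, opt3)  # all three give 32 TB of raw data space
```

All three layouts also leave 8 data disks in total, which is why the raw capacities come out identical.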

In my mind, option 3 would perform the best because it is essentially a raid 6+0, so reads and writes would be striped across both vdevs.

So I am leaning toward option 3 because it has the best redundancy and - I think - the best performance. Am I correct here?

erico.bettoni
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by erico.bettoni »

Yeah, option 3 is the best choice, with great redundancy.

ChriZathens
Forum Moderator
Posts: 758
Joined: 23 Jun 2012 09:14
Location: Athens, Greece
Contact:
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by ChriZathens »

Yeap... I would go with 2xraidz2 vdevs, too...
My Nas
  1. Case: Fractal Design Define R2
  2. M/B: Supermicro x9scl-f
  3. CPU: Intel Celeron G1620
  4. RAM: 16GB DDR3 ECC (2 x Kingston KVR1333D3E9S/8G)
  5. PSU: Chieftec 850w 80+ modular
  6. Storage: 8x2TB HDDs in a RaidZ2 array ~ 10.1 TB usable disk space
  7. O/S: XigmaNAS 11.2.0.4.6625 -amd64 embedded
  8. Extra H/W: Dell Perc H310 SAS controller, crosflashed to LSI 9211-8i IT mode, 8GB Innodisk D150SV SATADOM for O/S

Backup Nas: U-NAS NSC-400, Gigabyte MB10-DS4 (4x4TB Seagate Exos disks in RaidZ configuration - 32GB RAM)

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by b0ssman »

according to http://www.servethehome.com/raid-calcul ... tdl-model/

with 12x 4TB drives, raidz3 across all of them is better than raid 6+0
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

coiner
NewUser
Posts: 5
Joined: 22 Jun 2015 03:41
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by coiner »

Using 12 drives in raidz3 doesn't follow the power-of-2 rule, though: there would be 9 data disks, so performance would take a big hit. So with raidz3 you want 11 drives plus a hot spare, not 12 drives.

I took some time to do some calculations. raidz3 always wins in terms of redundancy because you can always lose up to 3 drives. In raid 6+0, however, there are cases where losing 3 drives (all in the same vdev) loses your data. For this reason, raidz3 is always superior for redundancy.
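That difference can be made concrete with a small enumeration (a sketch, assuming exactly three simultaneous failures, equally likely on any drive):

```python
from itertools import combinations

# 12 drives: indices 0-5 in vdev A, 6-11 in vdev B (2 x 6-disk raidz2).
# raidz2 tolerates 2 failures per vdev; a third failure in the same
# vdev loses the pool.
def vdev(d):
    return d // 6

triples = list(combinations(range(12), 3))   # all ways 3 drives can fail
fatal = 0
for t in triples:
    per_vdev = [sum(1 for d in t if vdev(d) == v) for v in (0, 1)]
    if max(per_vdev) > 2:                    # 3 failures landed in one vdev
        fatal += 1

print(fatal, len(triples))  # 40 of 220 triples (~18%) lose the 2x raidz2 pool
# An 11-disk raidz3 survives any 3 failures, so it wins on pure redundancy.
```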

There should be quite a difference in performance between the 11-disk raidz3 and the 12-disk raid60. Overall there are more parity calculations in the raid60, but each individual vdev has fewer parity calculations than the single raidz3. Thus the striped raid60 should outperform raidz3 in all cases, perhaps even being 2x faster or more.

Assuming a single drive failure, resilver times should be faster for the raid60 since only half the dataset needs to be resilvered. If one drive fails in each raidz2 vdev (two total), the resilver times should be more or less the same as raidz3 within each vdev, but still half the total time.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by b0ssman »

In either configuration the limiting factor will be gigabit Ethernet.


coiner
NewUser
Posts: 5
Joined: 22 Jun 2015 03:41
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by coiner »

I will be using 10GbE, so network should not be a limiting factor. :)

erico.bettoni
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by erico.bettoni »

Even with 1Gb Ethernet I would still go for 2 x raidz2; otherwise scrub and replace times would be too long.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by b0ssman »

The replace time between 2x raidz2 and raidz3 should not differ by much, since all drives will be working in parallel.

coiner
NewUser
Posts: 5
Joined: 22 Jun 2015 03:41
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by coiner »

I disagree. The resilver time for a single drive failure should be less, up to 2x faster than raidz3, because half the pool is stored on six drives rather than the whole pool on all twelve. Thus only half the parity needs to be written and, in theory, it should take half the time to resilver.

Another way to look at it: only six drives would be involved in the resilver. Even though the data is striped, one vdev can resilver independently. It only looks at the stripes it is storing, not the entire pool; the second vdev would not even need to be read in order to resilver the first.

Scrub times should also benefit. Since the data is striped, read speed is doubled, so scrubs should take half the time as well. At first I thought scrubs wouldn't benefit much, but ZFS scrubs the pool, not the vdevs, so a scrub doesn't pay attention to the vdev layout at all, just the maximum read speed of the entire pool.

b0ssman
Forum Moderator
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by b0ssman »

You have a small error in your logic, assuming the data is evenly spread.

If your pool is half full in a raidz3, all drives are half used. In a 2x raidz2, if the pool is half full then both vdevs are half full, which also means each drive is half full. During a rebuild, that half-full drive needs to be rewritten.

Scrub should also be no different, because each drive needs to be read in full. Whether you are reading 2 vdevs or one, each drive has to do the same amount of work, and in both cases each drive works in parallel with the others.
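The per-drive argument above can be sketched numerically (hypothetical numbers: 4TB drives, a half-full pool, data evenly spread):

```python
# Per-drive work during resilver/scrub, comparing a 12-disk raidz3
# against 2 x 6-disk raidz2. Hypothetical half-full pool of 4 TB drives.
DRIVE_TB = 4
FILL = 0.5

# raidz3 across 12 drives: every drive holds FILL of its capacity.
per_drive_z3 = FILL * DRIVE_TB

# 2 x 6-disk raidz2: each vdev holds half the pool spread over 6 drives,
# so every drive is again FILL of its capacity.
per_drive_2xz2 = FILL * DRIVE_TB

# A resilver rewrites one drive's used space; a scrub reads every drive's
# used space. Per drive, the work is the same in both layouts, and in both
# layouts the drives work in parallel.
print(per_drive_z3, per_drive_2xz2)  # 2.0 2.0 (TB per drive)
```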

coiner
NewUser
Posts: 5
Joined: 22 Jun 2015 03:41
Status: Offline

Re: raidz2 and raidz3 disk layout

Post by coiner »

That is definitely sound logic. I suppose I need to think more in terms of a single drive rather than the whole array when it comes to resilvering and scrubbing. It is easier to think of those as limited by the performance of one drive and not really affected by the layout of the drives.

Scrubbing, though, seems more limited by the controller and CPU, since you can read from more than one drive at once.

