This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite or carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Performance of a ZFS Stripe

fallen2109
Starter
Starter
Posts: 51
Joined: 27 Oct 2014 22:38
Location: The Netherlands
Status: Offline

Performance of a ZFS Stripe

Post by fallen2109 »

Recently I bought 2 factory-new HGST 6 TB HDDs and made a ZFS stripe out of them. I keep my original data on a RAIDZ1 ZFS pool (3 x 6 TB WD Red). My plan is to use the ZFS stripe pool (2 x 6 TB HGST) as a backup of my main data (3 x 6 TB WD Red). Can I hear some advice on the following topics:

1) Is it advisable to use a ZFS stripe as a backup of one's data?
2) I expected the stripe to have greater performance than a single drive during rsync. Yesterday I started a copy of my data from the ZFS RAIDZ1 to the ZFS stripe. It took 13 hours for 4.4 TB of data, which is an average of 340 GB per hour, or less than 100 MB per second. The single drives perform at about 170-180 MB/s. When I do a simple test of the write speed with the dd command:

nas4free: ~ # dd if=/dev/zero of=/mnt/NASVAS1_BACKUP_VOL1/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 57.266968 secs (366206224 bytes/sec)

I am getting ~360 MB/s. Is there something I can do to improve the actual real-life performance of my ZFS stripe?

Thank you in advance for all your concerns and comments.
12.0.0.4 - Reticulus (revision 6928)
x64-full on Intel(R) Xeon(R) CPU L5640 @ 2.27GHz (Dual CPU)
Supermicro X8DT3, 48 GB Ram

ku-gew
Advanced User
Advanced User
Posts: 172
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Re: Performance of a ZFS Stripe

Post by ku-gew »

180 MB/s is the maximum sequential read/write speed of a single drive. During normal operation, with various files that are probably fragmented, 100 MB/s is good.
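(One caveat on the dd figure quoted above: /dev/zero produces perfectly compressible data, so if compression is enabled on the dataset, ZFS writes almost nothing to disk and dd over-reports. A minimal sketch of a write test with incompressible data; the destination path here is a placeholder, in practice point it at the pool, e.g. /mnt/NASVAS1_BACKUP_VOL1/test.dd:)

```shell
# /dev/zero compresses to nearly nothing under ZFS compression;
# random data gives a more honest write test.
# DEST is a placeholder path for illustration only.
DEST=${DEST:-/tmp/zfs_write_test.dd}
dd if=/dev/urandom of="$DEST" bs=1M count=100 2>/dev/null
wc -c < "$DEST"     # 100 MiB = 104857600 bytes written
rm -f "$DEST"
```

(Random data is slower to generate than zeros, so for large tests it can help to build the random file once and dd it repeatedly.)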

A ZFS stripe is not advisable for data since it is prone to total loss. If you use it as a backup it is still prone to total loss, which means:
- if this backup stripe dies, you will be without a backup until the first post-loss rsync completes. But you still have the RAIDZ1, so that is not a big issue.
- if your RAIDZ1 dies, you are left with only the stripe, which has no fault tolerance. If you do regular scrubs to ensure the data is good, it MAY be an acceptable risk.
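(For reference, a scrub and a health check are just the following; the pool name is an example taken from the dd command earlier in the thread:)

```shell
# Start a scrub of the backup pool (verifies all checksums on disk):
zpool scrub NASVAS1_BACKUP_VOL1
# Check scrub progress and any checksum errors found:
zpool status -v NASVAS1_BACKUP_VOL1
```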

In general, this is the issue with ZFS: expanding is nearly impossible, or extremely expensive. In your case you should have gone with RAIDZ2 from the beginning... but I understand that was not easily foreseeable. A second option is to buy a third 6 TB drive and use each of the three to mirror one disk of the RAIDZ1.

This article is very clear about this issue, and about why commercial NASes based on Linux mdadm are better from this point of view:
http://louwrentius.com/the-hidden-cost- ... e-nas.html
Many home NAS builders consider using ZFS for their file system. But there is a caveat with ZFS that people should be aware of.

Although ZFS is free software, implementing ZFS is not free. The key issue is that expanding capacity with ZFS is more expensive compared to legacy RAID solutions.

With ZFS, you either have to buy all storage you expect to need upfront, or you will be wasting a few hard drives on redundancy you don't need.

This fact is often overlooked, but it's very important when you are planning your build.

Other software RAID solutions like Linux mdadm let you grow an existing RAID array one disk at a time. This is also true for many hardware-based RAID solutions. This is ideal for home users because you can expand as you need.

ZFS does not allow this!
Worth reading.
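(To make the quoted point concrete: what ZFS did allow, at the time of this thread, was attaching a disk to widen a single-disk or mirror vdev, or adding a whole new vdev to a pool, but not growing a RAIDZ vdev by one disk. Pool and device names below are examples:)

```shell
# Allowed: attach a disk to a single-disk or mirror vdev,
# turning it into a (wider) mirror:
zpool attach tank da1 da4
# Allowed: add a whole new vdev (here a 2-disk mirror) to the pool:
zpool add tank mirror da5 da6
# NOT possible at the time of this thread: growing a raidz1 vdev
# from 3 disks to 4 by adding a single disk.
```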
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

fallen2109
Starter
Starter
Posts: 51
Joined: 27 Oct 2014 22:38
Location: The Netherlands
Status: Offline

Re: Performance of a ZFS Stripe

Post by fallen2109 »

ku-gew wrote:180 MB/s is the maximum read/write speed. [...] Worth reading.

Thank you for your reply! Indeed a very interesting article - I have read it twice :). Well, you are right - data preservation is something we need to pay for. I am thinking about adding a third drive to my backup (striped) ZFS pool and turning it into RAIDZ1, which would let it survive a single-drive loss.
In fact, I do at least one scrub weekly. All my drives are factory new (not that that guarantees anything :)), with 1000 or fewer load cycles and about 1200 working hours. I used to have 3 x 3 TB Barracudas in the RAIDZ1 pool, but I upgraded them (one by one, with the resilvering method mentioned in the article) to 3 x 6 TB WD Reds, which have not shown any errors since the day I started using them. The Seagate HDDs were cr*p and were getting full too.
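(A caveat on that plan, worth noting: ZFS cannot reshape an existing two-disk stripe into RAIDZ1 by adding a drive. The pool has to be destroyed and recreated as RAIDZ1, which is feasible here precisely because it only holds a backup copy. A sketch, with the pool name from the thread and hypothetical device names:)

```shell
# WARNING: destroys the pool and everything on it.
zpool destroy NASVAS1_BACKUP_VOL1
# Recreate it as a 3-disk RAIDZ1 (device names are examples):
zpool create NASVAS1_BACKUP_VOL1 raidz1 da1 da2 da3
# Then rsync the data back from the main RAIDZ1 pool.
```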

On the performance side, I got a huge improvement. I turned prefetch on and completely deleted all datasets, the (ZFS stripe) pool itself, and the underlying vdev. Then I recreated the same (striped) vdev (with NOP wrappers), the same pool, and all the datasets. I am not sure what actually helped, but now I have rsynced the complete 4.4 TB of data from my RAIDZ1 to my striped pool in 7 hours (an average of 600 GB/hour, 10 GB/minute, or 170 MB/second) - a very acceptable result indeed!
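(The before/after averages quoted in the thread can be sanity-checked with a quick calculation; awk is used here purely as a calculator, with decimal units, 1 TB = 1000 GB:)

```shell
# 4.4 TB copied in 13 hours (before) vs 7 hours (after the rebuild).
awk 'BEGIN {
  tb = 4.4                         # amount of data copied, in TB
  n = split("13 7", hours)         # copy durations in hours
  for (i = 1; i <= n; i++) {
    gbph = tb * 1000 / hours[i]    # GB per hour
    mbps = gbph * 1000 / 3600      # MB per second
    printf "%2d h: %.0f GB/hour, %.0f MB/s\n", hours[i], gbph, mbps
  }
}'
# prints: 13 h: 338 GB/hour, 94 MB/s
#          7 h: 629 GB/hour, 175 MB/s
```

(So the 7-hour run works out to about 175 MB/s, close to the sequential speed of a single drive, consistent with the quoted "very acceptable result".)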

Thank you one more time for the comments and the advice you have given me.
12.0.0.4 - Reticulus (revision 6928)
x64-full on Intel(R) Xeon(R) CPU L5640 @ 2.27GHz (Dual CPU)
Supermicro X8DT3, 48 GB Ram

