
Definition of a new setup: how much space? which RAID?

ku-gew
Advanced User
Advanced User
Posts: 173
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Definition of a new setup: how much space? which RAID?

#1

Post by ku-gew »

Hello,
I'm building a server (not only NAS) and I have to decide how much space to prepare and which configuration to use.
First of all: I have 1.5 TB of data, and a few months ago I planned to build a RAID1 mirror from two external 3 TB drives. I thought it would give me enough space for a long time.
Now I'm about to receive another 1.3-1.5 TB in one go, which means I need to expand. I don't want too many external disks, so I bought a server that can host up to 7 internal drives (6 + boot, though I also have an internal USB port and an internal SD slot).

My idea would be striped mirrors (RAID10): I'd only need to buy two more 3 TB drives now, and together with my existing pair I'd have 6 TB total. Thanks to ZFS, I could expand again in the future by adding another pair of disks to the pool. That's the advantage of mirrors with ZFS.
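For anyone following along, the striped-mirror layout and the later expansion look roughly like this. A sketch only: the pool name and the device names (ada0..ada5) are placeholders, not my actual disks.

```shell
# Create a pool of two mirrored pairs (striped mirrors, "RAID10"), 6 TB usable:
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# Later expansion: add another mirrored pair; ZFS stripes across all vdevs.
zpool add tank mirror ada4 ada5
```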
However, I calculated (from the official manufacturer specifications and simple statistics) the probability of an unrecoverable read error during a rebuild of a 3 TB RAID1: it is about 20%! OK, it's not catastrophic and I would keep backups of the most important data, but maybe I should already consider RAIDZ2? I would gain reliability but lose performance: the maximum random IOPS for small files doesn't improve AT ALL with RAIDZx, it's like having a single disk. And small-file performance is exactly what makes a server feel slow, since the throughput collapses (random reads: 0.3 MB/s... next to nothing).
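For reference, here is how I arrived at the ~20% figure. A minimal sketch, assuming the common consumer URE rating of one unrecoverable read error per 1e14 bits and a full 3 TB read from the surviving disk during the rebuild (both are assumptions, check your drive's datasheet):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading the entire surviving disk during a RAID1 rebuild.
# Assumption: URE rate of 1 error per 1e14 bits read (typical consumer spec).

def rebuild_ure_probability(capacity_bytes, ure_per_bit=1e-14):
    bits = capacity_bytes * 8
    # P(at least one error) = 1 - P(no error on any single bit)^bits
    return 1 - (1 - ure_per_bit) ** bits

p = rebuild_ure_probability(3e12)  # 3 TB disk
print(f"{p:.0%}")  # roughly 21%
```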

Do you have suggestions? I cannot change the RAID layout later (I wouldn't be able to move the data anywhere), so I have to decide now. If I go with RAIDZ2 I would need six drives (more expensive), which would give me 12 TB of space, definitely more than I could possibly need in the foreseeable future.

If I decide on mirrors, I thought about pairing the two old WD Green drives with the new WD Red drives so that each mirror vdev contains one of each: different models, different ages (3 months apart), meaning a much lower probability of both failing mechanically (nothing to do with read errors) at the same time.

Thanks in advance for any suggestion you will be able to provide.
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

shakky4711
Advanced User
Advanced User
Posts: 273
Joined: 25 Jun 2012 08:27
Status: Offline

Re: Definition of a new setup: how much space? which RAID?

#2

Post by shakky4711 »

Hi,
I thought, it would provide me enough space for a long time.
Forget about it, free space fills itself :lol:
My idea would be to use RAID1 striped, so that I can buy only two 3 TB drives now and have 6 TB total. Thanks to ZFS, I would be able to expand again in the future, using another couple of HDs. That's the advantage of RAID1 with ZFS.
Mixing mirrors and stripes the old-fashioned way is obsolete RAID behaviour and the cause of many problems; read the software RAID and data recovery section of this forum. Keep it simple: put mirror vdevs or RAID-Z1 vdevs into your pool.

So when you talk about 6 drives with ZFS RAID-Z2, I would instead recommend two RAID-Z1 vdevs in your pool. When one disk fails, only that vdev has to resync, so only 3 drives are stressed during the rebuild.
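For six disks, that layout would be created in one step. A sketch only, the disk names are placeholders for your six 3 TB drives:

```shell
# One pool built from two RAID-Z1 vdevs of three disks each.
# A failed disk only triggers a resilver within its own 3-disk vdev.
zpool create tank raidz1 ada0 ada1 ada2 raidz1 ada3 ada4 ada5
```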
the probability of an unrecoverable error during rebuild and for a 3 TB RAID1: it is 20%!!.
While a conventional RAID array simply syncs bit by bit, ZFS is much more intelligent: only data that is actually in use gets resilvered. So with 3 TB of data on the pool, the sync is done in half the time compared to a standard RAID rebuild. One of the main reasons RAID arrays die during a rebuild is heat and vibration killing the disk drives, so it is generally recommended to spin up throttled fans during a sync.
I thought about putting the two old WD Green drives together with the new WD Red drives
These Green drives and the WD Reds are not comparable: the former are the cheapest consumer drives, known for mediocre reliability, while the WD Reds are designed to run 24/7 in RAID arrays. So my personal recommendation would be to create a separate pool, a ZFS mirror of the two Green drives, for less important data, and a second pool from the much more reliable WD Reds for the important stuff.
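That two-pool split would look something like this. A sketch only: the device names and pool names are placeholders, pick whatever fits your setup:

```shell
# Mirror of the two older WD Greens for less important / scratch data:
zpool create scratch mirror ada0 ada1

# Mirror of the two WD Reds for the important data:
zpool create vault mirror ada2 ada3
```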

Shakky
