Here is the setup: 1 stripe vdev consisting of 2 HDDs, and 1 mirror vdev consisting of 2 HDDs. Both vdevs are in the same pool.
A couple of questions. First, to 'grow' this pool, all I need to do is add a new vdev? When I add, say, another 2-disk mirror vdev to the existing pool, it is 'empty'. Will N4F continue to try and 'spread out' newly written data? Or will it try to equalize the % used of the new vdev by writing only to it for a while?
Next, if a physical disk in a (mirror) vdev fails, all I need to do to replace it is "offline" the disk, power down N4F, replace the disk, power up N4F, Import and Format the new (similar-sized) physical disk, then Add it to the damaged vdev?
Lastly (for now, heh), if I wanted to upgrade the physical disks in an existing (mirror) vdev due to a lack of free ports on the mobo, what should I do? "Offline" one disk as if it had failed and replace it, then after the resilver, do the same to the other disk? Or do I need to somehow copy the data off that specific vdev, delete it, then create a new vdev with the new disks and add it to the pool?
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!
Pool growth, upgrades, or replacements
c71clark (Starter; 15 posts; joined 07 Sep 2014)
kenZ71 (Advanced User; 379 posts; joined 27 Jun 2012; Northeast, USA)
Re: Pool growth, upgrades, or replacements
Not sure about the first question.
As for replacing a failed drive, check out this link.
https://www.google.com/url?sa=t&source= ... RAP78CWs0w
To expand an array, simply swap out one drive for a larger drive using the steps in the link above. After the rebuild is done, repeat with the other drive, and your pool will be larger. This is the process I followed when I outgrew my two 500 GB drives in a mirror: I simply swapped in a pair of 2 TB drives.
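For reference, the swap-and-resilver steps above map to these commands at the shell level. This is only a sketch: the pool name "tank" and the FreeBSD-style device names ada2/ada3 are placeholders for your own pool and disks.

```shell
# Take the old or failing disk out of service
# (pool name and device names are examples):
zpool offline tank ada2
# ...power down, swap the physical disk, power back up...

# Resilver onto the new disk:
zpool replace tank ada2 ada3
zpool status tank        # watch resilver progress

# When growing a mirror by swapping in larger disks, enable autoexpand
# first, then replace each disk in turn; the extra capacity becomes
# usable after the last resilver completes:
zpool set autoexpand=on tank
```

Without autoexpand=on, the pool stays at its old size even after both larger disks are in place.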
11.2-RELEASE-p3 | ZFS Mirror - 2 x 8TB WD Red | 28GB ECC Ram
HP ML10v2 x64-embedded on Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz
Extra memory so I can host a couple VMs
1) Unifi Controller on Ubuntu
2) Librenms on Ubuntu
b0ssman (Forum Moderator; 2438 posts; joined 14 Feb 2013; Munich, Germany)
Re: Pool growth, upgrades, or replacements
There is no way to influence the "spread".
Just to be sure: you are aware that if one vdev dies, the entire pool's data is gone?
I am saying that because you have a stripe in the pool. If either of those drives dies, your pool is gone.
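On the growth question: at the command line, adding the new mirror vdev is a single zpool add (a sketch; "tank" and the ada* device names are placeholders). ZFS does not rebalance existing data onto the new vdev. Instead, it weights new writes toward the vdev with the most free space, so the empty mirror receives a larger share of new writes until utilization roughly evens out, rather than being written to exclusively.

```shell
# Add a second two-disk mirror vdev to the existing pool
# (pool name and device names are examples):
zpool add tank mirror ada4 ada5

# Show per-vdev capacity and allocation, to watch the spread over time:
zpool list -v tank
```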
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
raulfg3 (Site Admin; 4865 posts; joined 22 Jun 2012; Madrid, ESPAÑA)
Re: Pool growth, upgrades, or replacements
c71clark wrote: Here is the setup: 1 stripe vdev consisting of 2 HDD. 1 mirror vdev consisting of 2 HDD. Both vdev in the same Pool.
It's possible to add striped disks to a pool of mirror vdevs, but it's not recommended, because if only one of those disks fails, you lose your entire pool.
c71clark wrote: Next, if a physical disk in a (mirror) vdev fails, all I need to do to replace it is "offline" the disk, power down N4F, replace the disk, power up N4F, Import and Format the new (similar sized) physical disk, then Add it to the damaged vdev?
Actually, because you have 2 striped disks added to your pool, if one of those striped disks fails, you lose all your data.
Please read this powerpoint to understand ZFS a bit better: http://forums.freenas.org/index.php?thr ... oobs.7775/
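A quick way to spot a non-redundant vdev like this is the pool layout that `zpool status` prints (illustrative; "tank" is a placeholder pool name):

```shell
# Top-level devices listed WITHOUT a mirror-N or raidz-N parent are
# single-disk stripe vdevs; losing any one of them loses the whole pool:
zpool status tank
```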
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
c71clark (Starter)
Re: Pool growth, upgrades, or replacements
I'm not trying to be pedantic here, just feeling around for the edges of the issue! Earlier I asked about how ZFS handles writing data to a pool, and the answer basically was that N4F tries to spread data evenly, but it's not guaranteed. So technically, would any data written to one of the mirror vdevs ONLY (i.e., ZFS doesn't write any of it to the stripe vdev) be recoverable if the stripe vdev fails?
The guide linked above says that if any vdev fails, all data in the pool is gone forever. I think I understand, but I want to be clear that this means an ENTIRE vdev has to fail for the pool to be lost. Specifically, for an entire mirror vdev to fail, ALL of the drives in that vdev would have to fail, correct? For a stripe vdev, only one drive has to fail, no matter how many drives are in the stripe. This points back to the question above.