
[SOLVED] How to replace mirror disk on RootOnZFS

Crunk_Bass
NewUser
Posts: 3
Joined: 03 Sep 2014 03:45

[SOLVED] How to replace mirror disk on RootOnZFS

#1

Post by Crunk_Bass » 26 Feb 2017 18:00

Yesterday I built a new NAS Server and installed NAS4Free on a ZFS Mirror.
When I disconnected one disk for testing, the system did not boot.
So I decided to reinstall the system, as it was late and maybe I had done something wrong.

Installation looks good so far...
Image

NAS4Free installed on Mirror...
Image

After rebooting, I checked zpool status...
It looks like the installer created a stripe instead of a mirror.
Image

EDIT: I used NAS4Free-x64-LiveCD-11.0.0.4.3948.iso for installation.

Any ideas how to fix this?
Last edited by Crunk_Bass on 01 Mar 2017 22:19, edited 1 time in total.

JoseMR
Hardware & Software Guru
Posts: 1150
Joined: 16 Apr 2014 04:15
Location: PR

Re: RootOnZFS Mirror Installation creates Stripe?

#2

Post by JoseMR » 28 Feb 2017 18:26

Crunk_Bass wrote:
26 Feb 2017 18:00
Yesterday I built a new NAS Server and installed NAS4Free on a ZFS Mirror.
When I disconnected one disk for testing, the system did not boot.
So I decided to reinstall the system, as it was late and maybe I had done something wrong.

...

EDIT: I used NAS4Free-x64-LiveCD-11.0.0.4.3948.iso for installation.
Any ideas how to fix this?
Hello, thank you for reporting this bug. I can't believe I overlooked such a silly mistake :oops:. I will add the required option today and send the fix to zoon ASAP; it will be available in the next release. I'm very sorry for this inconvenience :roll:.

In the meantime, while booted from the Live Media, use the command below to add the missing option before reinstalling:

Code: Select all

 ee /etc/install/zfsinstall.sh
Edit line 290 and add mirror between ${ZROOT} and the devices, as shown in the picture below:
zmirror_fix.PNG
After editing, press Esc to leave the ee editor and save the changes when prompted.
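For illustration, the fix amounts to a one-word change on that line. This is only a sketch: `${DEVICE1}` and `${DEVICE2}` are placeholder names, and the actual variable holding the device list in zfsinstall.sh may differ.

```shell
# Before: each device becomes its own top-level vdev, i.e. a stripe
zpool create ${ZROOT} ${DEVICE1} ${DEVICE2}

# After: the two devices form a mirrored top-level vdev
zpool create ${ZROOT} mirror ${DEVICE1} ${DEVICE2}
```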

EDIT: Proper zroot mirror creation has been fixed as of revision 3957, and a new release is available at SourceForge.
NOTE: A reinstall is required to create the new zroot mirror pool. Please save config.xml and any custom hand-edited scripts; sorry for the inconvenience.


Regards
System: FreeBSD 12 RootOnZFS, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ
Boot Environments Intro

Crunk_Bass
NewUser
Posts: 3
Joined: 03 Sep 2014 03:45

Re: RootOnZFS Mirror Installation creates Stripe?

#3

Post by Crunk_Bass » 01 Mar 2017 22:18

Thank you for the quick fix. Works like a charm.
Image

Hint: When installing, you'll want to make the partition a couple of hundred MB smaller than the full drive.
I tested with an Intenso 16GB stick that was ~500MB smaller than the SanDisk sticks I used for installing.

I tried unplugging one drive (in my case a USB stick) and everything worked as expected.
Resilvering to a completely new drive, and booting with only the new one connected, also worked once I figured out how to do it.
Because of the boot partition, it is not as simple as just attaching the new drive to the pool.

In case anyone is wondering how to replace a drive, here are the notes I took when testing.

First, you want to check the zpool status.

Code: Select all

zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 914K in 0h0m with 0 errors on Wed Mar  1 21:48:19 2017
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            gpt/sysdisk1         ONLINE       0     0     0
            1798456474314614830  REMOVED      0     0     0  was /dev/gpt/sysdisk0

errors: No known data errors

Looks like sysdisk0 failed. Let's detach it.

Code: Select all

zpool detach zroot /dev/gpt/sysdisk0

Next, you need to create the partition table on the new drive.
To do this, first find out which partitions are on the working drive and what their labels are.

Code: Select all

gpart show
=>      40  31260592  da1  GPT  (15G)
        40      1024    1  freebsd-boot  (512K)
      1064      7128       - free -  (3.5M)
      8192  29999104    2  freebsd-zfs  (14G)
  30007296   1253336       - free -  (612M)
  
gpart show -l
=>      40  31260592  da1  GPT  (15G)
        40      1024    1  sysboot1  (512K)
      1064      7128       - free -  (3.5M)
      8192  29999104    2  sysdisk1  (14G)
  30007296   1253336       - free -  (612M)

Now we can create the partition table and write the bootcode to the new drive. Note that the labels have to be changed accordingly.

Code: Select all

sysctl kern.geom.debugflags=0x10
gpart create -s GPT da0
gpart add -b 40 -s 512K -l sysboot0 -t freebsd-boot da0
gpart add -b 8192 -s 29999104 -l sysdisk0 -t freebsd-zfs da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
sysctl kern.geom.debugflags=0x00

Finally, attach the sysdisk partition of the new drive to your pool.

Code: Select all

zpool attach zroot /dev/gpt/sysdisk1 /dev/gpt/sysdisk0
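The attach kicks off a resilver in the background; it's worth confirming it completes before testing boot (same command as in the notes above):

```shell
# Watch the rebuild; the new member shows ONLINE and the scan line
# reports "resilvered ... with 0 errors" when the mirror is healthy again
zpool status zroot
```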

JoseMR
Hardware & Software Guru
Posts: 1150
Joined: 16 Apr 2014 04:15
Location: PR

Re: [SOLVED] RootOnZFS Mirror Installation creates Stripe?

#4

Post by JoseMR » 01 Mar 2017 22:41

Hi Crunk_Bass, thank you so much for taking the time to explain the failed-disk removal and replacement in detail. :)

I am currently planning to create a small script, installed as a command (/sbin), so administrators can easily replace a failed drive in a valid NAS4Free zroot mirror installation.

Regards
System: FreeBSD 12 RootOnZFS, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ
Boot Environments Intro

Crunk_Bass
NewUser
Posts: 3
Joined: 03 Sep 2014 03:45

Re: [SOLVED] RootOnZFS Mirror Installation creates Stripe?

#5

Post by Crunk_Bass » 01 Mar 2017 23:14

I figured I had already done the work, so I might as well share it.

I'm looking forward to your script. I will definitely check it out when it is ready for testing.

raulfg3
Site Admin
Posts: 4917
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)

Re: [SOLVED] How to replace mirror disk on RootOnZFS

#6

Post by raulfg3 » 02 Mar 2017 07:29

Really interesting post. Moved to ZFS, and made sticky for a time.
12.0.0.4 (revision 6766)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable

Wiki
Last changes

JoseMR
Hardware & Software Guru
Posts: 1150
Joined: 16 Apr 2014 04:15
Location: PR

Re: [SOLVED] RootOnZFS Mirror Installation creates Stripe?

#7

Post by JoseMR » 03 Mar 2017 08:03

Crunk_Bass wrote:
01 Mar 2017 23:14
I'm looking forward to your script. I will definitely check it out when it is ready for testing.

Script HERE
Sample Usage

Hi, I just made a quick-and-dirty (experimental, for testing) script for easy replacement of a missing/unavailable device in a zroot mirror pool. The script reads the necessary information from the surviving zroot disk to build the exact same partition scheme on the specified device, then tries to attach the required device/partition to the zroot pool, and to the swap mirror as well.

However, there are a few limitations/prerequisites I want to note:
1) The script looks for a missing/dead (UNAVAIL) zroot device before continuing.
2) A replacement device of the same or greater size is mandatory, regardless of any manually customized root partition.
3) For now, the script will not set debug flags or wipe any metadata on the specified device; the administrator should provide a fully clean device before proceeding with the zroot rebuild.

Remember that this simple script still needs many optimizations, but I will improve it after finishing some pending work on the root-on-zfs installer.

Note: This is best tested on spare hardware with spare disks/USB devices to fully emulate a real-life hardware failure and rebuild, though it can be done in VMs as well.

EDIT: I will make a proper script for easy NAS4Free RootOnZFS failed-disk detection and replacement soon.
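Until then, the flow described above could be sketched roughly as follows. This is a hypothetical outline, not the actual script: the device names (da0/da1) and labels (sysboot*/sysdisk*) are assumptions taken from the notes earlier in the thread, and gpart backup/restore stands in for the script's partition-scheme rebuild.

```shell
#!/bin/sh
# Hypothetical sketch of a zroot mirror rebuild -- NOT the real script.
ALIVE=da1   # surviving disk (labels sysboot1/sysdisk1)
NEW=da0     # clean replacement disk, same size or larger

# 1) Only continue if zroot really has a missing member
if ! zpool status zroot | grep -qE 'UNAVAIL|REMOVED'; then
    echo "zroot has no missing device; nothing to do" >&2
    exit 1
fi

# 2) Recreate the same partition scheme on the replacement disk
#    (-F destroys any existing scheme; labels are not copied, so set them)
gpart backup "$ALIVE" | gpart restore -F "$NEW"
gpart modify -i 1 -l sysboot0 "$NEW"
gpart modify -i 2 -l sysdisk0 "$NEW"

# 3) Make the replacement bootable
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$NEW"

# 4) Attach the new partition so the pool resilvers
zpool attach zroot /dev/gpt/sysdisk1 /dev/gpt/sysdisk0
```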

Regards
System: FreeBSD 12 RootOnZFS, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ
Boot Environments Intro
