This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I'd like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It's not possible for us to export posts from here and import them into the main forum!

Problem after replacing drive in mirror

bunk3m
Starter
Starter
Posts: 42
Joined: 16 May 2013 21:36
Status: Offline

Problem after replacing drive in mirror

Post by bunk3m »

Hi.
I replaced two 3T drives with two 4T drives in my pool. I physically replaced one drive first and attached it to the pool. Then replaced the second drive.

The first replace & attach went smoothly. The zpool status was fine after the resilvering.

I'm having a problem with the second replace & attach. The zpool status says following:

Code:

# zpool status
pool: Pool1
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 2.60T in 7h3m with 0 errors on Sun Mar 25 22:28:45 2018
config:

	NAME                      STATE     READ WRITE CKSUM
	Pool1                     DEGRADED     0     0     0
	  mirror-0                DEGRADED     0     0     0
	    11150646291478578060  UNAVAIL      0     0     0  was /dev/da1.nop
	    da1.nop               ONLINE       0     0     0
	    da2.nop               ONLINE       0     0     0
The issue is the 11150646291478578060 disk (which I think is actually the old da2.nop).
I thought I could detach or remove it, but no such device appears to exist:

Code:

# diskinfo -v 11150646291478578060
diskinfo: 11150646291478578060: No such file or directory
From the GUI, I get this:

Code:

Pool1_mirror_0 	mirror 	Pool1 	/dev/11150646291478578060, /dev/da1, /dev/da2
I tried a sync from the GUI, and also exporting and re-importing the pool, but I can't seem to get rid of the issue.

What should I do to fix this?

Thanks in advance.
9.1.0.1 - Sandstorm (revision 775)
Mirrored ZPool

User avatar
ms49434
Developer
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: Problem after replacing drive in mirror

Post by ms49434 »

zpool detach Pool1 11150646291478578060
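For context, here is a sketch of why this works and what to check afterwards, based on the zpool status output quoted above. The long number is the ZFS GUID that the pool retains for the missing vdev after its device node disappeared, so `zpool detach` takes the GUID instead of a `/dev` path. The autoexpand steps at the end are an assumption on my part (to actually use the extra 4 TB capacity), not part of the original answer:

```shell
# Detach the stale mirror member by its GUID (taken from 'zpool status';
# there is no device node left to reference).
zpool detach Pool1 11150646291478578060

# Verify the mirror is healthy again; da1.nop and da2.nop should both
# show ONLINE with no DEGRADED state.
zpool status Pool1

# Assumption: if the pool has not grown to the new 4 TB drive size,
# enable autoexpand and expand each member in place.
zpool set autoexpand=on Pool1
zpool online -e Pool1 da1.nop
zpool online -e Pool1 da2.nop
```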
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

bunk3m
Starter
Starter
Posts: 42
Joined: 16 May 2013 21:36
Status: Offline

Re: Problem after replacing drive in mirror

Post by bunk3m »

Hey @ms49434! Thank you very much. That worked!
9.1.0.1 - Sandstorm (revision 775)
Mirrored ZPool
