[RESOLVED] ZFS - replaced drive - status DEGRADED

#1

Post by FredTheFrog » 16 Mar 2019 02:56

NAS4Free 11.1.0.4 (revision 4619), full version
Intel Xeon E3-1226 v3 @ 3.30 GHz
Booting from a 240 GB Intel SSD connected to the motherboard's Intel Lynx Point controller
Marvell 88SE9230 AHCI SATA controller managing four 3.0 TB WDC disk drives

Originally 4 x WDC WD30EFRX-68EUZN0 3.0 TB drives as /dev/ada0p1, /dev/ada1p1, /dev/ada2p1, /dev/ada3p1.
Verified the drives' serial numbers before attempting to remove/replace the /dev/ada3p1 device.
Offlined /dev/ada3p1, waited several minutes, shut down, replaced the drive with a WDC WD60EFRX-68L0BN1 6.0 TB drive, and rebooted.
I still have the original 3.0 TB drive and can put it back if absolutely necessary.
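
For reference, the offline-and-verify steps described above would look roughly like the sketch below. This is an assumption-laden example rather than a transcript: it reuses the pool and device names from this post, and smartctl is only available if smartmontools is installed.

Code:

# Sketch only: confirm which physical disk ada3 is before pulling it
nas4free: ~# smartctl -i /dev/ada3 | grep -i 'serial'      # requires smartmontools
nas4free: ~# camcontrol identify ada3 | grep -i 'serial'   # base-system alternative

# Take just that member offline; the raidz2 pool keeps running on the remaining disks
nas4free: ~# zpool offline zfsPool ada3p1
nas4free: ~# zpool status zfsPool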

Current zfs pool status is:

Code:

  pool: zfsPool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: none requested
config:
	NAME                     STATE     READ WRITE CKSUM
	zfsPool                  DEGRADED     0     0     0
	  raidz2-0               DEGRADED     0     0     0
	    ada0p1               ONLINE       0     0     0
	    ada1p1               ONLINE       0     0     0
	    3661819670150165202  UNAVAIL      0     0     0  was /dev/ada2p1
	    1452713289130508182  UNAVAIL      0     0     0  was /dev/ada3p1

errors: No known data errors
1. So, will ZFS do anything automagically, since the new larger drive is connected to the same controller port as the older smaller drive?
2. Are there zpool commands I should use to bring the bottom two devices online again?
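
A note on question 2: the 'zpool online' suggested in the status output only applies if the original member itself comes back at the same path or GUID; a brand-new disk has to be brought in with 'zpool replace' instead, which is what the rest of the thread ends up doing. A hedged sketch of the online case, using the GUIDs from the output above:

Code:

# Sketch only: if the *original* disks reappeared, onlining them by GUID (or old path) would suffice
nas4free: ~# zpool online zfsPool 3661819670150165202
nas4free: ~# zpool online zfsPool 1452713289130508182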

This NAS mostly stores MP3 audio and MP4 video, plus a few megabytes of ordinary Windows shares via Samba, so it isn't terribly dynamic and doesn't see a lot of activity. I'm heading to bed now and hoping for a helpful answer. Flames >> /dev/null please. If required, I can re-install the previous 3.0 TB drive, as it is untouched since the shutdown and removal.

Thank you in advance for the benefit of your expertise.
Last edited by FredTheFrog on 16 Mar 2019 22:26, edited 1 time in total.

Re: Long-time user, first-time screw-up - ZFS - replaced drive - status DEGRADED

#2

Post by FredTheFrog » 16 Mar 2019 12:09

A good night's sleep, and I'm thinking a bit more clearly now. It seems that when I replaced the drive, I may have caused a cabling issue: the dmesg output doesn't show all four drives coming online properly. I'll finish this cup of coffee, shut down the NAS, and re-check the cabling.

Indeed, it was a power cabling issue that had taken two drives offline instead of only the replaced one. With the cabling reconnected properly, only one drive shows as unavailable, and the pool has resilvered a few megabytes of data from the now-restored older drive at ada2p1. Here is the current status of the pool; the bottom slot, identified only by a numeric id, is where the new drive sits.

What commands (or amount of time) do I need to get this new drive online in the pool?

Code:

  pool: zfsPool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 3.31M in 0h0m with 0 errors on Sat Mar 16 19:52:47 2019
config:
	NAME                     STATE     READ WRITE CKSUM
	zfsPool                  DEGRADED     0     0     0
	  raidz2-0               DEGRADED     0     0     0
	    ada0p1               ONLINE       0     0     0
	    ada1p1               ONLINE       0     0     0
	    ada2p1               ONLINE       0     0     3
	    1452713289130508182  UNAVAIL      0     0     0  was /dev/ada3p1

errors: No known data errors
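
Before running any zpool commands at this stage, a reasonable first check (a sketch, anticipating what the next post does with dmesg and gpart) is to confirm that the kernel sees the new disk and whether it already carries a partition table:

Code:

# Sketch only: is the new disk detected, and does it have any partitions yet?
nas4free: ~# camcontrol devlist
nas4free: ~# gpart show ada3    # reports "No such geom" if the disk has no partition table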

Re: Long-time user, first-time screw-up - ZFS - replaced drive - status DEGRADED

#3

Post by FredTheFrog » 16 Mar 2019 15:31

Digging further, while /dev/ada3 is present, partition /dev/ada3p1 has not been created. (See output below.)
I'm guessing I need to use 'gpart' to add a p1 partition of type 'freebsd-zfs' to /dev/ada3?

Code:

nas4free: ~# dmesg | grep ^ada | grep 'SATA 3.x device'
ada0: <WDC WD30EFRX-68EUZN0 82.00A82> ACS-2 ATA SATA 3.x device
ada1: <WDC WD30EFRX-68EUZN0 82.00A82> ACS-2 ATA SATA 3.x device
ada2: <WDC WD30EFRX-68EUZN0 82.00A82> ACS-2 ATA SATA 3.x device
ada3: <WDC WD60EFRX-68L0BN1 82.00A82> ACS-2 ATA SATA 3.x device
ada4: <INTEL SSDSC2BP240G4 L2010420> ATA8-ACS SATA 3.x device

nas4free: ~# ls -la /dev/ada*
crw-r-----  1 root  operator  0x78 Mar 16 19:52 /dev/ada0
crw-r-----  1 root  operator  0x79 Mar 16 19:52 /dev/ada0p1
crw-r-----  1 root  operator  0x7a Mar 16 19:52 /dev/ada1
crw-r-----  1 root  operator  0x7e Mar 16 19:52 /dev/ada1p1
crw-r-----  1 root  operator  0x7b Mar 16 19:52 /dev/ada2
crw-r-----  1 root  operator  0x7f Mar 16 19:52 /dev/ada2p1
crw-r-----  1 root  operator  0x7c Mar 16 19:52 /dev/ada3
crw-r-----  1 root  operator  0x7d Mar 16 19:52 /dev/ada4
crw-r-----  1 root  operator  0x80 Mar 16 19:52 /dev/ada4p1
crw-r-----  1 root  operator  0x81 Mar 16 19:52 /dev/ada4p2
crw-r-----  1 root  operator  0x82 Mar 16 19:52 /dev/ada4p3
crw-r-----  1 root  operator  0x83 Mar 16 19:52 /dev/ada4p4

nas4free: ~# gpart show
=>        40  5860533088  ada0  GPT  (2.7T)
          40        8152        - free -  (4.0M)
        8192  5860524032     1  freebsd-zfs  (2.7T)
  5860532224         904        - free -  (452K)

=>        40  5860533088  ada1  GPT  (2.7T)
          40        8152        - free -  (4.0M)
        8192  5860524032     1  freebsd-zfs  (2.7T)
  5860532224         904        - free -  (452K)

=>        40  5860533088  ada2  GPT  (2.7T)
          40        8152        - free -  (4.0M)
        8192  5860524032     1  freebsd-zfs  (2.7T)
  5860532224         904        - free -  (452K)

=>       40  468862048  ada4  GPT  (224G)
         40       1024     1  freebsd-boot  (512K)
       1064       7128        - free -  (3.5M)
       8192    8388608     2  freebsd-ufs  (4.0G)
    8396800   33554432     3  freebsd-swap  (16G)
   41951232  426909696     4  freebsd-ufs  (204G)
  468860928       1160        - free -  (580K)
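
For comparison with the simpler commands in the next post: the gpart output above shows the existing members' ZFS partitions starting at sector 8192 and 4 KiB-aligned, so a variant that explicitly matches that layout would look like the sketch below. The start offset is copied from the output above and is not something ZFS itself requires.

Code:

# Sketch only: partition the new disk with the same 4 KiB-aligned start offset
# (sector 8192) that the existing raidz2 members use.
nas4free: ~# gpart create -s gpt ada3
nas4free: ~# gpart add -t freebsd-zfs -b 8192 ada3
nas4free: ~# gpart show ada3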

Re: Long-time user, first-time screw-up - ZFS - replaced drive - status DEGRADED

#4

Post by FredTheFrog » 16 Mar 2019 15:52

Quick answer for other clueless n00bs (like me!) looking for a short, simple, easy-to-read example:
1. gpart create to put a GPT partitioning scheme on the new drive
2. gpart add to add a ZFS partition to the new drive
3. zpool replace to swap the new partition in for the missing member (this starts the resilver)
4. Patiently wait while ZFS resilvers the pool and restores its integrity.

Code:

nas4free: ~# gpart create -s GPT /dev/ada3
ada3 created

nas4free: ~# gpart add -t freebsd-zfs /dev/ada3
ada3p1 added

nas4free: ~# zpool replace zfsPool /dev/ada3p1
nas4free: ~#

nas4free: ~# zpool status zfsPool
  pool: zfsPool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Mar 16 20:42:15 2019
        45.5G scanned out of 3.00T at 319M/s, 2h41m to go
        11.0G resilvered, 1.48% done
config:

        NAME                       STATE     READ WRITE CKSUM
        zfsPool                    DEGRADED     0     0     0
          raidz2-0                 DEGRADED     0     0     0
            ada0p1                 ONLINE       0     0     0
            ada1p1                 ONLINE       0     0     0
            ada2p1                 ONLINE       0     0     3
            replacing-3            OFFLINE      0     0     0
              1452713289130508182  OFFLINE      0     0     0  was /dev/ada3p1/old
              ada3p1               ONLINE       0     0     0  (resilvering)

errors: No known data errors
nas4free: ~#
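
Once the resilver completes, a sensible (if optional) follow-up, sketched below, is to confirm the pool is healthy, clear the stale checksum count on ada2p1 left over from the cabling incident, and run a scrub to re-verify everything. Note that the extra capacity of the 6 TB drive will not become usable until every raidz2 member is that size (and autoexpand is enabled).

Code:

# Sketch only: post-resilver housekeeping
nas4free: ~# zpool status zfsPool          # expect state: ONLINE once the resilver finishes
nas4free: ~# zpool clear zfsPool ada2p1    # reset the checksum counter from the cabling issue
nas4free: ~# zpool scrub zfsPool           # optional: re-read and verify the whole pool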
