However, when I did that, I noticed that only three of my six drives were showing up in my zpool, leaving it in an UNAVAIL state. My server's disk page does show four drives imported, and after bringing my RAID card back up, all six drives show as ONLINE. I want to fix this issue in case two of my drives decide to die in the future.
Code: Select all
  pool: vpool_1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        vpool_1                  UNAVAIL      0     0     0
          raidz2-0               UNAVAIL      0     0     0
            269575308383878843   UNAVAIL      0     0     0  was /dev/ada0.nop
            6668283126672377540  UNAVAIL      0     0     0  was /dev/ada1.nop
            918738776210664435   UNAVAIL      0     0     0  was /dev/ada0.nop
            ada1.nop             ONLINE       0     0     0
            ada2.nop             ONLINE       0     0     0
            ada3.nop             ONLINE       0     0     0
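In case it helps with diagnosis, I believe the on-disk vdev labels can be read with zdb to see which pool GUID and recorded path each physical disk actually carries, so the UNAVAIL GUIDs above can be matched to real disks. A sketch only (the device names are just my current layout):

```shell
# Read the ZFS vdev label from each disk and pull out the GUID and the
# device path that ZFS recorded for it ("zdb -l" dumps the label copies).
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "== /dev/${d} =="
    zdb -l /dev/${d} | grep -E 'guid|path'
done
```

Comparing the printed GUIDs against the numbers in the zpool status output should show which physical ports the "missing" members are actually sitting on.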
My zpool was originally set up like this:

Drive0  ada0  Onboard SATA Port0
Drive1  ada1  Onboard SATA Port1
Drive2  ada2  Onboard SATA Port2
Drive3  ada3  Onboard SATA Port3
Drive4  ada4  RAID Card Port0
Drive5  ada5  RAID Card Port1
However, after one of the NAS4Free updates, the system was detecting the drives like this:

Drive0  ada2  Onboard SATA Port0
Drive1  ada3  Onboard SATA Port1
Drive2  ada4  Onboard SATA Port2
Drive3  ada5  Onboard SATA Port3
Drive4  ada0  RAID Card Port0
Drive5  ada1  RAID Card Port1
I then had to resilver Drive0:

Drive0  ada2  Onboard SATA Port0 (Resilvered)
Drive1  ada3  Onboard SATA Port1
Drive2  ada4  Onboard SATA Port2
Drive3  ada5  Onboard SATA Port3
Drive4  ada0  RAID Card Port0
Drive5  ada1  RAID Card Port1
This is the current state I am in. It seems the ZFS metadata is still remembering the old adaX device paths. Since the ada numbering has changed and a resilver has been done, the recorded paths no longer line up with the physical drives, which causes problems when drives fail.
Is there a way to repair this without backing up the data and recreating the ZFS pool from scratch?
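Would something like the following work? It is only a sketch of what I was thinking of trying, untested, and it assumes all six disks are healthy and that the pool was originally built on gnop (.nop) providers for 4K alignment:

```shell
# Untested sketch: export the pool so nothing holds the devices open
zpool export vpool_1

# Recreate the 4K-sector nop providers on the *current* device names
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
    gnop create -S 4096 /dev/${d}
done

# "zpool import" scans devices and matches pool members by their GUIDs
# rather than by the stale recorded paths, then check the result
zpool import vpool_1
zpool status vpool_1
```

My understanding is that import identifies members by GUID, so the renumbered disks should be picked up again, but I would appreciate confirmation before I try it.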