I'm not sure what happened, but I do know that I wish I had a backup of the config...
I had a disk fail which set the pool to DEGRADED, and before I could replace the disk we had a power outage. Since then I have been unable to get the ZFS pool back online.
The dead disk isn't detected by the BIOS anymore. There are six 2 TB disks in the pool (ada0-ada5).
The system shows the pool as "Unavailable", and both 'zpool status' and 'zpool import' tell me that there are no pools. When I try 'zpool import -f' (or any variation of import switches) it fails with an I/O error and tells me to "destroy and re-create the pool from backup".
I have tried just about every suggestion I have found on this forum, to no avail. I even tried replacing the disk, but there is no pool to add it to. I'm hoping there is an order of operations I'm unfamiliar with to get this back online and regain access to the 6 TB of data (it's not crucial, but would definitely be a pain to lose).
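For anyone hitting the same symptom: before anything destructive, it can help to confirm which disks the kernel actually sees and whether the ZFS labels on the surviving members are still readable. A rough sketch on FreeBSD/XigmaNAS (device names are assumptions; adjust to your system):

```sh
# List the disks the kernel has attached (FreeBSD).
camcontrol devlist

# Ask ZFS to scan a device directory for importable pools.
# With only one disk of a raidz1 vdev missing, the pool should
# still show up here as DEGRADED but importable.
zpool import -d /dev

# Dump the four ZFS label copies from one member disk; if none of
# them can be read, that disk's pool metadata is damaged.
zdb -l /dev/ada0
```

If `zpool import -d /dev` lists the pool but a plain import fails with an I/O error, the problem is usually damaged recent metadata rather than missing disks, which is what the rewind options discussed below are for.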
I realize that you guys are probably pretty tired of this question, but I would really appreciate any help you could offer.
This is the old XigmaNAS forum, now in read-only mode.
It will be taken offline by the end of March 2021!
We ask users and admins to rewrite/carry over important posts from here into the fresh new main forum.
It is not possible for us to export from here and import into the main forum.
Disk failure, power loss, ZFS pool unavailable
-
staticfactory
- NewUser

- Posts: 5
- Joined: 25 Apr 2016 20:43
- Status: Offline
-
sleid
- PowerUser

- Posts: 774
- Joined: 23 Jun 2012 07:36
- Location: FRANCE LIMOUSIN CORREZE
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
zpool import -F poolname or zpool import -f poolname (note: -d takes a directory of device nodes to scan, e.g. -d /dev, not the pool name).
In desperation:
zpool import poolname -f -d /dev/mapper/
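A note on those switches, since they are easy to mix up: lowercase -f forces an import of a pool that looks in use by another system, while uppercase -F asks ZFS to rewind to an earlier transaction group, which is the usual recovery path after an interrupted write such as a power loss. A hedged sketch (the pool name 'POOL' is an assumption):

```sh
# Dry run first: -n reports whether a rewind would succeed
# without modifying anything on disk.
zpool import -F -n POOL

# If the dry run looks good, do the real rewind import.
zpool import -F POOL
```

A rewind discards the last few seconds of writes before the crash, which is normally an acceptable trade against losing the whole pool.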
12.1.0.4 - Ingva (revision 7852)
FreeBSD 12.1-RELEASE-p12 #0 r368465M: Tue Dec 8 23:25:11 CET 2020
X64-embedded sur Intel(R) Atom(TM) CPU C2750 @ 2.40GHz Boot UEFI
ASRock C2750D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 WDC WD40EFRX + 3 WDC WD40EFRX
-
staticfactory
- NewUser

- Posts: 5
- Joined: 25 Apr 2016 20:43
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Every switch I throw at zpool import returns "Cannot import 'POOL': I/O error. Destroy and re-create the pool from a backup source."
Is there a way to destroy/recreate the pool and leave the existing data intact?
-
sleid
- PowerUser

- Posts: 774
- Joined: 23 Jun 2012 07:36
- Location: FRANCE LIMOUSIN CORREZE
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Are you sure of your hardware, cables, controllers, etc.?
Destroying a pool and recreating it destroys the data.
Importing a pool with -f -d does not destroy the data.
Have you tried "zpool import poolname -f -d /dev/mapper/"?
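On the hardware question above: checking the health of the surviving disks is worth doing before further import attempts, since a second failing disk in a raidz1 vdev turns a recoverable pool into an unrecoverable one. A sketch using smartmontools (device names are assumptions; repeat for each member disk):

```sh
# Overall SMART health, attributes, and error logs for one member disk.
smartctl -a /dev/ada0

# Kick off a short self-test, then re-read the results a few
# minutes later with 'smartctl -a'.
smartctl -t short /dev/ada0
```

Reseating the SATA cables and, if possible, trying the disks on a different controller port also rules out the cheap causes of I/O errors.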
12.1.0.4 - Ingva (revision 7852)
FreeBSD 12.1-RELEASE-p12 #0 r368465M: Tue Dec 8 23:25:11 CET 2020
X64-embedded sur Intel(R) Atom(TM) CPU C2750 @ 2.40GHz Boot UEFI
ASRock C2750D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 WDC WD40EFRX + 3 WDC WD40EFRX
-
staticfactory
- NewUser

- Posts: 5
- Joined: 25 Apr 2016 20:43
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Code:
greedy: /# zpool import POOL -f -d /dev/mapper/
too many arguments
usage:
	import [-d dir] [-D]
	import [-d dir | -c cachefile] [-F [-n]] <pool | id>
	import [-o mntopts] [-o property=value] ...
	    [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
	import [-o mntopts] [-o property=value] ...
	    [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]]
	    <pool | id> [newpool]
- b0ssman
- Forum Moderator

- Posts: 2438
- Joined: 14 Feb 2013 08:34
- Location: Munich, Germany
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Try importing the pool read-only:
zpool import -o readonly=on poolname
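If the plain read-only import also fails, combining it with the rewind flag is a reasonable last step before giving up: read-only mode skips replaying the intent log and never writes to the disks, so it can succeed where a normal import hits the damaged metadata. A sketch (the flags are standard zpool import options; the pool name is an assumption):

```sh
# Read-only, forced, with transaction-group rewind.
zpool import -o readonly=on -f -F POOL
```

If that works, copy the data off immediately, then destroy and recreate the pool.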
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
-
staticfactory
- NewUser

- Posts: 5
- Joined: 25 Apr 2016 20:43
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Same issue....
Code:
greedy: /# zpool import -o readonly=on POOL
cannot import 'POOL': pool may be in use from other system
use '-f' to import anyway
greedy: /# zpool import -f -o readonly=on POOL
cannot import 'POOL': I/O error
	Destroy and re-create the pool from
	a backup source.
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
At this point you need to think about formatting your disks and restoring the data from a backup (or losing the data if you do not have one).
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
staticfactory
- NewUser

- Posts: 5
- Joined: 25 Apr 2016 20:43
- Status: Offline
Re: Disk failure, power loss, ZFS pool unavailable
Thanks, everyone... I was worried it had come to that. I have backups of everything that was important, but it will definitely be a pain to lose the rest.