ZFS configuration corrupted??
Posted: 24 Dec 2015 22:56
Hello.
I originally thought I had the same issue as here viewtopic.php?f=66&t=9182&p=56700#p56700
But my issue seems to run deeper; at this point I no longer know what to do.
I have 2 mirrored pools, and decided to correct the advanced format (4K sector) warning.
I copied all my data onto one pool, leaving one set of disks empty. I thought I had removed the old pool via the GUI. Then I learned from the FAQ that one needs to run a script that zeroes the beginning and end of each disk after a reformat, then export and re-import the pool to get the ashift correct. At one point I actually got a response of 12 to the ashift query, which made me believe all was good.
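For reference, the wipe step I ran looked roughly like the sketch below (reconstructed from memory). I'm showing it against a scratch file rather than a real device so nobody copies it blindly onto a live disk; you'd point DISK at the actual device (e.g. /dev/ada2) only once you're certain the disk is expendable:

```shell
# Stand-in "disk": a 64 MiB file full of non-zero bytes.  Substitute the
# real device (e.g. /dev/ada2) only when the disk holds nothing you need.
DISK=/tmp/scratch_disk.img
dd if=/dev/urandom of="$DISK" bs=1048576 count=64 2>/dev/null

# ZFS keeps two 256 KiB labels at the start of the device and two at the
# end, so zeroing the first and last 4 MiB removes all four.
dd if=/dev/zero of="$DISK" bs=1048576 count=4 conv=notrunc 2>/dev/null
size=$(wc -c < "$DISK")
dd if=/dev/zero of="$DISK" bs=1048576 count=4 \
   seek=$(( size / 1048576 - 4 )) conv=notrunc 2>/dev/null
```

If I understand the man pages right, newer FreeBSD builds also have a purpose-built `zpool labelclear -f <device>` that does the same job, and after re-creating the pool something like `zdb -C set1 | grep ashift` should report 12 if the 4K alignment took; treat both of those as things to verify on your own version, not gospel.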
Now every time I reboot the system, the pool comes up as faulted. I first thought the problem was that I had reused the old pool name, so I created the pool under a new name, but rebooting produced the same message that the drives weren't there, and it still referenced the old pool name, even though I had destroyed that pool.
I've gone around in circles with clear, import, export, destroy, and detach. Every time I think I've finally destroyed the stale pool, or have the pool online and stable, a reboot brings back the following from zpool status:
  pool: set1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        set1                      UNAVAIL      0     0     0
          mirror-0                UNAVAIL      0     0     0
            9336664799176194296   UNAVAIL      0     0     0  was /dev/ada2
            10676280146457818980  UNAVAIL      0     0     0  was /dev/ada3
I tried to run a synchronize regardless, and got this message from the GUI:
Warning: Invalid argument supplied for foreach() in /usr/local/www/disks_zfs_config_sync.php on line 346
Warning: Cannot modify header information - headers already sent by (output started at /usr/local/www/disks_zfs_config_sync.php:346) in /usr/local/www/disks_zfs_config_sync.php on line 468
At one point I even had one disk detached and couldn't reattach it, and rebooting seems to wipe out the healthy pool and replace it with the faulted one.
This issue is way beyond my skill level. I'm hoping there's some quick modification I can do that I haven't found in the forums yet.
Bossman's link to the Oracle ZFS administration guide was a good read: http://docs.oracle.com/cd/E19253-01/819 ... index.html
But all I could glean from it was that I should find the zpool.cache file and delete it, and I suspect doing that before asking for help would be a mistake.
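My understanding (which may well be wrong) is that the boot scripts simply hand that file to zpool import, so the cautious move would be to rename it rather than delete it, since a fresh import can rebuild it. A sketch of what I'd try if told it's safe, using a stand-in path so the commands are harmless to copy as-is (the real file is reportedly /boot/zfs/zpool.cache on FreeBSD/NAS4Free):

```shell
# Stand-in location so this demo is safe to run; substitute the real
# path (reportedly /boot/zfs/zpool.cache) with care and a backup.
CACHE=/tmp/zfs_demo/zpool.cache
mkdir -p "$(dirname "$CACHE")"
printf 'pretend-stale-cache' > "$CACHE"   # fake cache file for the demo

# Move the cache aside instead of deleting it; the .bak copy makes the
# step reversible if the reboot behaves worse rather than better.
mv "$CACHE" "$CACHE.bak"
```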
Any suggestions? I'm all ears.
Thanks