
Can't completely get rid of ZFS

Posted: 14 May 2016 13:13
by serverguy
Running on: 9.3.0.2 - Nayla (revision 1283)

I finally gave up on ZFS. After I got it all running, I read that I should have ECC memory or I risk corruption. It was running, but now when I try to use it, the system just freezes. I can't see any SMART health warnings. I started two RAID5 builds and can't reboot right now without killing them; per the status messages, that build-out will take several days. I have rebooted after deleting everything I can reach through the ZFS screens, and this remains.

So now I want to get rid of the last vestiges of ZFS and cannot. Please help and tell me what commands will resolve this. I removed everything via the web GUI, but it still won't go away.

I still see it on the Status>System page like this:

Code:

ZFSGroup
- of -B
Total: - | Used: - | Free: - | State: FAULTED
I still see it on the Disks>ZFS>Configuration>Detected like this:

Code:

Virtual devices (1)
Name 	Type 	Pool 	Devices
ZFSGroup_raidz1_0 	raidz1 	ZFSGroup 	/dev/16019638266287627750, /dev/1331267156417451110, /dev/5562498439359852965, /dev/7613700024863661660, /dev/9896129922327343340, /dev/7852625792253031690, /dev/296427981162474949, /dev/2014313777056583386
I still see it on the Disks>ZFS>Pools>Information like this:

Code:

Pool information and status

  pool: ZFSGroup
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
	replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

	NAME                      STATE     READ WRITE CKSUM
	ZFSGroup                  UNAVAIL      0     0     0
	  raidz1-0                UNAVAIL      0     0     0
	    16019638266287627750  UNAVAIL      0     0     0  was /dev/ada0
	    1331267156417451110   UNAVAIL      0     0     0  was /dev/ada1
	    5562498439359852965   UNAVAIL      0     0     0  was /dev/ada2
	    7613700024863661660   UNAVAIL      0     0     0  was /dev/ada3
	    9896129922327343340   UNAVAIL      0     0     0  was /dev/ada4
	    7852625792253031690   UNAVAIL      0     0     0  was /dev/ada5
	    296427981162474949    UNAVAIL      0     0     0  was /dev/ada6
	    2014313777056583386   UNAVAIL      0     0     0  was /dev/ada7
They don't show up in /dev under the File Manager screen. Well, ada0-7 do, but they are in use elsewhere being built into RAID5.

No command I can find by searching will touch any of this. Most of all, I would like to get rid of it on the Status>System page. The rest is no biggie, but I suspect I have to remove it from the ZFS pages before it will disappear from Status>System. I don't see it anywhere else on the ZFS pages; they are all blank, so there is nothing else to remove. All actions have been applied.

Thanks for any pointers on what to enter to make this ugly mess go away. I have searched a lot and can't find anything to address this situation. If it is already posted somewhere, I will be glad to read if someone can point me to an existing answer. No need to duplicate it here, of course.

Mike

Re: Can't completely get rid of ZFS

Posted: 14 May 2016 13:40
by b0ssman
have you synced the ZFS config in the GUI?

Re: Can't completely get rid of ZFS

Posted: 15 May 2016 23:51
by Princo
Hello serverguy,

There is a problem with the internal cache of your pools.

Solution: remove the internal cache file (it will be rebuilt afterwards when you do a sync via Disks|ZFS|Configuration|Synchronize).

Steps to remove internal cache file:

1. Connect as root via an SSH session
2. Find the device that holds the internal cache file
Example

Code:

mount /cf
Result:

Code:

mount: /dev/da0s1a: Device busy
So /dev/da0s1a is your device, and you have to use it in the next step.

3. Enter the following line:

Code:

umount /cf; mount /dev/da0s1a /cf; rm /cf/boot/zfs/zpool.cache; reboot
4. After rebooting, go to Disks|ZFS|Configuration|Detected and click on "Import on-disk ZFS config"

5. Then sync your ZFS configuration in Disks|ZFS|Configuration|Synchronize.
Don't change anything here; simply click the Synchronize button below.

These steps won't change your ZFS configuration. If you have a valid ZFS config on your disks, you will see it now.
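For reference, steps 2 and 3 above can be consolidated into a small script. This is only a sketch and has not been tested on NAS4Free itself: the device name /dev/da0s1a and the cache path are taken from the example output above and must match what `mount /cf` reports on your box, and the DRY_RUN switch is an added safety measure that only prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch of steps 2-3 above. /dev/da0s1a is taken from the example
# "Device busy" message; substitute whatever 'mount /cf' reports on
# your system. DRY_RUN=1 (the default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

CFDEV=/dev/da0s1a               # device reported busy by 'mount /cf'
CACHE=/cf/boot/zfs/zpool.cache  # stale pool cache to remove

run umount /cf          # detach /cf so it can be remounted writable
run mount "$CFDEV" /cf  # remount the config slice read-write
run rm "$CACHE"         # delete the stale cache; the GUI sync rebuilds it
run reboot
```

Review the output once with the default DRY_RUN=1, then run it again as root with DRY_RUN=0 to actually execute the commands.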

Princo

Re: Can't completely get rid of ZFS

Posted: 17 May 2016 04:59
by serverguy
I am sorry, but this makes little sense to me. I am running from a flash drive. Everything gets "Operation not permitted".

Code:

[mikey@nas4movies /cf/boot/zfs]$ df
Filesystem 1K-blocks   Used  Avail Capacity  Mounted on
/dev/xmd0     122606  34044  88562    28%    /
devfs              1      1      0   100%    /dev
/dev/xmd1     736430 252988 483442    34%    /usr/local
procfs             4      4      0   100%    /proc
/dev/xmd2     126492   3292 113084     3%    /var
tmpfs          65536     40  65496     0%    /var/tmp
/dev/da0a     491246 151482 339764    31%    /cf
I can't umount /cf:

Code:

[mikey@nas4movies /]$ umount /cf
umount: unmount of /cf failed: Operation not permitted
[mikey@nas4movies /]$ mount /dev/da0s1a /cf
mount: /dev/da0s1a: Operation not permitted
I have full privileges, as best I can tell (probably overdone; "admin" is supposed to be enough, but I have no R/W access to the file from SSH):

Code:

mikey  	Mike Morrow  	1000  	admin, Video, network, sshd, staff, sys, wheel, www 
I think I need deeper information. I don't want to keep ZFS; I am trying to get rid of it.

I also tried deleting this file through the File Manager, but it would not let me. I tried to change its permissions; it would not let me. This was while logged in under the admin user ID (as altered in the GUI).

I am locked out everywhere I turn. I think what we are trying to do is remount the boot/config filesystem read-write, but I cannot make that happen, nor can I delete zpool.cache manually in any other way.

Thanks for the info, but... More help, please.

Mike

Re: Can't completely get rid of ZFS

Posted: 17 May 2016 10:54
by b0ssman
There was a bug in 9.3 that caused the ZFS config to persist even after removal.

Please try with a 10 release.

Re: Can't completely get rid of ZFS

Posted: 17 May 2016 11:48
by Princo
@serverguy

You must connect as root to perform the required steps.

To do so, check "permit root login" in the SSH service configuration.
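As a sketch of the two routes to root access (the exact GUI page name is an assumption; the `su` alternative relies on the account being in the wheel group, which mikey's user listing above shows):

```shell
#!/bin/sh
# Two ways to get a root shell, sketched:
#   1. Tick "permit root login" for the SSH service in the web GUI
#      (equivalent to the sshd_config option "PermitRootLogin yes"),
#      then ssh in as root directly.
#   2. From an existing unprivileged session, members of the wheel
#      group (mikey is, per the listing above) can switch user:  su -
#
# Either way, verify you really are root before touching /cf:
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "not root: $(id -un)"
fi
```

The umount/mount/rm steps above fail with "Operation not permitted" precisely because they are run from an unprivileged account; repeating them after this check prints "running as root" should succeed.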

Princo