This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Lost ZFS volume after system hardware upgrade

Problems, solutions, software
popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Lost ZFS volume after system hardware upgrade

Post by popacio »

I recently upgraded my hardware.
I kept the USB live disk untouched and just plugged it into the new system. It worked fine.
Then I had the bad idea of trying FreeNAS (latest version) and booted from another live USB. I CHANGED NO SETTINGS and UPDATED NOTHING while in FreeNAS. I just looked around to see if it recognised my pool. It did, but I couldn't see or access my partitions (volumes), so I immediately reverted to NAS4Free.

Now I can't access my shares at all, even though I changed NOTHING at all in FreeNAS or NAS4Free.

I did a clear config and import disk, since NAS4Free gave me a "There is wrong ID in the disk of config. Please remove the disk and re-add it or use 'clear and import'" error. Physically moving the HDDs probably changed their device names. This shouldn't be a problem.

I also have a single-disk UFS partition (not part of my ZFS pool, obviously) and that one is inaccessible too. I can see the files in the advanced file manager but cannot access them via SMB, even though I can see the folders in the CIFS/SMB share. Very strange! How can you see the folders in a share but not access the files within? However, this is not the real problem.

The problem is that there is no volume in my ZFS pool! No dataset either! What happened? There is nothing to share, and the pool does not appear at the mountpoint under Disks (the UFS volume does appear there). Here is the configuration detected. The data appears to be there, but there is nothing to mount from my ZFS pool!

[Screenshot of the detected disk configuration: http://s9.postimg.org/trcrrleq3/Untitled_1.jpg]

Any useful help will be greatly appreciated.
Last edited by popacio on 01 Sep 2013 10:56, edited 1 time in total.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by b0ssman »

The latest version of FreeNAS uses ZFS version 5000.
If your pool was upgraded, you won't be able to use it with NAS4Free 9.1.

There should be a version of NAS4Free based on FreeBSD 9.2 soon, and that should be able to read the new pool.
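If in doubt, the pool's on-disk version can be checked from a shell before importing it anywhere. A minimal read-only sketch, using popacio's pool name from later in this thread:

Code: Select all

zpool get version honeycomb
zpool upgrade
Note that zpool upgrade with no arguments only lists pools that are below the latest version; it changes nothing (zpool upgrade -a would).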
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

First, thank you for your time.
You mean FreeNAS upgraded my pool by default, without asking, just because I booted into FreeNAS???
There is NO NEW POOL. I haven't created one. I left everything untouched, except now it doesn't work. And why did the UFS partition stop working then? How does the new ZFS version explain that too? Are you sure about your answer?

I see the pool. I see the vdev. But my volume is gone... Scrub says all is OK.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by b0ssman »

Try mounting the UFS partition from the command line.
Maybe an fsck is necessary.
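A minimal sketch of what that could look like; the device node (/dev/ada0p1) and the mount directory here are assumptions and will differ on the actual system:

Code: Select all

mkdir -p /mnt/ufsdisk
fsck -t ufs /dev/ada0p1
mount -t ufs /dev/ada0p1 /mnt/ufsdisk
Run the fsck before mounting; if it finds and repairs errors, retry the mount afterwards.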
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

I did that, of course. The GUI mount command has bugs: it gave a non-descriptive error despite the fact that it had mounted. I "solved" the problem by losing the data on it. :) Forget about the UFS volume. How about an answer to my question above? Any ideas?

I quote from the official site: "ZFS pools that were created in FreeNAS® 8.3.1 (any patch level) use ZFSv28. [...] If you auto-import a ZFS pool from any 8.x version, it will remain at its original ZFS version unless you upgrade the pool. This means that the pool will not understand any feature flags, such as LZ4 compression, until the pool is upgraded. [...] The ZFS version upgrade must be performed from the command line, it can not be performed using the GUI." Of course, as I said, I did nothing of that sort.
Do you still stand by what you said before?

P.S. I just retried in FreeNAS 9.1.1 and the newly formatted UFS single drive works like a charm without my doing anything. The ZFS pool is still broken. By the way, I can confirm that FreeNAS sucks compared to NAS4Free as far as speed is concerned: a 10-15 MB/s difference on the same hardware at default settings. Lame... I think I will keep my NAS4Free after all. Now I know why I chose it in the first place.

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by raulfg3 »

Go to Disks|ZFS|Configuration|Synchronize and post a screen capture; you probably need to use the "Synchronize" button.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Already tried Synchronize. No result; nothing changed. Does anyone have any idea where the problem is? There must be a solution. After all, I did nothing; it must be some autoconfiguration made without user consent. Of course, that is the way software is written these days... Not that the "issue" between the chair and the keyboard couldn't be an explanation for all this. After all, things are so complicated today that no one really understands anything fully any more.

[Screenshot]

[Screenshot]

This is the history command result (somewhere in here must be the problem, but it simply hurts my brain to read whole pages of this after reading countless forum posts full of code):

2013-08-29.22:57:33 zpool import -d /dev -f -a
2013-08-30.18:50:40 zpool import -f -R /mnt 1011179111664647128
2013-08-30.18:50:40 zfs inherit -r mountpoint honeycomb
2013-08-30.18:50:40 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-30.18:50:41 zfs set aclmode=passthrough honeycomb
2013-08-30.18:50:46 zfs set aclinherit=passthrough honeycomb
2013-08-30.19:09:22 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-30.19:09:22 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-30.20:10:44 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-30.20:10:44 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.09:37:45 zpool import -d /dev -f -a
2013-08-31.09:56:14 zpool import -d /dev -f -a
2013-08-31.10:52:05 zpool import -d /dev -f -a
2013-08-31.11:14:56 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.11:14:56 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.11:33:25 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.11:33:25 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.13:56:18 zpool import -d /dev -f -a
2013-08-31.14:07:38 zpool import -d /dev -f -a
2013-08-31.14:29:21 zpool import -d /dev -f -a
2013-08-31.14:44:48 zpool import -d /dev -f -a
2013-08-31.15:08:00 zpool scrub honeycomb
2013-08-31.16:23:18 zpool scrub -s honeycomb

I see a lot of operations on the pool on the 30th, yet I issued NO such commands! WTF?
From my previous experience with software RAID, nothing can ever be recovered after a malfunction. Redundancy is a joke: I have always lost data because of a RAID malfunction, never because of hardware failure. That's why I hoped for the best when I chose ZFS. Ten months later, after the first ever change to the hardware, I finally got the chance to regret that trust. There is simply no safe way to keep large amounts of data today, other than making backups on different devices. Of course, that is not always a viable solution, not when you are holding 18TB of data on a limited budget, which rather defeats the purpose of the redundancy supposedly offered by RAID solutions. My 20 years of experience tell me that using 10 separate disks would be MUCH safer than any type of RAID. In the meantime, we keep losing our files. Maybe someday...

substr
experienced User
Posts: 113
Joined: 04 Aug 2013 20:21
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by substr »

Is it possible that the 'zfs inherit -r mountpoint honeycomb', 'zfs set aclmode=passthrough honeycomb' or 'zfs set aclinherit=passthrough honeycomb' commands came from FreeNAS and are the problem?

Perhaps they have set the ZFS datasets/volumes to settings that NAS4Free doesn't like. Maybe somebody else can answer that.

It seems really strange that you ran Synchronize and it shows nothing.
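Those properties can be inspected without changing anything; a read-only sketch:

Code: Select all

zfs get aclmode,aclinherit,mountpoint honeycomb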

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by raulfg3 »

I do not see zpool status, zpool history, or zpool info; please post the results.
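Something like the following, run from the shell, would give the full picture (a sketch; all of these commands are read-only):

Code: Select all

zpool status -v honeycomb
zpool list honeycomb
zpool history honeycomb
zfs list -r honeycomb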
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Thank you all. I don't know why I still bother. I bet you all that the final result of 20 or so pages of code will be the same: me getting drunk, formatting the drives with NTFS, and dropping ZFS altogether. If help doesn't help, consolation is nice too.

But here is one last try, just in case anyone brighter than me has an idea. Here are the details requested (though most of them were posted above anyway):

Zpool info:

[Screenshot]

Zpool status:

[Screenshot]

Sync result:

[Screenshot]

Zpool history (complete this time; bear in mind that the problem appeared on the 29th or 30th):

History for 'honeycomb':
2012-12-25.14:34:07 zpool create -m /mnt/honeycomb honeycomb raidz1 /dev/ada0.nop /dev/ada1.nop /dev/ada2.nop /dev/ada3.nop /dev/ada4.nop /dev/ada5.nop /dev/ada6.nop /dev/ada7.nop /dev/ada8.nop /dev/ada9.nop
2012-12-25.14:51:25 zfs create -o compression=off -o dedup=off -o sync=standard -o atime=on honeycomb/honeycomb
2012-12-25.14:53:52 zfs destroy honeycomb/honeycomb
2012-12-25.15:55:50 zpool import -d /dev -f -a
2012-12-25.16:35:42 zpool import -d /dev -f -a
2012-12-25.16:38:46 zpool import -d /dev -f -a
2012-12-25.16:42:53 zpool import -d /dev -f -a
2012-12-25.16:46:10 zpool scrub honeycomb
2012-12-27.01:06:25 zpool import -d /dev -f -a
2012-12-29.01:30:54 zpool import -d /dev -f -a
2012-12-29.11:10:43 zpool import -d /dev -f -a
2012-12-29.23:22:03 zpool import -d /dev -f -a
2012-12-30.11:53:01 zpool import -d /dev -f -a
2012-12-30.18:47:30 zpool import -d /dev -f -a
2013-01-01.15:13:49 zpool import -d /dev -f -a
2013-01-01.15:58:51 zpool import -d /dev -f -a
2013-01-03.00:07:11 zpool import -d /dev -f -a
2013-01-04.21:58:13 zpool import -d /dev -f -a
2013-01-05.03:58:13 zpool import -d /dev -f -a
2013-01-05.23:09:46 zpool import -d /dev -f -a
2013-01-06.13:24:16 zpool import -d /dev -f -a
2013-01-07.02:29:33 zpool import -d /dev -f -a
2013-01-07.18:53:27 zpool import -d /dev -f -a
2013-01-09.00:53:42 zpool import -d /dev -f -a
2013-01-10.02:21:56 zpool import -d /dev -f -a
2013-01-11.20:16:06 zpool import -d /dev -f -a
2013-01-13.02:47:05 zpool import -d /dev -f -a
2013-01-13.09:46:44 zpool import -d /dev -f -a
2013-01-16.22:46:06 zpool import -d /dev -f -a
2013-01-17.21:33:12 zpool import -d /dev -f -a
2013-01-19.00:02:07 zpool import -d /dev -f -a
2013-01-19.18:41:16 zpool import -d /dev -f -a
2013-01-23.23:16:36 zpool import -d /dev -f -a
2013-01-24.22:31:18 zpool import -d /dev -f -a
2013-01-26.10:04:04 zpool import -d /dev -f -a
2013-01-27.11:40:20 zpool import -d /dev -f -a
2013-01-28.01:02:16 zpool import -d /dev -f -a
2013-01-31.21:08:59 zpool import -d /dev -f -a
2013-02-01.00:24:20 zpool scrub honeycomb
2013-02-01.19:47:28 zpool import -d /dev -f -a
2013-02-01.21:55:23 zpool import -d /dev -f -a
2013-02-01.22:10:55 zpool offline honeycomb ada0.nop
2013-02-01.22:11:31 zpool offline honeycomb ada0.nop
2013-02-01.22:11:54 zpool online honeycomb ada0.nop
2013-02-01.22:11:55 zpool online honeycomb ada1.nop
2013-02-01.22:11:57 zpool online honeycomb ada2.nop
2013-02-01.22:11:58 zpool online honeycomb ada3.nop
2013-02-01.22:12:00 zpool online honeycomb ada4.nop
2013-02-01.22:12:01 zpool online honeycomb ada5.nop
2013-02-01.22:12:03 zpool online honeycomb ada6.nop
2013-02-01.22:12:04 zpool online honeycomb ada7.nop
2013-02-01.22:12:05 zpool online honeycomb ada8.nop
2013-02-01.22:12:11 zpool online honeycomb ada9.nop
2013-02-03.16:31:04 zpool import -d /dev -f -a
2013-02-05.22:22:31 zpool import -d /dev -f -a
2013-02-06.22:27:56 zpool import -d /dev -f -a
2013-02-09.18:09:05 zpool import -d /dev -f -a
2013-02-10.08:55:11 zpool import -d /dev -f -a
2013-02-11.21:24:06 zpool import -d /dev -f -a
2013-02-12.21:58:13 zpool import -d /dev -f -a
2013-02-15.21:52:10 zpool import -d /dev -f -a
2013-02-16.07:07:33 zpool import -d /dev -f -a
2013-02-16.21:42:59 zpool import -d /dev -f -a
2013-02-18.21:08:17 zpool import -d /dev -f -a
2013-02-19.19:17:11 zpool import -d /dev -f -a
2013-02-20.21:15:16 zpool import -d /dev -f -a
2013-02-21.19:14:08 zpool import -d /dev -f -a
2013-02-22.22:47:04 zpool import -d /dev -f -a
2013-02-24.03:34:55 zpool import -d /dev -f -a
2013-02-25.20:02:45 zpool import -d /dev -f -a
2013-02-27.19:03:05 zpool import -d /dev -f -a
2013-02-28.20:38:24 zpool import -d /dev -f -a
2013-02-28.20:50:12 zpool import -d /dev -f -a
2013-02-28.20:54:26 zpool import -d /dev -f -a
2013-03-02.00:20:06 zpool import -d /dev -f -a
2013-03-04.04:05:29 zpool import -d /dev -f -a
2013-03-06.19:46:03 zpool import -d /dev -f -a
2013-03-09.03:39:11 zpool import -d /dev -f -a
2013-03-10.10:21:16 zpool import -d /dev -f -a
2013-03-11.22:57:37 zpool import -d /dev -f -a
2013-03-12.21:42:02 zpool import -d /dev -f -a
2013-03-13.21:57:48 zpool import -d /dev -f -a
2013-03-14.19:34:00 zpool import -d /dev -f -a
2013-03-15.22:50:49 zpool import -d /dev -f -a
2013-03-16.14:57:58 zpool import -d /dev -f -a
2013-03-17.09:35:05 zpool import -d /dev -f -a
2013-03-23.12:22:58 zpool import -d /dev -f -a
2013-03-23.20:08:53 zpool import -d /dev -f -a
2013-03-24.14:07:40 zpool import -d /dev -f -a
2013-03-25.21:18:21 zpool import -d /dev -f -a
2013-03-26.20:43:02 zpool import -d /dev -f -a
2013-03-27.01:50:06 zpool import -d /dev -f -a
2013-03-27.19:12:06 zpool import -d /dev -f -a
2013-03-28.20:57:39 zpool import -d /dev -f -a
2013-03-29.20:38:10 zpool import -d /dev -f -a
2013-03-30.11:18:32 zpool import -d /dev -f -a
2013-03-31.14:27:23 zpool import -d /dev -f -a
2013-04-02.18:52:59 zpool import -d /dev -f -a
2013-04-04.19:03:41 zpool import -d /dev -f -a
2013-04-06.08:14:51 zpool import -d /dev -f -a
2013-04-07.10:44:43 zpool import -d /dev -f -a
2013-04-09.19:09:49 zpool import -d /dev -f -a
2013-04-12.19:31:20 zpool import -d /dev -f -a
2013-04-14.00:54:09 zpool import -d /dev -f -a
2013-04-16.22:17:04 zpool import -d /dev -f -a
2013-04-20.13:16:26 zpool import -d /dev -f -a
2013-04-21.13:25:04 zpool import -d /dev -f -a
2013-04-22.18:45:19 zpool import -d /dev -f -a
2013-04-23.20:02:06 zpool import -d /dev -f -a
2013-04-24.19:18:08 zpool import -d /dev -f -a
2013-04-26.20:17:44 zpool import -d /dev -f -a
2013-05-01.02:37:30 zpool import -d /dev -f -a
2013-05-02.18:34:17 zpool import -d /dev -f -a
2013-05-03.20:57:54 zpool import -d /dev -f -a
2013-05-07.19:38:19 zpool import -d /dev -f -a
2013-05-11.09:07:15 zpool import -d /dev -f -a
2013-05-14.18:14:11 zpool import -d /dev -f -a
2013-05-18.10:25:01 zpool import -d /dev -f -a
2013-05-19.11:58:09 zpool import -d /dev -f -a
2013-05-20.19:34:30 zpool import -d /dev -f -a
2013-05-21.20:00:12 zpool import -d /dev -f -a
2013-05-22.23:01:18 zpool import -d /dev -f -a
2013-05-25.10:51:03 zpool import -d /dev -f -a
2013-05-26.23:14:14 zfs snapshot honeycomb@snapshot
2013-05-26.23:14:25 zfs destroy honeycomb@snapshot
2013-06-01.21:51:53 zpool import -d /dev -f -a
2013-06-02.08:08:35 zpool import -d /dev -f -a
2013-06-04.21:57:25 zpool import -d /dev -f -a
2013-06-05.20:11:34 zpool import -d /dev -f -a
2013-06-05.23:40:25 zpool import -d /dev -f -a
2013-06-06.21:54:08 zpool import -d /dev -f -a
2013-06-08.09:19:49 zpool import -d /dev -f -a
2013-06-09.16:01:15 zpool import -d /dev -f -a
2013-06-13.19:39:07 zpool import -d /dev -f -a
2013-06-16.11:05:26 zpool import -d /dev -f -a
2013-06-16.11:11:02 zpool import -d /dev -f -a
2008-01-01.00:04:06 zpool import -d /dev -f -a
2008-01-01.00:07:00 zpool import -d /dev -f -a
2013-01-07.21:03:55 zpool import -d /dev -f -a
2013-01-08.22:34:24 zpool import -d /dev -f -a
2013-01-09.20:41:18 zpool import -d /dev -f -a
2013-01-10.21:23:49 zpool import -d /dev -f -a
2013-01-13.12:23:23 zpool import -d /dev -f -a
2013-01-14.10:26:04 zpool import -d /dev -f -a
2013-01-15.19:12:48 zpool import -d /dev -f -a
2013-01-16.20:02:58 zpool import -d /dev -f -a
2013-01-17.21:53:59 zpool import -d /dev -f -a
2013-01-19.08:58:22 zpool import -d /dev -f -a
2013-01-20.17:08:24 zpool import -d /dev -f -a
2013-01-21.20:43:55 zpool import -d /dev -f -a
2013-01-22.20:15:53 zpool import -d /dev -f -a
2013-01-23.21:17:39 zpool import -d /dev -f -a
2013-01-24.22:07:07 zpool import -d /dev -f -a
2013-01-26.09:49:50 zpool import -d /dev -f -a
2013-01-27.12:51:54 zpool import -d /dev -f -a
2013-01-30.01:08:34 zpool import -d /dev -f -a
2013-01-31.10:36:30 zpool import -d /dev -f -a
2013-02-02.12:26:01 zpool import -d /dev -f -a
2013-02-03.14:00:23 zpool import -d /dev -f -a
2013-02-07.22:53:37 zpool import -d /dev -f -a
2013-02-08.22:16:53 zpool import -d /dev -f -a
2013-02-09.08:26:38 zpool import -d /dev -f -a
2013-02-10.02:54:48 zpool import -d /dev -f -a
2013-02-10.15:07:07 zpool import -d /dev -f -a
2013-02-11.19:59:37 zpool import -d /dev -f -a
2013-02-12.21:16:26 zpool import -d /dev -f -a
2013-02-15.21:48:30 zpool import -d /dev -f -a
2013-02-17.10:31:53 zpool import -d /dev -f -a
2013-02-18.01:34:05 zpool import -d /dev -f -a
2013-02-18.21:22:51 zpool import -d /dev -f -a
2013-02-19.00:39:43 zpool import -d /dev -f -a
2013-02-19.00:41:29 zpool scrub honeycomb
2013-02-20.20:34:17 zpool import -d /dev -f -a
2013-02-23.12:58:07 zpool import -d /dev -f -a
2013-02-24.23:20:07 zpool import -d /dev -f -a
2013-02-25.19:16:30 zpool import -d /dev -f -a
2013-02-25.20:49:50 zpool import -d /dev -f -a
2013-02-26.20:53:32 zpool import -d /dev -f -a
2013-08-29.22:57:33 zpool import -d /dev -f -a
2013-08-30.18:50:40 zpool import -f -R /mnt 1011179111664647128
2013-08-30.18:50:40 zfs inherit -r mountpoint honeycomb
2013-08-30.18:50:40 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-30.18:50:41 zfs set aclmode=passthrough honeycomb
2013-08-30.18:50:46 zfs set aclinherit=passthrough honeycomb
2013-08-30.19:09:22 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-30.19:09:22 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-30.20:10:44 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-30.20:10:44 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.09:37:45 zpool import -d /dev -f -a
2013-08-31.09:56:14 zpool import -d /dev -f -a
2013-08-31.10:52:05 zpool import -d /dev -f -a
2013-08-31.11:14:56 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.11:14:56 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.11:33:25 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.11:33:25 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.13:56:18 zpool import -d /dev -f -a
2013-08-31.14:07:38 zpool import -d /dev -f -a
2013-08-31.14:29:21 zpool import -d /dev -f -a
2013-08-31.14:44:48 zpool import -d /dev -f -a
2013-08-31.15:08:00 zpool scrub honeycomb
2013-08-31.16:23:18 zpool scrub -s honeycomb
2013-08-31.16:37:05 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.16:37:05 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.17:55:00 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.17:55:00 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.18:32:11 zfs create honeycomb/jails
2013-08-31.18:32:38 zfs create -o mountpoint=/honeycomb/jails/.warden-template-9.1-RELEASE-amd64-pluginjail -p honeycomb/jails/.warden-template-9.1-RELEASE-amd64-pluginjail
2013-08-31.18:37:00 zfs snapshot honeycomb/jails/.warden-template-9.1-RELEASE-amd64-pluginjail@clean
2013-08-31.18:37:05 zfs clone honeycomb/jails/.warden-template-9.1-RELEASE-amd64-pluginjail@clean honeycomb/jails/dlna_1
2013-08-31.19:50:40 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.19:50:40 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.19:55:48 zpool import -d /dev -f -a
2013-08-31.20:02:07 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
2013-08-31.20:02:07 zpool set cachefile=/data/zfs/zpool.cache honeycomb
2013-08-31.20:06:13 zfs destroy -fr honeycomb/jails/dlna_1
2013-08-31.20:13:23 zpool export honeycomb
2013-08-31.20:15:33 zfs destroy -r honeycomb/jails/.warden-template-9.1-RELEASE-amd64-pluginjail
2013-08-31.20:16:14 zfs destroy -r honeycomb/jails
2013-08-31.20:21:33 zpool import -d /dev -f -a
2013-08-31.20:32:22 zfs set mountpoint=/honeycomb honeycomb
2013-09-01.07:50:29 zpool import -d /dev -f -a
2013-09-01.07:54:16 zpool scrub honeycomb

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Is there anyone out there who actually knows something about ZFS? Please help. It must be something simple done badly by that awesome piece of crap called FreeNAS.

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by raulfg3 »

I suppose it all went wrong when FreeNAS played with the cache file:

Code: Select all

2013-08-30.20:10:44 zpool set cachefile=/data/zfs/zpool.cache honeycomb
Now wait until the scrub finishes.
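Scrub progress, and the moment it completes, can be checked from the shell (a read-only sketch):

Code: Select all

zpool status honeycomb
The scan line in the output shows the scrub's progress or its completion time.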
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

I DID NOT play with the cache! I DID NOTHING myself! All was done without my express consent! I wouldn't EVER presume to change cache settings; I wouldn't even know how.
Also, please speak plainly so that I can understand. What do you mean? Do you understand the problem or not? Do you have a solution, or isn't there one? What should I do next? What does scrub have to do with anything discussed? I don't see the relation. In fact, scrub did nothing because it found no errors. Of course: the actual data is intact; just the volume information is missing somehow.

Is there anyone else with some input?

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by raulfg3 »

popacio wrote:I DID NOT play with the cache! I DID NOTHING myself! All was done without my express consent!
Well, you played with FreeNAS and your valuable pool honeycomb, so you are not innocent. A valuable lesson from this is:
"Do not play with my data; if I want to play with or test how ZFS works, I need to create a test environment on a separate machine (VM or real)."
popacio wrote:Do you understand the problem or not?
Not totally sure, because I'm not a BSD or ZFS guru. I suspect the problem is the creation of cachefile=/data/zfs/zpool.cache in FreeNAS, an option that does not exist in NAS4Free; it is well known that if a cache device disappears you can lose your data (at least up to version 19 of ZFS).

If /data/zfs/zpool.cache exists, perhaps you can try this:

Code: Select all

zpool export honeycomb
zpool import -o cachefile=/data/zfs/zpool.cache honeycomb
This is very similar to this line in your ZFS history:

Code: Select all

2013-08-31.20:02:07 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1011179111664647128
Of course, the problem could also be this line:

Code: Select all

2013-08-31.20:32:22 zfs set mountpoint=/honeycomb honeycomb
zfs set properties are persistent, so perhaps an inappropriate mount point was set, and you need to set the appropriate one for NAS4Free:

Code: Select all

zfs set mountpoint=/mnt/honeycomb honeycomb
In the end you need to accept that this is your problem, not a FreeNAS or NAS4Free problem, and to be more humble when requesting help.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Thank you for trying to help. I fail, though, to see my lack of humility. I admit to being justifiably annoyed by this whole FreeNAS experience, yet there is nothing you could actually blame me for.

1. The fact that I didn't test FreeNAS in a VM is not really something you can blame me for. How could I ever have known this would happen? I could have tested it in a VM for a year, and when I finally deployed it this would surely have happened anyway. I'm no ZFS guru myself, and that is why I use solutions like FreeNAS or NAS4Free; that's the whole point. If I were, I wouldn't ask for help here, and I would use FreeBSD directly, or even other, safer solutions like commercial cloud storage. Blaming the user for something FreeNAS did WITHOUT my consent is ridiculous, and no testing would have prevented it, because no user like me would ever see what happens behind the GUI, nor would he understand all of it. That's why we need proper software to complement our lack of expertise in an ever more complex environment.

I do value your effort, but I would have hoped for more empathy than you presume to offer. Put yourself in my place: you did nothing to screw things up but boot into a live environment (FreeNAS), an action that SHOULD HAVE CHANGED NOTHING, yet this crap happens and you are blamed for it instead of getting some expert advice. Which is what this forum is all about, I presume?

I'm sorry to say this, but alcohol makes me sincere: I knew I would waste my time asking for help on a forum. I have previous experience of that; never once have I solved anything this way. Not that I blame the forum or anything. Don't get me wrong, I welcome it; people should always try to help each other, even if the chances of success are slim. And having worked with people, I know how hard it is to convince them otherwise. It's just that experience teaches me that after pages of code and entropic discussion, the chances are that no one knows anything and the matter remains unsolved. It has happened to me more times than I can count on various Linux forums. So please understand my justifiable frustration.

Now let's get to the point.

2. As soon as the scrub is finished I will try your export/import cache suggestion. I have tried something that looks similar to me, after reading another forum; that is why you see the similarity in the history. Though I would bet you anything but my soul that this won't work; let's call it an uneducated guess. I will try it nevertheless, presuming you are more savvy than I, and because at this point I have nothing more to lose.

3. The other suggestion I am somewhat more optimistic about. I have no idea how an inappropriate mount point got set, since I actually did nothing before the FUBAR event. I will issue the specified command on the same grounds given above and with the same reservations.

Finally, as a rule I ASSUME NOTHING! I only make educated guesses BASED ON THE FACTS AVAILABLE. So should you, always. Why on earth should I then assume this is my doing? Quite a statement, dear sir. Allow me to strongly disagree.

If you can keep your wits about you while all others are losing theirs and blaming you, then you are a better man than I, sir. Alas, I am inherently human...

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Follow up...

Your first suggestion f_*ed up everything: the pool completely disappeared, and I had to re-import it as suggested by NAS4Free.

Your second suggestion had no effect.

Problem unsolved.

Good bye ZFS. Thread closed.

substr
experienced User
Posts: 113
Joined: 04 Aug 2013 20:21
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by substr »

What is the output from 'zfs list' ?

substr
experienced User
Posts: 113
Joined: 04 Aug 2013 20:21
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by substr »

I just tried a NAS4Free -> FreeNAS -> NAS4Free test.

When you were on Synchronize, you probably clicked the "import from ZFS disk" button, but did you then click the Synchronize button that appears at the bottom of the page with all the checkmarks?

My test was pretty minimal, but the test datasets were there. I don't think the cachefile is anything to worry about, and I don't think FreeNAS upgraded your pool either. If your datasets still aren't there, it is probably some little thing causing it. But the data is all there, as your pool shows over 85% full.

siftu
Moderator
Posts: 71
Joined: 17 Oct 2012 06:36
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by siftu »

I'm making a lot of assumptions here, but I'm betting you never made a dataset or volume and are just using the root of the pool. Do you see your pool mounted in "df -h"? I'm guessing the mountpoint just changed from /mnt/honeycomb to /honeycomb; if you follow the "zfs set mountpoint" advice above you should see your data again, or even try "cd /honeycomb && ls -la".
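A quick, non-destructive check along those lines (a sketch; honeycomb is the pool name from this thread):

Code: Select all

df -h
ls -la /honeycomb
If the files show up under /honeycomb, the data is fine and only the mountpoint property needs to be set back.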

On another note: a single 10-disk RAIDZ1 is not a great idea; you would have been better off splitting it into two 5-disk RAIDZ1 vdevs, for better performance and redundancy. Also, you should keep your pool under 80% full for performance reasons.
System specs: NAS4Free amd64-embedded on ASUSTeK. M5A78L-M LX PLUS - AMD Phenom(tm) II X3 720 Processor - 8GB ECC Ram, Storage: 2x ZFS mirrors with 4x Western Digital Green (WDC WD10EADS)
My NAS4Free related blog - http://n4f.siftusystems.com/

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

Thank you both for your input. You are both probably right, but I have yet to confirm it. Indeed, I suspected that the data was still there and that this is probably just a minor glitch; however, fixing it is beyond my expertise.

@substr
1. Here is the zfs list command output:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
honeycomb 14.1T 1.80T 14.1T /honeycomb

2. From what I remember, I did click the Synchronize button at the bottom of the page with all the checkmarks, but only after the FUBAR event happened and after being given that suggestion on a forum (or after reading a post suggesting it; I'm not really sure now). I tried a lot of things after this problem, so I can't be certain. I do know for sure that I used the "import from ZFS disk" button, as suggested by either NAS4Free or FreeNAS (or both), immediately after upgrading the hardware. That I did do. Synchronize I remember trying only after the problem appeared. It's not really important any more.

3. I also don't think the cachefile is anything to worry about, and I am certain that FreeNAS has not upgraded my pool either. The problem is elsewhere.

@siftu:

1. You might actually be onto something here. I created the zpool almost a year ago and haven't touched it since. I remember following a guide back then, made by a more illuminated mind than mine (I was a complete noob about FreeBSD and ZFS, and still am, to a lesser extent). It is possible that I never created a volume, if ZFS actually permits using a pool without one. I certainly didn't create a dataset; that I do remember. I can't recall which guide I used, so I cannot confirm.
Given these assumptions (let's call them educated guesses now), I suppose it's possible that the problem lies in the mountpoint change.
Can you further elaborate on the suggested commands? How can I confirm this? What should I do next? I'm afraid I do not fully understand your phrase: "do you see your pool mounted in "df -h". I'm guessing the mountpoint just changed from /mnt/honeycomb to /honeycomb and if you follow the zfs set mountpoint advice above you should see your data again, or even "cd /honeycomb && ls -la""

2. I know about exceeding the recommended maximum of 9 disks in a RAIDZ vdev (with a RAIDZ1, no less). It just happened. Besides, I have always lost data due to RAID errors and never due to hardware malfunction, though I admit it is possible. This is not a production file server; it's not always on, and performance is not that critical. It easily saturates my dual-gigabit aggregated link as it is, without any problem. Since my pool is getting full, I plan to replace the 2TB drives with 4TB ones (Seagate NAS edition, with better MTBF specs and firmware). That operation unfortunately gives me chills now, after this upgrade event. I can't switch to a 2x5 RAIDZ1 configuration without losing my data. Of course, if I can't recover my data I will probably do as suggested, though, to be honest, I really dislike having to split and manage my data between two different pools. After all, this is why we use RAID or ZFS: to avoid individual, separate partitions (drives) and all the hassle arising from that. I tested the hardware extensively for 2-3 months in a RAID5 configuration and I'm pretty certain it is solid, not to mention the drives are kept in optimal thermal conditions at all times. But I do understand the risks.

Thank you both in advance. You gave me a dim ray of hope. Luckily I haven't formatted the drives yet.
Last edited by popacio on 02 Sep 2013 23:01, edited 1 time in total.

siftu
Moderator
Posts: 71
Joined: 17 Oct 2012 06:36
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by siftu »

FreeNAS changed the mountpoint. To set it back, get into a shell (console option 6, or SSH) and type:

Code: Select all

zfs set mountpoint=/mnt/honeycomb honeycomb
After that, all your shares etc. should work, as I'm guessing you have them pointed at /mnt/honeycomb.

Code: Select all

df -h 
can also be executed in the shell; it will show the current filesystem mount points so you can verify whether this was wrong. You should see honeycomb in there somewhere. Pop into IRC if you are having problems and I can assist; you can get there by selecting IRC under the Help menu.
System specs: NAS4Free amd64-embedded on ASUSTeK. M5A78L-M LX PLUS - AMD Phenom(tm) II X3 720 Processor - 8GB ECC Ram, Storage: 2x ZFS mirrors with 4x Western Digital Green (WDC WD10EADS)
My NAS4Free related blog - http://n4f.siftusystems.com/

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by raulfg3 »

I said the same here viewtopic.php?f=68&t=4915#p26484 and this is the answer viewtopic.php?f=68&t=4915#p26489

Perhaps you need to export and re-import the pool for it to take effect?
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

fsbruva
Advanced User
Posts: 378
Joined: 21 Sep 2012 14:50
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by fsbruva »

@ popacio

First off - let me say how sorry I am that this forum left you feeling abandoned and as if you were being told "it's your fault." If I had seen your post earlier (I've been VERY busy moving the family), I might have been able to help you out before you got so frustrated.

Now, down to brass tacks. It is indeed good that you didn't format the drives. The data is still there - we just need to figure out how to get at it.

Can you explain the hardware upgrade? Was it a motherboard? A disk controller? Did you export the pool before the upgrade, or was it an "it died, and I bought a replacement" type of thing?

My thoughts are as follows:
1. I have no idea what sort of "auto-magic" nonsense FreeNAS does. My lack of familiarity means there is a distinct possibility that the ZFS metadata was "messed with" by the FreeNAS live CD. It shouldn't be the cause, but I cannot state that unequivocally.
2. Your post:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
honeycomb 14.1T 1.80T 14.1T /honeycomb
Indicates that your 14.1 TB of data are still present (with 1.80 TB free), and that the pool is mounted at /honeycomb.
Can you further elaborate on the suggested commands? How can I confirm this? What should I do next?
Let's take a look, and figure this out one step at a time.
"do you see your pool mounted in "df -h".
This command will output the space used by the various mountpoints on your system, along with how much disk space is free. This will not harm your system. If you can post the output, it will help us track down where your data is.

Most ZFS pools are mounted at /mnt/[pool_name]. Yours appears to be mounted at /honeycomb. This is unusual, but not bad. However, I suspect you don't have a folder named /honeycomb, so the mount operation might not have produced the desired results.
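One way to confirm what ZFS itself thinks, before changing anything (a read-only sketch):

Code: Select all

zfs get mounted,mountpoint honeycomb
The mounted property reports yes or no, and mountpoint shows where the dataset gets mounted on import.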

Can you type the following commands, and post the output?

Code: Select all

ls /

Code: Select all

ls /mnt/
The strange part is that zfs list should have shown any and all volumes within the pool. I don't know what would have caused them to disappear. Do you have a backup of your NAS4Free config?

popacio
NewUser
Posts: 11
Joined: 31 Aug 2013 12:08
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by popacio »

The problem has fortunately been solved, thanks largely to siftu and fsbruva. You gentlemen have made my day. :D

It was quite simple: something changed the mountpoint (probably FreeNAS, without my knowing it). It was sufficient to change it back, and all was immediately well. I realise how noobish that sounds, but I simply didn't think of it.

I want to really thank everyone for their support and advice. Quite a lot of you took the time to read my posts and offer suggestions, and I sincerely appreciate that.

fsbruva
Advanced User
Posts: 378
Joined: 21 Sep 2012 14:50
Status: Offline

Re: Lost ZFS volume after system hardware upgrade

Post by fsbruva »

GREAT!

Did the volumes and/or datasets show up again?

Now you can mark this solved, and it can serve as a word to the wise: the FreeNAS live CD touches the system even when it is told not to!!
