
WebGUI incorrectly states two pools Detected at same mount point

jadumeg99
NewUser
Posts: 12
Joined: 27 Oct 2019 14:42
Status: Offline

WebGUI incorrectly states two pools Detected at same mount point

#1

Post by jadumeg99 » 05 Dec 2019 01:12

I had a pool (DATA-Pool), a 1-disk stripe, with data on it - all working well.
I got 3 new 8TB disks and made a new raidz1 pool (tank). [In case anyone wants to know: I shucked 1 WD EasyStore and 2 WD Elements - all came out as HGST He10 256MB-cache WD80EMAZ-00WJTA0.]
I had to replicate my stripe pool's data to the final raidz1 destination (tank).

I did the following. X and Xdelta denote the initial and incremental snapshots, to ensure any changes made to DATA-Pool while the first replication was running get synced:
zfs snapshot DATA-Pool@X
zfs send -Rv DATA-Pool@X | zfs receive -Fdus tank
# first pass done
zfs snapshot DATA-Pool@Xdelta
zfs send -i X DATA-Pool@Xdelta | zfs receive -Fdus tank
250GB done in an hour or so - great.
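For reference, the two-pass replication above can be sketched end to end as follows (a minimal sketch, not exactly what I ran: the -r recursive-snapshot flag is an assumption on my part, and the u in -Fdus asks receive not to mount the received datasets):

```shell
# Pass 1: full replicated stream of DATA-Pool and its snapshots
zfs snapshot -r DATA-Pool@X
zfs send -Rv DATA-Pool@X | zfs receive -Fdus tank

# Pass 2: incremental stream covering changes made during pass 1
zfs snapshot -r DATA-Pool@Xdelta
zfs send -Rv -i X DATA-Pool@Xdelta | zfs receive -Fdus tank
```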

On the Disks | ZFS | Configuration | Current page, the mount points are correct:
DATA-Pool /mnt/DATA-Pool
tank /mnt/tank


whereas on the Disks | ZFS | Configuration | Detected page, the mount point of tank is wrong (it shows the same point as DATA-Pool):
DATA-Pool /mnt/DATA-Pool
tank /mnt/DATA-Pool


zpool history on tank shows:
History for 'tank':
2019-12-03.21:26:44 zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o feature@multi_vdev_crash_dump=enabled -o feature@spacemap_histogram=enabled -o feature@enabled_txg=enabled -o feature@hole_birth=enabled -o feature@extensible_dataset=enabled -o feature@bookmarks=enabled -o feature@filesystem_limits=enabled -o feature@embedded_data=enabled -o feature@large_blocks=enabled -o feature@sha512=enabled -o feature@skein=enabled -o feature@zpool_checkpoint=enabled -o feature@device_removal=enabled -o feature@obsolete_counts=enabled -o feature@spacemap_v2=enabled -m /mnt/tank tank raidz1 /dev/ada0 /dev/ada2 /dev/ada4
2019-12-03.22:27:32 zpool upgrade tank
2019-12-03.22:29:51 zfs set compression=lz4 tank
2019-12-03.22:35:10 zfs set atime=off tank
2019-12-03.22:36:03 zfs set redundant_metadata=most tank
2019-12-03.23:40:54 zfs receive -Fdus tank
2019-12-03.23:52:16 zfs receive -Fdus tank
2019-12-04.00:10:03 zfs destroy -R tank@1044pmDEC032019delta
2019-12-04.00:12:25 zpool scrub tank

All attempts to change the tank pool's mount point fail with the error below:
zpool export tank
cannot unmount /mnt/DATA-Pool/Media : Device busy

No clue why - no services are running (SMB / PLEXPASS / nothing).
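In case anyone else hits the same "Device busy", this is roughly how to see what is holding the mountpoint busy on FreeBSD (a sketch; the path is my busy mountpoint, and forcing the export is a last resort that can interrupt in-flight I/O):

```shell
# List processes with files open on the filesystem mounted at this path
fstat -f /mnt/DATA-Pool

# Last resort: force the export even if datasets appear busy
zpool export -f tank
```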

On the Disks > ZFS > Pools > Management > Edit page, nothing is specified for the Root or Mount Point.

All datasets under tank (replicated as above) are pointing under /mnt/DATA-Pool - which is wrong.
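My guess at the cause (an assumption, not confirmed): zfs send -R replicates dataset properties, so the received datasets carried DATA-Pool's mountpoint property over to tank. If that is the case, resetting the property on the receiving pool's root should fix it, roughly:

```shell
# See where each dataset under tank thinks it should mount
zfs get -r -o name,value mountpoint tank

# Point the pool root at its own path; children inherit the change
zfs set mountpoint=/mnt/tank tank
```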

This seems to be a bug - a few similar posts exist, but their answers don't solve my scenario.

How do I rectify this? I want to continue working on tank since it is my main storage... DATA-Pool was a temporary pool built while I awaited the 3 disks.

Please help.
Config: 12.1.0.4.7091 - RootOnZFS (120GB SSD), 3x8TB[raidz1], 1TB, 750GB on HP p6120F w/ Intel Core 2 Quad Q9650 - 8GB Non-ECC

raulfg3
Site Admin
Posts: 4978
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: WebGUI incorrectly states two pools Detected at same mount point

#2

Post by raulfg3 » 05 Dec 2019 08:23

First you need to clear the config and re-import the disks so they show correctly: Disks > Management > HDD Management.

Then synchronize the WebGUI with the real ZFS state: Disks > ZFS > Configuration > Synchronize.
12.0.0.4 (revision 6766)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


jadumeg99
NewUser
Posts: 12
Joined: 27 Oct 2019 14:42
Status: Offline

Re: WebGUI incorrectly states two pools Detected at same mount point

#3

Post by jadumeg99 » 06 Dec 2019 00:39

Thank you raulfg3 - I tried those steps too, many times, but it didn't work.
I had some doubts about everything Synchronize would do, but that is for later reading (because I got to a stable working setup now with the steps below).

I did:
- Removed all snapshots on both pools that I had created for the zfs send/recv - it seemed they were preventing the zpool export from succeeding - weird (to me).
- Ran zpool export repeatedly on the two pools until that "Device busy" error went away. [As I recall, each earlier zpool export attempt got one step further, giving a Device busy error on each filesystem under the pool.]
- Ran zpool import.
- Got to a slightly better config at this point.
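The steps above, roughly as commands (a sketch; the snapshot names are placeholders for the ones I actually created, and the export may need several tries until the Device busy error clears):

```shell
# Drop the leftover replication snapshots on both pools
zfs destroy -r DATA-Pool@X
zfs destroy -r tank@X

# Export both pools, then re-import them
zpool export DATA-Pool
zpool export tank
zpool import tank
zpool import DATA-Pool
```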

I then backed up the config, manually edited the config XML to remove the old DATA-Pool and old drive, shut down, and disconnected the old drive.
I reinstalled RootOnZFS 12.1 (I had to do this anyway to fully utilize the 120GB SSD), imported the config, installed extensions... a bit of work here and there and I was back to a working state.

Thank you again.
Config: 12.1.0.4.7091 - RootOnZFS (120GB SSD), 3x8TB[raidz1], 1TB, 750GB on HP p6120F w/ Intel Core 2 Quad Q9650 - 8GB Non-ECC
