This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

accidentally formatted wrong drives

goelli
NewUser
Posts: 14
Joined: 09 Sep 2012 19:09
Status: Offline

accidentally formatted wrong drives

Post by goelli »

Hi,

I have built a NAS4Free server with 3x3TB HDDs in a RAIDZ1. I created a zpool and some datasets and migrated my data over from my old ReadyNAS box.
When everything was configured well, I decided to add the old 1.5TB drives from the old NAS. The goal was to add them as another RAIDZ1 to the existing zpool.

What I did:
- added the newly installed disks via Management
- formatted them as zpool devices
- created a RAIDZ1 vdev
- added the vdev to the existing zpool

During all this the WebGUI only offers drives that make sense, so I formatted only the drives that were suggested by the GUI.

After adding the vdev to the zpool, it displayed much more space than it should, and under Information it showed the wrong drives in the pool.
I thought a reboot would fix it, but after the reboot my pool was gone.

I booted the newest NAS4Free Live-CD, added the drives and tried to import, but it only says "error when synchronising".


Now I think the GUI displayed the wrong drives for formatting, because the ada0-5 assignments changed somehow and I formatted the disks holding my data :shock:
Is there something I can do, like "unformat" them?
Do I have any chance to restore my data?
Where can I learn more about CLI commands for ZFS?

If something is not clear, please ask. I hope someone can help me...

-----

On the Live-CD, with only the 3 data disks connected, I ran "zpool import" on the CLI - it says:

Code:

nas4free:~# zpool import
   pool: GoelliZFS1
     id: 13653737229427241151
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        GoelliZFS1   FAULTED  corrupted data
          raidz1-1   ONLINE
            ada0     ONLINE
            ada1     ONLINE
            ada2     ONLINE
The -f flag didn't change anything.

So at least I know there is a chance for my data...
Perhaps it is the "wrong" (and now missing) vdev in the pool which makes the pool faulted...

---

I've read about a recovery mode with the -F option for "zpool import",
but it says "invalid option".
When I type "zpool import -f -a" it says I should destroy the pool and recreate it from a backup :?
Isn't there a way to get the data on the disks back?
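[Editor's note: a hedged sketch of the usual escalation for a pool that refuses to import. Whether each flag exists depends on the zpool binary on the boot medium, which is presumably why -F reported "invalid option" here; a newer FreeBSD/NAS4Free Live-CD should accept these. Pool name is the one from this thread.]

```shell
# Sketch only; flag support varies with the ZFS version on the Live-CD.

# 1. Dry-run recovery: -n reports whether discarding the last few
#    transactions would make the pool importable, without writing anything.
zpool import -f -F -n GoelliZFS1

# 2. If the dry run looks good, import with rewind; read-only (where
#    supported) so nothing on the disks is modified yet.
zpool import -f -F -o readonly=on GoelliZFS1

# 3. Last resort: extreme rewind (-X) searches much older uberblocks.
#    It can take a very long time and recent writes are lost.
zpool import -f -F -X GoelliZFS1
```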

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: accidentally formatted wrong drives

Post by raulfg3 »

Did you boot OpenIndiana or illumos on this system at any time?

The ZFS version differs between illumos/OpenIndiana and BSD, and BSD detects it (illumos uses ZFS v33 and BSD uses v28).

Sorry, I don't know what else can be done; google a bit or wait for another user who can help you.

PS: If for some reason you "upgraded" ZFS to rev 33, it is NOT possible to read/use it on BSD/NAS4Free. Sorry, BSD ZFS rev 28 can read older revisions, NOT future ones.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

goelli
NewUser
Posts: 14
Joined: 09 Sep 2012 19:09
Status: Offline

Re: accidentally formatted wrong drives

Post by goelli »

Thanks for your reply.
I never booted OpenIndiana or illumos. My ZFS is v28.

I did some reading and am now sure that I did not format the wrong disks. Something must have gone wrong while adding the new vdev to the pool; I think it mixed up the ada labels of the disks...

Importing the pool with -F or -X did not help. I imported the pool with -V to get some use out of the zdb command.
Before that I checked the disk names against the SMART output to verify that it had the right disks.
After that I looked at the labels of the disks via "zdb -l" - they seem to be OK...
"zpool clear -FX" didn't help either.

Here is the log:

Code:

goelli-nas4free:~# zpool status
no pools available
goelli-nas4free:~# zpool import
  pool: GoelliZFS1
    id: 13653737229427241151
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

GoelliZFS1   FAULTED  corrupted data
  raidz1-1   ONLINE
    ada3     ONLINE
    ada4     ONLINE
    ada5     ONLINE


goelli-nas4free:~# zpool import -fa
cannot import 'GoelliZFS1': I/O error
Destroy and re-create the pool from
a backup source.

goelli-nas4free:~# zpool import -faV
goelli-nas4free:~# zpool status
  pool: GoelliZFS1
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scan: none requested
config:

NAME         STATE     READ WRITE CKSUM
GoelliZFS1   FAULTED      1     0     0
  missing-0  ONLINE       0     0     0
  raidz1-1   ONLINE       0     0     0
    ada3     ONLINE       0     0     0
    ada4     ONLINE       0     0     0
    ada5     ONLINE       0     0     0


goelli-nas4free:~# zpool clear GoelliZFS1
cannot clear errors for GoelliZFS1: I/O error
goelli-nas4free:~# zpool clear -F GoelliZFS1
cannot clear errors for GoelliZFS1: I/O error
goelli-nas4free:~# zpool clear -FX GoelliZFS1
cannot clear errors for GoelliZFS1: one or more devices is currently unavailable

goelli-nas4free:~# zdb
GoelliZFS1:
    version: 28
    txg: 0
    pool_guid: 13653737229427241151
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13653737229427241151
        vdev_stats[0]: 771985407
        vdev_stats[1]: 4
        vdev_stats[2]: 2
        vdev_stats[3]: 0
        vdev_stats[4]: 0
        vdev_stats[5]: 0
        vdev_stats[6]: 0
        vdev_stats[7]: 0
        vdev_stats[8]: 3
        vdev_stats[9]: 0
        vdev_stats[10]: 0
        vdev_stats[11]: 0
        vdev_stats[12]: 0
        vdev_stats[13]: 0
        vdev_stats[14]: 5120
        vdev_stats[15]: 0
        vdev_stats[16]: 0
        vdev_stats[17]: 0
        vdev_stats[18]: 0
        vdev_stats[19]: 1
        vdev_stats[20]: 0
        vdev_stats[21]: 0
        vdev_stats[22]: 0
        vdev_stats[23]: 0
        vdev_stats[24]: 0
        children[0]:
            type: 'missing'
            id: 0
            guid: 11126174256441503561
            metaslab_array: 0
            metaslab_shift: 0
            ashift: 0
            asize: 0
            is_log: 0
            vdev_stats[0]: 771985407
            vdev_stats[1]: 7
            vdev_stats[2]: 0
            vdev_stats[3]: 0
            vdev_stats[4]: 0
            vdev_stats[5]: 0
            vdev_stats[6]: 4718592
            vdev_stats[7]: 0
            vdev_stats[8]: 0
            vdev_stats[9]: 0
            vdev_stats[10]: 0
            vdev_stats[11]: 0
            vdev_stats[12]: 0
            vdev_stats[13]: 0
            vdev_stats[14]: 0
            vdev_stats[15]: 0
            vdev_stats[16]: 0
            vdev_stats[17]: 0
            vdev_stats[18]: 0
            vdev_stats[19]: 0
            vdev_stats[20]: 0
            vdev_stats[21]: 0
            vdev_stats[22]: 0
            vdev_stats[23]: 0
            vdev_stats[24]: 0
        children[1]:
            type: 'raidz'
            id: 1
            guid: 2411670394542198210
            nparity: 1
            metaslab_array: 195
            metaslab_shift: 36
            ashift: 9
            asize: 9001764126720
            is_log: 0
            create_txg: 564829
            vdev_stats[0]: 771985407
            vdev_stats[1]: 7
            vdev_stats[2]: 0
            vdev_stats[3]: 0
            vdev_stats[4]: 0
            vdev_stats[5]: 0
            vdev_stats[6]: 8933531975680
            vdev_stats[7]: 0
            vdev_stats[8]: 3
            vdev_stats[9]: 0
            vdev_stats[10]: 0
            vdev_stats[11]: 0
            vdev_stats[12]: 0
            vdev_stats[13]: 0
            vdev_stats[14]: 5120
            vdev_stats[15]: 0
            vdev_stats[16]: 0
            vdev_stats[17]: 0
            vdev_stats[18]: 0
            vdev_stats[19]: 0
            vdev_stats[20]: 0
            vdev_stats[21]: 0
            vdev_stats[22]: 0
            vdev_stats[23]: 0
            vdev_stats[24]: 0
            children[0]:
                type: 'disk'
                id: 0
                guid: 9830619583012791022
                path: '/dev/ada3'
                phys_path: '/dev/ada3'
                whole_disk: 1
                create_txg: 564829
                vdev_stats[0]: 771985407
                vdev_stats[1]: 7
                vdev_stats[2]: 0
                vdev_stats[3]: 0
                vdev_stats[4]: 0
                vdev_stats[5]: 0
                vdev_stats[6]: 2977848710485
                vdev_stats[7]: 1
                vdev_stats[8]: 17
                vdev_stats[9]: 0
                vdev_stats[10]: 0
                vdev_stats[11]: 0
                vdev_stats[12]: 0
                vdev_stats[13]: 0
                vdev_stats[14]: 661504
                vdev_stats[15]: 0
                vdev_stats[16]: 0
                vdev_stats[17]: 0
                vdev_stats[18]: 0
                vdev_stats[19]: 0
                vdev_stats[20]: 0
                vdev_stats[21]: 0
                vdev_stats[22]: 0
                vdev_stats[23]: 0
                vdev_stats[24]: 0
            children[1]:
                type: 'disk'
                id: 1
                guid: 9079015106775653762
                path: '/dev/ada4'
                phys_path: '/dev/ada4'
                whole_disk: 1
                create_txg: 564829
                vdev_stats[0]: 771985407
                vdev_stats[1]: 7
                vdev_stats[2]: 0
                vdev_stats[3]: 0
                vdev_stats[4]: 0
                vdev_stats[5]: 0
                vdev_stats[6]: 2977848710485
                vdev_stats[7]: 1
                vdev_stats[8]: 10
                vdev_stats[9]: 0
                vdev_stats[10]: 0
                vdev_stats[11]: 0
                vdev_stats[12]: 0
                vdev_stats[13]: 0
                vdev_stats[14]: 656896
                vdev_stats[15]: 0
                vdev_stats[16]: 0
                vdev_stats[17]: 0
                vdev_stats[18]: 0
                vdev_stats[19]: 0
                vdev_stats[20]: 0
                vdev_stats[21]: 0
                vdev_stats[22]: 0
                vdev_stats[23]: 0
                vdev_stats[24]: 0
            children[2]:
                type: 'disk'
                id: 2
                guid: 18100977419577112130
                path: '/dev/ada5'
                phys_path: '/dev/ada5'
                whole_disk: 1
                create_txg: 564829
                vdev_stats[0]: 771985407
                vdev_stats[1]: 7
                vdev_stats[2]: 0
                vdev_stats[3]: 0
                vdev_stats[4]: 0
                vdev_stats[5]: 0
                vdev_stats[6]: 2977848710485
                vdev_stats[7]: 1
                vdev_stats[8]: 9
                vdev_stats[9]: 0
                vdev_stats[10]: 0
                vdev_stats[11]: 0
                vdev_stats[12]: 0
                vdev_stats[13]: 0
                vdev_stats[14]: 660992
                vdev_stats[15]: 0
                vdev_stats[16]: 0
                vdev_stats[17]: 0
                vdev_stats[18]: 0
                vdev_stats[19]: 0
                vdev_stats[20]: 0
                vdev_stats[21]: 0
                vdev_stats[22]: 0
                vdev_stats[23]: 0
                vdev_stats[24]: 0
    name: 'GoelliZFS1'
    state: 0
    timestamp: 1347197482
    hostid: 595290593
    hostname: 'goelli-nas4free'
    rewind-policy:
        rewind-request-txg: 18446744073709551615
        rewind-request: 1


goelli-nas4free:~# zdb -l /dev/ada3
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9830619583012791022
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9830619583012791022
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9830619583012791022
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9830619583012791022
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829


goelli-nas4free:~# zdb -l /dev/ada4
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9079015106775653762
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9079015106775653762
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9079015106775653762
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9079015106775653762
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829


goelli-nas4free:~# zdb -l /dev/ada5
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 18100977419577112130
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 18100977419577112130
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 18100977419577112130
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 18100977419577112130
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
I'm now sure that my data is on the disks. After adding the new vdev I hadn't changed anything.
So there must be a way to tell ZFS to dismiss the wrong entry in the metadata - or to edit the metadata myself...

When you think about the following case, you'd agree there has to be a way to detach vdevs:
If you have a pool of 4 vdevs which is full of data, you are supposed to add more space to the pool by adding a new vdev, right? If now, for some reason, after a short time and with not much new data written, the newly attached vdev fails completely - what do you do? ZFS is always consistent on-disk. ZFS is copy-on-write, so no data is changed until it's touched. In that case you have a pool with 4 healthy vdevs holding all your data and one faulty vdev with almost no data. And you get the message to discard all your data, destroy the pool and roll back from backup?! Somewhat ridiculous, right?

BodgeIT
Starter
Posts: 74
Joined: 03 Jul 2012 17:39
Location: London
Status: Offline

Re: accidentally formatted wrong drives

Post by BodgeIT »

Could it be that one of the three has messed-up metadata? How about trying to import the pool from 2 of the 3 disks? That might just give you a degraded pool that can be repaired with a resilver. Obviously you would need to try all the combinations...
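[Editor's note: a hedged sketch of how this could be tried from the CLI. Device names are the ones from this thread; `/tmp/subset` is an arbitrary choice, and read-only import may not be supported on every Live-CD. `zpool import -d` searches only the given directory, so a directory containing links to just two of the disks hides the third from the import.]

```shell
# Sketch only: try importing with one raidz1 member hidden at a time.
mkdir -p /tmp/subset
ln -s /dev/ada3 /tmp/subset/ada3
ln -s /dev/ada4 /tmp/subset/ada4   # leave ada5 out for this combination

# Forced, read-only import (if supported) from only those devices; if one
# disk's metadata is the problem, the pool may come up DEGRADED instead
# of FAULTED and could then be repaired with a resilver.
zpool import -d /tmp/subset -f -o readonly=on GoelliZFS1

# Then repeat with the other combinations: ada3+ada5 and ada4+ada5.
```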
NAS4Free: 11.4.0.4 - (7682) amd64-embedded
Mobo: Supermicro X9SCL-F, CPU: Xeon E3-1230v2; RAM: Crucial 32GB ECC
System: IBM M1015it SAS controller(SAS2008 v20); Intel Dual 1Gb Server Nic; Thermaltake 1KW PSU;

Storage: Raidz1(3x Toshiba N300 6TB), Raidz1(3x WD Red 3TB), UFS(1x 1TB) Utility disk, ZFS Cache(64Gb SSD)

goelli
NewUser
Posts: 14
Joined: 09 Sep 2012 19:09
Status: Offline

Re: accidentally formatted wrong drives

Post by goelli »

Hmm... that is at least a new idea :-)
I'm not sure, because it says it has an extra missing vdev, but I will try this when I'm back home - that's nothing I can do over SSH ;-) At least I don't know how...

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: accidentally formatted wrong drives

Post by raulfg3 »

Please google for "The pool may be active on another system"; perhaps something there can help you.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

goelli
NewUser
Posts: 14
Joined: 09 Sep 2012 19:09
Status: Offline

Re: accidentally formatted wrong drives

Post by goelli »

So I did some more reading: Ben Rockwood's blog about the basics of zdb, and especially Max Bruning's blog, which has very good articles on ZFS recovery. I also asked the freebsd-fs mailing list. After that I can say that I was right with my first thoughts: I formatted the wrong disks, and I added the very disks to the pool that were already in it. It's very odd, but true...
The labels of the disks made this clear when I looked at the uberblocks.

Code:

goelli-nas4free:~# zdb -l -u /dev/ada3

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'GoelliZFS1'
    state: 0
    txg: 564831
    pool_guid: 13653737229427241151
    hostid: 595290593
    hostname: 'goelli-nas4free'
    top_guid: 2411670394542198210
    guid: 9830619583012791022
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 1
        guid: 2411670394542198210
        nparity: 1
        metaslab_array: 195
        metaslab_shift: 36
        ashift: 9
        asize: 9001764126720
        is_log: 0
        create_txg: 564829
        children[0]:
            type: 'disk'
            id: 0
            guid: 9830619583012791022
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 1
            create_txg: 564829
        children[1]:
            type: 'disk'
            id: 1
            guid: 9079015106775653762
            path: '/dev/ada4'
            phys_path: '/dev/ada4'
            whole_disk: 1
            create_txg: 564829
        children[2]:
            type: 'disk'
            id: 2
            guid: 18100977419577112130
            path: '/dev/ada5'
            phys_path: '/dev/ada5'
            whole_disk: 1
            create_txg: 564829
Uberblock[0]
magic = 0000000000bab10c
version = 28
txg = 564832
guid_sum = 13624038478904324968
timestamp = 1347197207 UTC = Sun Sep  9 15:26:47 2012
Uberblock[8]
magic = 0000000000bab10c
version = 28
txg = 564872
guid_sum = 13624038478904324968
timestamp = 1347197408 UTC = Sun Sep  9 15:30:08 2012
Uberblock[9]
magic = 0000000000bab10c
version = 28
txg = 564873
guid_sum = 13624038478904324968
timestamp = 1347197410 UTC = Sun Sep  9 15:30:10 2012
Uberblock[10]
magic = 0000000000bab10c
version = 28
txg = 564874
guid_sum = 13624038478904324968
timestamp = 1347197411 UTC = Sun Sep  9 15:30:11 2012
Uberblock[24]
magic = 0000000000bab10c
version = 28
txg = 564888
guid_sum = 13624038478904324968
timestamp = 1347197482 UTC = Sun Sep  9 15:31:22 2012
Uberblock[32]
magic = 0000000000bab10c
version = 28
txg = 564872
guid_sum = 13624038478904324968
timestamp = 1347197408 UTC = Sun Sep  9 15:30:08 2012
Uberblock[36]
magic = 0000000000bab10c
version = 28
txg = 564873
guid_sum = 13624038478904324968
timestamp = 1347197410 UTC = Sun Sep  9 15:30:10 2012
Uberblock[40]
magic = 0000000000bab10c
version = 28
txg = 564874
guid_sum = 13624038478904324968
timestamp = 1347197411 UTC = Sun Sep  9 15:30:11 2012
Uberblock[93]
magic = 0000000000bab10c
version = 28
txg = 564829
guid_sum = 13624038478904324968
timestamp = 1347197198 UTC = Sun Sep  9 15:26:38 2012
Uberblock[94]
magic = 0000000000bab10c
version = 28
txg = 564830
guid_sum = 13624038478904324968
timestamp = 1347197200 UTC = Sun Sep  9 15:26:40 2012
Uberblock[95]
magic = 0000000000bab10c
version = 28
txg = 564831
guid_sum = 13624038478904324968
timestamp = 1347197203 UTC = Sun Sep  9 15:26:43 2012
Uberblock[96]
magic = 0000000000bab10c
version = 28
txg = 564832
guid_sum = 13624038478904324968
timestamp = 1347197207 UTC = Sun Sep  9 15:26:47 2012
Uberblock[116]
magic = 0000000000bab10c
version = 28
txg = 564829
guid_sum = 13624038478904324968
timestamp = 1347197198 UTC = Sun Sep  9 15:26:38 2012
Uberblock[120]
magic = 0000000000bab10c
version = 28
txg = 564830
guid_sum = 13624038478904324968
timestamp = 1347197200 UTC = Sun Sep  9 15:26:40 2012
Uberblock[124]
magic = 0000000000bab10c
version = 28
txg = 564831
guid_sum = 13624038478904324968
timestamp = 1347197203 UTC = Sun Sep  9 15:26:43 2012
You can clearly see that "create_txg: 564829" dates from the moment I added the new vdev to the pool, so I have overwritten all the labels on my disks with new ones. From Max Bruning's blog articles I think it is possible to recreate or recover the correct labels, and with that I would be able to read my data, but this is far beyond my skills... I am sad to say that I have to give up here. Without someone taking me by the hand and telling me what to do step by step, recovering/rewriting the right labels on my disks is not something I could manage within a year or so. It's a pity that ZFS still has no built-in tools for recovering metadata; that would be a task to think about for the future.

But it is a bit of consolation that at least I know now that I have done everything I could.

BodgeIT
Starter
Posts: 74
Joined: 03 Jul 2012 17:39
Location: London
Status: Offline

Re: accidentally formatted wrong drives

Post by BodgeIT »

Sorry to hear this, goelli. I know what it's like to lose a motherlode of data and files...
NAS4Free: 11.4.0.4 - (7682) amd64-embedded
Mobo: Supermicro X9SCL-F, CPU: Xeon E3-1230v2; RAM: Crucial 32GB ECC
System: IBM M1015it SAS controller(SAS2008 v20); Intel Dual 1Gb Server Nic; Thermaltake 1KW PSU;

Storage: Raidz1(3x Toshiba N300 6TB), Raidz1(3x WD Red 3TB), UFS(1x 1TB) Utility disk, ZFS Cache(64Gb SSD)


Return to “ZFS (only!)”