This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Disks|ZFS|Pools|Tools = KAPUTT!

German community

Moderators: b0ssman, apollo567, Princo, crowi

Digi-Quick
Advanced User
Posts: 198
Joined: 19 Jul 2013 04:21
Status: Offline

Disks|ZFS|Pools|Tools = KAPUTT!

Post by Digi-Quick »

Hi,
the page disks_zfs_zpool_tools.php is "broken" on one of my NAS boxes / no longer loads

Code:

Warning: SimpleXMLElement::__construct(): Entity: line 1: parser error : error parsing attribute name in /etc/inc/co_zpool_info.inc on line 675
Warning: SimpleXMLElement::__construct(): _disk>115341153411534' in /etc/inc/co_zpool_info.inc on line 675
Warning: SimpleXMLElement::__construct(): ath>11524115241152411524' in /etc/inc/co_zpool_info.inc on line 675
Warning: SimpleXMLElement::__construct(): ath>1151411514115141151411514' in /etc/inc/co_zpool_info.inc on line 675
Warning: SimpleXMLElement::__construct(): ath>1150411504115041150411504' in /etc/inc/co_zpool_info.inc on line 675
Warning: SimpleXMLElement::__construct(): ath>114941149411494__construct('scan() #2 /usr/local/www/disks_zfs_zpool_tools.php(48): co_zpool_info->__construct() #3 {main} thrown in /etc/inc/co_zpool_info.inc on line 675
Can anyone make sense of this output?

10.3.0.3 - Pilingitam (revision 3065)

ciao
Last edited by Digi-Quick on 01 Nov 2016 02:44, edited 1 time in total.
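The truncated warnings suggest the page assembles an XML document from pool status data and hands it to SimpleXMLElement, and that the generated markup was malformed (digits fused into positions where attribute names belong). A minimal sketch of the same failure class in Python — the `bad` string below is invented to mimic the warnings, not taken from the real generated XML or the XigmaNAS PHP code:

```python
import xml.etree.ElementTree as ET

# Well-formed markup of the kind the page presumably generates:
good = "<pool><disk path='/dev/da0'/></pool>"

# Malformed markup: digits sit where an attribute name belongs,
# mimicking the "error parsing attribute name" warnings above.
bad = "<pool><disk 115341153411534'/></pool>"

ET.fromstring(good)  # parses fine
try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("parser error:", e)  # same failure class SimpleXMLElement reports
```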

ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: Disks|ZFS|Pools|Tools = KAPUTT!

Post by ms49434 »

Please post the output of zdb -C poolname.
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

Digi-Quick
Advanced User
Posts: 198
Joined: 19 Jul 2013 04:21
Status: Offline

Re: Disks|ZFS|Pools|Tools = KAPUTT!

Post by Digi-Quick »

Here you go. I can't see anything unusual in it (at first glance).

Code:

zdb -C Pool-01

MOS Configuration:
        version: 5000
        name: 'Pool-01'
        state: 0
        txg: 3077576
        pool_guid: 8000533302928430269
        hostname: ''
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 8000533302928430269
            children[0]:
                type: 'raidz'
                id: 0
                guid: 9625943845311684736
                nparity: 3
                metaslab_array: 34
                metaslab_shift: 40
                ashift: 12
                asize: 152029609852928
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 104884123381222009
                    path: '/dev/da0.nop'
                    phys_path: '/dev/da0.nop'
                    whole_disk: 1
                    DTL: 162
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 7021807800089247604
                    path: '/dev/da1.nop'
                    phys_path: '/dev/da1.nop'
                    whole_disk: 1
                    DTL: 161
                    create_txg: 4
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 11426206341967318846
                    path: '/dev/da2.nop'
                    phys_path: '/dev/da2.nop'
                    whole_disk: 1
                    DTL: 160
                    create_txg: 4
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 105008371026357898
                    path: '/dev/da3.nop'
                    phys_path: '/dev/da3.nop'
                    whole_disk: 1
                    DTL: 159
                    create_txg: 4
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 15161685685320911377
                    path: '/dev/da4.nop'
                    phys_path: '/dev/da4.nop'
                    whole_disk: 1
                    DTL: 158
                    create_txg: 4
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 13949590773683811566
                    path: '/dev/da5.nop'
                    phys_path: '/dev/da5.nop'
                    whole_disk: 1
                    DTL: 157
                    create_txg: 4
                children[6]:
                    type: 'disk'
                    id: 6
                    guid: 13358265282459604108
                    path: '/dev/da6'
                    phys_path: '/dev/da6'
                    whole_disk: 1
                    DTL: 156
                    create_txg: 4
                children[7]:
                    type: 'disk'
                    id: 7
                    guid: 17817148861239713806
                    path: '/dev/da7'
                    phys_path: '/dev/da7'
                    whole_disk: 1
                    DTL: 155
                    create_txg: 4
                children[8]:
                    type: 'disk'
                    id: 8
                    guid: 17694506127454008390
                    path: '/dev/da8.nop'
                    phys_path: '/dev/da8.nop'
                    whole_disk: 1
                    DTL: 154
                    create_txg: 4
                children[9]:
                    type: 'disk'
                    id: 9
                    guid: 2778870826383208726
                    path: '/dev/da9.nop'
                    phys_path: '/dev/da9.nop'
                    whole_disk: 1
                    DTL: 153
                    create_txg: 4
                children[10]:
                    type: 'disk'
                    id: 10
                    guid: 17821906346042120560
                    path: '/dev/da10.nop'
                    phys_path: '/dev/da10.nop'
                    whole_disk: 1
                    DTL: 152
                    create_txg: 4
                children[11]:
                    type: 'disk'
                    id: 11
                    guid: 14344466425749064505
                    path: '/dev/da11.nop'
                    phys_path: '/dev/da11.nop'
                    whole_disk: 1
                    DTL: 151
                    create_txg: 4
                children[12]:
                    type: 'disk'
                    id: 12
                    guid: 9961324635208360208
                    path: '/dev/da12.nop'
                    phys_path: '/dev/da12.nop'
                    whole_disk: 1
                    DTL: 150
                    create_txg: 4
                children[13]:
                    type: 'disk'
                    id: 13
                    guid: 1369045969103165675
                    path: '/dev/da13.nop'
                    phys_path: '/dev/da13.nop'
                    whole_disk: 1
                    DTL: 149
                    create_txg: 4
                children[14]:
                    type: 'disk'
                    id: 14
                    guid: 13877262935998104256
                    path: '/dev/da14.nop'
                    phys_path: '/dev/da14.nop'
                    whole_disk: 1
                    DTL: 148
                    create_txg: 4
                children[15]:
                    type: 'disk'
                    id: 15
                    guid: 3039498527147728012
                    path: '/dev/da15'
                    phys_path: '/dev/da15'
                    whole_disk: 1
                    DTL: 147
                    create_txg: 4
                children[16]:
                    type: 'disk'
                    id: 16
                    guid: 6003273007774249802
                    path: '/dev/da16.nop'
                    phys_path: '/dev/da16.nop'
                    whole_disk: 1
                    DTL: 146
                    create_txg: 4
                children[17]:
                    type: 'disk'
                    id: 17
                    guid: 15007217418972645308
                    path: '/dev/da17.nop'
                    phys_path: '/dev/da17.nop'
                    whole_disk: 1
                    DTL: 145
                    create_txg: 4
                children[18]:
                    type: 'disk'
                    id: 18
                    guid: 4927143768441495916
                    path: '/dev/da18.nop'
                    phys_path: '/dev/da18.nop'
                    whole_disk: 1
                    DTL: 144
                    create_txg: 4
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data

ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: Disks|ZFS|Pools|Tools = KAPUTT!

Post by ms49434 »

That helped us a lot in tracking down the bug, thank you very much for the data.

commit 3082:
bugfix: support added for pools with more than 10 vdevices
bugfix: support added for vdevices with more than 10 devices
enhancement: exception handling improved
improvement: conversion to xml

Digi-Quick
Advanced User
Posts: 198
Joined: 19 Jul 2013 04:21
Status: Offline

Re: Disks|ZFS|Pools|Tools = KAPUTT!

Post by Digi-Quick »

So it really was a bug and will be fixed shortly - and here I thought I had a bigger problem :)

Digi-Quick
Advanced User
Posts: 198
Joined: 19 Jul 2013 04:21
Status: Offline

Re: Disks|ZFS|Pools|Tools = KAPUTT!

Post by Digi-Quick »

Works again with the current update, thanks.

