Last night, I upgraded from v1906 to v2118. When I logged into the web interface, the zpool BigRaid showed two disks missing (ada6, ada7). I rebooted and two disks were still missing, but different ones (ada7, ada8). I have never had a problem with these disks before, and the two that go missing keep changing.
I reinstalled 1906 and, with a fresh config, imported the disks and the zpool; two disks went missing again. The controller is detected as an 'Intel ICH9 SATA300 controller'.
Help!
pool: BigRaid
state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: http://illumos.org/msg/ZFS-8000-HC
scan: scrub repaired 0 in 37h59m with 0 errors on Fri Sep 11 08:49:34 2015
config:
        NAME                     STATE     READ WRITE CKSUM
        BigRaid                  UNAVAIL      0     0     0
          raidz1-0               UNAVAIL      0     0     0
            873529272192766280   REMOVED      0     0     0  was /dev/ada5
            2949495835317950490  REMOVED      0     0     0  was /dev/ext2fs/Raid1
            ada7                 ONLINE       0     0     0
            ext2fs/Raid3         ONLINE       0     0     0
            ext2fs/Raid2         ONLINE       0     0     0
            ada10                ONLINE       0     0     0
errors: Permanent errors have been detected in the following files:
<metadata>:<0x0>
<metadata>:<0x14>
BigRaid:<0x0>
nas4free: /mnt# dmesg | grep ada5
ada5 at ata2 bus 0 scbus2 target 0 lun 0
ada5: <Hitachi HDS5C3030ALA630 MEAOA580> ATA8-ACS SATA 3.x device
ada5: Serial Number MJ1321YNG07N3A
ada5: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada5: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada5: Previously was known as ad8
(ada5:ata2:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
(ada5:ata2:0:0:0): CAM status: Command timeout
(ada5:ata2:0:0:0): Retrying command
(ada5:ata2:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
(ada5:ata2:0:0:0): CAM status: Command timeout
(ada5:ata2:0:0:0): Retrying command
(ada5:ata2:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
(ada5:ata2:0:0:0): CAM status: Command timeout
(ada5:ata2:0:0:0): Retrying command
ada5 at ata2 bus 0 scbus2 target 0 lun 0
ada5: <Hitachi HDS5C3030ALA630 MEAOA580> s/n MJ1321YNG07N3A detached
(ada5:ata2:0:0:0): Periph destroyed
nas4free: /mnt#
nas4free: /mnt# dmesg | grep ada6
ada6 at ata2 bus 0 scbus2 target 1 lun 0
ada6: <Hitachi HDS5C3030ALA630 MEAOA580> ATA8-ACS SATA 3.x device
ada6: Serial Number MJ1323YNG1352C
ada6: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada6: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada6: Previously was known as ad9
ada6 at ata2 bus 0 scbus2 target 1 lun 0
ada6: <Hitachi HDS5C3030ALA630 MEAOA580> s/n MJ1323YNG1352C detached
(ada6:ata2:0:1:0): Periph destroyed
nas4free: /mnt# camcontrol devlist
<ST4000VN000-1H4168 0957> at scbus1 target 0 lun 0 (ada0,pass0)
<ST4000VN000-1H4168 0957> at scbus1 target 1 lun 0 (ada1,pass1)
<ST4000VN000-1H4168 0957> at scbus1 target 2 lun 0 (ada2,pass2)
<ST4000VN000-1H4168 0957> at scbus1 target 3 lun 0 (ada3,pass3)
<ST4000VN000-1H4168 0957> at scbus1 target 4 lun 0 (ada4,pass4)
<Port Multiplier 0325197b 000e> at scbus1 target 15 lun 0 (pass5,pmp0)
<Hitachi HDS5C3030ALA630 MEAOA580> at scbus3 target 0 lun 0 (ada7,pass8)
<Hitachi HDS5C3030ALA630 MEAOA580> at scbus3 target 1 lun 0 (ada8,pass9)
<Hitachi HDS5C3030ALA630 MEAOA580> at scbus4 target 0 lun 0 (ada9,pass10)
<Hitachi HDS5C3030ALA630 MEAOA580> at scbus5 target 0 lun 0 (ada10,pass11)
<CENTON DS Pro 8.07> at scbus6 target 0 lun 0 (pass12,da0)
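The 'action' line in the status output above suggests running 'zpool clear' once the affected devices are reattached. A minimal sketch of that recovery path (hedged: run this only after reseating or replacing the SATA cables, since clearing errors on a flaky controller just restarts the timeouts):

```shell
# Reseat/replace cabling first, then clear the pool's I/O-failure state.
zpool clear BigRaid

# Confirm whether the REMOVED vdevs have come back online.
zpool status BigRaid

# Watch the kernel log for fresh CAM timeouts on the ata2 channel.
dmesg | tail -n 50
```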
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
After N4F Upgrade to 2118, Disks Missing in ZPool
-
efreem01
- NewUser

- Posts: 3
- Joined: 29 Nov 2015 15:55
- Status: Offline
-
kenZ71
- Advanced User

- Posts: 379
- Joined: 27 Jun 2012 20:18
- Location: Northeast, USA
- Status: Offline
Re: After N4F Upgrade to 2118, Disks Missing in ZPool
If reinstalling the OS doesn't solve the issue, I would be looking at the controller.
Although, to help troubleshoot, first check that all drive cables are securely connected.
Do you have another machine that you could put the drives in to check for errors?
11.2-RELEASE-p3 | ZFS Mirror - 2 x 8TB WD Red | 28GB ECC Ram
HP ML10v2 x64-embedded on Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz
Extra memory so I can host a couple VMs
1) Unifi Controller on Ubuntu
2) Librenms on Ubuntu
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: After N4F Upgrade to 2118, Disks Missing in ZPool
efreem01 wrote: The zpool BigRaid said that two disks were missing (ada6, ada7). I rebooted and saw that two disks were still missing (ada7, ada8). I have never had a problem with these disks before, and the two that go missing keep changing.
I reinstalled 1906 and, with a fresh config, imported the disks and the zpool; two disks went missing again. The controller is detected as an 'Intel ICH9 SATA300 controller'.
This is not an N4F, BSD, or upgrade problem.
efreem01 wrote: ada5: Previously was known as ad8
(ada5:ata2:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
(ada5:ata2:0:0:0): CAM status: Command timeout
(ada5:ata2:0:0:0): Retrying command
(ada5:ata2:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
Check your SATA cables and/or SATA controller; they must work on both 1906 and 2118.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
efreem01
- NewUser

- Posts: 3
- Joined: 29 Nov 2015 15:55
- Status: Offline
Re: After N4F Upgrade to 2118, Disks Missing in ZPool
I wanted to test the entire data stream from drive to CPU, so I booted my NAS with a Fedora 23 CD and was able to access the raw data on all 6 Hitachi hard drives. Then I wanted to look at the UFS data on the drives, so I booted a FreeNAS 9 CD and was able to import both pools. The system has been running throughout the day and the drives have stayed online. Could this be a BSD 10 issue? A driver issue? Some other issue causing drives to time out?
- b0ssman
- Forum Moderator

- Posts: 2438
- Joined: 14 Feb 2013 08:34
- Location: Munich, Germany
- Status: Offline
Re: After N4F Upgrade to 2118, Disks Missing in ZPool
Post the SMART values of all drives.
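The SMART values can be pulled with smartctl from sysutils/smartmontools (a sketch; the device names are taken from the camcontrol listing earlier in the thread and may differ on other systems):

```shell
# Dump full SMART data for every drive in the thread's camcontrol listing.
# Device names are assumptions from this thread; adjust to your system.
for d in ada0 ada1 ada2 ada3 ada4 ada5 ada7 ada8 ada9 ada10; do
    echo "=== /dev/$d ==="
    smartctl -a "/dev/$d"
done
```

Attributes worth watching for a timing-out drive include Reallocated_Sector_Ct, Current_Pending_Sector, and UDMA_CRC_Error_Count (CRC errors usually point at cabling rather than the platters).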
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
-
efreem01
- NewUser

- Posts: 3
- Joined: 29 Nov 2015 15:55
- Status: Offline
Re: After N4F Upgrade to 2118, Disks Missing in ZPool
It does look like drive ada5 may be failing, but why were two drives going MISSING in NAS4Free? FreeNAS is showing a failing drive, but the pool is online and healthy.
zpool status BigRaid
pool: BigRaid
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub in progress since Tue Dec 1 00:00:14 2015
1.00T scanned out of 13.8T at 19.9M/s, 187h50m to go
565K repaired, 7.24% done
config:
        NAME              STATE     READ WRITE CKSUM
        BigRaid           ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            ada5          ONLINE       0     0     0  (repairing)
            ext2fs/Raid1  ONLINE       0     0     0
            ada7          ONLINE       0     0     0
            ext2fs/Raid3  ONLINE       0     0     0
            ext2fs/Raid2  ONLINE       0     0     0
            ada10         ONLINE       0     0     0