This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
We would like to ask users and admins to rewrite or move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
9.1.0.1.358 crashes under zfs replace
-
l8gravely
- NewUser

- Posts: 5
- Joined: 27 Oct 2012 19:15
- Status: Offline
9.1.0.1.358 crashes under zfs replace
Hi,
I'm working on upgrading an older FreeNAS 0.72 server with 512MB of RAM to NAS4Free. Like an idiot, I upgraded my ZFS version to the latest. The problem I've been running into is that the system keeps crashing and rebooting (while running under LiveCD mode for testing), and I can't get any logs out of the system to understand what's going on.
I've got 4 disks in my ZFS pool, 2 x 300GB and 2 x 750GB, set up as two RAID1 mirrors. I'm replacing the 300GB disks with 2TB ones by doing 'zpool replace <pool> <old> <new>', which works: it starts the replacement. But then it craps out and reboots without any warning, which sucks to debug. In desperation, I've upgraded from an older Intel Dell with only 512MB of RAM to a Dell Dimension 8300 with 3.5GB of RAM and a P4 CPU with hyper-threading turned on. I've also got two built-in SATA ports on the system and two add-in SATA cards in PCI slots. Not the fastest, but it's been fast enough.
The cards are both based on SiL 3114 SATA150 controller chips. I can provide dmesg output if anyone has a clue.
I suspect I need to set up syslog or some other way of replicating the system logs elsewhere. Or can I set up a serial console on the system and use that instead when booting from a LiveCD? That would be ideal since my server has a multiport serial card. Any other suggestions on how I can burn this system in before I send it to a remote site to be my backup server would be much appreciated.
Thanks,
John
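For later readers: one way to capture logs from a crashing box is to ship them off the machine before it panics. A minimal sketch, assuming FreeBSD's stock syslogd and a second machine reachable as "loghost" (a placeholder name):

```shell
# On the NAS: forward all messages to a remote collector via UDP port 514.
# The leading @ in syslog.conf means "send to this host".
echo '*.*	@loghost' >> /etc/syslog.conf
/etc/rc.d/syslogd restart

# On the collector (also FreeBSD): allow messages from the NAS's subnet,
# e.g. in /etc/rc.conf:
#   syslogd_flags="-a 192.168.1.0/24"
```

A panic can still reboot the box faster than syslog flushes, so a serial console (console="comconsole" in /boot/loader.conf) is often the more reliable capture path.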
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
l8gravely wrote: I've got 4 disks in my ZFS pool, 2 x 300gb, 2 x 750gb. setup in two RAID1 mirrors. I'm replacing the 300Gb disks with 2Tb ones and doing the 'zpool replace <pool> <old> <new>' which works, it starts to do the replacement. But if craps out and reboots without any warning, which sucks to debug.
If you have less than 6GB of memory, you need to modify the ZFS memory size.
If you are not familiar with the kernel parameters, try the ZFS kernel tune extension:
ZFS kernel tune (WebGUI extension) 20121020
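For reference, the sort of boot-time tunables such an extension manages can also be set by hand in /boot/loader.conf. A conservative sketch for a low-RAM i386 box; the values are illustrative starting points, not tested recommendations:

```shell
# /boot/loader.conf -- conservative ZFS sizing for low-memory systems
vm.kmem_size="1024M"           # raise the kernel memory ceiling (i386)
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"         # cap the ARC well below kmem_size
vfs.zfs.prefetch_disable="1"   # prefetch is a common low-RAM crash trigger
```

A reboot is required for loader tunables to take effect.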
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
-
l8gravely
- NewUser

- Posts: 5
- Joined: 27 Oct 2012 19:15
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
Hi Daoyama,
Thanks for the hints, I'll take a look at my ZFS memory sizes. But I must admit I'm *shocked* that NAS4Free doesn't tune ZFS parameters automatically depending on your memory. And if it doesn't do that, it should at least give you a hint that you might run into problems when using less than the recommended or required memory sizes. But in general, I'm now pretty unhappy with ZFS and I'll probably just rip it out when I get a chance. I'm interested in a stable system, not one that craps out without warning when it darn well knows you don't fit the parameters it's expecting.
Nor does the documentation seem to mention this anywhere in a nice and clear manner. Why not?
Sigh....
-
l8gravely
- NewUser

- Posts: 5
- Joined: 27 Oct 2012 19:15
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
Another thought: I should just run the x86_64 version instead of my current 32-bit version; I suspect that will also help things. And it's easy enough to do since I'm still in the LiveCD stage. I'll give that a whirl and see how it works, since I still have another 300GB -> 2TB disk replacement to do.
-
l8gravely
- NewUser

- Posts: 5
- Joined: 27 Oct 2012 19:15
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
Nope, no such luck trying out the x86_64 version on my Dell Dimension 8300 with a P4 and 3.5GB of RAM. Sigh... so it looks like I need to reduce the vfs.zfs.arc_max tunable to 3/4 of RAM to make sure it's more stable. I guess I can do that... at some point. Any other ideas, guys and gals?
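Note that on FreeBSD 9 the tunable is vfs.zfs.arc_max, and it can only be set at boot, not with a live sysctl. A sketch of checking and then lowering it (the 768M value is only an example, not a recommendation):

```shell
# Inspect the current ARC cap and actual ARC usage:
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size

# Lower the cap in /boot/loader.conf, then reboot for it to take effect:
echo 'vfs.zfs.arc_max="768M"' >> /boot/loader.conf
```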
- lux
- Advanced User

- Posts: 193
- Joined: 23 Jun 2012 11:37
- Location: Bielefeld, Germany
- Contact:
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
ZFS needs some tuning if you have less than 8GB RAM - please use Google - ZFS uses a lot of RAM.
If tuned (& if you have enough RAM), there's nothing as rock-stable as ZFS - just my opinion - I've never lost any data since I've been using ZFS.
http://wiki.freebsd.org/ZFSTuningGuide
http://www.solarisinternals.com/wiki/in ... ning_Guide
Home:11.3.x.7538/emb@32GB USB|1270v2@X9SCA-F|ECC32GB|i340-T4[lagg@GS108Tv2&smb-mch]|M1015@IT|9HDD~40TB@3xRaidZ1+1HDD+2SSD i335&i520+1xi800P@ZIL|~44W idle@SS-400FL2|Nanoxia Deep Silence 6B|24/7
Services: CIFS, FTP, TFTP, SSH, NFS, Rsync, Syncthing, Webserver, BitTorrent, VirtualBox | Extensions: OBI, TheBrig[certbot, Asterisk] | Extensions via vBox: Pi-hole, Jellyfin & zigbee2mqtt @DebianVM's
Test:12.x/emb@16GB USB|X3 420e@M4A88TD-V|16GB|i350-T2|M1015@IT|8xHDD+3xSSD[different Size&Brand]RaidZ1+2|for TESTing only
-
l8gravely
- NewUser

- Posts: 5
- Joined: 27 Oct 2012 19:15
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
It's all well and good to say that ZFS needs lots of memory, but why is NAS4Free NOT automatically tuning itself for more stable ZFS support? And why are there no warnings in the documentation about it either?
I guess I'll have to tune ZFS here, or download the ZFS tuner module/addon and use that, but I still feel the NAS4Free developers should be automatically tuning the system for stability over random crashes that are hard to debug. Heck, tuning the system by default for 1GB or 2GB of RAM would be the smart thing to do. People can always *increase* these settings if they're too low, but if they're too high by default and lead to random reboots without any notice in the logs, then that's just bad programming.
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
l8gravely wrote: NAS4Free developers should be automatically tuning the system for stability, over random crashes which are hard to debug. Heck, tuning the system by default for 1gb or 2gb of RAM would be the smart thing to do.
Currently, the default assumed size is 4GB. (2GB is not enough.)
NAS4Free already downsizes the ARC (if memory < 4GB) and disables prefetch, but that does not work under some loads.
It is very hard to determine what size is really safe.
For example, enabling prefetch requires a large temporary buffer under some workloads, so it can crash even if you have 16GB of memory.
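To see what those automatic defaults actually resolved to on a given box, the relevant sysctls can be inspected directly (names as on FreeBSD 9):

```shell
sysctl vfs.zfs.prefetch_disable   # 1 = prefetch off
sysctl vfs.zfs.arc_max            # current ARC cap in bytes
# To force prefetch off regardless of the defaults, add to /boot/loader.conf:
#   vfs.zfs.prefetch_disable="1"
```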
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
-
fsbruva
- Advanced User

- Posts: 378
- Joined: 21 Sep 2012 14:50
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
Did you have any luck disabling prefetch for the disk replacement? And how stable is the system without attempting the disk replacement? Also, have you tried doing a scrub and then just attaching the new, larger disk to the pool? If that works, you might just have to add the extra step of running a 3-disk mirror for a couple of hours until the resilvering is done.
If you have a spare thumb drive, an embedded install might be a useful debugging tool, as you can redirect the system logs to an alternate location and try to narrow down the error's cause.
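The attach-then-detach approach can be sketched as commands; the pool name "tank" and the device names are placeholders for whatever `zpool status` shows on the actual system:

```shell
# Add the new 2TB disk as a third side of the existing mirror:
zpool attach tank ada1 ada4    # ada1 = an existing 300GB disk, ada4 = new 2TB
zpool status tank              # watch the resilver progress

# Once resilvering completes, drop the old disk from the mirror:
zpool detach tank ada1
```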
-
fsbruva
- Advanced User

- Posts: 378
- Joined: 21 Sep 2012 14:50
- Status: Offline
Re: 9.1.0.1.358 crashes under zfs replace
Also, can you run this code and post the result?

Code:
zdb -C | grep ashift

Then, can you also run this code and post the result? (Replace ad1 with the device ID of the existing 750GB disk.)

Code:
smartctl -i /dev/ad1

Lastly, can you run this code and post the result? (Replace ad4 with the device ID of the new 2TB disk.)

Code:
smartctl -i /dev/ad4