This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
[SOLVED] Unsupportable block size 0 after upgrade
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
[SOLVED] Unsupportable block size 0 after upgrade
I've just upgraded nas4free from 9.0.0.1 - Sandstorm (revision 175) to the latest 9.1.0.1.236.
I did (as usual) a full OS upgrade, but now, after the reboot, my ZFS pool is marked as FAULTED!
I've looked at the nas4free console and I can see a long list of "unsupportable block size 0" messages.
I'm pretty sure there's no problem with the ZFS pool, as it was working perfectly before the upgrade.
Can anybody help please? I'm stuck!
Thanks
Last edited by rs232 on 25 Oct 2012 14:28, edited 1 time in total.
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Code:
  pool: raid5
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE    READ WRITE CKSUM
        raid5                     UNAVAIL     0     0     0
          raidz1-0                UNAVAIL     0     0     0
            16605981846334390349  UNAVAIL     0     0     0  was /dev/da2p2
            3757930375885694232   UNAVAIL     0     0     0  was /dev/da3p2
            7709243216078954917   UNAVAIL     0     0     0  was /dev/da4p2
            2556703805349698994   UNAVAIL     0     0     0  was /dev/da5p2
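For anyone hitting the same state, a minimal triage sketch (assuming the da2-da5 device names shown in the status output above; adjust to your system) is to confirm the kernel still sees the disks and their partitions before attempting a re-import:
Code:
# Check that FreeBSD can see the disks and their partition tables
camcontrol devlist
gpart show da2

# Scan for importable pools, then try a (forced) import by name
zpool import
zpool import -f raid5
-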
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
And here is the console screenshot:
(Attachment not available in this archive.)
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Try reverting to the latest NAS4Free 9.0.0.1.XXX.
The latest NAS4Free 9.1 has an automatic config conversion for FreeNAS 0.7. I'm not a programmer, so I don't know how it works, but I suspect something in your previous config makes NAS4Free 9.1 think it is a FreeNAS config and try to convert the disk names.
PS: I'm really not sure this is the problem; it's only speculation.
PS2: Please compare the config before and after the upgrade to check for changes.
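A minimal sketch of that comparison, assuming you downloaded a config backup before and after the upgrade (the file names below are hypothetical; use whatever you saved your backups as):
Code:
diff -u config-before-upgrade.xml config-after-upgrade.xml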
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
What's the right way to do so?
Shall I "upgrade" to the old version?
Thanks!
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Not tested by me, but in theory you can boot from the NAS4Free 9.0.0.1.188 media and do a full upgrade over 9.1.0.1.236.
PS: if that doesn't work, edit config.xml and find the version string; it must be 1 or 1.0, I'm not sure which.
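A quick way to locate that string in a downloaded config backup (a sketch; it assumes the backup is the usual XML file and that the value lives in a <version> tag, so adjust the pattern if yours differs):
Code:
grep -n "<version>" config.xml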
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Ok, I've downgraded to the previous version using the old ISO file I had.
Apart from finding the ZFS pool unmounted (I mounted it manually), everything else seems to be working fine!
Just one thing... minidlna!
Code:
/usr/local/etc/rc.d/minidlna start
Starting minidlna.
minidlna: dlna: Invalid argument
/usr/local/etc/rc.d/minidlna: WARNING: failed to start minidlna
hum... :-/
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
It seems to be an rc.d script problem, as running
Code:
/usr/local/sbin/minidlna -f /usr/local/etc/minidlna.conf
directly works totally fine. Any tip?
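One way to see where the rc.d wrapper diverges from the direct invocation (a debugging sketch, not a fix) is to trace the script with the shell's -x flag and compare the minidlna command line it builds against the one that works:
Code:
sh -x /usr/local/etc/rc.d/minidlna start 2>&1 | grep minidlna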
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Reinstalled an updated version of minidlna, and now it works fine.
- zoon01
- Developer

- Posts: 724
- Joined: 20 Jun 2012 21:06
- Location: Netherlands
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
raulfg3 wrote: Try to revert to latest Nas4Free 9.0.0.1.XXX [...]
The latest 9.0.0.1 has an automatic config conversion for FreeNAS 0.7 too.
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
For reference, what is the precise version since Nas4free supports config conversion?
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
I've tried to upgrade my 9.0.0.1.175 to 9.1.0.1.306 (no luck) and 9.0.0.1.249 (no luck).
After the upgrade I get the following message:
Code:
Your config is a blacklist. You must reset the config or reinstall.
Do you want to reset to factory defaults now?
If I downgrade to 9.0.0.1.175, everything works well again.
Not sure what I'm doing wrong...
(Attachment not available in this archive.)
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
You aren't doing anything wrong, but with your config, if you want to upgrade, you need to do a fresh install and re-create all your packages, users, shares, etc.
PS: something in your config is bad and can't be converted to the new schema; the blacklist has been active since rev 247:
viewtopic.php?f=78&t=1039&p=3675&hilit=blacklist#p3675
Code:
All versions of NAS4Free below 247 have wrong input validation.
Due to this, any user can insert invalid characters into NAS4Free.
It's easy to destroy the system.
9.0.0.1.XXX has wrong config conversion at boot.
It fails until you touch "Access > Users and Groups".
Once the config is converted, it's impossible to recover.
To avoid a security risk, all the previous versions were deleted from SF.
NAS4Free 9.1.0.1.247 is the current development version,
and it should be the base version for the future release.
If your hardware is not compatible with 9.1, please use 9.0.0.1.249 instead until the problem is fixed.
Don't forget to create a backup of your NAS4Free configuration after upgrading!
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Finally, I've installed NAS4Free on a different VM. Importing the ZFS disks from 9.0.0.1.175 into the new 9.1.0.1.306, I still get lots of these: unsupportable block size 0.
I'm not able to import the ZFS pool, and if I go adding the disks manually (under disk management), they are all reported as size 0.
Using the very same disks in the old 9.0.0.1.175 VM, everything works fine though!
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Please keep your pool on 9.0.0.1.175 until daoyama reads this post; I really don't know what to do.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: Unsupportable block size 0 after upgrade
rs232 wrote: Finally, I've installed NAS4Free on a different VM. Importing the ZFS disks from 9.0.0.1.175 into the new 9.1.0.1.306, I still get lots of these: unsupportable block size 0. [...]
You didn't mention a VM until this post. What host/configuration do you use?
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
No, I didn't; sorry about that.
The host is ESXi 5.0 with the latest updates installed (July 2012): 2 dual-core CPUs, 16GB of RAM, 2x500GB in RAID1 via an Adaptec 5405 controller.
The VM is hardware version 8, built on a FreeBSD x64 template. Virtual HW as follows:
1 CPU - 1 core assigned
6GB of RAM
2GB virtual disk
8GB virtual swap disk
4 physical disks raw mapped (e.g. vmkfstools -x /vmfs/devices/disks/t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA5684556 /vmfs/volumes/boot/Physical-disks/WD20EARS_C.vmdk). These are formatted as ZFS and used by nas4free directly.
I did a full ISO installation originally and have updated a couple of times since, booting the VM from the .iso and performing the upgrade via console option 9. I never had problems before this upgrade. The ZFS pool is up and running and healthy; I've also run a scrub a couple of times in the past, and I exported the pool before powering down the old VM. Still, the new VM sees the raw mapped disks as size 0. I'm attaching the config in txt format.
Note that the disks are exactly the same; I've just moved them between the two VMs.
Many thanks for the help!
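To confirm how an existing descriptor was actually created, vmkfstools can query an RDM. A sketch using the WD20EARS_C.vmdk path from above; note that the -q (query RDM) flag is my assumption here, so check vmkfstools usage on your host first:
Code:
# Query an existing RDM descriptor: reports the mapped raw device
# and whether the mapping is passthrough (physical) or not (virtual)
vmkfstools -q /vmfs/volumes/boot/Physical-disks/WD20EARS_C.vmdk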
(Attachment not available in this archive.)
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: Unsupportable block size 0 after upgrade
rs232 wrote: 4 physical disks raw mapped (e.g. vmkfstools -x /vmfs/devices/disks/t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA5684556 /vmfs/volumes/boot/Physical-disks/WD20EARS_C.vmdk). These are formatted as ZFS and used by nas4free directly.
Which type did you create, virtual or physical?
AFAIK, "vmkfstools -z" creates a physical mapping and "vmkfstools -r" creates a virtual one.
I have only used "-r".
Please upload your vmname.vmx.
Thanks,
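Side by side, the two RDM flavours look like this (a sketch with a placeholder device ID; physical mode passes SCSI commands such as SMART queries through to the guest, virtual mode does not):
Code:
# Physical compatibility RDM (passthrough)
vmkfstools -z /vmfs/devices/disks/<device-id> Disk_physical.vmdk
# Virtual compatibility RDM (non-passthrough)
vmkfstools -r /vmfs/devices/disks/<device-id> Disk_virtual.vmdk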
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
I'm pretty sure I used -z.
Here's the vmx:
Code:
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "8"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
nvram = "Nas4free.nvram"
virtualHW.productCompatibility = "hosted"
powerType.powerOff = "soft"
powerType.powerOn = "hard"
powerType.suspend = "hard"
powerType.reset = "soft"
displayName = "Nas4free_"
extendedConfigFile = "Nas4free.vmxf"
memsize = "6144"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "generated"
vmci0.unrestricted = "TRUE"
chipset.onlineStandby = "FALSE"
guestOS = "freebsd-64"
uuid.location = "56 4d a9 36 ec 09 41 cd-38 19 d2 39 00 c4 ea 04"
uuid.bios = "56 4d a9 36 ec 09 41 cd-38 19 d2 39 00 c4 ea 04"
vc.uuid = "52 10 0c f0 f8 09 5c df-ad 1a c9 a5 69 e0 99 0d"
wwn.enabled = "TRUE"
snapshot.action = "keep"
sched.cpu.min = "0"
sched.cpu.units = "mhz"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.shares = "normal"
ethernet0.generatedAddress = "00:0c:29:c4:ea:04"
vmci0.id = "12904964"
tools.syncTime = "FALSE"
cleanShutdown = "FALSE"
replay.supported = "FALSE"
sched.swap.derivedName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Nas4free/Nas4free-9c8bc162.vswp"
replay.filename = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
ethernet0.pciSlotNumber = "32"
vmci0.pciSlotNumber = "33"
ethernet0.generatedAddressOffset = "0"
hostCPUID.0 = "0000000168747541444d416369746e65"
hostCPUID.1 = "00040f120002080000002001178bfbff"
hostCPUID.80000001 = "00040f12000003530000001febd3fbff"
guestCPUID.0 = "0000000168747541444d416369746e65"
guestCPUID.1 = "00040f120000080080002001078bfbff"
guestCPUID.80000001 = "00040f120000035300000209ebd3fbff"
userCPUID.0 = "0000000168747541444d416369746e65"
userCPUID.1 = "00040f120002080080002001078bfbff"
userCPUID.80000001 = "00040f120000035300000209ebd3fbff"
evcCompatibilityMode = "FALSE"
vmotion.checkpointFBSize = "4194304"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0.sharedBus = "none"
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "Nas4free_1.vmdk"
scsi0:0.mode = "independent-persistent"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "Nas4free_2.vmdk"
scsi0:1.mode = "independent-persistent"
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:0.redo = ""
scsi0:1.redo = ""
scsi0.pciSlotNumber = "16"
bios.forceSetupOnce = "FALSE"
scsi0:2.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_A.vmdk"
scsi0:2.mode = "independent-persistent"
scsi0:2.ctkEnabled = "FALSE"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.present = "TRUE"
scsi0:2.redo = ""
scsi0:3.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_B.vmdk"
scsi0:3.mode = "independent-persistent"
scsi0:3.ctkEnabled = "FALSE"
scsi0:3.deviceType = "scsi-hardDisk"
scsi0:3.present = "TRUE"
scsi0:3.redo = ""
scsi0:4.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_C.vmdk"
scsi0:4.mode = "independent-persistent"
scsi0:4.ctkEnabled = "FALSE"
scsi0:4.deviceType = "scsi-hardDisk"
scsi0:4.present = "TRUE"
scsi0:4.redo = ""
scsi0:5.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_D.vmdk"
scsi0:5.mode = "independent-persistent"
scsi0:5.ctkEnabled = "FALSE"
scsi0:5.deviceType = "scsi-hardDisk"
scsi0:5.present = "TRUE"
scsi0:5.redo = ""
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
ide0:0.present = "TRUE"
ide0:0.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/ISOs/NAS4Free-x64-LiveCD-9.0.0.1.175.iso"
ide0:0.deviceType = "cdrom-image"
ide0:0.startConnected = "FALSE"
ide1:1.present = "FALSE"
ide1:0.present = "FALSE"
floppy0.present = "FALSE"
-
NastyEbilPiwate
- NewUser

- Posts: 1
- Joined: 04 Oct 2012 21:36
- Status: Offline
Re: Unsupportable block size 0 after upgrade
I'm also running an ESXi 5 system with raw mapped (-z) disks, and am experiencing the exact same problem, running 9.1.0.1 r306.
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
I just wanted to add something here. The reason I originally mapped the disks with -z was to be able to use S.M.A.R.T. to poll HD info (e.g. temperature) from FreeNAS originally, nas4free nowadays.
I have two *sibling* systems, and SMART has always been troublesome, so I left the -z mapping but decided to disable SMART, as it was freezing my system.
I don't mind re-mapping the disks in virtual mode, but perhaps there is something nas4free can do to become more ESX-friendly before I go that way.
My 2 cents
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
I've noticed that in the vmdk file there's a reference to buslogic, which is not supported by FreeBSD x64! I've thus deleted the virtual disks and recreated them with the option -a lsilogic:
Code:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T ./Hitachi_A.vmdk -a lsilogic
The performance seems better, but the very same problem exists:
Unsupportable block size 0
if I upgrade to any version other than 9.0.0.1 - Sandstorm (revision 175).
There must be a solution...
-
rs232
- Starter

- Posts: 59
- Joined: 25 Jun 2012 13:48
- Status: Offline
Re: Unsupportable block size 0 after upgrade
Ok, it seems like switching to virtual does the job. Here's what I did:
1) Power off the VM.
2) SSH into ESXi and remove your vmdk files. This will NOT delete your data.
3) Recreate the files with the -r AND -a lsilogic parameters, e.g.:
Code:
vmkfstools -r /vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T ./Hitachi_A.vmdk -a lsilogic
I suggest you use the old filenames when recreating the vmdk!
4) Open the VM properties and remove the disks from the config.
5) Re-add the disks to the VM config using the GUI.
6) Power on the VM.
7) I'm not sure this is needed for you, but I had to run:
Code:
zpool export mypool
zpool import mypool
7a) Re-add the pool under Disks/ZFS/Pool (as it wouldn't auto-mount otherwise).
8) Finally, upgrade to the latest nas4free version; no "unsupportable block size 0" will appear any more.
HTH
rs232
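For completeness, a quick sanity check that the re-imported pool is healthy before upgrading (a sketch, assuming the pool name mypool from step 7):
Code:
# Verify all vdevs are ONLINE, then run a scrub to check the data
zpool status -v mypool
zpool scrub mypool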
-
sollord
- NewUser

- Posts: 1
- Joined: 12 Nov 2012 09:56
- Status: Offline
Re: Unsupportable block size 0 after upgrade
rs232 wrote: Ok, it seems like switching to virtual does the job. Here's what I did: [...]
While this is a solution of sorts, it's not one I'd want to use long term, as mapping the drives as virtual disks means nas4free no longer sees any S.M.A.R.T. data from the drives.
-
mikeybhoy
- NewUser

- Posts: 1
- Joined: 19 May 2013 23:48
- Status: Offline
Re: [SOLVED] Unsupportable block size 0 after upgrade
I too have the same issue with RDM drives >= 2TB on the later versions of nas4free. Reverting to an earlier version fixes the issue. I have an HP N40L with an LSI card fitted. I have 7 drives in total, 6 of which I pass through as RDM (created with -z for SMART monitoring) to nas4free for raidz2. The 4 drives passed through by the onboard SAS controller give the unsupportable block size error, but the 2 drives passed by the LSI are recognized. Makes and models of drive vary (Samsung, WD, Seagate).
For now I have removed ESX and am running nas4free natively on the latest build (performance was terrible), but am hoping that vmxnet3 support will help. I just need the unsupportable block size issue to get resolved before I can try vmxnet3 :)
Cheers
-
nothin
- NewUser

- Posts: 1
- Joined: 30 Jun 2013 05:12
- Status: Offline
Re: [SOLVED] Unsupportable block size 0 after upgrade
Does anyone know where I can source a 9.0 install from? The directory on SourceForge is empty...
-
poppadum
- NewUser

- Posts: 2
- Joined: 27 Oct 2012 22:56
- Status: Offline
Re: [SOLVED] Unsupportable block size 0 after upgrade
The issue is still present on 9.1.0.1 r804. A disk passed through from ESXi is seen as having zero size, and the console reports lots of errors along the lines of:
Code:
(da1:mpt0:0:1:0): unsupportable block size 0
I believe it's the same issue as reported in this thread.
Update: after further investigation, it seems that changing the SCSI controller type to 'BusLogic Parallel' in ESXi 5.1 lets the OS see the disk at its correct size and import the pool. Too early to tell if it's stable yet.
-
gbonny
- NewUser

- Posts: 5
- Joined: 24 May 2013 18:19
- Status: Offline
Re: [SOLVED] Unsupportable block size 0 after upgrade
'BusLogic Parallel' isn't preferred with BSD, but is this still an issue using the 'LSI Logic SAS' SCSI controller and physical mapping with build r847?
I recently did a test with r804 using BusLogic Parallel physical mapping, which showed lower performance compared to LSI Logic Parallel with virtual mapping.
The bug is still listed as open though: http://sourceforge.net/p/nas4free/bugs/65/ Is there any progress to report?
And as stated, the limitation of virtual mappings (RDM) is 2TB, while 3TB+ is getting more common...
9.2.0.1 revision 943 x86 VM;
640MB RAM, E1000, LSI Logic Parallel Virtual mapping:
CIFS/SMB (w enable tuning + large r/w + browse master, w-o use sendfile + AIO): ~32MB/s read & ~20MB/s write, iperf ~366Mbit/s;
on Atom D525, 4GB DDR3, ESXi 5.1u2 with VM's on 60GB SSD Vertex 2, raw disk 2TB WD20EARX, USB3.0 HDD casing with 3TB WD30EFRX.