
[SOLVED] Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 10:38
by rs232
I've just upgraded NAS4Free from 9.0.0.1 - Sandstorm (revision 175) to the latest 9.1.0.1.236.
I did a full OS upgrade as usual, but now, after the reboot, my ZFS pool is marked as FAULTED!

I've looked at the NAS4Free console and I can see a long list of "unsupportable block size 0" messages.

I'm pretty sure there's no problem with the ZFS pool, as it was working perfectly before the upgrade.

Can anybody help please?

I'm stuck!

Thanks

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 10:54
by rs232

Code:

pool: raid5
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
	replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

	NAME                      STATE     READ WRITE CKSUM
	raid5                     UNAVAIL      0     0     0
	  raidz1-0                UNAVAIL      0     0     0
	    16605981846334390349  UNAVAIL      0     0     0  was /dev/da2p2
	    3757930375885694232   UNAVAIL      0     0     0  was /dev/da3p2
	    7709243216078954917   UNAVAIL      0     0     0  was /dev/da4p2
	    2556703805349698994   UNAVAIL      0     0     0  was /dev/da5p2

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 10:56
by rs232
And here is the console screenshot

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 11:02
by raulfg3
Try reverting to the latest NAS4Free 9.0.0.1.XXX.

The latest NAS4Free 9.1 has automatic config conversion for FreeNAS 0.7. I'm not a programmer, so I don't know how it works, but I suspect that something in your previous config makes NAS4Free 9.1 think it is a FreeNAS config and try to convert the drive names.

PS: I'm really not sure this is the problem; it's only speculation.

PS2: Please compare the config before and after the upgrade to check for changes.

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 11:09
by rs232
What's the right way to do so?

Shall I "upgrade" to the old version?

Thanks!

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 11:56
by raulfg3
I haven't tested it myself, but in theory you can boot from the NAS4Free 9.0.0.1.188 media and do a full upgrade over 9.1.0.1.236.

PS: if that doesn't work, edit config.html and find the version string; it must be 1 or 1.0, I'm not sure.

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 18:03
by rs232
Ok, I've downgraded to the previous version using the old ISO file I had.
Apart from finding the ZFS pool unmounted (I mounted it manually), everything else seems to be working fine!

Just one thing.... the minidlna!

/usr/local/etc/rc.d/minidlna start
Starting minidlna.
minidlna: dlna: Invalid argument
/usr/local/etc/rc.d/minidlna: WARNING: failed to start minidlna

hum... :-/

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 19:41
by rs232
It seems to be an rc.d script problem, as running
/usr/local/sbin/minidlna -f /usr/local/etc/minidlna.conf
works totally fine. Any tips?
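For anyone who hits the same symptom: when the daemon starts fine by hand but the rc.d wrapper fails, running the wrapper under `sh -x` usually shows which extra flag or argument it adds. A minimal sketch using a toy stand-in script (the real /usr/local/etc/rc.d/minidlna is more involved):

```shell
# Toy stand-in for an rc.d wrapper; the real minidlna rc.d script differs.
cat > /tmp/rc_demo <<'EOF'
#!/bin/sh
command_args="-f /usr/local/etc/minidlna.conf"
echo "would exec: minidlna ${command_args}"
EOF
# sh -x echoes each command before running it, so you can see exactly
# what arguments the wrapper builds before it invokes the daemon:
sh -x /tmp/rc_demo
```

On the real script, something like `sh -x /usr/local/etc/rc.d/minidlna start 2>&1 | grep minidlna` narrows the trace down to the interesting lines.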

Re: Unsupportable block size 0 after upgrade

Posted: 29 Aug 2012 20:43
by rs232
I reinstalled an updated version of minidlna and now it works fine.

Re: Unsupportable block size 0 after upgrade

Posted: 31 Aug 2012 00:30
by zoon01
raulfg3 wrote: Try reverting to the latest NAS4Free 9.0.0.1.XXX.

The latest NAS4Free 9.1 has automatic config conversion for FreeNAS 0.7. I'm not a programmer, so I don't know how it works, but I suspect that something in your previous config makes NAS4Free 9.1 think it is a FreeNAS config and try to convert the drive names.

PS: I'm really not sure this is the problem; it's only speculation.

PS2: Please compare the config before and after the upgrade to check for changes.
The latest 9.0.0.1 has automatic config conversion for FreeNAS 0.7 too.

Re: Unsupportable block size 0 after upgrade

Posted: 05 Sep 2012 12:10
by rs232
For reference, since precisely which version has NAS4Free supported config conversion?

Re: Unsupportable block size 0 after upgrade

Posted: 05 Sep 2012 12:25
by raulfg3

Re: Unsupportable block size 0 after upgrade

Posted: 23 Sep 2012 11:17
by rs232
I've tried to upgrade my 9.0.0.1.175 to 9.1.0.1.306 (no luck) and 9.0.0.1.249 (no luck).

After the upgrade I get the following message:

Code:

Your config is a blacklist. You must reset the config or reinstall.

Do you want to reset To Factory Defaults now?
If I downgrade to 9.0.0.1.175, everything works well again.
Not sure what I'm doing wrong...

Re: Unsupportable block size 0 after upgrade

Posted: 23 Sep 2012 16:05
by raulfg3
You aren't doing anything wrong, but with your config, if you want to upgrade you need to do a fresh install and then recreate all your packages, users, shares, etc.

PS: something in your config is bad and can't be converted to the new schema; the blacklist has been active since rev 247.

viewtopic.php?f=78&t=1039&p=3675&hilit=blacklist#p3675

Code:

All versions of NAS4Free below 247 have wrong input validation.
Because of this, any user can insert invalid characters into NAS4Free.
It's easy to destroy the system.

9.0.0.1.XXX has wrong config conversion at boot.
It keeps failing until you touch "Access > Users and Groups".
Once the config is converted, it's impossible to recover.

To avoid a security risk, all the previous versions were deleted from SF.

NAS4Free 9.1.0.1.247 is the current development version,
and it should be the base version for the future release.

If your hardware is not compatible with 9.1, please use 9.0.0.1.249 instead until the problem is fixed.

Don't forget to create a backup of your NAS4Free configuration after upgrading!


Re: Unsupportable block size 0 after upgrade

Posted: 23 Sep 2012 21:52
by rs232
Finally, I've installed NAS4Free on a different VM. Importing the ZFS disks from 9.0.0.1.175 into the new 9.1.0.1.306, I still get lots of these: unsupportable block size 0.
I'm not able to import the ZFS pool, and if I add the disks manually (under disk management), they are all reported as size 0.
Using the very same disks in the old 9.0.0.1.175 VM, everything works fine though!

Re: Unsupportable block size 0 after upgrade

Posted: 23 Sep 2012 22:43
by raulfg3
Please keep your pool on 9.0.0.1.175 until daoyama reads this post; I really don't know what to do.

Re: Unsupportable block size 0 after upgrade

Posted: 24 Sep 2012 19:30
by daoyama
rs232 wrote: Finally, I've installed NAS4Free on a different VM. Importing the ZFS disks from 9.0.0.1.175 into the new 9.1.0.1.306, I still get lots of these: unsupportable block size 0.
I'm not able to import the ZFS pool, and if I add the disks manually (under disk management), they are all reported as size 0.
Using the very same disks in the old 9.0.0.1.175 VM, everything works fine though!
You didn't mention a VM until this post. What host/configuration do you use?

Re: Unsupportable block size 0 after upgrade

Posted: 24 Sep 2012 19:54
by rs232
No, I didn't; sorry about that.
The host is ESXi 5.0 with the latest updates installed (July 2012): 2 dual-core CPUs, 16 GB of RAM, and 2x 500 GB in RAID 1 via an Adaptec 5405 controller.

The VM is hardware version 8, built from a FreeBSD x64 template. Virtual hardware as follows:
1 CPU - 1 core assigned
6 GB of RAM
2 GB virtual disk
8 GB virtual swap disk
4 physical disks raw-mapped (e.g. vmkfstools -x /vmfs/devices/disks/t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA5684556 /vmfs/volumes/boot/Physical-disks/WD20EARS_C.vmdk). These are formatted as ZFS and used by NAS4Free directly.

I did a full ISO installation originally and have upgraded a couple of times since, booting the VM from the .iso and performing the upgrade via console option 9. I never had problems before this upgrade. The ZFS pool is up and running and healthy; I've also run a scrub a couple of times in the past, and I exported the pool before powering down the old VM. Still, the new VM sees the raw-mapped disks as size 0. I'm attaching the config in txt format.

It should be said that the disks are exactly the same; I've just moved them between the two VMs.

Many thanks for the help!

Re: Unsupportable block size 0 after upgrade

Posted: 26 Sep 2012 16:30
by daoyama
rs232 wrote: 4 physical disks raw mapped (e.g. vmkfstools -x /vmfs/devices/disks/t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA5684556 /vmfs/volumes/boot/Physical-disks/WD20EARS_C.vmdk). These are formatted as zfs and used by nas4free directly.
Which type did you create, virtual or physical?
AFAIK, "vmkfstools -z" creates a physical RDM and "vmkfstools -r" creates a virtual one.
I have only used "-r".

Please upload your vmname.vmx.

Thanks,
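As an aside to the virtual-vs-physical distinction above: an existing mapping's type can be read back from its vmdk descriptor file. A sketch using a minimal stand-in descriptor (a real RDM vmdk has more fields; the createType values are, as far as I know, the ones ESXi writes for the two modes):

```shell
# Minimal stand-in descriptor; a real RDM vmdk contains more fields.
cat > /tmp/Hitachi_A.vmdk <<'EOF'
# Disk DescriptorFile
createType="vmfsPassthroughRawDeviceMap"
EOF
# Read the mapping type back from the descriptor:
grep createType /tmp/Hitachi_A.vmdk
# "vmfsPassthroughRawDeviceMap" -> physical RDM (vmkfstools -z)
# "vmfsRawDeviceMap"            -> virtual RDM  (vmkfstools -r)
```

On a real setup you would run the grep against the vmdk in the VM's datastore directory instead of /tmp.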

Re: Unsupportable block size 0 after upgrade

Posted: 27 Sep 2012 20:37
by rs232
I'm pretty sure I have used -z.

Here's the vmx:

Code:

.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "8"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
nvram = "Nas4free.nvram"
virtualHW.productCompatibility = "hosted"
powerType.powerOff = "soft"
powerType.powerOn = "hard"
powerType.suspend = "hard"
powerType.reset = "soft"
displayName = "Nas4free_"
extendedConfigFile = "Nas4free.vmxf"
memsize = "6144"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "generated"
vmci0.unrestricted = "TRUE"
chipset.onlineStandby = "FALSE"
guestOS = "freebsd-64"
uuid.location = "56 4d a9 36 ec 09 41 cd-38 19 d2 39 00 c4 ea 04"
uuid.bios = "56 4d a9 36 ec 09 41 cd-38 19 d2 39 00 c4 ea 04"
vc.uuid = "52 10 0c f0 f8 09 5c df-ad 1a c9 a5 69 e0 99 0d"
wwn.enabled = "TRUE"
snapshot.action = "keep"
sched.cpu.min = "0"
sched.cpu.units = "mhz"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.shares = "normal"
ethernet0.generatedAddress = "00:0c:29:c4:ea:04"
vmci0.id = "12904964"
tools.syncTime = "FALSE"
cleanShutdown = "FALSE"
replay.supported = "FALSE"
sched.swap.derivedName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Nas4free/Nas4free-9c8bc162.vswp"
replay.filename = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
ethernet0.pciSlotNumber = "32"
vmci0.pciSlotNumber = "33"
ethernet0.generatedAddressOffset = "0"
hostCPUID.0 = "0000000168747541444d416369746e65"
hostCPUID.1 = "00040f120002080000002001178bfbff"
hostCPUID.80000001 = "00040f12000003530000001febd3fbff"
guestCPUID.0 = "0000000168747541444d416369746e65"
guestCPUID.1 = "00040f120000080080002001078bfbff"
guestCPUID.80000001 = "00040f120000035300000209ebd3fbff"
userCPUID.0 = "0000000168747541444d416369746e65"
userCPUID.1 = "00040f120002080080002001078bfbff"
userCPUID.80000001 = "00040f120000035300000209ebd3fbff"
evcCompatibilityMode = "FALSE"
vmotion.checkpointFBSize = "4194304"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0.sharedBus = "none"
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "Nas4free_1.vmdk"
scsi0:0.mode = "independent-persistent"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "Nas4free_2.vmdk"
scsi0:1.mode = "independent-persistent"
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:0.redo = ""
scsi0:1.redo = ""
scsi0.pciSlotNumber = "16"
bios.forceSetupOnce = "FALSE"
scsi0:2.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_A.vmdk"
scsi0:2.mode = "independent-persistent"
scsi0:2.ctkEnabled = "FALSE"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.present = "TRUE"
scsi0:2.redo = ""
scsi0:3.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_B.vmdk"
scsi0:3.mode = "independent-persistent"
scsi0:3.ctkEnabled = "FALSE"
scsi0:3.deviceType = "scsi-hardDisk"
scsi0:3.present = "TRUE"
scsi0:3.redo = ""
scsi0:4.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_C.vmdk"
scsi0:4.mode = "independent-persistent"
scsi0:4.ctkEnabled = "FALSE"
scsi0:4.deviceType = "scsi-hardDisk"
scsi0:4.present = "TRUE"
scsi0:4.redo = ""
scsi0:5.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/Physical-disks/Hitachi_D.vmdk"
scsi0:5.mode = "independent-persistent"
scsi0:5.ctkEnabled = "FALSE"
scsi0:5.deviceType = "scsi-hardDisk"
scsi0:5.present = "TRUE"
scsi0:5.redo = ""
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
ide0:0.present = "TRUE"
ide0:0.fileName = "/vmfs/volumes/4e8435c1-8c06cbc2-90fb-00e0815f2757/ISOs/NAS4Free-x64-LiveCD-9.0.0.1.175.iso"
ide0:0.deviceType = "cdrom-image"
ide0:0.startConnected = "FALSE"
ide1:1.present = "FALSE"
ide1:0.present = "FALSE"
floppy0.present = "FALSE"

Re: Unsupportable block size 0 after upgrade

Posted: 04 Oct 2012 21:38
by NastyEbilPiwate
I'm also running an ESXi 5 system with raw-mapped (-z) disks, and I'm experiencing the exact same problem running 9.1.0.1 r306.

Re: Unsupportable block size 0 after upgrade

Posted: 05 Oct 2012 10:54
by rs232
I just wanted to add something here. The reason I originally mapped the disks with -z was to be able to use S.M.A.R.T. to poll HD info (e.g. temperature), from FreeNAS originally, NAS4Free nowadays.
I have two *sibling* systems and SMART has always been troublesome, so I left the -z mapping but decided to disable SMART as it was freezing my system.
I don't mind re-mapping the disks in virtual mode, but perhaps there is something NAS4Free can do to become more ESX-friendly before I go that way.

My 2 cents

Re: Unsupportable block size 0 after upgrade

Posted: 25 Oct 2012 13:12
by rs232
I've noticed that in the vmdk file there's a reference to BusLogic, which is not supported for FreeBSD x64! I've thus deleted the virtual disks and recreated them with the option -a lsilogic:

vmkfstools -z /vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T ./Hitachi_A.vmdk -a lsilogic

The performance seems better, but the very same problem exists:
Unsupportable block size 0
if I upgrade to any version other than 9.0.0.1 - Sandstorm (revision 175).

There must be a solution...

Re: Unsupportable block size 0 after upgrade

Posted: 25 Oct 2012 14:17
by rs232
Ok, it seems like switching to virtual mode does the job. Here's what I did:

1) power off the VM
2) ssh into ESXi and remove your vmdk files. This will NOT delete your data.
3) recreate the files with the -r AND -a lsilogic parameters, e.g.

Code:

vmkfstools -r /vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T ./Hitachi_A.vmdk -a lsilogic
I suggest you reuse the old filenames when recreating the vmdk!
4) open the VM properties and remove the disks from the config
5) re-add the disks to the VM config using the GUI
6) power on the VM
7) I'm not sure this is needed for you, but I had to run:

Code:

zpool export mypool
zpool import mypool
7a) re-add the pool under Disks/ZFS/Pool (as it wouldn't auto-mount otherwise)
8) finally, upgrade to the latest NAS4Free version; no "unsupportable block size 0" will appear any more

HTH
rs232
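The steps above can be sketched as a single dry-run script. It only prints the commands (note the echo wrapper); the device ID and pool name are the ones from this thread, so substitute your own and drop the wrapper before running anything for real:

```shell
# Dry run: print each command instead of executing it. Remove the
# 'echo' to execute for real (on the ESXi host / NAS4Free console).
run() { echo "+ $*"; }

DISK=/vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T

run rm Hitachi_A.vmdk                                   # removes only the mapping file, not the data
run vmkfstools -r "$DISK" ./Hitachi_A.vmdk -a lsilogic  # -r = virtual-mode RDM, LSI Logic adapter
# re-add the disk in the VM settings and boot, then on the NAS4Free side:
run zpool export mypool
run zpool import mypool
```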

Re: Unsupportable block size 0 after upgrade

Posted: 12 Nov 2012 10:00
by sollord
rs232 wrote: Ok, it seems like switching to virtual mode does the job. Here's what I did:

1) power off the VM
2) ssh into ESXi and remove your vmdk files. This will NOT delete your data.
3) recreate the files with the -r AND -a lsilogic parameters, e.g.

Code:

vmkfstools -r /vmfs/devices/disks/t10.ATA_____Hitachi_HDS722020ALA330_______________________JK1130YAG1D60T ./Hitachi_A.vmdk -a lsilogic
I suggest you reuse the old filenames when recreating the vmdk!
4) open the VM properties and remove the disks from the config
5) re-add the disks to the VM config using the GUI
6) power on the VM
7) I'm not sure this is needed for you, but I had to run:

Code:

zpool export mypool
zpool import mypool
7a) re-add the pool under Disks/ZFS/Pool (as it wouldn't auto-mount otherwise)
8) finally, upgrade to the latest NAS4Free version; no "unsupportable block size 0" will appear any more

HTH
rs232

While this is a solution of sorts, it's not one I'd want to use long term, as mounting the drives as virtual disks means NAS4Free no longer sees any S.M.A.R.T. data from the drives.

Re: [SOLVED] Unsupportable block size 0 after upgrade

Posted: 19 May 2013 23:57
by mikeybhoy
I too have the same issue with RDM drives >=2TB on the later versions of NAS4Free. Reverting to an earlier version fixes the issue. I have an HP N40L with an LSI card fitted. I have 7 drives in total, 6 of which I pass through as RDMs (created with -z for SMART monitoring) to NAS4Free for raidz2. The 4 drives passed through from the onboard SAS controller give the unsupportable block size error, but the 2 drives passed through from the LSI card are recognized. Makes and models of the drives vary (Samsung, WD, Seagate).

For now I have removed ESX and am running NAS4Free natively on the latest build (performance under ESX was terrible), but I'm hoping that vmxnet3 support will help. I just need the unsupportable block size issue to be resolved before I can try vmxnet3 :)

Cheers

Re: [SOLVED] Unsupportable block size 0 after upgrade

Posted: 30 Jun 2013 05:23
by nothin
Does anyone know where I can source a 9.0 install from? The directory on SourceForge is empty...

Re: [SOLVED] Unsupportable block size 0 after upgrade

Posted: 04 Aug 2013 12:24
by poppadum
The issue is still present in 9.1.0.1 r804. A disk passed through from ESXi is seen as having zero size, and the console reports lots of errors along the lines of:
(da1:mpt0:0:1:0): unsupportable block size 0
I believe it's the same issue as reported in this thread.

Update: after further investigation, it seems that changing the SCSI controller type to "BusLogic Parallel" in ESXi 5.1 lets the OS see the disk at its correct size and import the pool. It's too early to tell if it's stable yet.

Re: [SOLVED] Unsupportable block size 0 after upgrade

Posted: 17 Dec 2013 10:37
by gbonny
"BusLogic Parallel" isn't preferred for BSD, but is this still an issue using the "LSI Logic SAS" SCSI controller and physical mapping with build r847?
I recently did a test with r804 using BusLogic Parallel with physical mapping, which showed lower performance compared to LSI Logic Parallel with virtual mapping.

The bug is still listed as open, though: http://sourceforge.net/p/nas4free/bugs/65/ Is there any progress to report?
And as stated, the limitation of virtual mappings (RDM) is 2TB, while 3TB+ drives are getting more common...