
raid10 in ZFS I'm ok with this

Posted: 17 Apr 2013 23:32
by bougui
Hi all,

we are used to doing RAID 10 on Solaris boxes, but we are looking to replace that with NAS4Free or FreeNAS.

So I have a test NAS4Free server with 5 drives and want to test a RAID 10 layout.

I go in over SSH and create the pool like this:

Code: Select all

zpool create -m /mnt/storage storage mirror da1 da2 mirror da3 da4 spare da5
and if I check the status like this

==> zpool status

All is perfect ;-)

Code: Select all

  pool: storage
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	spares
	  da5       AVAIL   

errors: No known data errors
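For reference, this layout is the ZFS equivalent of RAID 10: two 2-way mirror vdevs striped together, plus da5 as a hot spare. A hedged sketch, not from the thread, of how the stripe could be grown later (da6 and da7 are hypothetical device names):

```shell
# Sketch: grow the RAID 10 equivalent by adding another mirror vdev;
# ZFS stripes writes across all top-level vdevs.
# da6 and da7 are hypothetical devices, assumed empty.
zpool add storage mirror da6 da7
zpool status storage    # a new mirror-2 vdev should appear
```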
If I go back into the WebGUI I see my pool under the Detected tab of ZFS, but I don't see anything in the other tabs. Is that normal?

I exported my pool and re-imported it with the WebGUI, and I still don't see any more details.

Is it normal that I don't see the actual pool details anywhere in the WebGUI other than in the Detected tab?

I can share NFS over this pool, so it seems to be fine, but I want to ask before I start using it the wrong way.

In FreeNAS I don't see any details of the pool at all, but in NAS4Free I can at least see it in the Detected tab, so that's why I'm trying NAS4Free!

Thanks for any input on this

Guillaume

Re: raid10 in ZFS I'm ok with this

Posted: 17 Apr 2013 23:56
by raulfg3
bougui wrote: Is it normal that I don't see the actual pool details in the WebGUI other than in the Detected tab?
Yes, until you use the Sync button to synchronize the detected pools with the WebGUI configuration.

The Sync button is on the Disks|ZFS|Configuration|Synchronize page.

Re: raid10 in ZFS I'm ok with this

Posted: 18 Apr 2013 02:20
by bougui
Wow, thanks, just incredible; this has convinced me to switch. For the record, I saw that a ZFS hot spare does not work in FreeNAS:

http://doc.freenas.org/index.php/Volumes#ZFS_Extra (if you look under Spare you will see it).

Now, is this supported under NAS4Free?

In my test environment I just removed one of the drives, and the spare did not kick in automagically. Do we need to activate something?

Code: Select all

nas4free:~# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: none requested
config:

	NAME                     STATE     READ WRITE CKSUM
	storage                  DEGRADED     0     0     0
	  mirror-0               ONLINE       0     0     0
	    da1                  ONLINE       0     0     0
	    da2                  ONLINE       0     0     0
	  mirror-1               DEGRADED     0     0     0
	    2391638290636129274  REMOVED      0     0     0  was /dev/da3
	    da4                  ONLINE       0     0     0
	spares
	  da5                    AVAIL   

errors: No known data errors
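A spare listed as AVAIL only engages automatically when the pool's autoreplace property is on (and the platform supports it); it can always be pulled in by hand. A sketch using the device GUID from the status output above:

```shell
# Manually pull in the hot spare for the removed disk; the long number
# is the GUID that `zpool status` shows for the missing device.
zpool replace storage 2391638290636129274 da5
# Cancel the spare (it returns to AVAIL) once the original disk is back:
#   zpool detach storage da5
# Or make the spare a permanent member by detaching the failed device:
#   zpool detach storage 2391638290636129274
```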

Thanks

Re: raid10 in ZFS I'm ok with this

Posted: 18 Apr 2013 06:57
by sleid
zpool set autoreplace=on storage

Works OK on FreeBSD 9 and NAS4Free (ZFS v28).

Re: raid10 in ZFS I'm ok with this

Posted: 18 Apr 2013 13:55
by bougui
Hi,

Great, but I'm not sure it is working... or maybe I'm not waiting long enough.

I just tested it and the spare does not go into the mirror. How long do we have to wait? I have had one drive disconnected for more than 5 minutes... Maybe I'm missing something; this is a default install of NAS4Free...

Here is the complete version info of the NAS4Free box:

Code: Select all

Hostname	nas4free.local
Version	9.1.0.1 - Sandstorm (revision 573)
Built date	Sun Dec 16 14:58:27 JST 2012
Platform OS	 FreeBSD 9.1-RELEASE (reldate 901000)
Platform	 x64-full on Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
System	Intel Corporation 440BX Desktop Reference Platform Bios: 6.00 07/02/2012
System time	
System uptime	20 minute(s) 12 second(s)
Last config change	
Here is the output of the relevant zpool CLI commands:

Code: Select all

nas4free:~# zpool get autoreplace storage
NAME     PROPERTY     VALUE    SOURCE
storage  autoreplace  on       local


nas4free:~# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: resilvered 1.50K in 0h0m with 0 errors on Thu Apr 18 00:22:39 2013
config:

	NAME                      STATE     READ WRITE CKSUM
	storage                   DEGRADED     0     0     0
	  mirror-0                DEGRADED     0     0     0
	    da1                   ONLINE       0     0     0
	    17127724748045906489  REMOVED      0     0     0  was /dev/da2
	  mirror-1                ONLINE       0     0     0
	    da3                   ONLINE       0     0     0
	    da4                   ONLINE       0     0     0
	spares
	  da5                     AVAIL   

errors: No known data errors


nas4free:~# zpool get version storage
NAME     PROPERTY  VALUE    SOURCE
storage  version   28       default
I have waited more than 10 minutes now... any other pointers would be great!
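While waiting on autoreplace, a small script can at least alert on a degraded pool. A minimal sketch, not from the thread, that parses `zpool status`-style output; it is fed a sample string here so it runs without a real pool (on a real box you would pipe in `zpool status storage` instead):

```shell
#!/bin/sh
# Sketch: extract the pool state from `zpool status` output and warn if
# the pool is not ONLINE. The sample text mimics the output above.
sample='  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.'

# The first line matching "state:" carries the pool state in field 2.
state=$(printf '%s\n' "$sample" | awk '/^ *state:/ {print $2; exit}')
if [ "$state" = "ONLINE" ]; then
  echo "pool healthy"
else
  echo "WARNING: pool state is $state"
fi
```

On the sample above this prints `WARNING: pool state is DEGRADED`; a cron job could mail that line instead of echoing it.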

Thanks

sleid wrote: zpool set autoreplace=on storage

OK Freebsd9 and NAS4Free (zfs v28)

Re: raid10 in ZFS I'm ok with this

Posted: 18 Apr 2013 14:29
by sleid
First:

zpool replace storage 17127724748045906489 da2

Let it resilver, etc.

After that, physically hot-remove the disk da2; under these conditions "autoreplace" works.

BUT: this motherboard doesn't support hot swap, so I'm not sure it won't crash the system.