This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to rewrite/move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

raid10 in ZFS I'm ok with this

bougui
NewUser
Posts: 4
Joined: 17 Apr 2013 23:19
Status: Offline

raid10 in ZFS I'm ok with this

Post by bougui »

Hi all,

we are used to doing RAID 10 on Solaris boxes, but we are looking to replace that with NAS4Free or FreeNAS.

So I have a test NAS4Free server with 5 drives and want to test a RAID 10 layout.

I go in over SSH and create the pool like this:

Code:

zpool create -m /mnt/storage storage mirror da1 da2 mirror da3 da4 spare da5
and then I check the status:

==> zpool status

All is perfect ;-)

Code:

  pool: storage
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	spares
	  da5       AVAIL   

errors: No known data errors
If I go back to the WebGUI I see my pool under the Detected tab of ZFS, but I don't see anything in the other tabs. Is that normal?

I exported my pool and re-imported it with the WebGUI, and I still don't see more details.

Is it normal that I don't see the actual details in the WebGUI anywhere other than the Detected tab?

I can share NFS over this, so it seems to be fine, but I want to ask before I start using it the wrong way.

In FreeNAS I don't see any details of the pool, but in NAS4Free I can at least see it in the Detected part, so that's why I'm trying NAS4Free!
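On the NFS point: sharing can also be tested straight from the shell with the ZFS sharenfs property. This is only a sketch (NAS4Free normally manages NFS exports from its WebGUI, and the NFS service must already be running on the box), so treat it as a quick test rather than the supported path:

```shell
# Quick test of NFS sharing from the shell (assumes the NFS server is
# enabled; the WebGUI may overwrite this on its next configuration sync).
zfs set sharenfs=on storage
zfs get sharenfs storage   # VALUE "on", SOURCE "local" once it is set
```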

Thanks for any input on this

Guillaume

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: raid10 in ZFS I'm ok with this

Post by raulfg3 »

bougui wrote:Is this normal that I dont see the actual detail in the wui other than in the detected tab ?
Yes, until you use the Sync button to synchronize the detected pools with the WebGUI.

The Sync button is on the Disks|ZFS|Configuration|Synchronize page.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

bougui
NewUser
Posts: 4
Joined: 17 Apr 2013 23:19
Status: Offline

Re: raid10 in ZFS I'm ok with this

Post by bougui »

Wow, thanks, just incredible; this has convinced me to switch. For the record, I saw that the ZFS hot spare is not working in FreeNAS:

http://doc.freenas.org/index.php/Volumes#ZFS_Extra — if you look under Spare you will see it.

Now, is this supported under NAS4Free?

In my test env I just removed one of the drives and the spare did not kick in automagically. Do we need to activate something?

Code:

nas4free:~# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: none requested
config:

	NAME                     STATE     READ WRITE CKSUM
	storage                  DEGRADED     0     0     0
	  mirror-0               ONLINE       0     0     0
	    da1                  ONLINE       0     0     0
	    da2                  ONLINE       0     0     0
	  mirror-1               DEGRADED     0     0     0
	    2391638290636129274  REMOVED      0     0     0  was /dev/da3
	    da4                  ONLINE       0     0     0
	spares
	  da5                    AVAIL   

errors: No known data errors
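For the record, if the spare never engages on its own, it can be attached by hand. A sketch using the device names from the status output above (unverified on this exact build; run as root):

```shell
# Swap the spare in for the removed mirror member (identified by its GUID):
zpool replace storage 2391638290636129274 da5
# Later, once the original disk is back and resilvered, detach the spare
# to return it to the AVAIL list:
zpool detach storage da5
```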

Thanks

sleid
PowerUser
Posts: 774
Joined: 23 Jun 2012 07:36
Location: FRANCE LIMOUSIN CORREZE
Status: Offline

Re: raid10 in ZFS I'm ok with this

Post by sleid »

zpool set autoreplace=on storage

OK on FreeBSD 9 and NAS4Free (ZFS v28)
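As a sketch (pool name assumed from the earlier posts), the property is set per pool and can be verified afterwards. Note that autoreplace only formats and replaces a *new* device that appears in the slot of a failed one; simply pulling a disk does not, by itself, activate the spare (that normally needs a fault-management daemon watching the pool):

```shell
# Enable automatic replacement on the pool, then confirm the setting:
zpool set autoreplace=on storage
zpool get autoreplace storage
```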
12.1.0.4 - Ingva (revision 7852)
FreeBSD 12.1-RELEASE-p12 #0 r368465M: Tue Dec 8 23:25:11 CET 2020
X64-embedded sur Intel(R) Atom(TM) CPU C2750 @ 2.40GHz Boot UEFI
ASRock C2750D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 WDC WD40EFRX + 3 WDC WD40EFRX

bougui
NewUser
Posts: 4
Joined: 17 Apr 2013 23:19
Status: Offline

Re: raid10 in ZFS I'm ok with this

Post by bougui »

Hi,

Great, but I'm not sure it is working... Or maybe I'm not waiting long enough.

I just tested it and the spare does not go into the mirror. How long do we have to wait? I have disconnected 1 drive for more than 5 minutes... Maybe I'm missing something; this is a default install of NAS4Free...

Here is the complete version of NAS4Free:

Code:

Hostname	nas4free.local
Version	9.1.0.1 - Sandstorm (revision 573)
Built date	Sun Dec 16 14:58:27 JST 2012
Platform OS	 FreeBSD 9.1-RELEASE (reldate 901000)
Platform	 x64-full on Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
System	Intel Corporation 440BX Desktop Reference Platform Bios: 6.00 07/02/2012
System time	
System uptime	20 minute(s) 12 second(s)
Last config change	
Here is the output of the relevant zpool commands.

Code:

nas4free:~# zpool get autoreplace storage
NAME     PROPERTY     VALUE    SOURCE
storage  autoreplace  on       local


nas4free:~# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: resilvered 1.50K in 0h0m with 0 errors on Thu Apr 18 00:22:39 2013
config:

	NAME                      STATE     READ WRITE CKSUM
	storage                   DEGRADED     0     0     0
	  mirror-0                DEGRADED     0     0     0
	    da1                   ONLINE       0     0     0
	    17127724748045906489  REMOVED      0     0     0  was /dev/da2
	  mirror-1                ONLINE       0     0     0
	    da3                   ONLINE       0     0     0
	    da4                   ONLINE       0     0     0
	spares
	  da5                     AVAIL   

errors: No known data errors


nas4free:~# zpool get version storage
NAME     PROPERTY  VALUE    SOURCE
storage  version   28       default
I have waited more than 10 minutes now... any other pointers would be great!

Thanks

sleid wrote:zpool set autoreplace=on

OK Freebsd9 and NAS4Free (zfs v28)

sleid
PowerUser
Posts: 774
Joined: 23 Jun 2012 07:36
Location: FRANCE LIMOUSIN CORREZE
Status: Offline

Re: raid10 in ZFS I'm ok with this

Post by sleid »

First:

zpool replace storage 17127724748045906489 da2

resilver, etc....

Then physically hot-remove the disk da2.

Under these conditions "autoreplace" works.

BUT this motherboard doesn't support hotswap... so I'm not sure it won't crash the system.
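Putting the thread together, a sketch of the manual recovery sequence (device names and GUID taken from the status outputs above; unverified on this exact build, run as root):

```shell
# 1. Replace the removed member (identified by its GUID) with the returned disk:
zpool replace storage 17127724748045906489 da2
# 2. Watch the resilver until it completes:
zpool status storage
# 3. If a hot spare was pulled in meanwhile, detach it to return it to AVAIL:
zpool detach storage da5
```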
