This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite or carry over important posts from here into the new main forum!
It is not possible for us to export from here and import into the main forum!

ZFS vdev/zpool recommendations

withanHdammit
NewUser
Posts: 12
Joined: 04 May 2017 19:05
Location: Washington State, US
Contact:
Status: Offline

ZFS vdev/zpool recommendations

Post by withanHdammit »

I am new to ZFS, and from what I have read, the number of disks in a vdev can make a difference.

My system info is:
Version: 11.0.0.4 - Sayyadina (revision 4195)
Compiled: Saturday April 15 06:24:44 PDT 2017
Platform OS: FreeBSD 11.0-RELEASE-p9 #0 r316944M: Sat Apr 15 00:45:52 CEST 2017
Platform: x64-embedded on Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (2x6 core = 12 cores total)
System: Supermicro X10DRH-iT
System BIOS: American Megatrends Inc. Version: 2.0a 06/30/2016
RAM: 32GB
Disks: 24 x 1TB drives HGST HTS721010A9E630
The disks are on 3 controllers of 8 disks per controller.

The intended use for this is as a backup target, no live data usage expected at this time.

My question is: what is the best way to configure ZFS for space, performance, and reliability, in that order? I don't believe I need hot spares; I have cold spares available if a drive fails and needs to be swapped out.

Thanks in advance for any advice you can provide.

H

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by raulfg3 »

Good documentation; ignore that it's written for FreeNAS, as everything it says about ZFS applies to N4F:

https://forums.freenas.org/index.php?th ... oobs.7775/
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

withanHdammit
NewUser
Posts: 12
Joined: 04 May 2017 19:05
Location: Washington State, US
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by withanHdammit »

raulfg3 wrote:
04 May 2017 20:43
Good documentation; ignore that it's written for FreeNAS, as everything it says about ZFS applies to N4F:

https://forums.freenas.org/index.php?th ... oobs.7775/
Thanks for the link! Looks like I've got some good reading for my plane ride tomorrow!

H

ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by ms49434 »

Just a thought: if you want redundancy at both the disk level and the controller level, you can get a maximum of 16TB from 24 × 1TB hard drives:

single disk failure / single controller failure / max performance:
1 pool, 8 vdevs (A-H), RAIDZ1, 3 disks per vdev (1 from each controller (C1-C3)):

Code:

	D1	D2	D3	D4	D5	D6	D7	D8
C1	A	B	C	D	E	F	G	H
C2	A	B	C	D	E	F	G	H
C3	A	B	C	D	E	F	G	H
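
The usable space of each option can be sanity-checked with a quick calculation (a sketch; the 1 TB figure is the drives' decimal capacity, before TiB conversion and filesystem overhead):

```python
# Usable capacity of a pool built from equal RAIDZ vdevs:
# each RAIDZ-N vdev contributes (disks_per_vdev - parity) data disks.
def usable_tb(vdevs, disks_per_vdev, parity, disk_tb=1):
    return vdevs * (disks_per_vdev - parity) * disk_tb

print(usable_tb(8, 3, 1))  # 8x RAIDZ1 of 3 disks -> 16 TB
print(usable_tb(4, 6, 2))  # 4x RAIDZ2 of 6 disks -> 16 TB
```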

2 disk failure / single controller failure / good performance
1 pool, 4 vdevs (A-D), RAIDZ2, 6 disks per vdev (2 from each controller (C1-C3))

Code:

	D1	D2	D3	D4	D5	D6	D7	D8
C1	A	A	B	B	C	C	D	D
C2	A	A	B	B	C	C	D	D
C3	A	A	B	B	C	C	D	D
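
A command-line equivalent of this layout, as a hedged sketch: in XigmaNAS you would normally build the pool through the WebGUI, and the da* device names here are placeholders that must be matched to your actual controller wiring (two disks per controller per vdev).

```shell
# Hypothetical device names; verify which da* devices sit on which
# controller before creating the pool.
zpool create primary_volume \
  raidz2 da0 da1 da8  da9  da16 da17 \
  raidz2 da2 da3 da10 da11 da18 da19 \
  raidz2 da4 da5 da12 da13 da20 da21 \
  raidz2 da6 da7 da14 da15 da22 da23
```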
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

withanHdammit
NewUser
Posts: 12
Joined: 04 May 2017 19:05
Location: Washington State, US
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by withanHdammit »

ms49434 wrote:
05 May 2017 11:40
Just a thought: if you want redundancy at both the disk level and the controller level, you can get a maximum of 16TB from 24 × 1TB hard drives:
Thanks, I hadn't thought of controller redundancy, that's a spectacular idea!

withanHdammit
NewUser
Posts: 12
Joined: 04 May 2017 19:05
Location: Washington State, US
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by withanHdammit »

ms49434 wrote:
05 May 2017 11:40
2 disk failure / single controller failure / good performance
1 pool, 4 vdevs (A-D), RAIDZ2, 6 disks per vdev (2 from each controller (C1-C3))

Code:

	D1	D2	D3	D4	D5	D6	D7	D8
C1	A	A	B	B	C	C	D	D
C2	A	A	B	B	C	C	D	D
C3	A	A	B	B	C	C	D	D

So I set up my vdevs the way you recommended for controller-failure protection:

Code:

  pool: primary_volume
 state: ONLINE
  scan: none requested
config:

	NAME        STATE         READ WRITE CKSUM
	primary_volume  ONLINE       0     0     0
	  raidz2-0      ONLINE       0     0     0
	    da0         ONLINE       0     0     0
	    da1         ONLINE       0     0     0
	    da8         ONLINE       0     0     0
	    da9         ONLINE       0     0     0
	    da16        ONLINE       0     0     0
	    da17        ONLINE       0     0     0
	  raidz2-1      ONLINE       0     0     0
	    da2         ONLINE       0     0     0
	    da3         ONLINE       0     0     0
	    da10        ONLINE       0     0     0
	    da11        ONLINE       0     0     0
	    da18        ONLINE       0     0     0
	    da19        ONLINE       0     0     0
	  raidz2-2      ONLINE       0     0     0
	    da4         ONLINE       0     0     0
	    da5         ONLINE       0     0     0
	    da12        ONLINE       0     0     0
	    da13        ONLINE       0     0     0
	    da20        ONLINE       0     0     0
	    da21        ONLINE       0     0     0
	  raidz2-3      ONLINE       0     0     0
	    da6         ONLINE       0     0     0
	    da7         ONLINE       0     0     0
	    da14        ONLINE       0     0     0
	    da15        ONLINE       0     0     0
	    da22        ONLINE       0     0     0
	    da23        ONLINE       0     0     0

errors: No known data errors

And it tells me that I have 21.7T of available space not 16T. Any thoughts? What did I miss or configure incorrectly?


ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by ms49434 »

Your configuration is correct. 24 terabytes is the raw capacity of all the disks, which works out to about 21.8 tebibytes. https://en.wikipedia.org/wiki/Tebibyte

There is one set of figures, called alloc/free, which reports the physically allocated and free space of all disks in your pool (gross disk space, regardless of redundancy), and another set, called used/avail, which gives you the logical used and available space of your pool (net pool space, after deducting the space needed for redundancy).
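
As a quick sketch of the gross-space arithmetic:

```python
TB = 10**12   # decimal terabyte (what drive vendors quote)
TiB = 2**40   # binary tebibyte (what zpool reports)

raw_bytes = 24 * TB        # 24 x 1 TB drives
print(raw_bytes / TiB)     # ~21.8 TiB of gross pool space
```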
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

withanHdammit
NewUser
Posts: 12
Joined: 04 May 2017 19:05
Location: Washington State, US
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by withanHdammit »

I'm familiar with TB vs. TiB, so I wasn't concerned about that, and it's good to know the 21.8T is the allocated space. Is there somewhere I can see the logical availability? I've been through all the screens and can't find it anywhere.

Edit:
I was able to find where it shows, but it lists the storage as 14T. I would expect a TB-to-TiB conversion from 16TB to give 14.55TiB, not 14.0TiB.

Code:

Filesystem                   Type      Size    Used   Avail Capacity  Mounted on
/dev/md0                     ufs       120M     88M     31M    74%    /
devfs                        devfs     1.0K    1.0K      0B   100%    /dev
/dev/md1                     ufs       719M    494M    225M    69%    /usr/local
procfs                       procfs    4.0K    4.0K      0B   100%    /proc
/dev/ufsid/59078fc8ace7f9b1  ufs       194G    1.2M    178G     0%    /mnt/system_data
/dev/md2                     ufs       496M    5.8M    480M     1%    /var
tmpfs                        tmpfs     256M     80K    256M     0%    /var/tmp
primary_volume               zfs        14T    200K     14T     0%    /mnt/primary_volume
/dev/ada0p2                  ufs       953M    214M    739M    22%    /cf
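
For reference, the plain conversion does give roughly 14.55; the remaining gap down to the reported 14T is presumably df's rounding plus ZFS's own internal reservations (that is an assumption, not something the output here confirms):

```python
TB, TiB = 10**12, 2**40

net_bytes = 16 * TB              # 4 RAIDZ2 vdevs x 4 data disks x 1 TB
print(round(net_bytes / TiB, 2)) # 14.55 TiB before ZFS reservations
```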

ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: ZFS vdev/zpool recommendations

Post by ms49434 »

Disks > ZFS > Datasets > Information shows information about your pool and your datasets.
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.
