ZFS cheat sheet

#1

Post by raulfg3 » 25 Oct 2017 20:25

12.0.0.4 (revision 6766) + OBI on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 12x3TB disks in 3 RAIDZ1 vdevs = 32TB raw, only 22TB usable

Wiki
Last changes


Re: ZFS cheat sheet

#2

Post by Maurizio » 07 Aug 2018 15:10

This info is outdated:
## You should keep the raidz array at a low power of two plus parity
raidz1 - 3, 5, 9 disks
raidz2 - 4, 6, 8, 10, 18 disks
raidz3 - 5, 7, 11, 19 disks

From Delphix blog:
...
A misunderstanding of this overhead has caused some people to recommend using "(2^n)+p" disks, where p is the number of parity "disks" (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true. The primary flaw with this recommendation is that it assumes that you are using small blocks whose size is a power of 2. While some workloads (e.g. databases) do use 4KB or 8KB logical block sizes (i.e. recordsize=4K or 8K), these workloads benefit greatly from compression. At Delphix, we store Oracle, MS SQL Server, and PostgreSQL databases with LZ4 compression and typically see a 2-3x compression ratio. This compression is more beneficial than any RAID-Z sizing. Due to compression, the physical (allocated) block sizes are not powers of two, they are odd sizes like 3.5KB or 6KB. This means that we can not rely on any exact fit of (compressed) block size to the RAID-Z group width.
Source: ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ
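
For reference, a minimal sketch of what the blog recommends in practice: size the vdev for capacity and redundancy rather than to satisfy (2^n)+p, and turn on LZ4 compression. Pool and device names here (tank, da0..da5) are placeholders, not from the cheat sheet:

## example only: 6-wide RAIDZ2 chosen for capacity/redundancy, not to fit (2^n)+p
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
## enable LZ4 compression on the pool; child datasets inherit the setting
zfs set compression=lz4 tank
## check the achieved compression ratio later
zfs get compressratio tank

With compression enabled, allocated block sizes vary anyway, so the exact group width matters far less than the total disk count and parity level.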
XigmaNAS 11.2.0.4 on Dell R710 144GB RAM - RootOnZFS zroot on 2x 64GB 15k HDDs in mirror, zdata on 3x 1TB SSD in RAIDZ1.
2x XigmaNAS 11.2.0.4 - RootOnZFS on HPE Proliant Microserver gen10 X3216 - 3x 4TB WD RED. In mirror with zrep.
