This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I'd like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!

Am I really going to benefit from a ZIL\L2ARC

kjbuente
Starter
Posts: 30
Joined: 06 Jan 2016 04:21
Status: Offline

Am I really going to benefit from a ZIL\L2ARC

Post by kjbuente »

I'm running the idea in my head to replace the old, outdated, insecure software on my 24-drive SAN with NAS4Free. The drives are 500GB 15K SAS using SAS2/6Gbps interface speeds. Right now it is in a RAID5 configuration and has 8GB of RAM plus a quad-core Xeon @ 3GHz (not sure what model). It handles mostly iSCSI for ESXi (over four 1Gb links, serving approx. 20 VMs; a few have databases) and CIFS (two 1Gb links). I'm not going into how the SAS ports are replicated for the 24 drives; I could not begin to understand why anyone would want to do it that way... I digress.

The plan is to blow this configuration out and redo it: each drive will have a mirror, resulting in 12 mirrors in the pool. I scored a lot of 12 M1015 HBAs for $300 on eBay, so I'm going to flash them to IT mode, connect them directly to the drives, and get rid of the SAS replicator (if the motherboard will support 3 HBAs). It will handle iSCSI only via four 10Gb links (one 10Gb crossover link to the ESXi host). Not right away, but the RAM will be upgraded to 32GB. Compression will be on with LZ4, no dedup.

What I am wondering is... will adding SSD(s) over SATA3/6Gbps for a ZIL and/or L2ARC be worthwhile? How would a SATA3/6Gbps SSD for ZIL or L2ARC be better than a theoretical 144Gbps through 3 SAS HBAs with drives spinning at 15K? I know that I'll never get full SAS speed from a spinning drive, but an eighth of that is still 36Gbps. Are the IOPS on an SSD or multiple SSDs that much better than my 12 vdevs? I do know that the type of data traffic will play a factor, but I am focusing on iSCSI right now. With only 8GB of RAM for the time being, I could see it. But what about after I upgrade to 32GB?


This is my second ZFS system. My first is not nearly as complicated (10 WD Red 6TB split into two RAIDZ1 vdevs, just being used as off-site backup). I've been reading so much on ZFS to make sure I configure this system right on the first go that I think I got myself sideways....
SuperMicro 846 Chassis + 1200W Redundant PSUs + "A" model Backplane , SuperMicro X10SRL-F Motherboard, 512GB ECC RAM, Xeon E5 1650v3, 24*8TB WD Red Pro, 4 RaidZ2 Vdevs, 2 IBM M1015 Cross Flash to IT Mode, 2 IBM 46M0997 SAS Expanders, Dual port 10Gb Intel X560. RootOnZFS.

SuperMicro 846 Chassis + 1200W Redundant PSUs + SAS3-846-EL2 Backplane , SuperMicro X10SRL-F Motherboard, 512GB ECC RAM, Xeon E5 1650v3, 24*SanDisk Lightening 800GB 12Gbps SSDs, 12 Mirror vDevs, 4 Intel Optane 32GB NVMe drives (SLOG),Dual port 40Gb Intel XL710. RootOnZFS.

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: Am I really going to benefit from a ZIL\L2ARC

Post by Onichan »

SATA 5-7k RPM HDDs have 75-150 IOPS each; 10-15k RPM SAS drives have 140-210 IOPS. A single quality SATA 6Gbps SSD will have 40,000-90,000 IOPS, and PCIe SSDs get into the hundreds of thousands. So yes, the IOPS are not even comparable; it would take hundreds of SAS drives to compete with a single SSD.
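The gap described above is easy to put in numbers. A quick sketch, using the ballpark per-drive figures from this post (actual IOPS vary by workload and queue depth):

```python
# Rough comparison of random-IOPS capability (ballpark figures from the post).
SAS_15K_IOPS = 200        # upper end for a 15k RPM SAS drive
SATA_SSD_IOPS = 40_000    # conservative figure for a quality SATA SSD

# 12 two-way mirrors: random reads can be served by either side of each
# mirror, so read IOPS scale with all 24 drives; writes scale with the
# 12 vdevs, since both sides of a mirror must write.
pool_read_iops = 24 * SAS_15K_IOPS     # 4800
pool_write_iops = 12 * SAS_15K_IOPS    # 2400

drives_to_match_ssd = SATA_SSD_IOPS / SAS_15K_IOPS
print(pool_read_iops, pool_write_iops, drives_to_match_ssd)  # 4800 2400 200.0
```

Even taking the SSD at its conservative end, it would take about 200 of these SAS drives to match one SSD on random IO, which is the "hundreds of drives" point above.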

Next, you're comparing sequential speeds with IOPS; they are completely different performance measurements. You also talk about theoretical SATA 6Gbps interface speeds, which have little to do with actual disk speeds. No idea how fast your SAS drives are sequentially, but 15k SAS drives currently max out at ~200MBps. So let's say you get all 12 vdevs writing at 200MBps each, for a total of 2.34375GBps (i.e. 18.75Gbps) of throughput. That still has nothing to do with your IO pattern: copying a 100GB file to/from the pool would take advantage of your sequential write speed, but running 20 VMs off it will be mostly random reads and writes, which generally need IOPS more than sequential speed.
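The throughput arithmetic above works out as follows (using binary GB, as the post's 2.34375 figure implies):

```python
# Verify the sequential-throughput estimate: 12 vdevs at ~200 MB/s each.
vdevs = 12
mb_per_s_per_vdev = 200                  # ~max sequential for a 15k SAS drive

total_mb_s = vdevs * mb_per_s_per_vdev   # 2400 MB/s
total_gb_s = total_mb_s / 1024           # 2.34375 GB/s
total_gbps = total_gb_s * 8              # 18.75 Gbps

print(total_gb_s, total_gbps)  # 2.34375 18.75
```

So the pool's best-case sequential write rate already exceeds a single 10Gb link, which is exactly why sequential numbers say nothing about whether a SLOG or L2ARC helps a random-IO VM workload.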

Next is the ZIL; you actually mean a SLOG. A ZIL always exists, just by default it lives on your HDDs. A SLOG is a separate intent log that is normally put on faster disks. First off, a SLOG only helps with synchronous writes (asynchronous writes get buffered in RAM), which iSCSI is supposed to use, but SMB shares are not. Since you're running VMs off it, yes, a SLOG could help, but it really depends on how many random writes you are doing. If your VM environment has a low write load you might not notice a big difference, but it could help a little with latency. Also, a SLOG only needs to hold about 5 seconds' worth of writes, as it's flushed to disk every 5 seconds. So with a single 10Gb connection it would only need to be ~6.25GB in size, though of course you'd want it a bit bigger so wear leveling can be handled better. Latency is probably the biggest thing to look for in a SLOG, and IOPS probably the second biggest. Somebody did a small test of a few SSDs as SLOG: https://b3n.org/ssd-zfs-zil-slog-benchm ... omparison/ Also remember that SLOGs should be mirrored: a SLOG contains data that hasn't been flushed to the disks, so losing a SLOG can mean losing data.
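The sizing rule of thumb above comes straight from the flush interval. A minimal sketch, assuming the default ~5 second transaction-group flush mentioned in the post:

```python
# SLOG sizing: it only needs to hold the sync writes that can arrive
# between transaction-group flushes (~5 s by default).
link_gbps = 10                  # a single 10Gb connection
txg_flush_seconds = 5           # default flush interval

max_inflight_gb = link_gbps * txg_flush_seconds / 8   # gigabits -> gigabytes
print(max_inflight_gb)  # 6.25
```

In practice people overprovision this (2-3x or more) so the SSD has spare cells for wear leveling, as the post suggests; a faster or bonded link scales the figure linearly.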

Next is ARC/L2ARC; both are just read caches. ARC is the cache in RAM, while L2ARC is a cache normally put on a fast SSD. Generally speaking, RAM is king for ZFS, but L2ARC can be good in certain situations. Exactly when it's a good idea I'm not familiar enough with to answer. I do know that having an L2ARC uses up some RAM itself for metadata, so it actually lowers your usable ARC. Also, blocks don't fall directly off the ARC into the L2ARC; a separate job scans for blocks that qualify for L2ARC and copies them over. Since L2ARC is just a read cache it doesn't need to be mirrored: losing it just means you lose performance, not data. Though in big environments it's a good idea to mirror it, since the performance loss can be big. The only thing I can suggest is to go without it for now; you can run analytics on the ARC to see the percentage of blocks that would qualify for L2ARC and decide if you think it would be worthwhile. Also, I wouldn't get one with only 8GB of RAM.
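The RAM cost of L2ARC metadata can be estimated. A rough sketch, assuming ~70 bytes of in-RAM header per cached record (a commonly cited approximation; the exact size varies by ZFS version) and a hypothetical 200GB cache device with 8KiB records:

```python
# Estimate the ARC RAM consumed by L2ARC headers.
HEADER_BYTES = 70                # assumed per-record overhead (version-dependent)

l2arc_bytes = 200 * 1024**3      # hypothetical 200 GiB L2ARC SSD
recordsize = 8 * 1024            # 8 KiB records (typical for iSCSI zvols)

records = l2arc_bytes // recordsize          # 26,214,400 cached blocks
ram_overhead_gb = records * HEADER_BYTES / 1024**3
print(round(ram_overhead_gb, 2))  # 1.71
```

Under these assumptions the headers alone eat ~1.7GB of ARC, which is a painful fraction of an 8GB system and a minor one at 32GB; larger recordsizes shrink the overhead proportionally. That's the arithmetic behind "I wouldn't get it with only 8GB of RAM."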

kjbuente
Starter
Posts: 30
Joined: 06 Jan 2016 04:21
Status: Offline

Re: Am I really going to benefit from a ZIL\L2ARC

Post by kjbuente »

Wow, wonderfully written! That answers my questions. Thank you!

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Am I really going to benefit from a ZIL\L2ARC

Post by Parkcomm »

Small quibble: there is no transfer from ZIL to disk unless you lose power. For an explanation, read this: https://pthree.org/2013/04/19/zfs-admin ... intent-log. The ZIL speeds up handshaking for synchronous writes. I found a speed-up of around 20% for NFS synchronous writes. Very useful for databases.

L2ARC greatly improves IOPS (random reads) compared to reading from spinning platters, so it's also very useful for databases. It's also very useful if you have multiple users on your NAS (because simultaneous sequential reads look like random reads).
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom

kjbuente
Starter
Posts: 30
Joined: 06 Jan 2016 04:21
Status: Offline

Re: Am I really going to benefit from a ZIL\L2ARC

Post by kjbuente »

Power loss for me is unlikely. The chassis that I am using has quad redundant PSUs, each plugged into its own 3kVA UPS, which gives me about an hour of uptime. Usually the generator kicks in within a few minutes, but I fully understand that there is ALWAYS the possibility of an unexpected shutdown.

My revised plan now consists of upgrading the RAM to the maximum first, then adding a SLOG and possibly an L2ARC depending on load.

