This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

SSD wearout of ZIL and L2ARC

crowi
Forum Moderator
Posts: 1176
Joined: 21 Feb 2013 16:18
Location: Munich, Germany
Status: Offline

SSD wearout of ZIL and L2ARC

Post by crowi »

Hi Community,

I tested the use of a combined cache and log device on an SSD for one complete year and now want to share the results of my private testing.

My server runs mostly 24/7, except when I am on holiday.
The pool has 10.7 TB usable space (5x3TB in RAIDZ1) and is filled to ~50% so far.
The server is truly private use, with about 11 clients (including DLNA devices such as a TV and AV receiver) accessing it for backups and media (pictures, movies, music).
The L2ARC (read cache) is not the optimum size; I only used a 60 GB SSD.

My fear was that the cache and log devices would get heavily worn out, because the fill level of the L2ARC part of the SSD always ramps up to 100% and iostat shows heavy use of both log and cache.

So after one year of SSD use I found a total of 7.85 TB written (total writes being the important wear factor on an SSD).
That means my SSD was completely filled about 130 times during this period. Assuming the same usage over the coming years and calculating with 10,000 write cycles for the SSD, I don't fear wear-out or the need for early replacement.
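For anyone who wants to redo the back-of-the-envelope math: with the numbers from this post (7.85 TB written in one year on a 60 GB drive), the estimate looks like the sketch below. The 10,000 write-cycle figure is my assumption, not a datasheet value.

```python
# Rough SSD endurance estimate from total bytes written.
# Assumptions: 7.85 TB written per year (measured), 60 GB capacity,
# 10,000 program/erase cycles per cell (assumed, not from a datasheet).
CAPACITY_GB = 60
TOTAL_WRITTEN_TB_PER_YEAR = 7.85
ASSUMED_WRITE_CYCLES = 10_000

fills_per_year = TOTAL_WRITTEN_TB_PER_YEAR * 1000 / CAPACITY_GB
years_to_wearout = ASSUMED_WRITE_CYCLES / fills_per_year

print(f"Drive completely filled ~{fills_per_year:.0f} times per year")
print(f"Estimated lifetime at this rate: ~{years_to_wearout:.0f} years")
```

This ignores write amplification inside the SSD controller, so the real lifetime is somewhat shorter, but the order of magnitude stands.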
All other SMART values of the SSD look still fine, too.
And the use of at least L2ARC gives a nice boost of performance for frequently read data.

Limitations: this report may not be valid for heavy data use in corporate-level environments.

Cheers,
Crowi
NAS 1: Milchkuh: Asrock C2550D4I, Intel Avoton C2550 Quad-Core, 16GB DDR3 ECC, 5x3TB WD Red RaidZ1 +60 GB SSD for ZIL/L2ARC, APC-Back UPS 350 CS, NAS4Free 11.0.0.4.3460 embedded
NAS 2: Backup: HP N54L, 8 GB ECC RAM, 4x4 TB WD Red, RaidZ1, NAS4Free 11.0.0.4.3460 embedded
NAS 3: Office: HP N54L, 8 GB ECC RAM, 2x3 TB WD Red, ZFS Mirror, APC-Back UPS 350 CS NAS4Free 11.0.0.4.3460 embedded

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by b0ssman »

What is the SMART MWI (media wearout indicator) or equivalent value on the SSD?
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

crowi
Forum Moderator
Posts: 1176
Joined: 21 Feb 2013 16:18
Location: Munich, Germany
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by crowi »

Hi b0ssman,

The problem is that it is a dirt-cheap Intenso SSD, which has astonishingly good read/write speed and quite good IOPS specs, but it lacks proper SMART support. The device is not in the smartctl database.
Maybe you can get some more information out of this:

Code:

 1 Raw_Read_Error_Rate     0x000f   120   120   050    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0033   100   100   003    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   000   000   000    Old_age   Always       -       8234 (25 234 0)
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       98
171 Unknown_Attribute       0x0032   000   000   000    Old_age   Always       -       0
172 Unknown_Attribute       0x0032   000   000   000    Old_age   Always       -       0
174 Unknown_Attribute       0x0030   000   000   000    Old_age   Offline      -       28
177 Wear_Leveling_Count     0x0000   000   000   000    Old_age   Offline      -       3
181 Program_Fail_Cnt_Total  0x0032   000   000   000    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   000   000   000    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   128   129   000    Old_age   Always       -       128 (0 127 0 129 0)
195 Hardware_ECC_Recovered  0x001c   100   100   000    Old_age   Offline      -       0
196 Reallocated_Event_Count 0x0033   100   100   003    Pre-fail  Always       -       0
201 Unknown_SSD_Attribute   0x001c   100   100   000    Old_age   Offline      -       0
204 Soft_ECC_Correction     0x001c   100   100   000    Old_age   Offline      -       0
230 Unknown_SSD_Attribute   0x0013   100   100   000    Pre-fail  Always       -       100
231 Temperature_Celsius     0x0013   099   099   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0000   000   000   000    Old_age   Offline      -       8543
234 Unknown_Attribute       0x0032   000   000   000    Old_age   Always       -       8045
241 Total_LBAs_Written      0x0032   000   000   000    Old_age   Always       -       8045
I also mounted it in my Windows machine, used CrystalDiskInfo, and ran the self-tests in Parted Magic, which makes it easier for me to interpret the values, and found no problems. :)

Cheers,
Crowi
NAS 1: Milchkuh: Asrock C2550D4I, Intel Avoton C2550 Quad-Core, 16GB DDR3 ECC, 5x3TB WD Red RaidZ1 +60 GB SSD for ZIL/L2ARC, APC-Back UPS 350 CS, NAS4Free 11.0.0.4.3460 embedded
NAS 2: Backup: HP N54L, 8 GB ECC RAM, 4x4 TB WD Red, RaidZ1, NAS4Free 11.0.0.4.3460 embedded
NAS 3: Office: HP N54L, 8 GB ECC RAM, 2x3 TB WD Red, ZFS Mirror, APC-Back UPS 350 CS NAS4Free 11.0.0.4.3460 embedded

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by b0ssman »

Well, I was looking for the 233 value, but that value makes no sense; normally it is between 100 and 0.

It could be 177, but that is normally a delta value.

From the values it could have been a Kingston drive, but Kingston has the indicator on 231.
http://media.kingston.com/support/downl ... ribute.pdf
A value of 0 would mean the drive is near death, which should not be the case after only 7 terabytes.

There is a nice forum thread about wear-level testing:
http://www.xtremesystems.org/forums/sho ... nm-Vs-34nm
but at the moment the site is down.
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

crowi
Forum Moderator
Posts: 1176
Joined: 21 Feb 2013 16:18
Location: Munich, Germany
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by crowi »

Hi Bossman,

A strange drive I have, right? At least judging by what SMART says, or rather doesn't say.
I was also looking for the wear level and could not really find it.

Maybe I will turn the ZIL function off to be on the safe side; I actually don't need it here.
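For reference, a separate log device can be removed from a pool online with the standard zpool commands; a minimal sketch, with "tank" and "ada1p1" as placeholder pool/device names not taken from this thread:

```shell
# Sketch: removing a separate log (SLOG) vdev from a ZFS pool online.
# "tank" and "ada1p1" are placeholders; substitute your pool and device.
zpool remove tank ada1p1   # detach the log device from the pool
zpool status tank          # the 'logs' section should no longer appear
```

Cache (L2ARC) devices can be removed the same way.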

Thanks for the xtremesystems link. :)
Looks encouraging, so I can use my little baby for at least 10 more years, even in the worst-case scenario.

Cheers,
Crowi
NAS 1: Milchkuh: Asrock C2550D4I, Intel Avoton C2550 Quad-Core, 16GB DDR3 ECC, 5x3TB WD Red RaidZ1 +60 GB SSD for ZIL/L2ARC, APC-Back UPS 350 CS, NAS4Free 11.0.0.4.3460 embedded
NAS 2: Backup: HP N54L, 8 GB ECC RAM, 4x4 TB WD Red, RaidZ1, NAS4Free 11.0.0.4.3460 embedded
NAS 3: Office: HP N54L, 8 GB ECC RAM, 2x3 TB WD Red, ZFS Mirror, APC-Back UPS 350 CS NAS4Free 11.0.0.4.3460 embedded

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by Onichan »

I didn't think any consumer SSDs were SLC, so I would guess yours is MLC. That means your 10,000 write cycles figure is incorrect, and it's more like 3,000. Still, that is a good 20 years, so I don't think it's a problem.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by b0ssman »

SLC write cycles are around 100,000.
MLC write cycles are between 3,000 and 15,000.
TLC write cycles are between 1,000 and 5,000.
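Taking the low end of each of those ranges and the workload from the first post (7.85 TB/year written to a 60 GB drive), a rough per-cell-type comparison looks like this; the cycle counts are the ballpark figures quoted above, not datasheet values:

```python
# Rough lifetime estimate per NAND cell type for ~7.85 TB/year written
# to a 60 GB drive, using the low end of the quoted cycle ranges.
fills_per_year = 7.85 * 1000 / 60   # ~131 complete drive fills per year

for cell_type, cycles in [("SLC", 100_000), ("MLC", 3_000), ("TLC", 1_000)]:
    years = cycles / fills_per_year
    print(f"{cell_type}: ~{years:.0f} years at this write rate")
```

Even the TLC low end comes out at several years, which is consistent with Onichan's "good 20 years" estimate for MLC.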
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

ku-gew
Advanced User
Posts: 172
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by ku-gew »

Newer smartmontools versions support more drives; be sure to check the latest release.
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

lindsay
Forum Moderator
Posts: 282
Joined: 23 Jun 2012 09:59
Location: Steinkjer,Norway
Status: Offline

Re: SSD wearout of ZIL and L2ARC

Post by lindsay »

Nice to know, crowi, as I am planning the same setup with an HP ProLiant.
Protected by smoothiebox: Red, Green, Purple, Orange zones/VLANs
Powered by AMD A10-6700T


XigmaNAS Box-1 11.2.0.4 - Omnius (revision 6625)
Platform : x64-embedded on 2X Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Motherboard: ASUS Z10PA-D8, 2xSocket-2011-3
SATA Controllers : 1X Avago Technologies (LSI) SAS2008 and 1x Avago Technologies (LSI) SAS2308
Pool 1 (Media-Pool) 8X4TB in raidz2
Pool 2 (Media-Pool-2) 4X2TB in raidz2 and 2X2TB in mirror and 2X3TB in mirror
Pool 3 (Media-Pool-3) 2X2TB in mirror and 2X4TB in mirror and 2X1TB in mirror
