This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Why is my benchmark slower with ZIL & L2ARC than without?

neptunus
experienced User
Posts: 79
Joined: 11 Jun 2013 08:50

Why is my benchmark slower with ZIL & L2ARC than without?

Post by neptunus »

Why is my benchmark slower with ZIL & L2ARC than without? For me it's a total mystery!

Disks used:
- 10x Hitachi US7K3000 (SATA 600)
- 2x Intel 320 (SATA 300)

Configuration: 2x M1015 in IT mode, with disks connected as follows:
- 5x Hitachi US7K3000 and 1x Intel 320 connected to each M1015

Without ZIL & L2ARC
WRITE

Code:

sideswipe:/mnt# dd if=/dev/zero of=zero.file bs=1M count=40960
40960+0 records in
40960+0 records out
42949672960 bytes transferred in 55.288349 secs (776830447 bytes/sec)
READ

Code:

sideswipe:/mnt# dd if=zero.file of=/dev/null bs=1m
40960+0 records in
40960+0 records out
42949672960 bytes transferred in 50.604693 secs (848729046 bytes/sec)
With ZIL & L2ARC
WRITE

Code:

sideswipe:/mnt# dd if=/dev/zero of=zero.file bs=1M count=40960
40960+0 records in
40960+0 records out
42949672960 bytes transferred in 75.277081 secs (570554442 bytes/sec)
READ

Code:

sideswipe:/mnt# dd if=zero.file of=/dev/null bs=1m
40960+0 records in
40960+0 records out
42949672960 bytes transferred in 61.050132 secs (703514828 bytes/sec)

siftu
Moderator
Posts: 71
Joined: 17 Oct 2012 06:36

Re: Why is my benchmark slower with ZIL & L2ARC than without

Post by siftu »

I am not exactly sure why it would be slower, but I know dd would be a terrible test, especially a single user writing sequential files full of zeros with large block sizes. Try something that's a little more random in nature, like iozone/fio etc. Also you want to make sure you use sync=always on the dataset or direct I/O on your dd command.
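The advice above could be tried along these lines (a sketch, not a verified recipe: the dataset name "zstore/bench" and the fio job sizes are illustrative, though the fio options themselves are standard):

```shell
# Force synchronous write semantics so the benchmark actually
# exercises the SLOG ("zstore/bench" is a hypothetical dataset).
zfs set sync=always zstore/bench

# A more random, multi-job workload than dd, per the advice above:
# 4 KB random writes, 4 parallel jobs, 60 seconds, aggregate report.
fio --name=randwrite --directory=/mnt/zstore/bench \
    --rw=randwrite --bs=4k --size=4g --numjobs=4 \
    --ioengine=posixaio --direct=1 \
    --time_based --runtime=60 --group_reporting

# Restore the inherited default when done.
zfs inherit sync zstore/bench
```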

If that dd command did in fact use the ZIL/SLOG and L2ARC, then it could have been limited by the SATA port's bandwidth. To drill in further you could use zfs-stats in a jail to see real-time usage of your L2ARC and ZIL/SLOG.
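One way to watch the log and cache devices while a benchmark runs (a sketch; `zpool iostat` is standard ZFS tooling, zfs-stats is a FreeBSD port, and the pool name "zstore" is taken from the prompts later in this thread):

```shell
# Per-vdev throughput every 5 seconds; the log and cache devices are
# listed separately, so you can see whether they take any traffic.
zpool iostat -v zstore 5

# ARC (-A) and L2ARC (-L) hit-rate summaries from zfs-stats.
zfs-stats -A -L
```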
System specs: NAS4Free amd64-embedded on ASUSTeK. M5A78L-M LX PLUS - AMD Phenom(tm) II X3 720 Processor - 8GB ECC Ram, Storage: 2x ZFS mirrors with 4x Western Digital Green (WDC WD10EADS)
My NAS4Free related blog - http://n4f.siftusystems.com/

siftu

Re: Why is my benchmark slower with ZIL & L2ARC than without

Post by siftu »

Also read this http://nex7.blogspot.com/2013/03/readme1st.html

Point 14 and some more discussion in the comments.

neptunus

Re: Why is my benchmark slower with ZIL & L2ARC than without

Post by neptunus »

siftu wrote:Also read this http://nex7.blogspot.com/2013/03/readme1st.html

Point 14 and some more discussion in the comments.
Thanks, really helpful.

neptunus

Re: Why is my benchmark slower with ZIL & L2ARC than without

Post by neptunus »

I have done some extra benchmarking with iozone. What do you think of the results?

Code:

sideswipe: /zstore # iozone -a -s 24g -r 4096
        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: freebsd

        Run began: Wed Jun 26 17:40:08 2013

        Auto Mode
        File size set to 25165824 KB
        Record Size 4096 KB
        Command line used: iozone -a -s 24g -r 4096
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        25165824    4096  534718  504527   808964   845510  252624  541583  240000  9654992   248405   506617   525360  845465   858172

iozone test complete.

Code:

 sideswipe: /zstore # iozone -t 2
        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: freebsd

        Run began: Wed Jun 26 17:36:45 2013

        Command line used: iozone -t 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 2 processes
        Each process writes a 512 Kbyte file in 4 Kbyte records

        Children see throughput for  2 initial writers  =  331156.22 KB/sec
        Parent sees throughput for  2 initial writers   =   19601.20 KB/sec
        Min throughput per process                      =       0.00 KB/sec
        Max throughput per process                      =  331156.22 KB/sec
        Avg throughput per process                      =  165578.11 KB/sec
        Min xfer                                        =       0.00 KB

        Children see throughput for  2 rewriters        = 1219392.81 KB/sec
        Parent sees throughput for  2 rewriters         =  103444.93 KB/sec
        Min throughput per process                      =  598877.31 KB/sec
        Max throughput per process                      =  620515.50 KB/sec
        Avg throughput per process                      =  609696.41 KB/sec
        Min xfer                                        =     500.00 KB

        Children see throughput for  2 readers          = 2749245.25 KB/sec
        Parent sees throughput for  2 readers           =   49035.01 KB/sec
        Min throughput per process                      = 1365384.00 KB/sec
        Max throughput per process                      = 1383861.25 KB/sec
        Avg throughput per process                      = 1374622.62 KB/sec
        Min xfer                                        =     512.00 KB

        Children see throughput for 2 re-readers        = 2717170.38 KB/sec
        Parent sees throughput for 2 re-readers         =   49175.27 KB/sec
        Min throughput per process                      = 1336867.12 KB/sec
        Max throughput per process                      = 1380303.25 KB/sec
        Avg throughput per process                      = 1358585.19 KB/sec
        Min xfer                                        =     508.00 KB

        Children see throughput for 2 reverse readers   = 2547417.12 KB/sec
        Parent sees throughput for 2 reverse readers    =   48100.36 KB/sec
        Min throughput per process                      = 1224101.75 KB/sec
        Max throughput per process                      = 1323315.38 KB/sec
        Avg throughput per process                      = 1273708.56 KB/sec
        Min xfer                                        =     492.00 KB

        Children see throughput for 2 stride readers    = 2516891.00 KB/sec
        Parent sees throughput for 2 stride readers     = 1101501.58 KB/sec
        Min throughput per process                      = 1211275.62 KB/sec
        Max throughput per process                      = 1305615.38 KB/sec
        Avg throughput per process                      = 1258445.50 KB/sec
        Min xfer                                        =     488.00 KB

        Children see throughput for 2 random readers    = 2528375.62 KB/sec
        Parent sees throughput for 2 random readers     = 1115632.23 KB/sec
        Min throughput per process                      = 1254513.75 KB/sec
        Max throughput per process                      = 1273861.88 KB/sec
        Avg throughput per process                      = 1264187.81 KB/sec
        Min xfer                                        =     512.00 KB

        Children see throughput for 2 mixed workload    = 1463057.25 KB/sec
        Parent sees throughput for 2 mixed workload     =   20276.53 KB/sec
        Min throughput per process                      =       0.00 KB/sec
        Max throughput per process                      = 1463057.25 KB/sec
        Avg throughput per process                      =  731528.62 KB/sec
        Min xfer                                        =       0.00 KB

        Children see throughput for 2 random writers    =  589190.19 KB/sec
        Parent sees throughput for 2 random writers     =   20000.63 KB/sec
        Min throughput per process                      =  589190.19 KB/sec
        Max throughput per process                      =  589190.19 KB/sec
        Avg throughput per process                      =  294595.09 KB/sec
        Min xfer                                        =     512.00 KB

        Children see throughput for 2 pwrite writers    =  994503.81 KB/sec
        Parent sees throughput for 2 pwrite writers     =   34344.83 KB/sec
        Min throughput per process                      =  487639.97 KB/sec
        Max throughput per process                      =  506863.84 KB/sec
        Avg throughput per process                      =  497251.91 KB/sec
        Min xfer                                        =     492.00 KB

        Children see throughput for 2 pread readers     = 2819027.25 KB/sec
        Parent sees throughput for 2 pread readers      =   48994.26 KB/sec
        Min throughput per process                      = 1389092.75 KB/sec
        Max throughput per process                      = 1429934.50 KB/sec
        Avg throughput per process                      = 1409513.62 KB/sec
        Min xfer                                        =     504.00 KB

iozone test complete.
