This is the old XigmaNAS forum in read-only mode; it will be taken offline by the end of March 2021!

We ask users and admins to re-post important threads from here in the fresh new main forum! It's not possible for us to export from here and import into the main forum!

Slow raidz1 performance with 5 x WD20EARX in HP n36L

jones_chris
NewUser
NewUser
Posts: 8
Joined: 22 Nov 2012 10:12
Status: Offline

Slow raidz1 performance with 5 x WD20EARX in HP n36L

Post by jones_chris »

Hi

I'm suffering really poor write performance with zfs and a raidz1 array in my n36L.

It's only got 4GB RAM, so ZFS prefetch is disabled; sync is disabled, dedup is off, atime and xattr are off, and so is compression, but I'm struggling to get over 40MB/sec. A transfer (NFS) will peak at around 100MB/sec at the start, but then drop down to around 30-40MB/sec.
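For reference, a rough sketch of how that combination of settings maps onto ZFS commands and tunables on a FreeBSD-based NAS4Free box (pool name `pool0` from this thread; the prefetch setting is a boot-time tunable, not a dataset property, and needs a reboot):

```shell
# Dataset/pool properties, applied at the top level and inherited
zfs set sync=disabled pool0
zfs set atime=off pool0
zfs set compression=off pool0
zpool set dedup=off pool0       # dedup can also be set per-dataset with zfs set

# Prefetch is controlled from /boot/loader.conf, e.g.:
#   vfs.zfs.prefetch_disable="1"
```

Note that `sync=disabled` trades safety for speed: synchronous writes are acknowledged before reaching stable storage, so a power loss can lose recent writes.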

Code:

nas4free:~# zpool iostat pool0 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool0       4.76T  4.30T      3    482  15.6K  42.4M
pool0       4.76T  4.30T      0    458  1.60K  40.6M
pool0       4.76T  4.30T      0    498      0  40.9M
pool0       4.76T  4.30T      0    436  3.20K  35.7M
pool0       4.76T  4.30T      0    371      0  33.9M
pool0       4.76T  4.30T      0    413    819  33.2M
pool0       4.76T  4.30T      0    505      0  41.3M
I'm using the latest stable, 9.1.0.1.636.

All drives report fine to smartctl. I used the 4K tickbox, deleted the .nop devices, and connected the pool straight to ada[0-4]; ashift is 12.
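The 4K procedure described here (gnop shims forcing ashift=12, then removed so the pool attaches to the bare ada devices) looks roughly like this; a hedged sketch of the manual method, not the NAS4Free GUI's exact steps:

```shell
# Create 4K-sector shims on each disk (repeat for ada1..ada4)
gnop create -S 4096 /dev/ada0

# Build the pool on the .nop devices so ZFS chooses ashift=12
zpool create pool0 raidz1 /dev/ada0.nop /dev/ada1.nop /dev/ada2.nop \
    /dev/ada3.nop /dev/ada4.nop

# Export, destroy the shims, re-import on the raw devices
zpool export pool0
gnop destroy /dev/ada0.nop      # repeat for the rest
zpool import pool0

# Verify the alignment stuck
zdb -C pool0 | grep ashift      # should report ashift: 12
```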

The BIOS has write cache disabled; enabling it actually made little to no difference.

Oddly enough, an SSD in its own pool didn't do any better, so I'm wondering if this is actually a controller issue on the n36L (TheBay v2 BIOS for AHCI/full speed on all ports).

The source drives (over nfs) can provide data fast enough, so can the network.

Code:

nas4free:~# iperf -c 10.0.10.5 -w 512k -l 128k -P 2 -f m -t 10
------------------------------------------------------------
Client connecting to 10.0.10.5, TCP port 5001
TCP window size: 0.50 MByte (WARNING: requested 0.50 MByte)
------------------------------------------------------------
[  4] local 10.0.10.1 port 59002 connected with 10.0.10.5 port 5001
[  3] local 10.0.10.1 port 55820 connected with 10.0.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  4045 MBytes  3393 Mbits/sec
[  3]  0.0-10.0 sec  3996 MBytes  3351 Mbits/sec
[SUM]  0.0-10.0 sec  8041 MBytes  6744 Mbits/sec
If I run a local dd, it may get up to 60MB/sec.

Code:

pool0       4.81T  4.26T      3    482  15.6K  42.3M
pool0       4.81T  4.26T      0    505  1.60K  44.7M
pool0       4.81T  4.26T      0    467      0  42.1M
<started DD>
pool0       4.81T  4.26T     12    742  50.4K  59.7M
pool0       4.81T  4.26T     12    805  55.2K  54.3M
pool0       4.81T  4.26T     11    805  47.2K  64.7M
pool0       4.81T  4.25T     11    887  49.6K  67.9M
<DD ended>
pool0       4.81T  4.25T      6    640  28.0K  51.5M
pool0       4.81T  4.25T      4    511  20.0K  38.0M

Code:

nas4free:/mnt/pool0/Users# dd if=/dev/zero of=test.tst bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 21.331220 secs (50336635 bytes/sec)
A scrub will do over 200MB/sec, so what is causing the slow writes? Parity calculations, interrupts, or is something just in need of a tweak?

I'm sure this has all happened since the 5th HDD went in, so perhaps there's an issue with one drive being on the motherboard connector and the others on the SFF-8087, although all are synced at 300MB/s.
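One way to double-check the negotiated link speeds on a FreeBSD-based NAS4Free install (a sketch; the ada numbering is assumed from this thread):

```shell
# List the detected disks and which controller channel each sits on
camcontrol devlist

# Negotiated SATA transfer rate per disk, from the kernel boot log
dmesg | grep -E 'ada[0-4]: [0-9]+\.000MB/s'
```

If one drive shows 150.000MB/s while the rest show 300.000MB/s, the odd port out is worth investigating.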

Perhaps this should be in the hardware section, but I thought I'd ask here in case it's a ZFS issue.

Help appreciated, as currently it's all a bit slow.

Thanks
Chris

Jtcdesigns
Starter
Starter
Posts: 28
Joined: 05 Apr 2013 02:04
Status: Offline

Re: Slow raidz1 performance with 5 x WD20EARX in HP n36L

Post by Jtcdesigns »

Just to be clear: what are you doing that is extremely slow? It appears you're getting good LAN speeds between the two boxes, so your write speeds should at least be getting close to that. What are you writing that is going so slowly? Also, with ZFS, NFS is a lot slower than SMB in the tests I've seen.

b0ssman
Forum Moderator
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Slow raidz1 performance with 5 x WD20EARX in HP n36L

Post by b0ssman »

Enable write cache in the BIOS and leave it on; otherwise performance on the HP MicroServer is very bad.

Download the ZFS kerneltune add-on and enable ZFS prefetch with the 4GB configuration.
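That suggestion amounts to loader.conf entries along these lines; the values are illustrative for a 4 GB machine, not the add-on's exact output:

```shell
# /boot/loader.conf (takes effect after a reboot)
vfs.zfs.prefetch_disable="0"    # re-enable prefetch
vfs.zfs.arc_max="2560M"         # cap ARC so the rest of the 4 GB stays available
```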
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

jones_chris
NewUser
NewUser
Posts: 8
Joined: 22 Nov 2012 10:12
Status: Offline

Re: Slow raidz1 performance with 5 x WD20EARX in HP n36L

Post by jones_chris »

Jtcdesigns wrote:Just to be clear.. what are you doing that is extremely slow? It appears you're getting good lan speeds between the two boxes so your write speeds should be atleast matching that. What are you writing that is only going super slow? Also, with ZFS.... NFS is a lot slower than SMB as I've seen in tests.
The write speeds aren't close, that's the issue.

I've just gone back to a clean install of .431, as I had a similar issue with 4 drives under 636 as per this thread:
viewtopic.php?f=66&t=2577&p=13467

I exported the pool from .636 and imported it on .431; it was a worthwhile test.

Code:

nas4free:/mnt/pool0/Users# dd if=/dev/zero of=test.tst bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 61.460198 secs (174705234 bytes/sec)

Code:

pool0       7.21T  1.85T      0      0      0      0
pool0       7.22T  1.85T      0    513      0  60.0M
pool0       7.21T  1.85T      0  1.41K      0   162M
pool0       7.22T  1.85T      0  1.31K    819   153M
pool0       7.22T  1.85T      0  1.45K      0   169M
pool0       7.22T  1.84T      0  1.37K      0   160M
pool0       7.22T  1.84T      0  1.35K      0   155M
pool0       7.22T  1.84T      0  1.38K      0   164M
pool0       7.22T  1.84T      0  1.39K      0   163M
pool0       7.22T  1.84T      0  1.45K      0   167M
pool0       7.22T  1.84T      0  1.30K      0   152M
pool0       7.22T  1.84T      0  1.43K      0   168M
pool0       7.22T  1.84T      0  1.43K      0   170M
pool0       7.23T  1.84T      0  1.46K      0   168M
etc
Both tests are much, much faster than on .636.

There's clearly an issue introduced somewhere after .431 which is affecting my setup badly, as per my previous post, which went unanswered :(

For now it appears I'm left on .431.

jones_chris
NewUser
NewUser
Posts: 8
Joined: 22 Nov 2012 10:12
Status: Offline

Re: Slow raidz1 performance with 5 x WD20EARX in HP n36L

Post by jones_chris »

More interesting stuff.

The network is now also much faster on .431, so I'm guessing it's perhaps an interrupt/scheduler issue somewhere in .636.

Code:

nas4free:~# iperf -c 10.0.10.5 -w 512k -l 128k -P 2 -f m -t 10
------------------------------------------------------------
Client connecting to 10.0.10.5, TCP port 5001
TCP window size: 0.50 MByte (WARNING: requested 0.50 MByte)
------------------------------------------------------------
[  3] local 10.0.10.1 port 15712 connected with 10.0.10.5 port 5001
[  4] local 10.0.10.1 port 60817 connected with 10.0.10.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5234 MBytes  4390 Mbits/sec
[  4]  0.0-10.0 sec  5228 MBytes  4385 Mbits/sec
[SUM]  0.0-10.0 sec  10463 MBytes  8776 Mbits/sec
That's around 2000Mbit/sec faster now (8776 vs 6744 Mbit/sec aggregate).
