[RESOLVED] No performance increase with 4 disks over 2!
Posted: 26 May 2013 10:34
by chrisf4lc0n
Hi I just added another 2 disks to my original ZFS Mirror Pool and it looks like that now:
Code: Select all
nas4free:~# zpool status
pool: ZFS_RAID
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ZFS_RAID ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
My read speeds have only gone up by about 10 MB/s, to around 80 MB/s, while write speeds have dropped significantly, from around 50 MB/s down to around 30 MB/s??
I am using 3x 320GB HDDs, 2 of which are identical Caviar Blues; the 3rd is a 2.5" Fujitsu from my old laptop, plus 1x 500GB 5400RPM Samsung SpinPoint salvaged from a Maxtor USB external HDD...
Could the different disk sizes in my config cause bottlenecking, or would it be the 2.5" Fujitsu?
Edit: I am also using an OEM JMB363 4x SATA II controller...
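One way to see whether a single slow disk is dragging the pool down is to benchmark each drive's raw sequential read on the NAS console (a sketch; the device names are taken from the zpool status above, and reading a raw device is safe while the pool is online):

```shell
# Read 512 MB from each raw device and compare the speed dd reports;
# a markedly slower drive (e.g. the 2.5" Fujitsu) will stand out
for d in ada0 ada2 ada3 ada4; do
  echo "== $d =="
  dd if=/dev/$d of=/dev/null bs=1M count=512
done
```

A mirror can only read/write as fast as its members allow, so one slow member caps the whole vdev.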
Re: No performance increase with 4 disks over 2!
Posted: 26 May 2013 19:27
by raulfg3
You need to check the whole chain involved (disks, ZFS, NIC, RAM) etc...
http://constantin.glez.de/blog/2010/01/ ... still-best
Did you tune your system for your new 8GB of RAM?
viewtopic.php?f=71&t=1278&p=19924
To tune your SMB connection you need to change parameters one by one and observe the result; keep in mind that Windows needs to be tuned too:
viewtopic.php?f=74&t=3805
Re: No performance increase with 4 disks over 2!
Posted: 26 May 2013 20:52
by chrisf4lc0n
loader.conf:
Code: Select all
kernel="kernel"
bootfile="kernel"
kernel_options=""
kern.hz="100"
hw.est.msr_info="0"
hw.hptrr.attach_generic="0"
kern.maxfiles="65536"
kern.maxfilesperproc="50000"
kern.cam.boot_delay="8000"
autoboot_delay="5"
isboot_load="YES"
zfs_load="YES"
kern.maxvnodes="250000"
# ZFS kernel tune
vm.kmem_size="6656M"
vfs.zfs.arc_min="5120M"
vfs.zfs.arc_max="5120M"
vfs.zfs.prefetch_disable="0"
vfs.zfs.txg.timeout="5"
vfs.zfs.vdev.max_pending="1"
vfs.zfs.vdev.min_pending="1"
vfs.zfs.write_limit_override="131072000"
vfs.zfs.no_write_throttle="0"
smb.conf:
Code: Select all
[global]
encrypt passwords = yes
netbios name = nas4free
workgroup = Poland
server string = NAS4Free Server
security = user
max protocol = SMB2
dns proxy = no
# Settings to enhance performance:
strict locking = no
read raw = yes
write raw = yes
oplocks = yes
max xmit = 65535
deadtime = 15
getwd cache = yes
socket options = TCP_NODELAY SO_SNDBUF=128480 SO_RCVBUF=12848
# End of performance section
unix charset = UTF-8
store dos attributes = yes
local master = yes
domain master = yes
preferred master = yes
os level = 35
time server = yes
guest account = ftp
map to guest = Never
display charset = LOCALE
max log size = 100
syslog only = yes
syslog = 1
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
log level = 1
dos charset = CP437
smb passwd file = /var/etc/private/smbpasswd
private dir = /var/etc/private
passdb backend = tdbsam
idmap config * : backend = tdb
idmap config * : range = 10000-39999
[Backup]
comment = Samsung Backup
path = /mnt/Samsung/
writeable = yes
printable = no
veto files = /.snap/.sujournal/
hide dot files = yes
guest ok = no
inherit permissions = yes
vfs objects = shadow_copy2
shadow:format = auto-%Y%m%d-%H%M%S
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
[ZFS_RAID]
comment = ZFS Pool
path = /mnt/ZFS_RAID/
writeable = yes
printable = no
veto files = /.snap/.sujournal/
hide dot files = yes
guest ok = no
inherit permissions = yes
vfs objects = shadow_copy2 zfsacl
shadow:format = auto-%Y%m%d-%H%M%S
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
veto files = /.zfs/
ifconfig:
Code: Select all
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 6000
options=80098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
ether b8:97:5a:28:bf:63
inet 192.168.0.16 netmask 0xffffff00 broadcast 192.168.0.255
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet 1000baseT <full-duplex>
status: active
Same MTU 6000 on Windows.
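With a non-standard MTU like 6000 on both ends, it may be worth confirming that jumbo frames actually survive the path end-to-end (a sketch, run from the Windows command prompt, using the NAS IP from the ifconfig above; 5972 = 6000 minus 28 bytes of IP+ICMP headers):

```shell
# -f sets "do not fragment", -l sets the payload size.
# If this fails while a plain ping works, the switch or NIC is not
# passing jumbo frames, and the mismatch can hurt SMB throughput.
ping -f -l 5972 192.168.0.16
```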
Re: No performance increase with 4 disks over 2!
Posted: 26 May 2013 20:55
by chrisf4lc0n
I doubt it is the controller's or the Samba config's fault, as I still get ~50 MB/s when copying to a UFS disk...
So either one of the disks is bottlenecking, or there is a problem with the ZFS config somewhere!
Re: No performance increase with 4 disks over 2!
Posted: 26 May 2013 22:10
by kkd
Hi,
I had a Fujitsu HDD and it was extremely slow because of its small cache.
And imagine what kind of HDD was built into a USB external gadget... probably not the fastest one.
It's worth trying a SMART test, but I think it was a bad choice to use 2 slow HDDs, even if the raid is 0.
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 06:01
by Lee Sharp
You do not have a 4 disk pool, but a pair of two disk pools. So you do not get the full boost in performance. If you can, pull off all the data, and build a totally new 4 disk pool. You will see a much bigger boost in performance.
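For what it's worth, the zpool status in the first post shows a single pool striped across two mirror vdevs, so ZFS should already spread writes over both mirrors. One way to verify that both vdevs really are being used during a large copy (a sketch, using the pool name from this thread):

```shell
# Per-vdev I/O statistics, refreshed every second; during a big write,
# mirror-0 and mirror-1 should both show comparable write bandwidth.
# If one vdev sits idle or lags far behind, its disks are the bottleneck.
zpool iostat -v ZFS_RAID 1
```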
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 08:59
by raulfg3
I do not see AIO active in your SMB config; check the other post to see which parameters have the most influence on speed. For me they are AIO, SMB=NT1, and the buffer sizes (I use 64240x2=128480, which works better for my NIC, an Intel em0).
Try enabling/disabling large read/write and use sendfile; sometimes disabling is better.
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 09:21
by chrisf4lc0n
Here we go I have done a wide range of tests:
1. 320GB Fujitsu and 500GB Samsung SpinPoint in Mirror RAID both of them on JMB363 Controller:
Code: Select all
nas4free:~# zpool status
pool: ZFS_RAID
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ZFS_RAID ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada2 ONLINE 0 0 0
errors: No known data errors
Fujitsu with SpinPoint.jpg
2. 2x320GB Caviar Blue in Mirror RAID on the built-in mobo controller:
Code: Select all
nas4free:~# zpool status
pool: ZFS_RAID
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ZFS_RAID ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
2x Caviar.jpg
3. All disks in 1 Mirror Pool:
Code: Select all
nas4free:~# zpool status
pool: ZFS_RAID
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ZFS_RAID ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada2 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
All 4 in 1 Pool.jpg
4. 2 Mirror Pools with 2 disks each:
Code: Select all
nas4free:~# zpool status
pool: ZFS_RAID
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ZFS_RAID ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
2x2 Pool.jpg
5. UFS 640GB Samsung SpinPoint on JMB363 controller used as backup for comparison:
Samsung.jpg
Also, before I installed the JMB363 controller, when I only had the 2x320GB Caviars in place, I had around 80 MB/s write speeds too, so what the heck??
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 09:30
by chrisf4lc0n
1. Cannot use NT1... Windows cannot access the shares when it is enabled.
2. AIO on makes performance degrade even further.
3. Sendfile is not working either...
4. A single disk in UFS seems to perform alright, and it was fine with just 2 disks at the beginning, floating around ~80 MB/s read/write...
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 09:42
by chrisf4lc0n
And this is a test with striped RAID:
Stripped.jpg
That means whatever RAID config I use now performs worse than a single UFS disk!
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 09:53
by raulfg3
chrisf4lc0n wrote:1. Cannot use NT1... Windows cannot access shares when enabled

Strange, it works very well for me (I use Win7 on the other side, and NT1 and anonymous access on my NAS).
chrisf4lc0n wrote:2. AIO on makes performance degrade even further.
Sorry, not for me; I noticed a great performance boost, from 50MB/s to 90MB/s.
chrisf4lc0n wrote:3. Sendfile is not working either...
I did not notice any performance increase or decrease, so I disabled it; but I have set up at least 5 more NAS boxes for friends, and sometimes it increases performance (and sometimes not), so my suggestion is to test and choose the better option.
I noticed better performance after doubling the send and receive buffers (128480 instead of the default 64240) <- on an old NAS with a Realtek 8111C NIC I needed to use 32120 to increase stability (fewer spikes).
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 10:16
by chrisf4lc0n
It is not the Samba config's fault; a single UFS drive performs fine!
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 10:54
by kkd
Re: No performance increase with 4 disks over 2!
Posted: 27 May 2013 11:42
by raulfg3
Thanks, this is consistent with my reply; I asked about the old install, which perhaps still worked at the time. Sorry if my answer confused things or generated noise in the thread.
Re: No performance increase with 4 disks over 2!
Posted: 28 May 2013 07:14
by chrisf4lc0n
OK, I think I have found the problem; for some reason all drives report UDMA6 and PIO 8192!
Code: Select all
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <ST3500630AS 3.AFM> ATA-7 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 476940MB (976773168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <SAMSUNG HD502HI 1AG01118> ATA-7 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 476940MB (976773168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus3 target 0 lun 0
ada2: <ST2000DM001-1CH164 CC24> ATA-8 SATA 3.x device
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad10
ada3 at ahcich3 bus 0 scbus4 target 0 lun 0
ada3: <WDC WD3200AAKS-22B3A0 01.03A01> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad12
ada4 at ahcich4 bus 0 scbus5 target 0 lun 0
ada4: <WDC WD3200AAKS-00V1A0 05.01D05> ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
ada4: Previously was known as ad14
But why no AHCI?
I need to go back to BIOS...
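As an aside, the ahcich0...ahcich4 attachment lines in the dmesg output above suggest the controller is already running in AHCI mode; "UDMA6, PIO 8192bytes" is just CAM's standard transfer-mode report, not a sign of PIO-only operation. Two quick ways to double-check on FreeBSD (a sketch):

```shell
# ahcichN channel devices only appear when the AHCI driver has attached
dmesg | grep -i ahci

# Prints the drive's identify data, including which DMA/UDMA modes
# are supported and which one is currently enabled
camcontrol identify ada0
```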
Re: No performance increase with 4 disks over 2!
Posted: 28 May 2013 12:14
by chrisf4lc0n
Code: Select all
May 28 10:21:58 nas4free kernel: GEOM: ada0: the primary GPT table is corrupt or invalid.
May 28 10:21:58 nas4free kernel: GEOM: ada0: using the secondary instead -- recovery strongly advised.
May 28 10:21:58 nas4free kernel: GEOM: ada1: the primary GPT table is corrupt or invalid.
May 28 10:21:58 nas4free kernel: GEOM: ada1: using the secondary instead -- recovery strongly advised.
May 28 10:21:58 nas4free kernel: GEOM: ada3: the primary GPT table is corrupt or invalid.
May 28 10:21:58 nas4free kernel: GEOM: ada3: using the secondary instead -- recovery strongly advised.
May 28 10:21:58 nas4free kernel: GEOM: ada4: the primary GPT table is corrupt or invalid.
May 28 10:21:58 nas4free kernel: GEOM: ada4: using the secondary instead -- recovery strongly advised.
Found it! The GPT table was corrupt and it was degrading the performance of my disks!
This is the result after fixing the problem:
Result.jpg
Code: Select all
nas4free:/mnt/ZFS_RAID# dd bs=1M count=128 if=/dev/zero of=test
128+0 records in
128+0 records out
134217728 bytes transferred in 0.867861 secs (154653479 bytes/sec)
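As a sanity check on dd's arithmetic, the reported figure converts to roughly 147.5 MiB/s (about 154.7 MB/s), well above what a single spindle here could manage:

```shell
# 134217728 bytes transferred in 0.867861 s, converted to MiB/s
awk 'BEGIN { printf "%.1f MiB/s\n", 134217728 / 0.867861 / 1048576 }'
```

Keep in mind a 128MB write from /dev/zero largely measures RAM and the ZFS write cache, so treat it as a best-case number rather than a realistic workload.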
Re: [RESOLVED] No performance increase with 4 disks over 2!
Posted: 28 May 2013 12:53
by raulfg3
Please post how you fixed the GPT table, to help others in the same situation.
Re: [RESOLVED] No performance increase with 4 disks over 2!
Posted: 28 May 2013 13:43
by chrisf4lc0n
As requested by a senior member of the forum, a guide on how to fix the GPT table:
1. Back up all data from the partitions you will be fixing the GPT on. I know it can be a pain, but I do not know how to do this without destroying the partition. For those using a mirror raid it should not be a problem if only 1 disk has a corrupt GPT, as the pool will start resilvering once the disk is added back.
2. In my case I had to destroy the pool, as all the drives had corrupt GPTs!
3a. In SSH, first destroy the pool:
Code: Select all
zpool destroy -f "name of your pool"
then destroy the GPT on each affected disk:
Code: Select all
gpart destroy -F "name of the disk whose GPT will be destroyed"
3b. In my case all 4 disks, so:
Code: Select all
gpart destroy -F ada0
gpart destroy -F ada1
gpart destroy -F ada3
gpart destroy -F ada4
4. Now you need to recreate GPT:
Code: Select all
gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada3
gpart create -s gpt ada4
5. I also prefer having 4kb sectors so:
Code: Select all
gnop create -S 4096 ada0
gnop create -S 4096 ada1
gnop create -S 4096 ada3
gnop create -S 4096 ada4
6. Re-create pool:
Code: Select all
zpool create -fm /mnt/ZFS_RAID ZFS_RAID mirror /dev/ada0.nop /dev/ada1.nop mirror /dev/ada3.nop /dev/ada4.nop
7. Check that the pool has been created with 4kb sectors by looking at its ashift value:
if it is 12 then you have got 4kb sectors; if it is 9, something has gone wrong.
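For step 7, the value to look at is the pool's ashift. One common way to read it (a sketch; zdb ships with ZFS and reads the cached pool configuration):

```shell
# ashift: 12 means 2^12 = 4096-byte sectors; ashift: 9 means 512-byte sectors
zdb | grep ashift
```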
8. For those who have only got 1 disk with a corrupt GPT, you just need to detach that device from the pool. In my example, say the pool is a mirror of ada0.nop and ada1.nop, where ada0.nop's GPT is corrupt:
Code: Select all
zpool detach ZFS_RAID /dev/ada0.nop
then go to 3b and follow the instructions up to 5, after which you attach the disk back alongside the surviving one (zpool attach takes the existing device first, then the new one):
Code: Select all
zpool attach ZFS_RAID /dev/ada1.nop /dev/ada0.nop
Hope it will help.
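One footnote on the destructive procedure above: when the kernel logs "using the secondary instead -- recovery strongly advised", gpart can usually rebuild the damaged primary GPT in place from the intact backup copy, without destroying any data. It may be worth trying this first (a sketch; run once per affected disk):

```shell
# Non-destructive: restores the primary GPT from the secondary copy
gpart recover ada0
```

Only if recovery fails (or, as here, you want to rebuild the pool with 4kb sectors anyway) is the destroy-and-recreate route necessary.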
Re: [RESOLVED] No performance increase with 4 disks over 2!
Posted: 28 May 2013 14:23
by raulfg3
thanks a lot.