This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to recreate important posts from here in the new main forum!
It is not possible for us to export threads from here and import them into the main forum!

Pathological Samba performance when writing small files

CIFS/SMB network sharing.
digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Pathological Samba performance when writing small files

Post by digitalis99 »

We have a 10.2.0.2 N4F instance running on a Dell PowerEdge R720xd. It has 24 SSDs exposed as individual disks, and we've raidz2'd them all together into a large pool. We have one Samba share defined off that pool with only guest user access (no auth). We routinely need to extract compressed archives containing tens of thousands of files to the share from Windows machines running Win7, Windows Server 2008 R2, or other Windows OSes. Our N4F box has a Chelsio 40Gbps interface connected to a Juniper EX4600 switch. Windows clients use Intel 10Gbps interfaces connected to the same switch.

When writing large files to the share, the performance is mediocre at best. The one Samba process that is handling the write session goes CPU bound, and the resultant throughput is just OK, usually sustaining around 1Gbps when we're lucky. When writing a lot of small files to the share, the performance is abysmal, never exceeding 1Mbps. The one Samba process handling the write session in that instance is still CPU bound.

We upgraded the (single) CPU in the N4F system from a 4-core 1.8GHz to a 12-core 2.7GHz, hoping that the clock speed increase would help those single-session writes. It made a little difference, but not much. I've tried a number of tuning parameters and researched Samba options, but no one seems to care about writing small files effectively. By the way, the N4F box has 48GB of memory, which is generally all consumed by ZFS.

We test small file writes against any other Windows machine holding a simple file share, and the write performance is easily 20-50 times that of the N4F box. I also enabled FTP on the N4F box and tried writing to it that way, but it was also very slow.

Is there some kind of conflict between what Samba wants to do and what ZFS wants to do, perchance? Does anyone have any ideas of what to focus our testing on? Has anyone seen and overcome this behavior?

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

First of all - nice setup.

I would have said try NFS - it's pretty easy to set up, and I find that the load on my CPU is about 25% of that of Samba, with similar throughput (I'm not CPU bound). However, I would expect similarly light CPU load from FTP.

Can you provide the output of

Code: Select all

top -HSzaIPt

the key flags are:
  • S - show system processes
  • H - show per-thread stats
  • P - show per-CPU stats
I just want to check whether the Samba process itself is tapping out the CPU, whether a ZFS system process is, or whether interrupt handling for the interface is the issue. If you get a high %interrupt in the top output for any CPU, please also provide the output of

Code: Select all

systat -vmstat
Does anyone have any ideas of what to focus our testing on?
  • Interrupt handling for the NIC (maybe it can be tuned, maybe use polling instead)
  • ZFS processing (24 SSDs could be a high load; I'd be much more inclined to run 3x8 SSD vdevs, and there are a few ZFS tunables)
  • Samba - I never bother tuning Samba; as I said, move to NFS and/or iSCSI. Maybe others on the forum have tips
Also, you might have a config error in Samba - why not paste your config here as well (include at least one share)?
Last edited by Parkcomm on 15 Nov 2015 06:14, edited 1 time in total.
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

Unfortunately, Samba is mandatory for this application. I don't have the option of using NFS...at least I don't think I do. I may run some tests in the future, as I know NFS has a lower system load in general...and has far, far fewer tuning knobs needed to make it work efficiently.

The output of top as you requested:

Code: Select all

last pid: 85076;  load averages:  2.48,  2.41,  2.32  up 16+09:28:13    19:23:16
540 processes: 25 running, 424 sleeping, 91 waiting

Mem: 30M Active, 335M Inact, 35G Wired, 235M Buf, 12G Free
ARC: 25G Total, 12G MFU, 13G MRU, 1840K Anon, 118M Header, 334M Other
Swap: 

  PID USERNAME     PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
49505 root         103    0   313M 39032K CPU12  12  24.3H 100.00% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
84980 root          43    0   300M 31592K select 10   0:34  33.25% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
84958 root          40    0   300M 31708K select  6   0:38  25.49% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85071 root          43    0   314M 42036K select 19   0:05  15.87% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
84944 root          25    0   300M 31776K select 18   0:39  11.28% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          23    0   304M 40116K select  0   0:03   7.08% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          23    0   304M 40116K uwait  10   0:00   6.98% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          23    0   304M 40116K uwait   2   0:00   6.98% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          23    0   304M 40116K uwait  21   0:00   6.98% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          23    0   304M 40116K uwait  18   0:00   6.98% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
84943 root          22    0   304M 31860K select 20   0:38   5.18% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          20    0   304M 40116K uwait   8   0:00   3.27% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85072 root          20    0   304M 40116K uwait  15   0:00   3.27% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
   12 root         -92    -     0K  2184K WAIT   15 817:51   1.07% [intr{irq305: t5nex0:0}]
85063 root          20    0   308M 42012K select 23   0:05   0.49% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
84832 root          20    0   304M 41816K select 23   0:03   0.39% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
85063 root          20    0   308M 42012K uwait   9   0:00   0.39% /usr/local/sbin/smbd -D -s /var/etc/smb4.conf{smbd}
   12 root         -92    -     0K  2184K WAIT   17 773:03   0.29% [intr{irq307: t5nex0:0}]
Interrupt usage is low, as it should be, so I didn't bother running systat.

Tell me more about splitting the ZFS pool up into 3 chunks. I'm not clear on what benefit that would gain me, unless there's some portion of ZFS that is single-threaded and "scaling" it requires multiple pools (as in, a process/thread per pool).

My current smb4.conf file:

Code: Select all

[global]
server role = standalone
encrypt passwords = yes
netbios name = fileserver
workgroup = WORKGROUP
server string = NAS4Free Server
security = user
max protocol = SMB2
dns proxy = no
# Settings to enhance performance:
strict locking = no
read raw = yes
write raw = yes
oplocks = yes
max xmit = 65535
deadtime = 15
getwd cache = yes
socket options = TCP_NODELAY SO_SNDBUF=128480 SO_RCVBUF=128480 
# End of performance section
unix charset = UTF-8
local master = no
domain master = no
preferred master = no
os level = 0
time server = no
guest account = ftp
map to guest = Bad User
max log size = 100
syslog only = yes
syslog = 1
load printers = no
printing = bsd
printcap cache time = 0
printcap name = /dev/null
disable spoolss = yes
log level = 1
dos charset = CP437
smb passwd file = /var/etc/private/smbpasswd
private dir = /var/etc/private
passdb backend = tdbsam
idmap config * : backend = tdb
idmap config * : range = 10000-39999
aio read size = 1024
aio write size = 1024
kernel oplocks = yes
socket options = IPTOS_LOWDELAY

[jobs]
comment = Main share
path = /mnt/pool1
writeable = yes
printable = no
veto files = /.snap/.sujournal/
hide dot files = no
guest ok = yes
vfs objects = zfsacl aio_pthread 
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = yes
veto files = /.zfs/

The last two options in the global section I've added via the GUI, but there doesn't seem to be any merging of repeated options (like socket options). Is Samba ignoring the first or last lines, by chance, or is it honoring both? I was experimenting with kernel oplocks, but I doubt N4F has a kernel that supports it. Samba is supposed to ignore that option if the kernel doesn't support it.

Anyway, thanks for giving this another set of eyeballs. Maybe something obvious will leap out at you that hasn't at me.

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

With regard to breaking pools up, please have a look at this post on ZFS stripe width, particularly for performance on random IOPS: http://blog.delphix.com/matt/2014/06/06 ... ipe-width/ - ZFS tries to read and write to striped vdevs in parallel wherever possible.

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

Just a thought - how is the health of your zpool?

Code: Select all

zpool status
Have you tried a local performance benchmark to make sure your pool is performing as expected? Also, maybe create a memory disk and share that over Samba, to localise the problem.

I can't spot a problem in your samba config.
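For anyone wanting to try the memory-disk test, a minimal FreeBSD sketch (the size, unit number, and mount point are examples, and it requires root):

```shell
# Create a 4 GB swap-backed memory disk; mdconfig prints the unit, e.g. "md0"
mdconfig -a -t swap -s 4g

# Put a filesystem on it and mount it (assuming unit md0 was returned)
newfs -U /dev/md0
mkdir -p /mnt/ramtest
mount /dev/md0 /mnt/ramtest

# ... point a temporary Samba share at /mnt/ramtest and rerun the
# small-file extraction from a Windows client ...

# Tear it down afterwards
umount /mnt/ramtest
mdconfig -d -u 0
```

If the share on the memory disk is fast, the pool is the suspect; if it is still slow, the problem is in Samba or the network path.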

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

The zpool reports online and normal. No known data errors.

I'll try the memory disk test. What's your favorite local disk performance test in N4F? I'm only loosely familiar with a couple common Linux tools to measure that sort of thing.

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

I'd suggest looking for those Linux tools on https://freshports.org/ - you'll find most of them have been ported to FreeBSD (or the other way around). That way you are automatically calibrated to what good performance looks like.


Personally I like iozone, it's a great tool.

User avatar
erico.bettoni
experienced User
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Pathological Samba performance when writing small files

Post by erico.bettoni »

Could you post the output of zpool status?

User avatar
daoyama
Developer
Developer
Posts: 394
Joined: 25 Aug 2012 09:28
Location: Japan
Status: Offline

Re: Pathological Samba performance when writing small files

Post by daoyama »

digitalis99 wrote: The last two options in the global section I've added via the GUI, but there doesn't seem to be any merging of repeated options (like socket options). Is Samba ignoring the first or last lines, by chance, or is it honoring both?
You can see the current (merged) config with:

# testparm -sv
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

zpool status:

Code: Select all

  pool: pool1
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	pool1           ONLINE       0     0     0
	  raidz2-0      ONLINE       0     0     0
	    mfisyspd0   ONLINE       0     0     0
	    mfisyspd1   ONLINE       0     0     0
	    mfisyspd2   ONLINE       0     0     0
	    mfisyspd3   ONLINE       0     0     0
	    mfisyspd4   ONLINE       0     0     0
	    mfisyspd5   ONLINE       0     0     0
	    mfisyspd6   ONLINE       0     0     0
	    mfisyspd7   ONLINE       0     0     0
	    mfisyspd8   ONLINE       0     0     0
	    mfisyspd9   ONLINE       0     0     0
	    mfisyspd10  ONLINE       0     0     0
	    mfisyspd11  ONLINE       0     0     0
	    mfisyspd12  ONLINE       0     0     0
	    mfisyspd13  ONLINE       0     0     0
	    mfisyspd14  ONLINE       0     0     0
	    mfisyspd15  ONLINE       0     0     0
	    mfisyspd16  ONLINE       0     0     0
	    mfisyspd17  ONLINE       0     0     0
	    mfisyspd18  ONLINE       0     0     0
	    mfisyspd19  ONLINE       0     0     0
	    mfisyspd20  ONLINE       0     0     0
	    mfisyspd21  ONLINE       0     0     0
	    mfisyspd22  ONLINE       0     0     0
	    mfisyspd23  ONLINE       0     0     0

errors: No known data errors
Thanks for the testparm pointer. It appears the socket options specified in the GUI override those defined elsewhere. That's not going to help me. I'll copy the other parameters into that line and try again.

User avatar
erico.bettoni
experienced User
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Pathological Samba performance when writing small files

Post by erico.bettoni »

WOW - I've never seen such a pool. I would NOT recommend that setup.
You should try 4 x raidz2 with 6 disks each.

I'm guessing that calculating parity for that pool will never be efficient... no matter the CPU...

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

OK...I need all of the available space to show up under a single Samba share. I don't see why parity calculations would take less time with fewer disks in the pool, unless ZFS does the stripe write, reads the stripe back, generates and writes the parity from the read data (which, I believe, would be a fundamentally flawed approach). ZFS is still calculating two parity copies no matter how many disks are in the pool.

That said, I still see one Samba process (per writing client) go CPU bound with little to no CPU activity from anything else (including interrupts). It's all user-mode time, so I don't think ZFS is getting in the way.

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

A simple iozone test, just for kicks:

Code: Select all

        Auto Mode
        Record Size 4096 KB
        File size set to 16000 KB
        Using maximum file size of 12000000 kilobytes.
        Command line used: ./iozone -a -i 0 -i 1 -r 4096 -s 16000 -g 12000000 -f /mnt/pool1/iozone.tst
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
           16000    4096 5670045 6467670 10684428 11851439
           32000    4096 5703589 6100247  8838806  9394649
           64000    4096 2461633 2579874  3973093  3966427
          128000    4096 2089969 2003816  3866620  3862050
          256000    4096 2171236 2672307  3661689  3715087
          512000    4096 2131184 2371623  7818333  8015786
         1024000    4096 4524367 4274523  7562603  7942658
         2048000    4096 3364497 4378444  6190547  4604068
         4096000    4096 2664902 2059480  7926064  7957431
         8192000    4096 2229723 2456904  7726682  7856052

iozone test complete.
2.0-5.6 GB/sec writes (if I'm reading that correctly) seems pretty good. :o

Samba still seems to be the performance culprit, even after getting all of my socket options into place. I did a search of the testparm output for all lines with "max" in them, but didn't see too much that stood out. I find it odd I can have one client writing one large set of small files to the N4F box and still get good throughput if writing from the same client with large files. If a single client is moving two large sets of small files to the N4F box, all other writes by that client perform at the level of the small file writes. It's like Samba only allows 2 processes per client, and then all I/O gets handled by one of those two processes regardless of how many additional transfer sessions from that client are established.

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

On the samba threading issue - that might be a question for samba.org or freebsd.org.

On the pool strategy - I think you might misunderstand how zfs pools work
I need all of the available space to show up under a single Samba share
A zpool treats vdevs basically as smart RAID0 stripes (not 100% accurate, but a useful way to think about it). So whether you create 1 vdev or 23 vdevs, if they are in the same pool you still get one pool.

On this pool you can create as few or as many datasets as you want - in your case create 1 and share it.

Erico mentioned using raidz2 - you should look up how likely it is that you will get a two-disk failure with your disk size. The big issue with raidz1 is getting a second device problem during a rebuild. In your case, when ZFS rebuilds it has to touch every disk, and more disks = slower rebuilds. However, all the conventional wisdom relates to pools of spinning disks; SSDs will rebuild faster, so you'll probably have to work this one out for yourself.
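For concreteness, the 3x8 layout I mentioned would be built roughly like this (device names taken from the zpool status output earlier in the thread; this is a sketch, not a tested recipe, and it DESTROYS the existing pool, so back up the data first):

```shell
# DESTRUCTIVE: recreates pool1 as three 8-disk raidz2 vdevs
zpool destroy pool1
zpool create pool1 \
  raidz2 mfisyspd0  mfisyspd1  mfisyspd2  mfisyspd3  mfisyspd4  mfisyspd5  mfisyspd6  mfisyspd7 \
  raidz2 mfisyspd8  mfisyspd9  mfisyspd10 mfisyspd11 mfisyspd12 mfisyspd13 mfisyspd14 mfisyspd15 \
  raidz2 mfisyspd16 mfisyspd17 mfisyspd18 mfisyspd19 mfisyspd20 mfisyspd21 mfisyspd22 mfisyspd23
zpool status pool1
```

ZFS stripes writes across the three vdevs, so random IOPS should roughly triple compared with a single 24-disk raidz2, at the cost of four extra disks going to parity.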

Also have a look at this re performance https://blogs.oracle.com/roch/entry/when_to_and_not_to
and the https://forums.freebsd.org/threads/zfs- ... devs.4641/

btw - I am not sure what tool you used besides top, but you can have high CPU utilisation in top while a process waits for IO (I don't know if that is what is happening here), so it is possible you have ZFS IOPS issues, which would affect small files more than large ones.

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

Parkcomm wrote:On the pool strategy - I think you might misunderstand how zfs pools work

A zpool treats vdevs basically as smart RAID0 stripes (not 100% accurate, but a useful way to think about it). So whether you create 1 vdev or 23 vdevs, if they are in the same pool you still get one pool.

On this pool you can create as few or as many datasets as you want - in your case create 1 and share it.

Erico mentioned using raidz2 - you should look up how likely it is that you will get a two-disk failure with your disk size. The big issue with raidz1 is getting a second device problem during a rebuild. In your case, when ZFS rebuilds it has to touch every disk, and more disks = slower rebuilds. However, all the conventional wisdom relates to pools of spinning disks; SSDs will rebuild faster, so you'll probably have to work this one out for yourself.
Thanks for the pointers. I read up on the information there. You are correct in that I didn't fully understand how ZFS pools worked. I'm still somewhat confused, coming from years and years of hardware RAID methodology, but some things are becoming clearer. The ZFS-writes-at-the-speed-of-a-single-disk-in-the-vdev thing is probably killing me here. My drives are all 120GB SSDs, so at least the performance penalty for resilvering shouldn't be too great.

So, if I want to break my pool up into 3-4 vdevs...I pretty much have to start over from scratch, right? I only have about 19GB stored in the pool right now, but it doesn't appear that I can remove devices from the vdev in order to shrink it.

User avatar
erico.bettoni
experienced User
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Pathological Samba performance when writing small files

Post by erico.bettoni »

Yes, you have to start from scratch...
As of now you cannot shrink a pool, only grow it by adding vdevs or by changing all the disks in a vdev to bigger ones, so using standard vdev configurations will make things a lot easier.

The more vdevs you have, the faster the pool will be. Right now you have 2 disks "lost" to parity. If you use 4 x raidz2 with 6 disks each you will lose 8 disks, which seems reasonable for your server.

Another good setup for performance would be 5 x raidz1 with 5 disks; you would lose only 5 disks to parity, but would have to add 1 more SSD to the server somehow.

As performance is your goal, try to keep the number of non-parity disks in a vdev a power of 2, because on your setup I think it will make a difference...
Last edited by erico.bettoni on 16 Nov 2015 21:18, edited 1 time in total.

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

Cool, thanks for the input.

I actually figured out why Samba performs so poorly when writing thousands of files to the same directory...finally. For everyone's edification, the answer is here:

https://forums.freebsd.org/threads/samb ... ems.49732/

"Case sensitive = Auto" is bad, apparently. Forcing it to "yes" made Samba write performance WAY better on my N4F box. With some ZFS re-arranging (thanks to you all), I can probably make it even better.

This was difficult enough to find, and appears to be a problem encountered by a small group of people. It may make sense to add that setting to the N4F GUI, or make the default be "yes" instead of the Samba default of "Auto".
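For reference, the setting goes in the share (or global) section of smb4.conf - a minimal example based on the [jobs] share posted earlier:

```
[jobs]
	# Avoid scanning the whole directory on every file create to look for
	# a case-insensitive name match; note the caveat about NTFS links
	# discussed later in this thread
	case sensitive = yes
```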

User avatar
erico.bettoni
experienced User
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Pathological Samba performance when writing small files

Post by erico.bettoni »

Thanks! I will try this myself...
Also, mapping attributes and extended attributes has a negative impact on performance.
If you can, disable them on both the dataset and in Samba.

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

New one on me - good find.

What throughput are you getting now?

User avatar
erico.bettoni
experienced User
experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Pathological Samba performance when writing small files

Post by erico.bettoni »

Well... I have to thank you digitalis99.
I have a backup job which got 40% faster thanks to this setting.
I think the N4F default should be changed.

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

Parkcomm wrote:New one on me - good find.

What throughput are you getting now?
My write speeds went from under 1Mbps to around 30Mbps when writing to folders that hold thousands of files. I thought the problem was writing small files... it turns out it was Samba reading all of the files in any given folder before each write, to make sure there was no name collision.

I have found a problem with =yes, though. It turns out that any NTFS link (via the old mklink utility on Windows) that points to the UNC path of the Samba share cannot open folders in the share that have lower-case letters in the folder name. Samba case sensitivity needs to be set to =auto or =no for that to work. Somewhat ironically, Windows itself is not case sensitive when reading from a share, but Samba's default is =auto instead of =no to allow Linux and other non-Windows clients to connect to the share and do case-sensitive operations on it.
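The cost is easy to see at the filesystem level: an exact lookup is a single stat(), while a case-insensitive match has to compare against every entry in the directory, which is roughly what smbd ends up doing per created file when "case sensitive" is not "yes". A toy illustration (the /tmp path and file count are arbitrary):

```shell
# Populate a directory with many files
dir=/tmp/casedemo
mkdir -p "$dir"
i=0
while [ "$i" -lt 1000 ]; do
    : > "$dir/file$i.dat"
    i=$((i+1))
done

# Exact lookup: one stat() call, independent of directory size
test -e "$dir/file500.dat" && echo "exact hit"

# Case-insensitive lookup: must compare against all 1000 entries
ls "$dir" | grep -ic '^FILE500\.DAT$'
```

With thousands of files per folder, doing that scan once per extracted file multiplies into the pathological slowdown described above.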

User avatar
Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Pathological Samba performance when writing small files

Post by Parkcomm »

Thanks for that - looking back it makes total sense.

I would still recommend you look at your vdev strategy, and if you do, I would be interested in any performance tests; certainly reads will be faster.

digitalis99
Starter
Starter
Posts: 15
Joined: 21 Aug 2015 02:00
Status: Offline

Re: Pathological Samba performance when writing small files

Post by digitalis99 »

Parkcomm wrote: I would still recommend you look at your vdev strategy, and if you do, I would be interested in any performance tests; certainly reads will be faster.
Why read? I thought it was supposed to be write, primarily, that would increase in performance. No more writing to 24 drives at the speed of 1.
