Hello,
I have created an 18 TB file extent on my 18 TB RaidZ, because performance over Samba wasn't good for my use case.
Write speed is 20 MB/s; read speed is about 70 MB/s.
The same RaidZ over Samba does about 100 MB/s read and write.
Local write speed with dd is about 200 MB/s, and local read speed is 150 MB/s.
I used a ZFS kernel tuning plugin and changed the setting to "6GB memory". Before that, read speeds were higher than write speeds.
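The local numbers above can be reproduced with a simple sequential dd test. This is a hedged sketch of how such a measurement might be run; the mount point `/mnt/raidz/testfile` and the 1 GiB size are illustrative, not from the original post, and a real benchmark should use a file larger than ARC to avoid cache effects.

```shell
# Sequential write test: 1 GiB of zeros in 1 MiB blocks (path is illustrative)
dd if=/dev/zero of=/mnt/raidz/testfile bs=1M count=1024

# Sequential read test: read the file back, discarding the data
dd if=/mnt/raidz/testfile of=/dev/null bs=1M

# Clean up the test file
rm /mnt/raidz/testfile
```

dd prints the elapsed time and throughput on completion; note that a freshly written file may still be cached in ARC, so the read figure can be optimistic.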
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/move important posts from here to the new main forum!
It's not possible for us to export from here and import into the main forum!
Bad Performance in 18 TB extent.
thedoginthewok (NewUser, 5 posts, joined 02 Sep 2013)
raulfg3 (Site Admin, 4865 posts, joined 22 Jun 2012, Madrid, ESPAÑA)
Re: Bad Performance in 18 TB extent.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 11x3TB disks in 1 vdev = 32TB raw size, so 29TB usable (I have another NAS as backup)
Wiki
Last changes
HP T510
thedoginthewok (NewUser, 5 posts, joined 02 Sep 2013)
Re: Bad Performance in 18 TB extent.
I already found the problem.
I set the "Logical Block Length" option of the iSCSI extent to 4096, and now the performance is much better. It maxes out the capabilities of the NIC.
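A 4096-byte logical block length avoids read-modify-write overhead when the initiator's I/O is aligned to 4K. As a hedged sketch (the device name `sdX` is a placeholder, and this assumes a Linux iSCSI initiator), one might verify what block size the target now reports:

```shell
# On a Linux initiator, check the block sizes the iSCSI-backed device
# reports after logging in. "sdX" is a placeholder for the actual device.
cat /sys/block/sdX/queue/logical_block_size    # should read 4096 after the change
cat /sys/block/sdX/queue/physical_block_size
```

Note that changing the logical block length of an existing extent usually requires reformatting the initiator-side filesystem, since partition and filesystem layouts are tied to the old sector size.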