This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

ZFS drive configurations

miercoles13
NewUser
Posts: 6
Joined: 01 Jul 2012 17:32
Status: Offline

ZFS drive configurations

Post by miercoles13 »

Hello everyone,

I just recently completed my first NAS4Free build, which I have to say has been running as smoothly as it can get; yes, I'm maxing out my GigE link and I love it. I use this box as an iSCSI server for 2 ESX servers and as an SMB/CIFS server for a set of computers (4~6), mostly home usage, with some heavy usage every once in a while when I decide to move 100~200 gigs of data around media center PCs and such. Anyhow, I currently have 12 drives in the following format:

2x 30GB SSD cache drives (internal, non-hot-swappable)
6x 750GB + 4x 1TB drives in a RAIDZ2 (hot-swappable), reporting about 6.8TB with about 2.47TB used (still testing)

I have been doing some reading on how best to organize my drives, and I seem to understand I would get better performance and redundancy if I created smaller drive groups, as in the following example:

4x 1TB RAIDZ1, 3TB usable (already owned)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 750GB RAIDZ1, 2.2TB usable (already owned)

I would have to give up 2 of my 750GB drives, but I have plans for them in another server. A setup as outlined above would grant me around 14.2TB by my guess, and much better throughput when it comes to VM performance. Anyone care to comment? This is still chicken scratch in my book, but very close to coming alive if I can gather more information.
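Back-of-the-envelope math for that 14.2TB guess (a hypothetical sketch; the helper names are mine, and ZFS metadata/slop overhead would trim a few percent further):

```python
def raidz1_usable_tb(disks, disk_tb):
    """Usable space of a RAIDZ1 vdev: one disk's worth goes to parity."""
    return (disks - 1) * disk_tb

def tb_to_tib(tb):
    """Drives are sold in decimal TB, but zpool reports binary TiB."""
    return tb * 10**12 / 2**40

# Four 4x 1TB RAIDZ1 vdevs plus one 4x 750GB RAIDZ1 vdev, striped in one pool.
vdevs = [raidz1_usable_tb(4, 1.0)] * 4 + [raidz1_usable_tb(4, 0.75)]
total_tb = sum(vdevs)            # 14.25 decimal TB, close to the 14.2 guess
total_tib = tb_to_tib(total_tb)  # ~12.96 TiB as ZFS would actually report it
```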
Last edited by miercoles13 on 25 Jul 2012 16:08, edited 1 time in total.

miercoles13
NewUser
NewUser
Posts: 6
Joined: 01 Jul 2012 17:32
Status: Offline

Re: ZFS drive configurations

Post by miercoles13 »

Shameless bump!

I have the money saved up now for the first group of 4 drives, so I can begin reworking my storage pool. I'd love to get some feedback if anyone has good experience on this side. As a side note, I have been reading a bit about the ZIL being on SSD, which points to it sometimes being a bottleneck. On the other hand, for FreeNAS 8 it is mentioned that you must have redundancy for the ZIL through at least a mirror, but some other reading I have done says this is no longer the case in newer ZFS revisions.

User avatar
Earendil
Moderator
Posts: 48
Joined: 23 Jun 2012 15:57
Location: near Boston, MA, USA.
Status: Offline

Re: ZFS drive configurations

Post by Earendil »

I must have missed this earlier, because I have some comments:

1. Envy! I thought I had enough for a home user but you take it to another level.

2. Why 1 TB HDDs when 2 TB HDDs on NewEgg have come down to $100 each? Just an observation.
miercoles13 wrote:I been doing some reading on how to best organize my drives and I seem to understand I would get better performance and redundancy if I created smaller drive groups as the following example:
3. Not quite understanding where this may have come from, because I never got this from my ZFS reading. Certainly for data arrangements it can help, but for better redundancy? I don't see it. Performance may in fact increase, but then I've heard lots of memory in a ZFS NAS helps even more, so as for many smaller pools instead of a couple of big pools, I don't see it either. Still, you never know. I happened to have a few 1 TB HDDs around and rounded it out to four 1 TB HDDs to make the first pool; otherwise I would have had one big 2 TB HDD pool. Glad I did, though: most usage is on one pool, while the other HDDs are only rarely used, reducing my electric bill.

4. Actually in my 4x 1 TB HDD RAIDZ1 usable space is 2.62 TB. Just fyi. :)
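The 2.62 figure is mostly the decimal-vs-binary gap; a quick sanity check (my arithmetic, not anything from zpool output):

```python
# A "1 TB" drive holds 10**12 bytes; ZFS reports in binary TiB (2**40 bytes).
DISK_BYTES = 10**12
data_disks = 3                             # 4-disk RAIDZ1: one disk is parity
raw_tib = data_disks * DISK_BYTES / 2**40  # ~2.73 TiB before any overhead
# ZFS keeps a slice for metadata and slop space, landing near the 2.62 reported.
```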

5. I have NEVER gotten my GbE network to max out, and my file transfer speeds are low, about 35-40 MB/sec under perfect conditions. Of course, I also have not put a lot of effort into it, because I know from the small effort I did make that I got little increase in file transfer rates. I would like to know all the things you did to max out your file transfers: ZFS tuning, SMB/CIFS tuning, etc.

6. The need for speed through the SSDs for ZFS caching: why? You mention media players, and I will assume the worst. One 1080p Blu-ray video is specified at a max data transfer rate of 54 Mb/sec, which is about 5-6 MB/sec sustained. All media players buffer, but not a lot, so doubling that should cover any stutters. The future is 4x 1080p monitors, so four times the max data rate would mean roughly 24 MB/sec maximum for media. If the reason is faster file transfers (my reason), or just to get max performance, or just to say you're the fastest, then I withdraw my "why?". I question it because it costs, and SSDs may not last as long as mechanical HDDs (I understand the SSD specs, but some people are still not fully convinced for the long haul without 5-10 years of actual use occurring).
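The arithmetic behind those estimates is just a bits-to-bytes conversion; sketched below (peak rates, so the four-stream figure lands slightly above the ~24 MB/sec sustained estimate):

```python
def mbit_to_mbyte(mbit_per_s):
    """Codec and link rates are quoted in megabits; file transfers in megabytes."""
    return mbit_per_s / 8

bluray_peak = mbit_to_mbyte(54)   # 6.75 MB/sec peak for one 1080p Blu-ray stream;
                                  # sustained averages fall in the 5-6 MB/sec range
four_streams = 4 * bluray_peak    # ~27 MB/sec peak for four simultaneous streams
```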

7. Are you saying everything (i.e. N4F) is in a VM? Or are you comparing N4F to a cloud on a VM? Confused...

8. FreeNAS 8 direction: yuck, phooey! :roll:
Earendil

XigmaNAS server:
-AMD A10-7860K APU
-Gigabyte F2A88XM-D3HP w/16GB RAM
-pool0 - 4x 2 TB WD green HDDs
-pool1 - 6x 8 TB WD white HDDs
-Ziyituod (used to be Ubit) SA3014 PCI-e 1x SATA card
-External Orico USB 3.0 5 bay HDD external enclosure set at RAID 5
--5x 4 TB WD green HDDs
-650W power supply

miercoles13
NewUser
Posts: 6
Joined: 01 Jul 2012 17:32
Status: Offline

Re: ZFS drive configurations

Post by miercoles13 »

Oh man, thanks for the reply :) I have been doing as much as I can solo, but sometimes I fall into a spot where I can't find all of the information or examples. Anyhow, to answer some of the points you brought up:
Earendil wrote: 2. Why 1 TB HDDs when 2 TB HDDs on NewEgg have come down to $100 each? Just an observation.
I currently have the WD Black RE3 1TB drives and have experienced extremely good performance for the past 2 years. I want to continue using WD RE drives, and currently the 1TB drives are definitely within range. I would love to jump to 2TB RE4 drives, but they are just so expensive at the moment. To further expand on this, I often justify my choice of drives by the punishment I give them: I run VMware on 2 servers hosting 8~14 VMs, running everything from simple DCs to medium-sized databases/applications that let me practice a lot of stuff for work (SCCM/SCOM/SCSM...).

#3
I guess I was a bit broad. I have done some reading ranging from blogs, the FreeNAS site, NAS4Free, hardops forums and such, to understand how other folks configured their setups. I noticed a few times where people created several vdev groups of 3~5 drives in RAIDZ1 and bound them into a pool, which would look somewhat similar to what I'm picturing below:

Code:

	NAME        STATE     READ WRITE CKSUM
	pool1       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	  raidz1-1  ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	    da7     ONLINE       0     0     0
	  raidz1-2  ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	    da9     ONLINE       0     0     0
	    da10    ONLINE       0     0     0
	    da11    ONLINE       0     0     0
If I understand this correctly, pool1 would have the storage from raidz1-0, -1 and -2 combined: let's say 2.62TB * 3 = 7.86TB approximately. In addition, each raidz1 could have 1 HD fail without complete failure. As for performance, I can only guess: improved!
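That reading matches how ZFS stripes across top-level vdevs; a small sketch of both properties (hypothetical helper, using the 2.62 TiB per-vdev figure from above):

```python
def pool_usable(vdev_usable):
    """A pool stripes across its top-level vdevs, so usable space is the sum."""
    return sum(vdev_usable)

total = pool_usable([2.62, 2.62, 2.62])   # ~7.86 TiB across raidz1-0/1/2

# Redundancy caveat: each RAIDZ1 vdev tolerates one failed disk, but a second
# failure inside the SAME vdev loses the whole pool, not just that vdev.
```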
Earendil wrote: 4. Actually in my 4x 1 TB HDD RAIDZ1 usable space is 2.62 TB. Just fyi. :)
You made me sad ;) well, only a bit; I knew it was a rough guesstimate.

#5
See the attachment: I performed benchmarks from VMs hosted on each of my ESX hosts, with iSCSI and round robin enabled. I guess I may have been able to push it higher if I had done a file copy from my machine at the same time, but when I saw this performance, it greatly surpassed anything FreeNAS 8 could ever hope for :) I was more than happy.

#6
This is only my need4speed; well, it's actually more a case of: if I'm going to spend X amount of money on good drives and the ZIL will be my bottleneck, I figured I'd slap a decent Intel SSD on it. Will I need it? Not sure, but my usage varies: my VM environment can be idle for a good amount of time with passive synchronizations, updates and scheduled jobs, up to the point where things get busy and gigs upon gigs of data need to be pushed through.

#7
I think this relates to #6, but the NAS4Free server and two PowerEdge 2950s are physical machines in my server room. All of my other server OSes are VMs hosted on VMware by the two 2950s. I have a set of desktop/workstation systems throughout the house that also connect to the N4F system for common share access. Within the N4F system I have 760 gigs shared out through an iSCSI LUN to my virtual environment, which has been growing. I have been testing a new product I wanted to get familiar with, and it may grow into another 760 gigs. I have about 1.4TB of data as regular SMB/CIFS shares, which is definitely going to grow whenever I finish working on the NAS and a separate backup.

Not sure if that answers your question. I'm not comparing it to a cloud system; I just read around that N4F uses a much later ZFS version, which does not crap out completely if a ZIL drive takes a dive.

#8
Yeah, I tried it and quickly looked away.

cancerman
Starter
Posts: 33
Joined: 23 Jun 2012 07:27
Status: Offline

Re: ZFS drive configurations

Post by cancerman »

1. I would just go with WD Black drives and leave the RE drives on the shelf. Same disks, warranty, performance, etc., better price. You won't be using the RAID-ready firmware that you're paying for in the RE drives.

2. The prevailing thought for RAIDZ1 pools is to have 3, 5 or 9 disks, and 4, 6 or 10 for Z2. Read more here and here.
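That rule of thumb comes from keeping a power-of-two number of data disks per vdev, so records split evenly across them; sketched below (my helper, not output from any tool):

```python
def recommended_widths(parity):
    """Vdev widths with a power-of-two data-disk count (2, 4 or 8) plus parity."""
    return [data + parity for data in (2, 4, 8)]

raidz1_widths = recommended_widths(1)   # 3, 5 or 9 disks for RAIDZ1
raidz2_widths = recommended_widths(2)   # 4, 6 or 10 disks for RAIDZ2
```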
Nas4Free 9.1.0.1.775. EP43T-UD3L, 12GB, Q6600, Supermicro USAS-L8i with IT firmware, 4x 2TB WD Green, 4x 1.5TB WD Green, 3x 1TB Samsung F4, 3x 1TB Seagate Barracuda, 2x 1TB Hitachi Deskstar, OCZ SSD for L2ARC, Mirrored Corsair SSDs for ZIL.

miercoles13
NewUser
Posts: 6
Joined: 01 Jul 2012 17:32
Status: Offline

Re: ZFS drive configurations

Post by miercoles13 »

I just received a Newegg email for a Western Digital Caviar Black WD2002FAEX 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive (bare drive) at $169; that seems like a pretty damn good deal for the performance and 5-year warranty. Anyone know if the RE drives are preferred over the Caviar Black? All I heard was that the RE does not spin down like regular desktop drives.

cancerman
Starter
Posts: 33
Joined: 23 Jun 2012 07:27
Status: Offline

Re: ZFS drive configurations

Post by cancerman »

As far as I know, the only difference is that the RE firmware works better with hardware RAID than the consumer firmware does. Those issues never come into the picture with ZFS.

