This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Best ZPOOL build with avail drives?

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Best ZPOOL build with avail drives?

Post by errfoil »

Greetings,

I am upgrading a 4-year-old FreeNAS (7.2) install because the two 2TB WD Green drives were full. I just purchased three 3TB WD Red drives to use in my 3 empty SATA slots. While waiting for the new drives to arrive I have been reading up on NAS4Free and ZFS, and I realize I may have screwed up my selection of drives as I've learned more about RAIDZ2.

This NAS is used as a home media server for pictures, music, and AV files (serving, not streaming). In the past I didn't care about redundancy, but as my collection has grown over the years (450 movies) I am getting a bit more worried about a total system failure. I could always rerip...but that is getting increasingly worrisome.

I am asking for advice on the optimal installation for my given hardware:
1-A single VDev containing all drives. Gives me maximum storage. I have to just hope SMART warns me in time to replace a drive before it dies. Is this really dicey, or is modern monitoring pretty good?
2-Create a single VDev with all drives in RAID-Z2. This gives me 2 drive failure protection but cuts my storage capacity to 6TB...is this correct?
3-Is there any advantage to putting the 2x2TB in one VDev and the 3x3TB in another VDev? Did I mention I am a NOOB!
4-Is this all foolish long term, meaning I should just bite the bullet and order 2 more 3TB drives, or in this case should I get 3 more for a true RAID-Z2?

Hardware:
Supermicro X7SPA-H (Intel ATOM D510)
4GB RAM
3x WD30EFRX
2X WD20EARS
6x SATA ports on the motherboard. I thought I had 5 available for storage, but I just learned how to install without a CD drive...FWIW.
NAS4Free-x64-embedded-9.1.0.1.636 on internal USB drive

TIA,
BC
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

hi!
I could always rerip...but that is getting increasingly worrisome.
:lol: you really want to do that? spend all that time and the money for power? forget about it!
1-A single VDev containing all drives. Gives me maximum storage. I have to just hope SMART warns me in time to replace a drive before it dies. Is this really dicey, or is modern monitoring pretty good?
forget about this, too. smart CAN give a warning, but most of the time the disk just dies and your data is gone. btw, it depends on how you configure the vdev.
2-Create a single VDev with all drives in RAID-Z2. This gives me 2 drive failure protection but cuts my storage capacity to 6TB...is this correct?
-> vdev stripe, 3x3TB -> 9TB space -> one disk dies, everything is gone
-> vdev mirror, 3x3TB -> 3 TB space -> two disks can die, data still on the third disk
-> vdev raidz1, 3x3TB -> 6 TB space -> one disk can die, data still on the 2 other disks
-> vdev raidz2 -> 2 disks can die, but you should use raidz2 with a minimum of 4 disks; with 4x3TB you'll get 6 TB of space
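For reference, the layouts above map to `zpool create` commands roughly as follows. This is only a sketch: the pool name `tank` and the `ada0`-`ada3` device names are placeholders for your actual disks (on FreeBSD-based NAS4Free, `camcontrol devlist` shows them).

```shell
# Stripe: 3x 3TB -> ~9 TB, no redundancy (one dead disk loses everything)
zpool create tank ada0 ada1 ada2

# Three-way mirror: 3x 3TB -> ~3 TB, survives two failures
zpool create tank mirror ada0 ada1 ada2

# RAID-Z1: 3x 3TB -> ~6 TB, survives one failure
zpool create tank raidz1 ada0 ada1 ada2

# RAID-Z2 (4+ disks recommended): 4x 3TB -> ~6 TB, survives two failures
zpool create tank raidz2 ada0 ada1 ada2 ada3
```

These commands are destructive to whatever is on the disks, so double-check the device names first.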

watch this:
https://docs.google.com/file/d/0BzHapVf ... edit?pli=1
3-Is there any advantage to putting the 2x2TB in one VDev and the 3x3TB in another VDev? Did I mention I am a NOOB!
watch this:
https://docs.google.com/file/d/0BzHapVf ... edit?pli=1

if you use 2x2TB in a mirror and 2x3TB in a mirror and give your pool these 2 vdevs, you'll have 5TB usable space, and the pool itself acts as a stripe across them, which gives the most I/O. one disk per vdev can die; a whole dead vdev destroys the complete pool.
4-Is this all foolish long term, meaning I should just bite the bullet and order 2 more 3TB drives, or in this case should I get 3 more for a true RAID-Z2?
do it right and think about long-term usage!
if I were you, I would order 3 more 3TBs so you can make a raidz2 vdev -> gives you pool1 with 12 TB usable space where 2 disks can die. just to be really safe!
then create a 2x2TB mirror vdev, which gives you pool2 with 2 TB usable space for the 'not so important' files.

14 TB in total and safe. not bad, huh? :D

and change the spindown parameter in the wd green firmware, as its aggressive spindown will stress the disks too much! -> viewtopic.php?f=66&t=2539

the wd reds are good for NAS usage. they were built for this purpose.

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by errfoil »

Hi rostreich, thanks for the quick reply.
-> vdev raidz1, 3x3TB -> 6 TB space -> one disk can die, data still on the 2 other disks
do it right and think about long-term usage!
if I were you, I would order 3 more 3TBs so you can make a raidz2 vdev -> gives you pool1 with 12 TB usable space where 2 disks can die. just to be really safe!
then create a 2x2TB mirror vdev, which gives you pool2 with 2 TB usable space for the 'not so important' files.
How about this idea; I use your suggestion above with just 3 drives in 1 VDev. Then later I buy 3 more red drives and add them to a second VDev which I add to my existing ZPool. This gets me up and running now, with the ability to get to the configuration you mention above, later on. I realize this option limits me to 1 failure per VDev, but losing 2 drives at the same time seems like a rare occurrence for a home server.

What if I use the empty SATA slots short term for UFS Usenet drive(s), since that is pretty hard on a drive, eventually replacing my Usenet UFS drive with 3 new Red drives for VDev2?
and change the spindown parameter in the wd green firmware, as its aggressive spindown will stress the disks too much! -> viewtopic.php?f=66&t=2539
Thanks, I'll make sure I change that on the 'greens'. There are no problems with the 'reds', right?

How about my 4GB of RAM? I am worried about all this ZFS storage space with so little RAM. This MB is capped at 4GB, so my only option is a new MB. My use of services will be pretty light, I think: CIFS, maybe a torrent and/or Usenet client. I would like to run Squeeze Server, but I think that project died on the vine.

Is the ZFS memory impact any different if I use 1 vs. 2 VDevs?
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by b0ssman »

with your selection of drives you are a bit stuck for zfs.

one option to consider is unraid.

in unraid you can combine drives of different sizes and use the biggest drive as parity.
the array survives one failure; on a second failure you only lose the data on the drives that died.
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

Earendil
Moderator
Posts: 48
Joined: 23 Jun 2012 15:57
Location: near Boston, MA, USA.
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by Earendil »

I am NOT saying Rostreich is wrong, but consider, Errfoil, exactly what you are trying to protect, exactly what you want, and what you can afford. I agree with Rostreich's post; it's great if you just consider HDD failure, yet you mention re-ripping, which implies the possibility of both HDD failure AND corruption of data.

Rostreich covers all the HDD failure cases (in detail! :)). I'll just say from my own experience that it's always better to have an even number of HDDs in any pool, starting at four, especially if they are the same size. One exception is a single mirror of two HDDs. 'Nuff said.

The other thing is corrupt data, and you do not mention backups. A simple concept but expensive to execute, as you have to have twice as much storage space. Remember, mirrors are NOT backups (see innumerable articles on the Internet to "back up" this claim).

The only other thing I'd mention is to ensure you understand what a vdev does within ZFS. As I understand it, the majority of ZFS' magic in preventing and staying on top of data corruption happens within a vdev, not across them. This goes for block sizes as well, which get more efficient as vdevs get bigger (in direct contradiction to most other file systems, due to the number of blocks ZFS can handle and its variable block sizes). The bottom line I've gleaned from what I've read on ZFS is to keep as many HDDs as possible in a vdev. For example, in my main NAS my four 1 TB HDDs are in one vdev, which is also one pool. The other six 2 TB HDDs are in another vdev, which is also another pool. One disadvantage of a single-vdev pool is that you cannot increase its size. There are two exceptions:
  • add another vdev, but then you have more than one vdev per pool;
  • swap out each HDD in the vdev for a larger HDD; in the end you will have a larger vdev, but this is expensive.
So, to throw in my two cents:
  • Keep all HDDs of the same size in their own vdevs.
  • Make each vdev its own RAIDZ1 pool (RAIDZ2 is expensive, and HDDs that fail mostly do so within the first 3 months anyway, so after 3 months the HDDs will probably last for years).
  • Unraid is fine if you have many different HDD sizes and need a JBOD array with some redundancy (you'll lose something but not everything), but that's not the case here, and you'd leave behind all of ZFS' goodness.
  • I think anything but green HDDs in a NAS is a waste unless you require very high throughput (i.e. enterprise database access, over a hundred MB [that's BYTES] per second). Sure, green HDDs can barely do 40 MB/sec, but they are thrifty with power, and the current Blu-ray specification is 54 Mb/sec. Even the higher bandwidths coming for 2160p (four 1080p screens) will be only 210 Mb/sec, and green HDDs do close to 400 Mb/sec (rule of thumb: 1 MByte/sec is approximately 10 Mbit/sec). In a RAID5 (which is roughly what RAIDZ1 is) you will be even faster, since some HDD access is parallel!
  • More memory always makes ZFS happy, but my system seems happy enough with the 1.5 GB I have available. Unless you have a slow CPU or really fast HDDs, chances are it's not worth the cost, but if you have the money... 8-)
  • Of course more HDDs are better, and I would suggest getting one more 3 TB HDD to make it an even four 3 TB HDDs, but that's up to you.
  • As for backup, I am RESIGNED to the fact that I may lose what is on my main storage pool (the six 2 TB HDDs in RAIDZ1, giving me nearly 10 TB of storage) because I do not back it up. Nevertheless I have spare 2 TB HDDs in case of HDD failure, so I am not tempting fate too much. I know the ramifications, though, since I lost 3 TB of data a couple of years ago (my mistake) because I did not have extra storage, and it took me 6 months to reconstitute it. :?
Disclaimer: I have read a lot and these are the conclusions I've seen. Please feel free to post corrections if I've got it wrong.
Earendil

XigmaNAS server:
-AMD A10-7860K APU
-Gigabyte F2A88XM-D3HP w/16GB RAM
-pool0 - 4x 2 TB WD green HDDs
-pool1 - 6x 8 TB WD white HDDs
-Ziyituod (used to be Ubit) SA3014 PCI-e 1x SATA card
-External Orico USB 3.0 5 bay HDD external enclosure set at RAID 5
--5x 4 TB WD green HDDs
-650W power supply

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

Then later I buy 3 more red drives and add them to a second VDev which I add to my existing ZPool. This gets me up and running now, with the ability to get to the configuration you mention above, later on. I realize this option limits me to 1 failure per VDev, but losing 2 drives at the same time seems like a rare occurrence for a home server.
This would work, but you lose the capacity of one drive!

see: vdev1 -> 3x 3TB raidz1 -> 6 TB usable space.

then next vdev2 -> 3x 3TB raidz1 -> 6 TB usable space, which makes a capacity of 12 TB in one pool.

if you wait and create one raidz1 vdev with all 6 drives, you'll get 15 TB instead (though note a single raidz1 vdev tolerates only one failure in total, versus one per vdev). and if you wait, you don't have to copy your already-written data somewhere else when you later change your configuration to 6 drives; that change means recreating the pool from scratch.
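As a sketch, the "add a second vdev later" route would look like this in commands (the pool name `tank` and the `ada0`-`ada5` device names are hypothetical):

```shell
# Start with one 3-disk raidz1 vdev (~6 TB usable)
zpool create tank raidz1 ada0 ada1 ada2

# Later, extend the SAME pool with a second raidz1 vdev (~12 TB total);
# ZFS then stripes writes across both vdevs.
# Note: a vdev cannot be removed from the pool again once added.
zpool add tank raidz1 ada3 ada4 ada5

# Shows both raidz1 vdevs inside the one pool
zpool status tank
```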
but losing 2 drives at the same time seems like a rare occurrence for a home server
the reason for this is the following: imagine one disk dies and you replace it with a new one. then your pool will resilver (the zfs term for raid rebuilding). this stresses the other disks very much for some hours, so it's likely that another disk will then die and the whole data set is gone. and this is not such a rare occurrence. ;)
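Replacing a failed disk and watching the resilver is sketched below (pool and device names are hypothetical placeholders):

```shell
# Swap the dead disk for the new one, then tell ZFS to rebuild onto it
zpool replace tank ada2 ada6

# Watch resilver progress; the surviving disks are under heavy read load
# until this finishes, which is exactly when a second failure hurts most
zpool status -v tank

# Once the resilver completes, a scrub re-verifies all checksums
zpool scrub tank
```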
There are no problems with the 'reds', right?
yup, like i said, they were built for nas purposes ;)
How about my 4GB of RAM? I am worried about all this ZFS storage space with so little RAM.
upgrade to 8 GB so you can activate the prefetch option. 4GB will run stable, but if you use that much disk space, you should also upgrade the ram to keep it smooth. ram is cheap at the moment.
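For reference, prefetch and ARC sizing on the underlying FreeBSD are controlled by loader tunables. A sketch of `/boot/loader.conf` entries follows; the values shown are illustrative, not recommendations:

```shell
# /boot/loader.conf (illustrative values)
# FreeBSD auto-disables ZFS prefetch on systems with < 4 GB RAM;
# with 8 GB it can stay enabled:
vfs.zfs.prefetch_disable="0"
# Optionally cap the ARC to leave headroom for CIFS and other services:
vfs.zfs.arc_max="6G"
```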

your mobo isn't capped at 4GB. look here: http://hardforum.com/showthread.php?t=1646230
and there: http://www.servethehome.com/supermicro- ... nsumption/
I'll just say from my own experience it's always better to make the HDD numbers come out with an even number of HDDs in any pool starting at four especially if they are the same size. One exception is a single HDD mirror of two HDDs. 'nuff said.
right ;)
Remember, mirrors are NOT backups (see innumerable articles on the Internet to "back up" this claim).
yup, the easiest way to explain it to somebody is to say that any file-munching virus is mirrored too. ;)
@OP: you could use your 2 2TBs as backup drives for really important files, or sell them on ebay and use the money for more 3TB disks.
The bottom line I've gleaned from what I've read on ZFS is to keep as many HDDs as possible in a vdev.
not with a really large number of drives, though.
Sure, green HDDs can barely do 40 MB/sec
I disagree. where did you read that? green eco disks are slower, but not that slow! they have lower rpm and aggressive spindown to save energy, that's all.

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by errfoil »

OK, you guys win :-)
I've just ordered 3 more WD Red drives. I'll be building a 6 drive RAID-Z2 pool with a single VDev. That should give me 12TB of storage with 2 drive failure redundancy.

With these new Red Drives do I need to destroy and re-import the pool like they recommend with the Green Drives? I never did this before and I had very consistent 70MB+ transfer speeds over CIFS. From my reading that is pretty good performance?!?

Thanks for all the help.

Cheers...
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by errfoil »

b0ssman wrote:your mb is not capped at 4gb

http://hardforum.com/showthread.php?t=1646230
For the benefit of those who might follow this thread: the above statement is true if you have the D525 version of the ATOM processor, which uses DDR3. I happen to have the older board with the D510 ATOM, which runs DDR2. I couldn't find any reference to anyone stuffing 8GB into the older D510 board. Also, the old DDR2 memory is expensive (best price I could find is $150 for 8GB), so in the end I bought a new board with 8GB of DDR3 for less than $250 total vs. paying $150 for slower memory in an older board that may or may not work.

YMMV :-)
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

That should give me 12TB of storage with 2 drive failure redundancy.
Yup, it will. :)
With these new Red Drives do I need to destroy and re-import the pool like they recommend with the Green Drives?
Could you be more detailed? I don't really understand why you should reimport when you will build your pool with 6 drives at once?
I never did this before and I had very consistent 70MB+ transfer speeds over CIFS. From my reading that is pretty good performance?!?
This is indeed good performance for ATOM CPU. :D
so in the end I bought a new board with 8GB of DDR3 for less than $250 total vs. paying $150 for slower memory in an older board that may or may not work.
Yeah, the better way. Which mobo did you buy? I'm asking because it's good to know which mobos are out there with 6+ sata ports.

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by b0ssman »

rostreich wrote: Yeah, the better way. Which mobo did you buy? I'm asking because it's good to know which mobos are out there with 6+ sata ports.
just buy a lsi sas controller (ibm m1015) and dont worry so much about the ports on the motherboard.

for a good motherboard check out
http://www.supermicro.com/products/moth ... 9scm-f.cfm
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

b0ssman wrote:
rostreich wrote: Yeah, the better way. Which mobo did you buy? I'm asking because it's good to know which mobos are out there with 6+ sata ports.
just buy a lsi sas controller (ibm m1015) and dont worry so much about the ports on the motherboard.

for a good motherboard check out
http://www.supermicro.com/products/moth ... 9scm-f.cfm
I know, but these small boards mostly have only 1 PCI slot, which may be better used for a better NIC. Intel > Realtek. And I need it mostly for iSCSI multipath... ;)

Btw, I forgot to answer one of the OP's questions:

No, using multiple vdevs doesn't make a noticeable difference in RAM usage... but I think the question is moot, since the OP will use 1 raidz2 vdev. :D

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by errfoil »

With these new Red Drives do I need to destroy and re-import the pool like they recommend with the Green Drives?
Could you be more detailed? I don't really understand why you should reimport when you will build your pool with 6 drives at once?
I never did this before and I had very consistent 70MB+ transfer speeds over CIFS. From my reading that is pretty good performance?!?
This is indeed good performance for ATOM CPU. :D
I found this thread... viewtopic.php?f=59&t=1494 ...when I was trawling through the FAQs. I asked about it because I thought my performance in my old system was pretty good, so I was wondering if I just got lucky or if I could get even better performance by removing the GEOM intermediary?
so in the end I bought a new board with 8GB of DDR3 for less than $250 total vs. paying $150 for slower memory in an older board that may or may not work.
Yeah, the better way. Which mobo did you buy? I'm asking because it's good to know which mobos are out there with 6+ sata ports.
I went with the Supermicro X7SPA-HF-D525. I've been using an older version of this board and it has performed perfectly for me. My needs are for a low-power, always-on home server. I have moved a lot and lived in some tough environments (hot, dusty); my server was dropped in one move, which crushed the bottom of the case and distorted the drive cage so badly that initially I could not even remove the drives. In short, this little board just keeps on trucking, and I've never even updated the BIOS in 4+ years! Another thing I like is the Type-A USB port on the MB itself; it plugs in perpendicular to the MB, very nice for NAS4Free embedded!

A couple of tips that took me a while to figure out: there are several versions of this ATOM D525 board. The X7SPE is for server racks; it is an inch longer on one axis, so it won't fit in a mini-ITX case. The -H (vs. the -HF) does not have IPMI v2.0. I have not tried out IPMI yet, but everyone says it is a great tool: no need for a keyboard and monitor for initial setup.

Thanks again for the assistance.
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

I asked about it because I thought my performance in my old system was pretty good, so I was wondering if I just got lucky or if I could get even better performance by removing the GEOM intermediary?
Ah...ok. This has nothing to do with performance. Every disk that was in a zpool has this metadata, so when you want to put these disks in another pool, ZFS will fail.
So wipe all the disks first with DBAN from UBCD, or Active KillDisk Free for Windows, to start with really clean drives.
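Short of a full multi-hour wipe, clearing just the ZFS label area is usually enough to reuse a disk. A destructive sketch; the device name `/dev/ada1` is a placeholder for the disk you actually want to wipe:

```shell
# Remove the ZFS label from a disk that used to belong to a pool
zpool labelclear -f /dev/ada1

# Alternatively, zero the first few MB where the front labels live
# (ZFS also keeps two backup labels at the very end of the disk)
dd if=/dev/zero of=/dev/ada1 bs=1m count=4
```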

I looked over this page: http://www.supermicro.com/products/moth ... f-d525.cfm

Looks really good!
I try my luck with an ASUS C60M1-I next week. It can hold 16 GB of RAM.

errfoil
NewUser
Posts: 6
Joined: 23 Feb 2013 22:13
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by errfoil »

OK, so I pushed the power button and my new server box came to life with 6 new WD 30EFRXs spinning away. I set up N4F and went through the Add/Format/VDev/Pool creation with only a few small problems.

But...when I check disk status it shows 16.2T installed, 935k used, 10.7T free??? There is nothing in the ZPOOL yet; I haven't even set up CIFS shares or users. Is something wrong??
SuperMicro X7SPA-HF-D525 w/8 GB RAM
6 WD 30EFRX in RaidZ2
NAS4Free-x64-embedded-9.1.0.1.636

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by b0ssman »

sounds about right for raidz2.

you get less space partly because of the decimal-vs-binary (1000 vs 1024) difference in how drive capacity is reported; the rest is zfs overhead.
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

rostreich
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by rostreich »

16.2T installed
That's the raw space of all the available disks.
10.7T free
That's your available space due to raidz2.
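The numbers check out as a back-of-the-envelope calculation: drives are sold in decimal terabytes but reported in binary TiB, and raidz2 gives two disks' worth of capacity to parity. A quick sketch with awk:

```shell
# Raw: 6 drives x 3 TB (decimal bytes) expressed in binary TiB
awk 'BEGIN { printf "raw: %.1f TiB\n", 6 * 3e12 / 2^40 }'           # ~16.4

# Usable under raidz2: 2 of the 6 drives' capacity goes to parity
awk 'BEGIN { printf "usable: %.1f TiB\n", (6 - 2) * 3e12 / 2^40 }'  # ~10.9
```

The small remaining gap to the reported 16.2T/10.7T is ZFS metadata and reservation overhead.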

Ouroborus
Status: Offline

Re: Best ZPOOL build with avail drives?

Post by Ouroborus »

rostreich wrote:
I asked about it because I thought my performance in my old system was pretty good, so I was wondering if I just got lucky or if I could get even better performance by removing the GEOM intermediary?
Ah...ok. This has nothing to do with performance. Every disk that was in a zpool has this metadata, so when you want to put these disks in another pool, ZFS will fail.
So wipe all the disks first with DBAN from UBCD, or Active KillDisk Free for Windows, to start with really clean drives.
I think he is referring to the 4k physical sector fix. I did it, but I don't know what good it did. All disks but one are reporting 4096 physical bytes per sector; the last one is an EARS disk with sector emulation and whatnot mumbo-jumbo.
