My pre-build design flowchart

awinn17
Starter
Posts: 24
Joined: 28 Apr 2013 15:23
Status: Offline

My pre-build design flowchart

#1

Post by awinn17 »

Hey, so I've been doing a lot of brainstorming and I think I have my design about 70% finalized. See what you think.

Suggestions would be appreciated!

Also, a suggestion for a better photo host would be appreciated as well; it doesn't look like the image is posting right. Right-click/view image seems to work, though, at least in Chrome.

[Flowchart image: http://www.flickr.com/photos/54365832@N02/10841418664/]

kzrussian
NewUser
Posts: 7
Joined: 16 Nov 2013 09:26
Status: Offline

Re: My pre-build design flowchart

#2

Post by kzrussian »

My comment is about the red and green HDDs and their corresponding Zpool_2 and Zpool_3.
I'm a newbie, so I could be wrong about this, but from what I've read, you can't use one HDD with ZFS and get its full capacity as usable space. In other words, a 500GB HDD will not give you a 500GB VDEV_2.

Another comment: How come you have "Laptop 2" directly connected to the "Internet"? Shouldn't that go through the router?

Comment #3: Please label all your colors. And why do you have multiple connections between the same devices?

What you created is a great idea! I keep dreaming of doing something similar for my network topology. Good job, and keep us updated!

awinn17
Starter
Posts: 24
Joined: 28 Apr 2013 15:23
Status: Offline

Re: My pre-build design flowchart

#3

Post by awinn17 »

Thanks for your reply! :D

In order: I think you are probably right. I'd still like to use ZFS because... well, because it seems like everyone else uses it, and I'm guessing it's for a good reason. So unless it trims off a large amount of space, I'm still for it, I guess.

Laptop 2 is connected via the Internet because I plan on using it when we travel, to access the NAS and dump photos; my girlfriend has an SLR and shoots high-res pics. It's easy to dump them to a laptop, but since all this is in the interest of data safety, I want to be able to upload them to the NAS and make a duplicate as soon as possible. That's the intended use, if I've got it right.

The colors aren't really a scheme; they just represent separate groups, like colors on a map.


You have an interesting point about the ZFS storage system. I'll look into that.

ku-gew
Advanced User
Posts: 173
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Re: My pre-build design flowchart

#4

Post by ku-gew »

How many network cards do you have? And is the television connected directly to a pool?
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

awinn17
Starter
Posts: 24
Joined: 28 Apr 2013 15:23
Status: Offline

Re: My pre-build design flowchart

#5

Post by awinn17 »

I was assuming a smart TV, something that could read from a mass storage device, and that I could make the NAS present itself as one.

As for network cards, I'm not sure I understand what you mean...? Each PC I have has built-in LAN, but I don't think that was your question.

It looks like I'll be making Rev 1 pretty soon lol

alexplatform
Starter
Posts: 38
Joined: 26 Jun 2012 21:21
Status: Offline

Re: My pre-build design flowchart

#6

Post by alexplatform »

Looks like you've got it pretty much mapped out. What is the reason for the 3 zpools? I suggest removing the 160GB disk outright; it is not useful and probably eats more electricity than the 3TB disks. Besides, it's probably old and its useful life is coming to an end, so you can't depend on it for any survivability.

As for the 500GB drive: I appreciate that you already have it and would feel like you're leaving resources unused, but it would be far more efficient to create a dataset for your DVR on your main pool and leave the drive out of the system; a dataset will only use as much space as you have recordings and will not require any additional spinning disks.

Lastly, RAIDZ2 with 4 drives is really inefficient: it would yield a 6TB file system but with 4 disks spinning. Are you really that concerned with downtime/fault tolerance?
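
For reference, here is a back-of-the-envelope check of that 6TB figure (a minimal sketch, assuming the usual RAIDZ approximation that N disks of size X with P parity disks hold about (N-P)*X bytes, ignoring metadata overhead):

```python
TB = 10**12   # drives are sold in decimal terabytes
TIB = 2**40   # operating systems usually report binary tebibytes

def raidz_usable(n_disks: int, parity: int, disk_bytes: int) -> int:
    """Approximate usable capacity of a RAIDZ vdev: (N - P) * X."""
    return (n_disks - parity) * disk_bytes

usable = raidz_usable(4, 2, 3 * TB)  # 4x 3TB drives in RAIDZ2
print(f"{usable / TB:.1f} TB ({usable / TIB:.2f} TiB)")  # -> 6.0 TB (5.46 TiB)
```

So half the raw capacity goes to parity, which is why a 4-wide RAIDZ2 is a poor trade unless you genuinely need double fault tolerance.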

awinn17
Starter
Posts: 24
Joined: 28 Apr 2013 15:23
Status: Offline

Re: My pre-build design flowchart

#7

Post by awinn17 »

I know you're right about the 160 and the 500. I don't like wasting resources, but at the same time, if it's not redundant I wouldn't "like" using it, and it would just sit and waste power until it burned up. Perhaps I'll find something else for them.

My setup was designed with 5 drives (you said 4, but you may have meant 5). According to the calculator I found, 5 3TB drives yielded ~8.2TB in the end. I realize that's preparing for 40% of the drives to fail, as opposed to, say, 12 drives, which would be expecting 17% to fail. I haven't locked into anything yet because I wanted to get some other opinions like these. I would likely go with 3 or 4 4TB drives in Z1, down to 33% or 25% tolerance, save some money or break even, and increase space a little. What would your opinion be on something like that?

The reason I started so conservative was that the reviews I was reading for all these different hard drives seemed pretty awful... It seems like the WD Red line has pretty poor quality control, but that could just be its exposure to a crowd more likely to post reviews. Neither here nor there. I said 5 drives and Z2 because I worried that another might fail before warranty (or I) could replace the first. After I got overwhelmed I kind of stopped looking, but of what I saw, I liked the Seagate NAS-rated 3TB and 4TB versions the most. The WD Blue are too expensive for my money.

RedAntz
Experienced User
Posts: 127
Joined: 11 Jul 2012 07:46
Location: Sydney, Australia
Status: Offline

Re: My pre-build design flowchart

#8

Post by RedAntz »

RAIDZ2 should use 2^n + 2 drives (4, 6, 10 HDDs, etc.) for good performance.

Reference from the Solaris Internals website:
A RAIDZ configuration with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised.

Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
Start a double-parity RAIDZ (raidz2) configuration at 6 disks (4+2)
Start a triple-parity RAIDZ (raidz3) configuration at 9 disks (6+3)
(N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 6
The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.
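
Applying those rules to the configurations discussed in this thread gives a quick comparison (a sketch under the same (N-P)*X approximation; real pools lose a bit more to metadata):

```python
TB, TIB = 10**12, 2**40

# Recommended starting widths from the Solaris Internals guidance quoted above.
RECOMMENDED_START = {1: 3, 2: 6, 3: 9}  # parity level -> minimum disk count

def raidz_summary(n_disks: int, parity: int, disk_tb: float) -> str:
    usable = (n_disks - parity) * disk_tb * TB
    note = "" if n_disks >= RECOMMENDED_START[parity] else " (below recommended start)"
    return (f"raidz{parity}, {n_disks}x {disk_tb}TB: {usable / TB:.0f} TB "
            f"= {usable / TIB:.2f} TiB usable, survives {parity} failure(s){note}")

print(raidz_summary(5, 2, 3))  # -> 8.19 TiB; the ~8.2 figure quoted earlier is presumably TiB
print(raidz_summary(6, 2, 3))  # the recommended 4+2 RAIDZ2 layout
print(raidz_summary(5, 1, 4))  # a 4+1 RAIDZ1 alternative
```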

alexplatform
Starter
Posts: 38
Joined: 26 Jun 2012 21:21
Status: Offline

Re: My pre-build design flowchart

#9

Post by alexplatform »

5 drives makes for an ideal RAIDZ1 (4 data + 1 parity per logical block); it allows for properly aligned I/O, as described in RedAntz's post. If you want dual parity, you'd want 6 drives. You CAN create a RAIDZ2 volume with 5 drives, but it's not ideal.
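
To illustrate the alignment point (a sketch; 128 KiB is ZFS's default recordsize, and the rule of thumb is that a record should split into power-of-two chunks across the data disks):

```python
RECORDSIZE_KIB = 128  # ZFS default recordsize

def chunk_per_data_disk(n_disks: int, parity: int) -> float:
    """How much of one record lands on each data disk, in KiB."""
    return RECORDSIZE_KIB / (n_disks - parity)

for n, p, label in [(5, 1, "5-disk RAIDZ1"), (5, 2, "5-disk RAIDZ2"), (6, 2, "6-disk RAIDZ2")]:
    c = chunk_per_data_disk(n, p)
    aligned = c.is_integer() and (int(c) & (int(c) - 1)) == 0  # power of two?
    print(f"{label}: {c:.2f} KiB per data disk -> {'aligned' if aligned else 'awkward'}")
```

The 5-disk RAIDZ1 and 6-disk RAIDZ2 split a record into clean 32 KiB chunks; the 5-disk RAIDZ2 does not.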

Don't worry too much about disk failure. Your setup is not mission critical, i.e. downtime does not have a direct cost; you can sustain a single disk failure without downtime using a RAIDZ1 setup. Two disks failing at the same time is far less likely than losing a power supply, which you have no provision for.

Lastly, have a backup strategy for anything you care about, regardless of the fault-tolerance features of your primary storage. Disk fault tolerance does not protect you from non-disk-related failures (environmental, user error, hardware fault, malicious damage, etc.).

awinn17
Starter
Posts: 24
Joined: 28 Apr 2013 15:23
Status: Offline

Re: My pre-build design flowchart

#10

Post by awinn17 »

RedAntz wrote: RAIDZ2 should use 2^n + 2 drives (4, 6, 10 HDDs, etc.) for good performance.

...
alexplatform wrote: 5 drives makes for an ideal RAIDZ1 (4 data + 1 parity per logical block); it allows for properly aligned I/O, as described in RedAntz's post. If you want dual parity, you'd want 6 drives. You CAN create a RAIDZ2 volume with 5 drives, but it's not ideal.

Don't worry too much about disk failure. Your setup is not mission critical, i.e. downtime does not have a direct cost; you can sustain a single disk failure without downtime using a RAIDZ1 setup. Two disks failing at the same time is far less likely than losing a power supply, which you have no provision for.


Thanks guys! That makes me more confident in 4 or 5 drives and Z1. It's true: if a disk goes down, I can shut the system off and not miss much for a week or two until I can get it back up. As for power supply failures, IS there a solution to that? I know that sometimes they just go, and it can be gentle or violent. I've only ever had one fail, and it didn't hurt anything; it just went off. Was I lucky?
alexplatform wrote: Lastly, have a backup strategy for anything you care about, regardless of the fault-tolerance features of your primary storage. Disk fault tolerance does not protect you from non-disk-related failures (environmental, user error, hardware fault, malicious damage, etc.).
What are some good solutions for this? I have had the same thought. I was thinking of investing in a Blu-ray burner, burning my most important data to discs, and storing them in a different building (in case of fire, etc.). But I know plastic discs have a shelf life. They're cheap enough that I could do a backup/version and then start a new copy every year?

Thoughts? I seriously appreciate all the feedback.
