This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to rewrite or move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

I love ZFS (Just went through my first drive failure)

Captain Ron
NewUser
Posts: 14
Joined: 03 Jul 2012 19:33
Location: Boston, MA, USA
Status: Offline

I love ZFS (Just went through my first drive failure)

Post by Captain Ron »

Allow me to first say it:

I LOVE ZFS!!!

First, some background. As some of you may have read, I run my NAS as a VM on a Linux machine. I get slightly lower performance out of the machine, but the trade-offs are:
Machine is completely headless
Upgrades don't require a CD or even being at the computer
Backups of the OS are automated

One downside of running under Linux is that the hardware is owned by root. So I have to chown all the drives before I run the VM (or run the VM as root, which I don't want to do). No biggie, just an extra step before running the VM.
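For anyone curious, that pre-launch step looks roughly like this. The device paths and the `vmuser:kvm` owner are placeholders, not my actual setup; adjust them for your host:

```shell
#!/bin/sh
# Hand the passed-through disks to the (hypothetical) VM user before booting the VM.
# Device paths and the 'vmuser:kvm' owner are assumptions; adjust for your host.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"
for disk in $DISKS; do
    # Only touch devices that actually exist on this host.
    [ -e "$disk" ] && chown vmuser:kvm "$disk"
done
```

Running this once before starting the VM means the hypervisor process can open the raw disks without itself running as root.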

The NAS has existed in three or four different incarnations over five years. It started as a Windows XP machine, then switched to FreeNAS, and now NAS4Free. Initially the machine used a hardware RAID5 setup, but not long after the switch to FreeNAS I pulled the RAID card in favor of software RAID, at which point I decided to give ZFS a try. It is a decision I have not regretted at all.

Anyway, with five-year-old drives, it is getting to be about that time for the drives to start giving up the ghost, and last week one of them finally did. When any hardware dedicated to a VM has a fault, the hypervisor halts the VM, so my NAS immediately stopped being accessible. This does stink, because the NAS doesn't actually need that drive for the system to keep running.

Anyway, identifying the drive was not hard (the VM reported the faulted hardware by serial number). Physically swapping out the drive: no problem. But then came the issue of adding the new drive to the ZFS pool. This had me worried, but in ZFS it is easy, and the NAS4Free web front end makes it even easier. Just do a simple drive replace and voilà, the pool automatically rebuilds. The resilvering took about 7 hours.
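Under the hood, the GUI's drive-replace is essentially a one-liner. A rough sketch of the equivalent commands ('tank', 'ada1', and 'ada5' are placeholder pool/device names, not my actual layout):

```shell
# Tell ZFS to replace the failed device with the new one;
# the pool begins resilvering automatically.
zpool replace tank ada1 ada5

# Check resilver progress and overall pool health.
zpool status -v tank
```

The pool stays online and serving data throughout the resilver, which is exactly what happened in my case.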

Given how easy this swap was, combined with the fact that my drives are aging, I am now thinking it is time to start looking at increasing the size of the drive pool. I put a spare (but older) drive in the system for this replacement, which is more of a band-aid than a permanent fix. Newer drives are clearly the answer. Plus, I used my one spare drive, so I need a new drive anyway.
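If I go the bigger-drives route, my understanding is that ZFS can grow a pool in place: replace each disk with a larger one, let it resilver, and repeat; with the autoexpand property turned on, the extra capacity appears once the last disk is swapped. A sketch under those assumptions (again, 'tank' and the device names are placeholders):

```shell
# Allow the pool to grow automatically once all devices are larger.
zpool set autoexpand=on tank

# Swap disks one at a time, waiting for each resilver to finish:
zpool replace tank ada1 ada5
zpool status tank   # wait until the resilver completes before the next disk
```

With a 4-drive pool that means four replace-and-resilver cycles, so it is not a quick job, but the pool stays online the whole time.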
Breaker: NAS4Free 11.0.0.4r3330 x64-full
Intel Celeron G1850 @ 2.90GHz, Supermicro X10SLM-F
16 GB ECC RAM
Intel PCI-E GigE Card & Dual Onboard Intel GigE
16GB SanDisk Ultra USB 3.0 Flash Drive
4 x Western Digital Red 3TB HDD (ZFS Storage/Share)
Antec Performance One P180 Silver

kernow
experienced User
Posts: 92
Joined: 23 Jun 2012 01:28
Status: Offline

Re: I love ZFS (Just went through my first drive failure)

Post by kernow »

Nice story, it's good to know for when the inevitable happens.
HP Microserver N36L / 6GB ECC RAM / 2 x 2TB WD20EARS, N4F9.1.x

