This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I would like to ask users and admins to rewrite/move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

[SOLVED] Disaster!!! disk names shifted / do not match

For "upgrading" from FreeNAS/NAS4Free Legacy to XigmaNAS and upgrading XigmaNAS to newer builds.
BodgeIT
Starter
Posts: 74
Joined: 03 Jul 2012 17:39
Location: London
Status: Offline

[SOLVED] Disaster!!! disk names shifted / do not match

Post by BodgeIT »

I recently decided to free up a little more room on my PCI bus by buying a PCI-e SATA card to run one of my RAIDZ pools.
I exported the pool before shutting down the system (which I think is done anyway), then installed the card and moved the disks onto it.
Brought the system back up and voila... the pool was already there... I was a bit astonished, to be honest.
I did a test of the data and everything was still where it should be and was accessible.
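For anyone following the same route, the usual export/import cycle can be sketched as below. This is only a sketch; the pool name `tank` is an example, not the name of the pool on this system:

```shell
# Cleanly detach the pool before moving disks to the new controller
zpool export tank

# ...power down, install the PCI-e SATA card, move the disks, boot...

# List exported pools the system can find on the new controller
zpool import

# Re-import the pool; ZFS locates member disks by their on-disk
# labels rather than by device names, which is why it survives
# being moved to a different controller
zpool import tank
```

This explains why the pool reappeared on its own on some setups: the on-disk metadata, not the adaX numbering, identifies the pool members.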

I then noticed that some of the disk descriptions didn't match the disks they were describing... something had shifted around, so I thought I'd do a scrub on both pools.
In the morning I checked both pools and they were fine with no errors... I then noticed that the entire contents of one of the datasets on one pool (not the one moved to the new card) had completely vanished... years of backups across many subjects...

The system and ZFS, however, still report the same disk usage as before, which would not be possible if the data were really gone, so it suggests to me the data is still there.
Is there a way for me to get this data back?

I'm fearful of shutting down or doing anything which might jeopardise any chance I might have of recovering this data.

I would appreciate any help or suggestions.
Gary

** Update
I have rechecked and, although Disk Management reports everything OK with the disks that are there, there is one unused disk which used to be part of the pool with the missing data. I have so many disks that I hadn't noticed this before, and I'm surprised that both my pools report being online rather than degraded. ZFS reports this disk as being part of the pool and without errors.
I don't understand how that disk changed from ada7 to ada5, as no disks were moved in that pool. It seems the system has become very confused.
Is it simply a case of bringing that disk into the system as a ZFS-formatted device and scrubbing again, or should I proceed in another way to get back my missing dataset?
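One way to check what ZFS itself thinks, rather than what the GUI shows, is to compare the pool layout against the labels FreeBSD keeps for each disk; adaX numbers can shift between boots, but labels and serial numbers do not. A sketch (the device name is illustrative):

```shell
# Show pool health and which device node each vdev member
# currently maps to
zpool status -v

# Show the stable label names FreeBSD knows for each disk;
# these survive adaX renumbering across boots
glabel status

# Print a disk's identity (including serial number) so a device
# node can be matched to a physical disk; adjust the name as needed
diskinfo -v /dev/ada5
```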
Last edited by al562 on 13 Jan 2013 06:12, edited 1 time in total.
Reason: Adjusted Subject. Added [SOLVED] Tag.
NAS4Free: 11.4.0.4 - (7682) amd64-embedded
Mobo: Supermicro X9SCL-F, CPU: Xeon E3-1230v2; RAM: Crucial 32GB ECC
System: IBM M1015it SAS controller(SAS2008 v20); Intel Dual 1Gb Server Nic; Thermaltake 1KW PSU;

Storage: Raidz1(3x Toshiba N300 6TB), Raidz1(3x WD Red 3TB), UFS(1x 1TB) Utility disk, ZFS Cache(64Gb SSD)

BodgeIT
Starter
Posts: 74
Joined: 03 Jul 2012 17:39
Location: London
Status: Offline

Re: Disaster!!! --- [Fixed]

Post by BodgeIT »

Little update on my own post above.

I thought it odd that only data from one dataset was affected, so while digging around I noticed in Information\Mounts that the affected dataset wasn't mounted.
I entered

    zfs mount -a

in the CLI, which ran without error. Then, when zfs mount showed the dataset mounted, I checked the disk and all my data had returned!
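If you hit the same symptom (data apparently gone but pool usage unchanged), it is worth checking the dataset's mount state before assuming data loss. A sketch, with `tank/backups` standing in for the affected dataset:

```shell
# Is the dataset actually mounted, and where should it be?
zfs get mounted,mountpoint tank/backups

# Space accounting is kept pool-side, so usage still shows
# even when a dataset is not mounted
zfs list -o name,used,avail,mountpoint

# Mount every dataset that is not mounted yet (the fix used above)
zfs mount -a
```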

Now the issue I am left with is that the system is all over the place in terms of which disks it thinks are where.

What is the best way to "resync" the disks in Management? This applies to the ZFS disks, the system disk and the other UFS disks; they are all mixed up.

waltdog72080
NewUser
Posts: 1
Joined: 09 Sep 2012 08:06
Status: Offline

Re: Disaster!!! --- [Fixed]

Post by waltdog72080 »

I would really like the answer to the question you posted above. Do you have an answer for that yet? Thanks in advance for all your help.

BodgeIT
Starter
Posts: 74
Joined: 03 Jul 2012 17:39
Location: London
Status: Offline

Re: Disaster!!! --- [Fixed]

Post by BodgeIT »

In the end I simply deleted all the disks and added them again through the Disk Management UI.
All my arrays are ZFS; I have only one disk that is UFS, and I had to recreate a mount for that one.

Everything back to normal now.

al562
Advanced User
Posts: 210
Joined: 12 Dec 2012 08:02
Location: New Jersey, U.S.A.
Contact:
Status: Offline

Re: Disaster!!! --- [Fixed]

Post by al562 »

Hi BodgeIT,

Long time no see :D .
BodgeIT wrote:What is the best way to "resync" the disks in Management?
The recommended, easy way is to use the buttons at the bottom of the WebGUI Tab > Disks|Management. This should work, but on complicated systems with disks that have been in use for a few years across older versions, there may still be problems. In the end you did the right thing.

For more background see Q: What is and what causes “device name shifting syndrome” and how can I avoid or fix it?
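The usual way to sidestep the shifting problem altogether is to refer to disks by stable GPT labels rather than adaX names. A hedged sketch of labelling a fresh disk and importing a pool by those labels; the device name, label and pool name are examples only, and the gpart commands destroy any existing partitioning on the target disk:

```shell
# Give a new disk a GPT partition table and a stable label
# (only on a disk you are about to add to a pool!)
gpart create -s gpt ada5
gpart add -t freebsd-zfs -l backup-disk-1 ada5

# Import the pool using the stable names under /dev/gpt instead
# of the adaX device names, which can change between boots
zpool import -d /dev/gpt tank
```

After an import like this, `zpool status` shows the vdev members by label, so a controller change no longer scrambles the mapping.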

Best of luck,
Al
