
Zpool replace -f


Zpool replace -f


Post by ChriZathens »

Hi guys!
I removed an HDD from my pool and wanted to replace it with a different one.
I ran into some strange behaviour, but after a reboot it seemed sorted out, so there is no point writing more about it.
But when I chose to replace the disk with the new one, I got:

Code:

invalid vdev specification
use '-f' to override the following errors:
/dev/ada3.nop is part of potentially active pool 'Main_Pool'
"Main_Pool" was an old pool I had destroyed in the past. It made sense that leftover metadata was the cause, so I wiped the drive using the method described in the wiki.
But I got the same error again.
So I finally ran:

Code:

zpool replace -f Media3 5777340420297675354 ada3.nop
and the replacement went through just fine.
Shouldn't there be an option in the webgui to use the -f flag in case it is needed?
P.S.: The steps I performed were the following (I believe I omitted a few steps, but all is fine nevertheless):
1. Physically removed the HDD.
2. After a reboot, issued the replace command via the webgui.

I suspect there should be in-between steps to take the device offline and then remove it before actually running the replace command...
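For reference, a disk swap is usually done with an offline/replace pair rather than pulling the disk first. A minimal sketch, reusing the pool name, old-vdev guid, and device name from this post (yours will differ), assuming console access:

```shell
# Sketch only: "Media3", the guid, and "ada3.nop" are this thread's
# example names; substitute your own and double-check before running.
zpool offline Media3 ada3.nop     # take the failing member offline first
# ...power down if needed, physically swap the disk, then:
zpool replace Media3 5777340420297675354 ada3.nop
zpool status Media3               # watch the resilver progress
```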
My Nas
  1. Case: Fractal Design Define R2
  2. M/B: Supermicro x9scl-f
  3. CPU: Intel Celeron G1620
  4. RAM: 16GB DDR3 ECC (2 x Kingston KVR1333D3E9S/8G)
  5. PSU: Chieftec 850w 80+ modular
  6. Storage: 8x2TB HDDs in a RaidZ2 array ~ 10.1 TB usable disk space
  7. O/S: XigmaNAS -amd64 embedded
  8. Extra H/W: Dell Perc H310 SAS controller, crossflashed to LSI 9211-8i IT mode, 8GB Innodisk D150SV SATADOM for O/S

Backup Nas: U-NAS NSC-400, Gigabyte MB10-DS4 (4x4TB Seagate Exos disks in RaidZ configuration - 32GB RAM)


Re: Zpool replace -f


Post by raulfg3 »

I ran into the same behaviour in the past and used the wipe script described in the wiki, and, same as you, it did not work. I increased the wipe from 1 to 10 sectors and this time it worked fine. Perhaps it is time to update the script in the wiki from 1 to 10 sectors, to be sure that all relevant metadata is deleted.
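The wiki script itself is not quoted in this thread, but the 1-sector-versus-10-sector idea can be sketched with dd. The demo below runs against a throwaway file image rather than a real disk; the path and sizes are made up for illustration, and you would point DISK at the actual device only after triple-checking its name:

```shell
# Fill a 4 MiB file-backed "disk" with random data to stand in for an
# old pool member (hypothetical path, demo only).
DISK=/tmp/wipe-demo.img
dd if=/dev/urandom of="$DISK" bs=1048576 count=4 2>/dev/null

SECTOR=512
COUNT=10     # widened from 1 to 10 sectors, as suggested above
SIZE=$(stat -c %s "$DISK" 2>/dev/null || stat -f %z "$DISK")  # GNU or BSD stat

# Zero the first COUNT sectors (partition table and leading metadata).
dd if=/dev/zero of="$DISK" bs=$SECTOR count=$COUNT conv=notrunc 2>/dev/null
# Zero the last COUNT sectors (trailing GEOM/ZFS metadata).
dd if=/dev/zero of="$DISK" bs=$SECTOR count=$COUNT \
   seek=$(( SIZE / SECTOR - COUNT )) conv=notrunc 2>/dev/null
```

Note that 10 sectors is only 5120 bytes; per the label sizes daoyama gives in the next reply, the ZFS labels span 512 KiB at each end of the device, so an even wider wipe may be needed.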

Can someone else confirm this?

(revision 6766) + OBI on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 12x3TB disks in 3 RaidZ1 vdevs = 32TB raw size, only 22TB usable



Re: Zpool replace -f


Post by daoyama »

I don't know the reason why you can't, but the ZFS label is written four times, 256 KiB each.
The two at the head are called L0 and L1; the two at the tail are called L2 and L3.
So, at minimum, you must clear 512 KiB at the head and 512 KiB at the tail (plus the disk header, such as GPT/MBR/GEOM metadata).
That is 512 KiB, not 512 B, which means 524288 bytes.

Note: you can see the label by:

# zdb -l /dev/XXXX

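Putting the numbers together: four labels of 256 KiB means clearing 2 × 256 KiB = 524288 bytes at each end of the device. A hedged sketch, run here against a throwaway file image (hypothetical path) instead of a real /dev node:

```shell
# 16 MiB file-backed "disk" filled with random data (demo only; aim dd
# at a real device such as /dev/ada3 only after double-checking the name).
IMG=/tmp/zfslabel-demo.img
dd if=/dev/urandom of="$IMG" bs=1048576 count=16 2>/dev/null

LABEL=$((256 * 1024))     # one ZFS label: 256 KiB
SPAN=$((2 * LABEL))       # L0+L1 (or L2+L3) together: 524288 bytes
SIZE=$(stat -c %s "$IMG" 2>/dev/null || stat -f %z "$IMG")  # GNU or BSD stat

# Clear the first 512 KiB: labels L0+L1, plus any GPT/MBR/GEOM header.
dd if=/dev/zero of="$IMG" bs=$SPAN count=1 conv=notrunc 2>/dev/null
# Clear the last 512 KiB: labels L2+L3 (assumes the size is a multiple of
# SPAN; a real disk may need a byte-accurate offset instead).
dd if=/dev/zero of="$IMG" bs=$SPAN count=1 \
   seek=$(( SIZE / SPAN - 1 )) conv=notrunc 2>/dev/null
```

On a real device, `zdb -l` as shown above should then report no valid labels.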
Daisuke Aoyama
Last edited by al562 on 30 Jan 2013 03:47, edited 1 time in total.
Reason: Added to FAQs & locked.
NAS4Free (x64-embedded), (arm),
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1

