This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to rewrite or carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Why not destroy .nop after pool creating?

MikeMac
Forum Moderator
Posts: 429
Joined: 07 Oct 2012 23:12
Location: Moscow, Russia
Contact:
Status: Offline

Why not destroy .nop after pool creating?

Post by MikeMac »

A common trick to create a ZFS pool with ashift=12 is

Code:

gnop create -S 4096 /dev/ada0
And NAS4Free successfully uses it, thank you.
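To confirm the trick took effect, the recorded alignment can be read back with zdb (a sketch for a live FreeBSD system; "Pool" is the pool name used in the examples in this thread):

```shell
# Show the ashift recorded in the pool's cached configuration.
# ashift=12 means 2^12 = 4096-byte alignment; ashift=9 means 512 bytes.
zdb -C Pool | grep ashift
```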

But why do you keep the .nop devices after pool creation?
I mean, it is safe to destroy them immediately after the pool is created:

Code:

zpool create -m /mnt/Pool Pool raidz /dev/ada1.nop /dev/ada2.nop /dev/ada3.nop
zpool export Pool
gnop destroy /dev/ada1.nop
gnop destroy /dev/ada2.nop
gnop destroy /dev/ada3.nop
zpool import Pool
This way you do not need to create the .nop devices at startup, which is in keeping with the KISS principle :)
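The whole procedure can be sketched as one small script (a sketch only, to be run as root on FreeBSD with ZFS loaded and empty disks; POOL and DISKS are this example's placeholders, not fixed names):

```shell
#!/bin/sh
# Sketch: create an ashift=12 pool via gnop, then drop the .nop wrappers.
POOL=Pool
DISKS="ada1 ada2 ada3"

# 1. Wrap each disk in a gnop device that advertises 4K sectors.
for d in $DISKS; do
    gnop create -S 4096 "/dev/$d"
done

# 2. Create the pool on the .nop devices so ashift=12 gets recorded.
NOPS=""
for d in $DISKS; do
    NOPS="$NOPS /dev/$d.nop"
done
zpool create -m "/mnt/$POOL" "$POOL" raidz $NOPS

# 3. Export, destroy the .nop wrappers, and re-import on the raw disks.
zpool export "$POOL"
for d in $DISKS; do
    gnop destroy "/dev/$d.nop"
done
zpool import "$POOL"
```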
It will also add pool stability in the (rare) case of problems while replacing a drive. According to my experiments on a virtual machine, if both the old and the new disk fail during a disk replacement, a ZFS raidz1 on raw disks will survive:

Code:

nas4free:~# zpool status
  pool: Pool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Wed Mar 20 18:42:22 2013
    6.69G scanned out of 7.29G at 1/s, (scan is slow, no estimated time)
    257M resilvered, 91.72% done
config:
        NAME                        STATE     READ WRITE CKSUM
        Pool                        DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            replacing-0             UNAVAIL      0     0     0
              2050528262512619809   UNAVAIL      0     0     0  was /dev/ada3
              14504872036416078121  UNAVAIL      0     0     0  was /dev/ada4
            ada2                    ONLINE       0     0     0
            ada1                    ONLINE       0     0     0
errors: No known data errors
But the same raidz on .nop devices could be lost:

Code:

nas4free:~# zpool status
no pools available
nas4free:~# zpool import
  pool: Pool
    id: 8374523812252373009
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:
        Pool                      UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            replacing-0           ONLINE
              ada1.nop            ONLINE
              ada2.nop            ONLINE
            15699628039254375131  UNAVAIL  cannot open
            13721477516992971685  UNAVAIL  cannot open

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Why not destroy .nop after pool creating?

Post by raulfg3 »

This was already commented on in this post: viewtopic.php?f=59&t=1494

I added it to my favorites and tested it on my pool; it really works.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 11x3TB disks in 1 vdev = Vpool = 32TB raw size, so 29TB usable (I have another NAS as backup)

Wiki
Last changes

HP T510

MikeMac
Forum Moderator
Posts: 429
Joined: 07 Oct 2012 23:12
Location: Moscow, Russia
Contact:
Status: Offline

Re: Why not destroy .nop after pool creating?

Post by MikeMac »

Thank you!

The simplest implementation looks quite easy: just do not implement the .nop creation at restart.

Mike

fsbruva
Advanced User
Posts: 378
Joined: 21 Sep 2012 14:50
Status: Offline

Re: Why not destroy .nop after pool creating?

Post by fsbruva »

As you can see from the thread raul linked, the .nop gets created at boot time because NAS4Free knows it has a 4K customer among drives that are reporting 512B sectors (remember, the drives themselves are mis-reporting their sector size to the operating system). It doesn't hurt anything, as long as ZFS is configured properly.
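On FreeBSD you can see what a drive is reporting to the OS with diskinfo (a sketch; /dev/ada0 is a placeholder for one of your drives):

```shell
# Show what the drive reports to the operating system.
# "sectorsize" is the logical sector size (often 512 even on 4K drives);
# "stripesize" is 4096 on drives that expose their physical sector size.
diskinfo -v /dev/ada0
```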

MikeMac
Forum Moderator
Posts: 429
Joined: 07 Oct 2012 23:12
Location: Moscow, Russia
Contact:
Status: Offline

Re: Why not destroy .nop after pool creating?

Post by MikeMac »

fsbruva wrote:the .nop gets created at boot time because Nas4Free knows it has a 4k customer for drives that are reporting 512b
AFAIK, and this is proven by experiment, a ZFS pool needs the sector-size information only once: during pool creation. The info is then stored in the ashift parameter (12 in the case of a pool on 4K-sector devices).

By the way, the nop trick is FreeBSD-specific. Both OpenIndiana and ZFS on Linux use other methods, but the pools created are 100% compatible.
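For example, ZFS on Linux can set the alignment directly at creation time, with no .nop devices involved (an illustrative sketch; the device names are placeholders):

```shell
# ZFS on Linux: record 4K alignment explicitly via the ashift property.
zpool create -o ashift=12 -m /mnt/Pool Pool raidz /dev/sdb /dev/sdc /dev/sdd

# (Later FreeBSD versions gained a similar knob, the
# vfs.zfs.min_auto_ashift sysctl, making the gnop trick unnecessary.)
```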
fsbruva wrote:It doesn't hurt anything, as long as zfs is configured properly.
It is. See the example in my first post. Keeping the .nop devices after pool creation adds one level of complexity to the system. In difficult situations (like mixed-up disks and controller ports), the difference can be as significant as a working versus a dead pool.
