First, I'll be humble and say I'm at a noob-to-intermediate level with ZFS and NAS4Free, which is probably part of my problem.
I'm currently working with 192 TB of raw storage: the first pool, the one of concern, is 142 TB as ZFS RAIDZ2; the second pool, which is in good health, is 50 TB, also ZFS RAIDZ2. Both are served up as iSCSI targets to Windows servers (one of which is a Windows 7 VM; please don't ask, it was found to be a mistake).
When I deleted the volume, I ended up having to restart the NAS4Free server; I cannot recall now why. But when it attempted to start up, it would state that the disks were created, then say that it had run out of swap space, and restart.
Looking back, this is where I should have done any number of things differently, but what I actually did was:
1) Unplugged the storage controller and booted up. -- That worked, at least to bring up the OS.
2) Loaded the latest NAS4Free (9.2.0.1.972) on a 240 GB SSD (I had been running NAS4Free 9.1.0.1.847 on a 32 GB USB flash drive), trying both with and without the previous, relevant config data. The hope was that the newer build might have had the bugs worked out, and also that the SSD would give the system a larger swap space to use. -- This showed an additional error/message after the drives were created at boot time, "cannot open objset <name of the volume I had deleted in the GUI before restarting>", and it would still restart after saying it ran out of swap space (beforehand, it complained that I had given it more than the recommended swap space). However, if I started the system with a clean/default config file, it would boot and I could get into the GUI and shell, obviously because the pools were not added in that default config.
3) Added the disks to the new system within the GUI.
4) Attempted to import the pool (the same pool where the deleted volume resided). -- I was having to work remotely, and as far as I could tell, the system froze/locked up.
5) Impatience may have gotten the best of me, but I could not do anything on the system, so I did a hard restart of the server. -- This is where I worry: now ZFS says the pool does not exist (see the commands sketched just after this list).
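For reference, here are the non-destructive commands I have found while searching and am thinking of trying next. "tank" is only a placeholder for my actual pool name, and these reflect my own reading of the man pages, so please correct me before I run the riskier ones:

    # Move the cache file aside so the damaged pool is not auto-imported
    # at boot (my understanding of what zpool.cache does on FreeBSD):
    mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak

    # List pools that are visible but not currently imported:
    zpool import

    # Point the scan at the device nodes explicitly if nothing shows up:
    zpool import -d /dev

    # Try a forced, read-only import so nothing gets written to the pool:
    zpool import -f -o readonly=on tank

    # Dry run of the rewind recovery: with -n it only reports what
    # transactions would be discarded, without actually importing:
    zpool import -F -n tank

My thinking with readonly=on is that if the pool will come up at all, a read-only import should let me copy data off without risking further damage.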
After the fact, I realized that I should have run a resilver and/or scrub before even deleting the volume, and that I may have been able to run a zpool clear; a sketch of those commands follows below.
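For clarity, this is what I understand those maintenance commands to look like (again, "tank" stands in for my pool name):

    zpool scrub tank        # verify every block's checksum and repair from parity
    zpool status -v tank    # watch scrub progress and list any damaged files
    zpool clear tank        # reset the logged device error counts afterwards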
I also kick myself now for not backing up the config file before making the change, that is, before deleting the volume (hence my telling you I'm a noob at this; I only have about 2 years working with FreeBSD/NAS4Free and ZFS).
So, back to my question: is attempting to run zpool create with all of the original disks this zpool had going to help reconstruct the original zpool? Or would that be a bad move, and in that case, is there some other advice for me?
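To be explicit about what I'm asking, the command I'm contemplating would look something like the line below, with the pool name and the da* device names as placeholders for my actual layout. I have not run it, because I suspect zpool create writes new labels to the disks:

    # The command in question. NOT something I have run:
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7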
Thank you for any assistance.


