Hey, I missed this post but found the same solution online yesterday. This problem exhibits itself in many ways, not just zfs send.
In hopes of helping others that search the forum, here are the two ways I hit this problem:
Samba - Trying to transfer multi-gigabyte files from the NAS to a Windows machine. The file transfer would just hang right away or part-way through, with no error messages in the logs (even at high log levels).
SFTP file transfers - This one led me to the solution as it gave me an error in the NAS log. Using FileZilla for instance, try transferring a multi-gigabyte file from ZFS on your NAS to your computer via SFTP. Intermittently, it will disconnect and reconnect again. The logs will say something like "fatal: packet_write_poll: Cannot allocate memory". This is your big clue you are suffering from the same problem.
After pulling my hair out for a couple days, I found articles from 2011 such as
https://lists.freebsd.org/pipermail/fre ... 63322.html that talk about this phenomenon. Once I knew what to look for, I found that FreeBSD also has it on their VirtualBox to-do list:
https://wiki.freebsd.org/VirtualBox/ToDo
In my case, I added the tunable the article suggested to loader.conf: net.graph.maxdata=65536
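For anyone else searching: on a stock FreeBSD/NAS4Free layout the file is /boot/loader.conf, and the tunable takes effect on the next reboot. A minimal sketch (path and comment wording are my assumptions, the tunable line itself is straight from the article):

```shell
# /boot/loader.conf
# Raise the netgraph data message limit to work around the
# VirtualBox-on-ZFS network stalls described above.
net.graph.maxdata=65536
```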
I have no idea what the value actually does, and I will try reducing it as erik suggests in case I'm wasting memory. But this solved all of my issues with transferring large amounts of data while VirtualBox VMs are running.
If you use VirtualBox and ZFS, test for this, and add the variable to loader.conf to solve the problem. Maybe it should be included by default (even if commented out) in newer NAS4Free builds?
UPDATE: I've reduced the value to 4096 as mentioned by erik, and everything is still rock solid when testing Samba and SFTP. Thanks erik.
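If you want to switch to the lower value and confirm it took effect after a reboot, something like this should work (FreeBSD only; I'm assuming net.graph.maxdata is readable via sysctl once the netgraph module is loaded):

```shell
# Add the lower value erik suggested, but only if the tunable
# isn't already set in loader.conf
grep -q '^net.graph.maxdata' /boot/loader.conf || \
    echo 'net.graph.maxdata=4096' >> /boot/loader.conf

# After rebooting, check the value actually in effect
sysctl net.graph.maxdata
```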