[solved] Intermittent reboots & Kernel Panics (help!)

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

[solved] Intermittent reboots & Kernel Panics (help!)

#1

Post by texneus » 19 Oct 2018 06:13

I've been chasing a very intermittent problem since upgrading a stable 9.2 system to 11.1 (and beyond) about a year ago. The real problem is that it often takes a week or more to act up, so I've been trying things here and there and crossing my fingers. I've finally decided to just ask for help after finding today that it rebooted yesterday at 8:58am.

The bottom line is that it acts like there is a memory leak somewhere. Over time the ARC fills normally and RAM usage holds at a nominal value. Then, in as little as two days or as long as three weeks, RAM usage suddenly jumps to near 100% and spills into swap. Once this happens it's a matter of hours (or perhaps a day or so) before it crashes. Once or twice I got a kernel panic (a page fault), but more often than not it just spontaneously reboots. Once restarted, it repeats again, and again, and again, and... From what I can tell the crash corresponds with disk access, most often either snapshots executed via cron (as was the case yesterday) or the start of an rsync backup from other machines.

Since I run embedded I finally figured out how to save logs, and I just see a hole in the log from when it restarted yesterday at about 8:58am (cron runs a snapshot of all drives starting at 8:55):

Code:

...
Oct 16 21:26:00 marvin su: zaphod to root on /dev/pts/0
Oct 16 21:26:00 marvin su: zaphod to root on /dev/pts/0
Oct 16 22:56:05 marvin su: zaphod to root on /dev/pts/0
Oct 16 22:56:05 marvin su: zaphod to root on /dev/pts/0
Oct 17 08:58:20 marvin syslogd: kernel boot file is /boot/kernel/kernel
Oct 17 08:58:20 marvin kernel: Copyright (c) 1992-2018 The FreeBSD Project.
Oct 17 08:58:20 marvin kernel: Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
Oct 17 08:58:20 marvin kernel: The Regents of the University of California. All rights reserved.
Oct 17 08:58:20 marvin kernel: FreeBSD is a registered trademark of The FreeBSD Foundation.
...

Before I knew how to preserve logs I took a picture of a kernel panic, this one from 7/8/2018.

[Image: photo of the kernel panic screen]


Just Googling around today, my symptoms seem to very closely mimic this: https://forums.freenas.org/index.php?th ... eak.60109/. If you follow the trail through the bug reports this seems to be reproducible, but I don't see anywhere a description of exactly how. On a whim this evening I tried a simple 'cp' of a large directory from one volume to another in an SSH shell to "stress" things. It just so happened that almost immediately the RAM usage shot to 98% and ~340MB went into swap, so I caught it in the act tonight. This is pretty typical of RAM/swap usage, and once used it never goes back down. If past results are anything to go by, it will probably reboot again by this time tomorrow. Here is a screen capture of top sorted by size while the copy is in progress (nothing looks out of line to me, other than 25G of "wired" RAM while the UI reports 31GB used, and the ARC being reduced from 22GB):

Code:

last pid: 75435;  load averages:  0.45,  0.44,  0.40                                                                                                                        up 1+13:40:34  22:37:56
58 processes:  1 running, 57 sleeping
CPU:  0.2% user,  0.0% nice,  8.9% system,  0.0% interrupt, 90.9% idle
Mem: 307M Active, 345M Inact, 4653M Laundry, 25G Wired, 12M Buf, 289M Free
ARC: 20G Total, 1440M MFU, 13G MRU, 4423M Anon, 79M Header, 473M Other
     15G Compressed, 16G Uncompressed, 1.05:1 Ratio
Swap: 48G Total, 338M Used, 48G Free

  PID USERNAME      THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 4236 root           20  20    0  5146M   988M kqread  3 154:18  15.15% bhyve
 4177 www            44  52    0  4034M   854M uwait   0  14:33   0.06% java
 4113    989         11  20    0   369M   141M uwait   3   5:01   0.03% mono-sgen
71842 root            1  52    0   303M 14752K accept  3   0:00   0.00% php-cgi
74553 root            1  52    0   301M 11344K accept  2   0:00   0.11% php-cgi
 2922 root            1  52    0   300M   404K wait    2   0:00   0.00% php-cgi
 4252 root            2  21    0   167M 28352K select  2   1:32   2.96% smbd
 4253 root            1  20    0   167M 26928K select  3   2:40   0.00% smbd
75104 root            1  20    0   163M 25216K select  3   0:00   0.00% smbd
 2470 root            1  20    0   163M 25208K select  0   1:36   0.00% smbd
 2550 root            2  20    0   120M 10104K select  0   0:00   0.00% smbd
 2557 root            1  20    0   119M 10940K select  2   0:01   0.00% smbd
 4171 root            1  52    0 46436K  1040K select  0   0:00   0.00% sudo
 2467 root            1  20    0 29200K  4840K select  0   0:06   0.00% nmbd
 3752 www             1  20    0 25432K     0K kqread  2   0:00   0.00% <nginx>
 3754 www             1  20    0 25432K     0K kqread  2   0:00   0.00% <nginx>
 3751 root            1  52    0 25432K     0K pause   0   0:00   0.00% <nginx>
 4124 root            1  20    0 12592K   660K nanslp  3   0:01   0.00% cron
 3938 root            1  20    0 12592K   648K nanslp  0   0:01   0.00% cron
 3760 root            1  20    0 12592K   648K nanslp  2   0:01   0.00% cron
 2921 root            1  20    0 10936K  4300K kqread  0   0:10   0.01% lighttpd
33295 zaphod          1  20    0 10632K  4140K select  1   0:00   0.01% sshd
33291 root            1  20    0 10632K  4020K select  3   0:00   0.00% sshd
50793 root            1  20    0 10632K  4000K select  3   0:00   0.00% sshd
50946 zaphod          1  20    0 10632K  3976K select  0   0:00   0.00% sshd
 3879 root            1  20    0 10500K  1276K select  2   0:01   0.00% syslogd
 4055 root            1  20    0 10500K  1276K select  2   0:01   0.00% syslogd
 3700 root            1  20    0 10492K  1272K select  1   0:01   0.00% syslogd
 4112    989          1  20    0 10468K  1004K piperd  3   0:00   0.00% daemon
 2672 root            1  20    0 10332K  2648K select  0   0:00   0.00% sshd
 2231 root            1  20    0  9544K  2576K select  0   0:09   0.01% upsd
 2280 root            1  52    0  9536K  2620K piperd  0   0:00   0.00% upsmon
 2282 root            1  20    0  9536K   620K nanslp  1   0:06   0.00% upsmon
 2245 root            1  20    0  9520K  1032K nanslp  3   0:01   0.00% upslog
 1873 root            1  20    0  9180K   976K select  2   0:00   0.00% devd
25372 root            1  20    0  8488K     0K nanslp  0   0:01   0.00% <smartd>
75160 root            1  20    0  7916K  2800K CPU2    2   0:00   0.10% top
 2808 root            1  22    0  7792K  1440K select  3   0:00   0.00% rsync
33299 root            1  20    0  7500K  3488K pause   0   0:00   0.00% csh
51726 root            1  20    0  7500K     0K pause   2   0:00   0.00% <csh>
 4209 root            1  52    0  7500K     0K pause   0   0:00   0.00% <csh>
33296 zaphod          1  22    0  7412K     0K pause   0   0:00   0.00% <csh>
50956 zaphod          1  20    0  7412K     0K pause   1   0:00   0.00% <csh>
 4226 root            1  52    0  7144K  1408K ttyin   0   0:00   0.00% sh

System details:
Asrock C236 WSI
Intel Xeon E3-1225
32GB ECC RAM
Dell H310 flashed to LSI IT mode
10 HDDs and 2 SSDs, all ZFS pools (no L2ARC, etc).

11.2.0.4.5975 at present (embedded on a USB stick - problem has been persistent since 11.1.0.4.4578 which is where I jumped into the 11.x releases)
vfs.zfs.arc_max is set to 22GB at present. Lower values do seem to delay the problem, but don't eliminate it.
As you can probably tell from top, I do have one bhyve VM plus a few jails (via TheBrig).
48GB swap in a ZVOL on the SSD (moving swap to a large, fast volume was one of the things I tried).


Update: After the file copy finished I now see this at the top of the top report. It seems I've given back almost all my ARC now...

Code:

Mem: 311M Active, 439M Inact, 4643M Laundry, 25G Wired, 12M Buf, 958M Free
ARC: 8509M Total, 1078M MFU, 6904M MRU, 5600K Anon, 53M Header, 468M Other
     6920M Compressed, 7260M Uncompressed, 1.05:1 Ratio
Swap: 48G Total, 385M Used, 48G Free
Last edited by texneus on 14 Feb 2019 06:28, edited 1 time in total.

netprince
NewUser
Posts: 5
Joined: 14 Aug 2012 15:39

Re: Intermittent reboots & Kernel Panics (help!)

#2

Post by netprince » 27 Nov 2018 16:07

Just FYI, I am seeing a similar issue here, on similar hardware.

ms49434
Developer
Posts: 563
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1

Re: Intermittent reboots & Kernel Panics (help!)

#3

Post by ms49434 » 27 Nov 2018 16:49

You can disable ARC compression and see if it makes a difference; ARC compression caused panics on my NAS box in the beginning. It can be disabled in sysctl.conf by adding vfs.zfs.compressed_arc_enabled with a value of 0.

Your top report shows only 958M free; my suggestion is to limit ARC max to 4GB-6GB and start monitoring memory consumption over a month. ARC max is set by adding vfs.zfs.arc_max to loader.conf (the earliest possible stage); a reboot is required.
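
A sketch of those two settings (the 4GB value is just the low end of the suggested range):

Code:

# /etc/sysctl.conf - disable ARC compression
vfs.zfs.compressed_arc_enabled=0

# /boot/loader.conf - cap the ARC at 4GB (value given in bytes)
vfs.zfs.arc_max="4294967296"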

I assume you don't have L2ARC devices installed.
1) XigmaNAS 11.2.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, SLOG: Samsung 850 Pro, L2ARC: Samsung 850 Pro, Pool 2: 1x Samsung 860 EVO 1TB , services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 11.2.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 1x HGST 10TB , services: rsync.

netprince
NewUser
Posts: 5
Joined: 14 Aug 2012 15:39

Re: Intermittent reboots & Kernel Panics (help!)

#4

Post by netprince » 27 Nov 2018 19:00

Thanks ms49434, I'm going to try that.

JoseMR
Hardware & Software Guru
Posts: 1005
Joined: 16 Apr 2014 04:15
Location: PR

Re: Intermittent reboots & Kernel Panics (help!)

#5

Post by JoseMR » 28 Nov 2018 20:12

netprince wrote:
27 Nov 2018 19:00
Thanks ms49434, I'm going to try that.

Code:

panic: page fault
Plus random I/O errors. The first step is to run memtest86+; if it passes, then continue the rest of the troubleshooting as usual.

Note that RAM may not be the only cause; on legacy and/or old hardware, a bad memory/MCH/bridge controller can cause unexpected behavior that is harder to debug/troubleshoot, but you get the idea.

Regards
System: FreeBSD 12, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: Intermittent reboots & Kernel Panics (help!)

#6

Post by texneus » 29 Nov 2018 04:38

OP here. Sorry for the long absence; let me bring everyone up to date. To make a long story short, I moved the bhyve VM to a physical machine about 3 weeks back, hoping to bring the configuration back to something "more standard". No luck. As of last weekend, completely out of ideas, I am now starting over, reinstalling XigmaNAS from scratch (11.2.0.4.6195) and completely reconfiguring everything from a fresh install. The original settings.xml spanned three XigmaNAS versions (9.3, 11.1, now 11.2) and two motherboards, so in the back of my mind this was always suspect. While I was at it, I ditched the USB key and embedded install; I now have a full UEFI install on a small SSD in a 64GB UFS partition plus a 48GB swap. All existing data drives were pulled, so install and initial setup were done on as simple a system as possible. To date, I have only done the following to restore prior operation (and only the following... if it's not here, it's probably still at defaults):

Basic setup (passwords, time, time zone, added NTP source, etc)
Advanced setup (Power Daemon to High Performance, Email (reports still disabled), enabled swap)
In loader.conf, I have added the following tunables (a sketch of the resulting file follows this list):
- vfs.zfs.arc_max=16GB (why? Because I read somewhere that FreeNAS defaults to 50% of RAM for ARC, but I can't find it now)
- vfs.zfs.arc_min=8GB (why? Why not; I didn't have a minimum before)
- vm.kmem_size = vm.kmem_size_max=30GB (why? I found an old ZFS tuning guide that suggested this for 32GB RAM)
SSH service is enabled and used for shell logins only.
UPS service is enabled and configured.
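
As promised above, a sketch of how those loader.conf additions look (values written out in bytes; the exact formatting in my file may differ):

Code:

# /boot/loader.conf
# 16GB ARC maximum
vfs.zfs.arc_max="17179869184"
# 8GB ARC minimum
vfs.zfs.arc_min="8589934592"
# 30GB kernel memory
vm.kmem_size="32212254720"
vm.kmem_size_max="32212254720"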

After this I started putting the drives back in the machine, one by one, but rather than importing the old pools I'm re-creating each vdev/pool/dataset and copying data back to the NAS from a backup using straight cp and/or rsync (with --checksum to verify the copy). To date I have 9 of the original 12 drives re-installed.
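
For illustration, the restore commands look roughly like this (the paths are placeholders, not my actual layout):

Code:

# checksum-verified restore from the backup pool (example paths only)
rsync -a --checksum /mnt/backup/media/ /mnt/tank/media/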

Last night I added scripts for automated snapshots to cron; a hypothetical example of the kind of entry used is shown below.
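
Code:

# /etc/crontab - recursive snapshot of a pool at 08:55 every day (dataset name is a placeholder)
55  8  *  *  *  root  /sbin/zfs snapshot -r tank@auto-`date +\%Y\%m\%d`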

That's all! No jails, SMB, or anything else...yet.

@ms49434, I will try disabling compression if I continue to experience instability.

@JoseMR, the RAM was Memtested for a week when new. I retested last spring for 24 hours and updated the BIOS at the same time. Memtest has never reported any errors at all on this system. If you think it's worthwhile to run some more, I can do that.

So far I'm pleased to report success (knock on wood), although one thing does bother me. Could somebody help me understand how FreeBSD reports memory consumption? Or, more to the point, is this OK? As I understand things, "Wired" RAM is the ARC plus the kernel. When I first start XigmaNAS, Wired is about 2-3GB higher than ARC, which seems reasonable. But each instance of rsync or cp causes "Wired - ARC" to increase by about 2GB. It did this before, which I interpreted as a memory leak, but the difference now is that Wired stabilizes at 26GB (ARC + 10GB), ARC stays put, and nothing swaps, whereas before Wired grew, ARC shrank, and memory swapped. While "Inact" and "Free" trade between each other, the sum of both remains about 4.7GB, which seems "healthy", but 10GB for the kernel?? Are there other caches besides ARC hiding here? For example, after 4 days uptime and several TB of data copied:

Code:

last pid: 42094;  load averages:  0.11,  0.23,  0.28        up 4+02:47:31  21:01:11
30 processes:  1 running, 29 sleeping
CPU:     % user,     % nice,     % system,     % interrupt,     % idle
Mem: 11M Active, 136M Inact, 26G Wired, 148M Buf, 4608M Free
ARC: 16G Total, 350M MFU, 15G MRU, 87M Anon, 58M Header, 699M Other
     14G Compressed, 16G Uncompressed, 1.13:1 Ratio
Swap: 48G Total, 48G Free

I assume specifying vm.kmem is why memory usage seems to have stabilized.
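
For reference, the non-ARC portion of Wired can be inspected with standard FreeBSD tools, e.g.:

Code:

# break down kernel (wired) memory usage
sysctl kstat.zfs.misc.arcstats.size   # current ARC size, in bytes
vmstat -m                             # kernel malloc() consumers
vmstat -z                             # UMA zone usage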

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: Intermittent reboots & Kernel Panics (help!)

#7

Post by texneus » 30 Nov 2018 05:36

An update: it crashed after 5 days, when I tried to import an old pool. No panic; it just went non-responsive for a few seconds, then rebooted. Running Memtest now, no findings so far. What I'll do next is up in the air, but I'm pretty much at my wits' end...

JoseMR
Hardware & Software Guru
Posts: 1005
Joined: 16 Apr 2014 04:15
Location: PR

Re: Intermittent reboots & Kernel Panics (help!)

#8

Post by JoseMR » 30 Nov 2018 08:57

texneus wrote:
30 Nov 2018 05:36
An update: it crashed after 5 days, when I tried to import an old pool. No panic; it just went non-responsive for a few seconds, then rebooted. Running Memtest now, no findings so far. What I'll do next is up in the air, but I'm pretty much at my wits' end...

Most of the time Memtest does very little on ECC RAM unless it is severely damaged or a bad memory controller is present.

I also noticed you have "48GB swap in a ZVOL on the SSD". That is way too much swap, but the real problem is that it is placed on ZFS; everything under ZFS uses RAM, so there is very little to no sense in running swap from a zvol or a swapfile placed under a dataset.

While this may or may not help, my recommendation is to use a native swap partition of, say, 2~4GB in size and retry the latest release. Additionally, if you are going to try the Full platform, please use the RootOnZFS platform instead, which can be firmware-upgraded from the WebGUI and offers more features.
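
For example, a native swap partition can be added roughly like this (a sketch only; it assumes a GPT-partitioned disk with free space, and the device name will differ on your system):

Code:

# create and enable a 4GB native swap partition (example device: ada0)
gpart add -t freebsd-swap -s 4G -l swap0 ada0
swapon /dev/gpt/swap0

# /etc/fstab entry to enable it at boot
/dev/gpt/swap0  none  swap  sw  0  0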

Regards
System: FreeBSD 12, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ

doktornotor
Advanced User
Posts: 173
Joined: 16 May 2017 00:22

Re: Intermittent reboots & Kernel Panics (help!)

#9

Post by doktornotor » 30 Nov 2018 11:20

JoseMR wrote:
30 Nov 2018 08:57
the real problem is that it is placed on ZFS; everything under ZFS uses RAM, so there is very little to no sense in running swap from a zvol or a swapfile placed under a dataset.
This has been a subject of flame fests since 2011. I don't think the issue is relevant any more with current FreeBSD versions (see e.g. PR 199189). FWIW, Oracle/Solaris has been doing swap on a zvol for ages in the default install.

JoseMR
Hardware & Software Guru
Posts: 1005
Joined: 16 Apr 2014 04:15
Location: PR

Re: Intermittent reboots & Kernel Panics (help!)

#10

Post by JoseMR » 30 Nov 2018 12:40

doktornotor wrote:
30 Nov 2018 11:20
JoseMR wrote:
30 Nov 2018 08:57
the real problem is that it is placed on ZFS; everything under ZFS uses RAM, so there is very little to no sense in running swap from a zvol or a swapfile placed under a dataset.
This has been a subject of flame fests since 2011. I don't think the issue is relevant any more with current FreeBSD versions (see e.g. PR 199189). FWIW, Oracle/Solaris has been doing swap on a zvol for ages in the default install.
Thanks for pointing out those informative links, doktornotor. I think it will be worth running some simulations on spare hardware and seeing how it goes, indeed.

Regards
System: FreeBSD 12, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Addons at GitHub
JoseMRPubServ

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: Intermittent reboots & Kernel Panics (help!)

#11

Post by texneus » 30 Nov 2018 21:33

Just thought I should point out that the large swap was an attempted fix. The native swap partition was 1GB and it faulted using that, but since it was on a thumb drive I moved swap to a big ZVOL, which made no noticeable difference. On the fresh install the ZVOL was deleted and only the native swap partition was in use. Although still 48GB, it was never used at any point.

At the moment I'm finishing up the 8th pass in Memtest, no errors. If ECC does correct an error, I thought Memtest would report and if in FreeBSD it would log it? Unless somebody has another suggestion I'm going to swap the MB from my daily driver PC. Completely different setup with an I7, not Xeon and no ECC...but at least (hopefully) I can isolate this further. Probably going to be a couple of days before I can do this though...

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: Intermittent reboots & Kernel Panics (help!)

#12

Post by texneus » 30 Nov 2018 22:37

Question: it is apparently known that the Dell PERC H310 can interfere with the SMBus, causing PCs not to boot, hence the "tape mod" such as that described in step 4 here. Since my motherboard (Asrock Rack C236 WSI) boots, I have never done this. Does anyone happen to know if the PERC H310 in IT mode can or does cause SMBus issues even when the boot is successful? Google seems to be stumped...

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: Intermittent reboots & Kernel Panics (help!)

#13

Post by texneus » 20 Dec 2018 06:35

Just another update. I continue to have reboots every 2-4 days. The answer to the question above would appear to be 'no', as taping the H310 connector pins had no effect on stability. I did not swap motherboards as I said I would, since at the time it would have been a few days before one was free. Instead I removed the H310 and put in a Rocket 620 (Marvell-based) 2-port SATA card I had lying around, since I could do that right away. I moved 8 drives to the motherboard controller, giving me 10 drives total. I do lose two non-critical drives, something I can live with temporarily. Lo and behold, it seemed to be stable; it stayed up for a week, longer than it has been in a long time. Thinking my H310 was bad, I bought an IBM M1015, flashed it to 9211 IT mode, and installed it, and the random reboots resumed. Based on this it seems there is just some bizarre interaction between LSI SAS cards and this motherboard (Asrock C236 WSI). For now I'm back to the Rocket 620 controller, where things will need to stay until next year, when I will either replace the motherboard or find a 4-port Marvell SATA card (such as a Rocket 640). In the meantime, I'm open to other suggestions to try...

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: [solved] Intermittent reboots & Kernel Panics (help!)

#14

Post by texneus » 14 Feb 2019 06:50

Final update. It took over a year of trial and error, but I finally found the problem. The memory consumption seems to have been a red herring; I noticed it was much better starting around Nov/Dec, so possibly an update around that time solved it. Once I stopped focusing on fixing RAM consumption and looking at drive controllers, I did manage to get to the bottom of it all.

The problem actually turned out to be the Spindown utility bundled with XigmaNAS, which I used to spin down unused drives on the LSI controller. The Marvell controller worked simply because it is AHCI and drive spindown is set in the UI. In all fairness, I don't really know if it's the software itself, the newest FreeBSD kernel not liking something it does, an interaction with the newest LSI firmware/driver, or maybe the fact that I'm using SATA drives. In any case, using camcontrol timers in lieu of Spindown still gives me spindown of unused drives, and the system has been stable for over a month now.
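
For anyone who wants to try the same thing, the camcontrol commands look roughly like this (device names and the 30-minute timeout are just examples, and the timers typically need to be re-applied from a post-init script after a power cycle):

Code:

# ask each SATA drive behind the LSI HBA to enter standby after 30 minutes idle
camcontrol standby da0 -t 1800
camcontrol standby da1 -t 1800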

lux
Advanced User
Posts: 199
Joined: 23 Jun 2012 11:37
Location: Bielefeld, Germany

Re: [solved] Intermittent reboots & Kernel Panics (help!)

#15

Post by lux » 14 Feb 2019 15:56

That's curious - my disks attached to the M1015 work with Spindown without problems... :!:

All other disks attached to the mainboard work only with ataidle - however, without any glitch.
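
(For comparison, ataidle from the sysutils/ataidle port is typically used like this - the device and the 30-minute value are only examples:)

Code:

# set a 30-minute standby timeout on an AHCI-attached disk
ataidle -S 30 /dev/ada0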

Greetings
Home:11.2.x.6625/emb@32GB USB|1270v2@X9SCA-F|ECC32GB|i340-T4[lagg@GS108Tv2&smb-mch]|M1015@IT|9HDD~40TB@3xRaidZ1+1HDD+2SSD i335&i520+1xi800P@ZIL|~44W idle@SS-400FL2|Nanoxia Deep Silence 6B|24/7
Services: CIFS, FTP, TFTP, SSH, NFS, Rsync, Syncthing, Webserver, BitTorrent, VirtualBox | Extensions: OBI, TheBrig[Emby, certbot, Asterisk] | Extensions self installed: Streamripper, Pi-hole@Debian9 VM
Test:12.x/emb@8GB IDE Flash|X3 420e@M4A88TD-V|12GB|i350-T2|M1015@IT|7xHDD+1xSSD[different Size&Brand]RaidZ1+2|for TESTing only

texneus
Starter
Posts: 23
Joined: 12 Oct 2017 05:02

Re: [solved] Intermittent reboots & Kernel Panics (help!)

#16

Post by texneus » 17 Feb 2019 00:32

All I can say is "File it under things that make you go Hmmmm!" Like I said, I never figured out why it was causing trouble... as noted, it might have been something else that didn't like what it was doing. I won't preclude that I might have been using it incorrectly as well; the documentation wasn't abundantly clear to me. All I can say for sure is that once I stopped using it, the problem was gone. When I turned it back on, the crashes came back. If it works for you, then I wouldn't change it.
