Channel: Ignite Realtime: Message List

Re: Possible resource leak?


Hello Vinh

 

Unfortunately I have not solved this as of yet. I haven't had much time the past few weeks to troubleshoot this much further, and coupled with the extremely long continuous runtime before the issue occurs, it's making for a difficult one to track down.

 

Assuming there is no issue with the TRUNK, the problem would have to reside with something either in Install4j (I'm using the latest version from the ej-technologies website) and/or the bundled JRE version Install4j uses (from the automatic download within the install4j GUI wizard). If it's something Install4j is causing, it could be something with the libraries and runtimes it uses to launch Spark.

 

My custom build was taken from TRUNK, branded with the company logo and such, and locked down a bit so users don't get too curious. It was then compiled using ANT 1.9.x with JDK 1.7.0_25 x86 from Oracle on a 64-bit Windows 7 machine, and packaged with the 64-bit version of Install4j 5.1.6 (I had to swap out Install4j's bundled JRE for a 32-bit Java 7 JRE from Oracle's website, since the included bundle is actually still JRE 6 and I got some packaging errors initially). This produced an EXE which I pushed to everyone in the office after doing a full removal of all previous versions of Spark and deleting local user profiles so that everything was fresh. My userbase spans Windows 2000, XP, and Windows 7 workstations with varied installed RAM; the majority are 2 GB+.

 

From using JProfiler, I can see huge volumes of object instantiations; then GC does its thing and wipes them out (causing heap memory usage to drop as well), but almost immediately it climbs back up. At its peak, there were over 1 million objects in memory, compared to well below 100K when Spark first launches. This happens for me after Spark has been running continuously for a long while (several days; it seems to be about every 3+ days that the problem occurs and machines start to go sluggish until I kill Spark via Task Manager and relaunch it).
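For anyone who wants to watch the same sawtooth pattern without keeping a profiler attached for days, a minimal sketch that polls the JVM's own heap counters via the standard MemoryMXBean; the class name is hypothetical and not part of the Spark codebase. A "used" figure that keeps trending upward across full GCs is the leak signature:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Hypothetical helper, not part of Spark: logs heap usage periodically so a
// steadily rising post-GC baseline can be spotted over days of uptime.
public class HeapWatch {

    // Current heap usage in bytes, as reported by the memory MXBean.
    static long sample() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.out.printf("heap used: %,d KB%n", sample() / 1024);
            Thread.sleep(1000); // sample once per second for the demo
        }
    }
}
```

In a real deployment you would sample far less often (say once a minute) and append to a file, so the trend survives a crash or a Task Manager kill.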

 

So something is indeed leaking, somewhere. The question is what's causing it: whether it's related to the TRUNK codebase or to something else (like the packaging or compile tools, etc.). Possibly it's an issue existing in the TRUNK but only aggravated and brought to light by something else (such as the company branding, locking Spark settings down, the bundled JRE, etc.). It's really tough to say...
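One classic category of Java leak that fits the "object count climbs forever until restart" symptom is a long-lived registry holding strong references to short-lived objects, for example listeners that get added but never removed. A hypothetical sketch of the pattern (none of these names come from the Spark codebase):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a classic listener leak: a long-lived static list
// keeps strong references to objects registered by short-lived UI elements,
// so GC can never reclaim them and the live object count grows with uptime.
public class LeakSketch {
    static final List<Runnable> listeners = new ArrayList<Runnable>();

    static void openChatWindow() {
        // Each "window" registers a listener but never removes it on close.
        listeners.add(new Runnable() {
            public void run() { /* repaint, roster update, etc. */ }
        });
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            openChatWindow(); // simulate days of chat activity
        }
        System.out.println("leaked listeners: " + listeners.size());
        // The fix is a matching remove/clear when the window closes:
        listeners.clear();
    }
}
```

JProfiler's allocation hot spots view (or a `jmap -histo` comparison between launch and day three) should show which class is accumulating, which narrows down whether the retaining code lives in TRUNK or in the customizations.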

 

In the meantime, I'm going to roll most of my userbase back to release 2.6.3, which I know works perfectly in our environment. I and a few select workstations will continue attempting to debug this (users who I know won't complain too much lol).

