Hi,

I am running a system which consists of eight concurrent Java processes. All of these processes run on the same box and are backend daemons, meaning they are designed to be started once and then run continuously, with no user interface. The processes are therefore controlled (paused, shut down, etc.) via parameters in a database they interact with (also on the same box). The system has more than enough CPU power to run these processes (and then some); the problem is the amount of memory they consume. The spec of the box/environment is below:

Sun Sparc v9 1.3GHz CPU (x2) with 4 GB of RAM (1.2 GB available) - Running Solaris 9 (SunOS 5.9)
Oracle 10g (10.1.0.4.0) (this is why only 1.2 GB of RAM is available)
Sun Solaris mixed-mode JRE 1.5.0_03
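
To clarify what I mean above by "controlled by parameters in a database": each daemon simply polls a control table and reacts to the flag it finds there. A minimal sketch of that loop is below; the table and column names are made up purely for illustration, not my real schema.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative only: a daemon's control loop polling a (made-up) control
// table for a RUN / PAUSE / SHUTDOWN flag set by the operators.
public class ControlLoopSketch {

    public void run(Connection conn) throws Exception {
        PreparedStatement ps =
            conn.prepareStatement("SELECT state FROM daemon_control WHERE daemon_name = ?");
        ps.setString(1, "my-daemon");

        boolean running = true;
        while (running) {
            ResultSet rs = ps.executeQuery();
            String state = rs.next() ? rs.getString(1) : "RUN";
            rs.close();

            if ("SHUTDOWN".equals(state)) {
                running = false;        // fall out of the loop and exit
            } else if ("PAUSE".equals(state)) {
                Thread.sleep(5000);     // idle until the flag changes
            } else {
                doOneUnitOfWork();      // normal processing
            }
        }
        ps.close();
    }

    private void doOneUnitOfWork() {
        // real work would go here
    }
}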

I have configured the eight concurrent Java processes through JVM options to run within the 1.2 GB of available RAM. The system is right on the limit though, with only around 100-200 MB of free RAM, and a portion of each JVM has to be swapped out of RAM to disk. This situation is fine for the time being, as the swapped-out portions are mostly idle code/data. Each JVM consumes around 160-180 MB of RAM.
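
If it helps, a few lines using the standard Runtime API will show how much of that per-process footprint is actually Java heap at any moment; a minimal sketch (the output format is just illustrative):

// Minimal sketch: print the JVM's own view of its heap usage, to compare
// against the overall process size reported by the OS.
public class HeapSnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("heap used  = " + (used / 1024) + " KB");
        System.out.println("heap total = " + (rt.totalMemory() / 1024) + " KB");
        System.out.println("heap max   = " + (rt.maxMemory() / 1024) + " KB");
    }
}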

The options I am passing to each JVM are below:

-server -d64

-Djava.awt.headless=true

-Xms16384k
-Xmx16384k

-XX:NewSize=12288k
-XX:MaxNewSize=12288k
-XX:NewRatio=128
-XX:TargetSurvivorRatio=90
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=85

-XX:+UseParallelGC
-XX:ParallelGCThreads=2
-XX:+UseAdaptiveSizePolicy
-XX:MaxTenuringThreshold=32

-XX:PermSize=4096k
-XX:MaxPermSize=4096k

-XX:ThreadStackSize=512

-XX:CompileThreshold=16384

-Xconcurrentio
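
If it is useful to anyone tuning a similar list, the standard RuntimeMXBean will report exactly which flags a running JVM actually picked up; a minimal sketch:

import java.lang.management.ManagementFactory;

// Minimal sketch: list the options the running JVM was started with, handy
// for confirming that a long flag list like the one above really took effect.
public class ShowJvmArgs {
    public static void main(String[] args) {
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}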

The real problem is that an additional three daemon Java processes now need to be run on this box, which will require around another 600 MB of RAM. The system only has 100-200 MB free though. I have requested more memory for the box, but who knows how long that will take!

Oracle's SGA (System Global Area, i.e. its memory footprint) has already been reduced to a minimum. Moving the Java processes to another box is feasible but has been deemed not viable, as there are no other production systems with the spare resources (CPU-wise) to take the load.

So I am hoping that somebody out there knows of any tips or tricks to get these JVM processes running in less RAM.

I don't want to reduce the heap sizes any more than I already have. Garbage collections are frequent enough already with such a small heap; the only reason this situation is acceptable is the CPU performance of the system.

Any suggestions?

I hope Sun releases the MVM (Multi-Tasking Virtual Machine) for JRE 1.5 soon, as I know I am not the only one battling with this situation.

No replies ... that is what I was afraid of :)

Well, I have an idea: I am working on a 'slave' process which will start and stop the other processes by spawning each of them in a new Thread. So in essence each 'process' will run within its own Thread inside the slave process, instead of in its own JVM.

Although I have noticed in other multi-threaded applications I have written that the last thread spawned seems to be given slightly more priority by the JVM than those spawned before it. I shall tinker some more ;)

If this works, though, I can run, say, four processes within one JVM, lowering the overall memory requirements by running fewer JVMs in total. A rough sketch of the idea is below.
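
Roughly what I have in mind; the class and interface names are just placeholders, and it assumes each existing daemon can be wrapped behind a simple run/stop interface:

import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the 'slave' launcher idea: each former standalone
// daemon implements a small Daemon interface and runs on its own Thread
// inside a single JVM. Names here are placeholders, not real classes.
public class SlaveLauncher {

    // Hypothetical interface each daemon would implement.
    public interface Daemon extends Runnable {
        void requestStop();   // ask the daemon to finish its loop and exit
    }

    private final List<Daemon> daemons = new ArrayList<Daemon>();
    private final List<Thread> threads = new ArrayList<Thread>();

    public void start(Daemon daemon, String name) {
        Thread t = new Thread(daemon, name);
        t.setDaemon(false);           // keep the JVM alive while daemons run
        daemons.add(daemon);
        threads.add(t);
        t.start();
    }

    public void stopAll() throws InterruptedException {
        for (Daemon d : daemons) {
            d.requestStop();          // cooperative shutdown, no Thread.stop()
        }
        for (Thread t : threads) {
            t.join();                 // wait for each daemon to finish cleanly
        }
    }
}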

That was my immediate idea when seeing your question (which I'd never seen before).

But have you tried setting the JVM memory parameters?
Things like:

-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size

Maybe also enable incremental GC, which might help keep the required memory down.

-Xincgc enable incremental garbage collection

Thanks jwenting,

Yes, I have set the overall heap size using the '-Xms' & '-Xmx' arguments:

-Xms16384k
-Xmx16384k

I have also reduced the 'new' heap sizes:

-XX:NewSize=12288k
-XX:MaxNewSize=12288k

And the permanent generation ('perm') sizes:

-XX:PermSize=4096k
-XX:MaxPermSize=4096k

You can see the full listing of all the arguments I am passing to the JVM in my original post.

I did try using the incremental GC instead of the parallel GC, but it only reduced the memory footprint by around 5 MB, which wasn't enough to justify the slower performance compared to the parallel GC. I didn't mention that these processes need to be as near-real-time as possible, yet as small as possible. An added complication :)
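
For anyone wanting to make the same comparison, the standard GarbageCollectorMXBean API gives per-collector counts and cumulative times from inside the process; a minimal sketch (the polling interval and output format are just illustrative):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Periodically prints cumulative GC counts and times for each collector,
// which makes it easy to compare the overhead of, say, parallel vs
// incremental GC under the same workload.
public class GcStatsLogger implements Runnable {
    public void run() {
        try {
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.println(gc.getName()
                            + " collections=" + gc.getCollectionCount()
                            + " totalTimeMs=" + gc.getCollectionTime());
                }
                Thread.sleep(60000);   // once a minute; purely illustrative
            }
        } catch (InterruptedException e) {
            // stop logging when the thread is interrupted
        }
    }
}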

Kate
