orientdb: Out of Memory (Memory Leak)

Hi Luca, today I ran load testing against 2.0.8. After running for a while, the system became extremely slow and a Java heap dump file was generated (java_pidxxxx.hprof). In the server log it keeps printing the memory monitor log:

2015-05-20 23:03:11:145 INFO  [odb2]:2434 [orientdb] [3.3] processors=8, physical.memory.total=15.6G, physical.memory.free=189.6M, swap.space.total=2.0G, swap.space.free=2.0G, heap.memory.used=394.9M, heap.memory.free=60.6M, heap.memory.total=455.5M, heap.memory.max=455.5M, heap.memory.used/total=86.70%, heap.memory.used/max=86.70%, minor.gc.count=660, minor.gc.time=6280ms, major.gc.count=211, major.gc.time=118413ms, load.process=100.00%, load.system=100.00%, load.systemAverage=780.00%, thread.count=93, thread.peakCount=123, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=2, operations.running.size=0, proxy.count=11, clientEndpoint.count=0, connection.active.count=1, connection.count=1 [HealthMonitor]

I opened jconsole; memory usage was high, and when I clicked ‘Perform GC’ the memory usage was not reduced, so I suspect it may be a memory leak.

I used the heap dump file to run a memory analysis. The following is the Leak Suspects result:

  Problem Suspect 1

13,107 instances of "jdk.nashorn.internal.runtime.RecompilableScriptFunctionData", loaded by "sun.misc.Launcher$ExtClassLoader @ 0xe0048f88" occupy 237,164,384 (61.99%) bytes. These instances are referenced from one instance of "java.lang.Thread", loaded by "<system class loader>"

Keywords
java.lang.Thread
sun.misc.Launcher$ExtClassLoader @ 0xe0048f88
jdk.nashorn.internal.runtime.RecompilableScriptFunctionData

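For context on what this suspect means: each JavaScript function that Nashorn compiles is backed by a RecompilableScriptFunctionData instance, so if compiled functions are kept reachable (for example, cached and never evicted), their count grows with load. The snippet below is a minimal, hypothetical sketch of that pattern in plain Nashorn, not OrientDB's actual code path; the class name, loop count, and the "retained" list are illustrative.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import java.util.ArrayList;
import java.util.List;

public class NashornGrowthSketch {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        // Simulates a cache that never evicts: every compiled function stays reachable,
        // and with it the RecompilableScriptFunctionData created by the compilation.
        List<Object> retained = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            // Each eval of a distinct function literal compiles a new function.
            retained.add(engine.eval("(function f" + i + "() { return " + i + "; })"));
        }
        System.out.println("compiled functions retained: " + retained.size());
    }
}
```

If nothing ever releases the references, a full GC cannot reclaim those instances, which matches the suspect above where a single thread keeps them reachable.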

About this issue

  • Original URL
  • State: closed
  • Created 9 years ago
  • Comments: 21 (21 by maintainers)


Most upvoted comments

The behavior I observed is that when I increased the concurrency, the amount of memory that cannot be GCed (I keep hitting ‘Perform GC’ in jconsole) increases as well.

E.g., at first I sent a batch of requests with a concurrency of 2, tested for a while, then stopped and waited for the average load to drop to 0. Monitoring the memory with jconsole and hitting ‘Perform GC’, the memory that cannot be GCed is about 130 MB. Then I increased the concurrency to 5 and sent the same requests. After the test was done I hit ‘Perform GC’ again; now the memory that cannot be GCed is about 340 MB. If I set the concurrency to a large number, the server quickly runs out of memory and never gets a chance to release it.
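The same measurement can be done without clicking in jconsole. Below is a hedged sketch, assuming plain JMX beans: forcing a full GC via MemoryMXBean and then reading the used heap gives the "memory that cannot be GCed" figure described above. The class and method names are illustrative and not part of the original test harness.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class RetainedHeapProbe {
    // Returns the heap still in use after a full GC, i.e. memory that cannot be GCed.
    static long retainedHeapBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        mem.gc();                                   // same effect as jconsole's 'Perform GC' button
        return mem.getHeapMemoryUsage().getUsed();  // heap that survived the full GC
    }

    public static void main(String[] args) {
        // Call after each load phase (concurrency 2, then 5, ...) and compare the numbers.
        System.out.printf("retained heap: %.1f MB%n",
                retainedHeapBytes() / (1024.0 * 1024.0));
    }
}
```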