medusa: Server crashes due to OOM, possible memory leak.
The Medusa server instance crashes (SIGABRT) due to a memory allocation failure after a period of time (1-4 hours observed). This happens both when running with `medusa start` and with `medusa develop`.

Medusa version 1.3.1 with Postgres. Node v16.15.1, macOS.
<--- Last few GCs --->
[77923:0x7fb5ab900000] 3441351 ms: Mark-sweep 4045.9 (4136.8) -> 4036.3 (4141.0) MB, 4754.0 / 0.0 ms (average mu = 0.781, current mu = 0.394) task scavenge might not succeed
[77923:0x7fb5ab900000] 3450246 ms: Mark-sweep 4049.5 (4141.3) -> 4040.8 (4145.8) MB, 5770.7 / 0.1 ms (average mu = 0.653, current mu = 0.351) task scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x104e89a85 node::Abort() [/usr/local/bin/node]
2: 0x104e89c08 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
3: 0x105002a67 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
4: 0x105002a03 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
5: 0x1051a1445 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
6: 0x10519fcac v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
7: 0x10522e37d v8::internal::ScavengeJob::Task::RunInternal() [/usr/local/bin/node]
8: 0x104ef4bcb node::PerIsolatePlatformData::RunForegroundTask(std::__1::unique_ptr<v8::Task, std::__1::default_delete<v8::Task> >) [/usr/local/bin/node]
9: 0x104ef3617 node::PerIsolatePlatformData::FlushForegroundTasksInternal() [/usr/local/bin/node]
10: 0x10583d52b uv__async_io [/usr/local/bin/node]
11: 0x105850c9b uv__io_poll [/usr/local/bin/node]
12: 0x10583da21 uv_run [/usr/local/bin/node]
13: 0x104dc2eaf node::SpinEventLoop(node::Environment*) [/usr/local/bin/node]
14: 0x104ec9f41 node::NodeMainInstance::Run(int*, node::Environment*) [/usr/local/bin/node]
15: 0x104ec9b99 node::NodeMainInstance::Run(node::EnvSerializeInfo const*) [/usr/local/bin/node]
16: 0x104e5768b node::Start(int, char**) [/usr/local/bin/node]
17: 0x7fff2039df3d start [/usr/lib/system/libdyld.dylib]
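For anyone trying to confirm the leak before it crashes, a minimal heap logger can show whether usage climbs steadily while the server sits idle. This is just a sketch using Node's built-in `process.memoryUsage()`; the file name and interval are arbitrary:

```js
// heap-monitor.js - log heap usage every 30 seconds to watch for steady growth.
// Load it from any file that runs at boot (illustrative only, not part of Medusa).
const MB = 1024 * 1024;

setInterval(() => {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  console.log(
    `[heap] used=${(heapUsed / MB).toFixed(1)} MB, ` +
      `total=${(heapTotal / MB).toFixed(1)} MB, rss=${(rss / MB).toFixed(1)} MB`
  );
}, 30_000).unref(); // unref() so the timer alone does not keep the process alive
```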
About this issue
- State: closed
- Created 2 years ago
- Comments: 32 (7 by maintainers)
@olavurdj
I have exactly the same error as you. When I start Medusa, memory starts at around 380 MB and I can see it increase bit by bit, even if I do nothing, until it crashes.
This is my Medusa API core just before crashing: 3.29 GB. When I launch it, it's only 380 MB. I'm on localhost, doing nothing, not even launching the admin or storefront.
After a lot of testing, I found out that as soon as you have a lot of data (not sure how much) in your Medusa database (Customer, Product, Price, etc.), you absolutely MUST install and use Redis. It is not optional. I didn't, because I was on localhost and had just started playing with Medusa, and while I had little or no data I didn't get any errors.
Maybe I should raise a separate ticket, but my problem is repeatable; I tested it with two friends and they got the same memory leak error.
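For anyone following along: in Medusa v1, Redis is wired up through `medusa-config.js`. A minimal sketch, assuming a local Redis on the default port and leaving the rest of the starter config in place, looks roughly like this:

```js
// medusa-config.js (Medusa v1) - excerpt, other options omitted.
// REDIS_URL is assumed to point at a running Redis instance.
const REDIS_URL = process.env.REDIS_URL || "redis://localhost:6379";

module.exports = {
  projectConfig: {
    // With redis_url set, Medusa uses real Redis instead of the in-memory
    // fake module discussed further down in this thread.
    redis_url: REDIS_URL,
    database_url: process.env.DATABASE_URL,
    database_type: "postgres",
    // store_cors, admin_cors, etc. as in the default starter
  },
  plugins: [
    // ...
  ],
};
```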
@AlexDigital974 Thank you! I am not running redis, so this is very likely the same issue I’m experiencing. I’ll see if installing redis on my setup helps.
@olavurdj Great! It is likely the same issue as mine: if you do not have Redis and your DB is not empty, this is it.
If it resolves your issue, let us know. I think that as more people use Medusa, having a DB with some data and no Redis installed is a scenario a lot of people will run into.
Hi,
Context: I ran into this error while trying to deploy the backend along with the admin plugin. First I deployed with DigitalOcean App Platform; then, assuming DigitalOcean was the problem, I tried deploying to Railway, but I hit exactly the same error code as the OP.

Solution: The way I understand it, with `autoRebuild` set to true, the cloud providers were trying to launch and rebuild the admin at the same time, saturating memory. But this is only a humble guess. It seems to me that this piece of information should be in every MedusaJS "deploy" documentation section, as it would easily spare frustration, and even giving up, for users who, like me, are less experienced. Thank you for your project, I like Medusa JS.
Got it, would love it if you could keep me posted 😃
We will soon deprecate the fake Redis module and default to an event bus with no effect when events are emitted and subscribed to. It will just let the events pass through, such that you can deploy without Redis (even though that’s not recommended) or boot up preview environments as mentioned by @magnusdr.
You can find the WIP implementation in this PR.
Right now, we use `ioredis-mock` (link) to power the fake Redis, so you might be able to find a command in their documentation to restart or flush the instance. Hope this helps.
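For what it's worth, `ioredis-mock` implements the standard Redis commands, so flushing an instance of the mock looks like plain `ioredis` usage. A standalone sketch (it does not hook into Medusa's own internal instance; it only shows the API):

```js
// flush-fake-redis.js - standalone ioredis-mock sketch, not wired into Medusa.
const Redis = require("ioredis-mock");

async function main() {
  const redis = new Redis();
  await redis.set("example-key", "example-value");
  await redis.flushall(); // clears everything held by the in-memory mock
  console.log(await redis.get("example-key")); // -> null
}

main().catch(console.error);
```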