mediasoup: [Rust] Worker panics when running with multiple workers

Bug Report

mediasoup version: 0.15

I’m trying to run mediasoup with 2 workers per server (eventually more), one worker per core on a 2-core server. Since switching to 2 workers, worker panics have become fairly common:

thread 'mediasoup-worker-cd9eec89-8672-4293-b4eb-3a857fdd0341' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/mediasoup-0.15.0/src/router/data_consumer.rs:277:21:
Wrong message from worker: NotificationRef { handler_id: Ok("\u{c}\0\0\0\u{8}\0\u{e}\0\u{7}\0\u{8}\0\u{8}\0\0\0\0\0\0\u{4}\u{c}\0\0\0\0\0\u{6}\0\u{8}\0\u{4}\0\u{6}\0\0\0\u{4}\0\0\0q\0\0\0WRTC::WebRtcServer::OnStunDataReceived() | ignoring received STUN packet with unknown remote ICE usernameFragment\0\0\0"), event: Ok(DataconsumerBufferedAmountLow), body: Err(Error { source_location: ErrorLocation { type_: "Notification", method: "body", byte_offset: 36 }, error_kind: InvalidOffset }) }
thread 'mediasoup-worker-87b809a2-a4f9-4231-89a2-b4f5fe2cd40c' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/mediasoup-0.15.0/src/router/data_consumer.rs:277:21:
Wrong message from worker: NotificationRef { handler_id: Ok("\u{c}\0\0\0\u{8}\0\u{e}\0\u{7}\0\u{8}\0\u{8}\0\0\0\0\0\0\u{4}\u{c}\0\0\0\0\0\u{6}\0\u{8}\0\u{4}\0\u{6}\0\0\0\u{4}\0\0\0q\0\0\0WRTC::WebRtcServer::OnStunDataReceived() | ignoring received STUN packet with unknown remote ICE usernameFragment\0\0\0"), event: Ok(DataconsumerBufferedAmountLow), body: Err(Error { source_location: ErrorLocation { type_: "Notification", method: "body", byte_offset: 36 }, error_kind: InvalidOffset }) }
thread 'mediasoup-worker-bb92335a-9b5b-4b48-b4bc-127543ac7041' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/mediasoup-0.15.0/src/router/webrtc_transport.rs:479:21:
Wrong message from worker: NotificationRef { handler_id: Err(Error { source_location: ErrorLocation { type_: "Notification", method: "handler_id", byte_offset: 36 }, error_kind: InvalidUtf8 { source: Utf8Error { valid_up_to: 0, error_len: Some(1) } } }), event: Ok(TransportSctpStateChange), body: Err(Error { source_location: ErrorLocation { type_: "Notification", method: "body", byte_offset: 36 }, error_kind: InvalidOffset }) }
thread 'mediasoup-worker-6e6b7487-e501-42bf-9681-7450e79a0dfb' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/mediasoup-0.15.0/src/router/webrtc_transport.rs:465:21:
Wrong message from worker: NotificationRef { handler_id: Err(Error { source_location: ErrorLocation { type_: "Notification", method: "handler_id", byte_offset: 36 }, error_kind: InvalidOffset }), event: Ok(WebrtctransportDtlsStateChange) }

and

thread 'mediasoup-worker-28a9b972-22da-4324-8e10-4284fbab7089' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/mediasoup-0.15.0/src/worker/channel.rs:256:91:
called `Result::unwrap()` on an `Err` value: Error(Char { character: 'h', index: 29 })

The last one is weird because it seems unrelated to the others.

I’ve never seen any of these panics before switching to multiple workers, so I’m assuming it’s related to that. I also only see this in production, so it may be difficult to reproduce.
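One detail worth noting: the garbled handler_id fields in the first two panics contain a verbatim worker log line (the OnStunDataReceived message), which suggests bytes from separate writes are being interleaved on the worker channel. Here is a minimal self-contained sketch of that failure mode — this is not mediasoup’s actual framing code, just an assumed 4-byte length-prefixed frame format to illustrate how an unsynchronized write can splice a log line into the middle of a notification:

```rust
// Assumed framing (for illustration only): 4-byte little-endian length
// prefix followed by the payload.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = (payload.len() as u32).to_le_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

fn read_frame(buf: &[u8]) -> &[u8] {
    let len = u32::from_le_bytes(buf[..4].try_into().unwrap()) as usize;
    &buf[4..4 + len]
}

fn main() {
    let notification = frame(b"{\"event\":\"dtlsstatechange\"}");
    let log_line: &[u8] =
        b"RTC::WebRtcServer::OnStunDataReceived() | ignoring STUN packet";

    // Simulated interleaving: without a lock held across the whole frame,
    // another thread's log line can land right after the length prefix.
    let mut corrupted = notification[..4].to_vec();
    corrupted.extend_from_slice(log_line);
    corrupted.extend_from_slice(&notification[4..]);

    // The declared length (27) now points into the log text, so the reader
    // parses log bytes where it expected a serialized notification — the
    // same shape of garbage seen in the handler_id fields above.
    let payload = read_frame(&corrupted);
    assert_eq!(payload, &log_line[..27]);
    println!("{}", String::from_utf8_lossy(payload));
}
```

With two workers there are simply more concurrent writers, which would make such an interleaving far more likely than in a single-worker setup.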

About this issue

  • State: closed
  • Created 5 months ago
  • Comments: 19 (15 by maintainers)

Most upvoted comments

@jmillan and I have discussed this at length and we have identified the problem. I’m describing it in detail in a new issue: https://github.com/versatica/mediasoup/issues/1352

I’m closing this issue, so please let’s move the discussion to the new one.

CC @satoren @PaulOlteanu @nazar-pc @GEverding

Yeah, I’ve been using mediasoup 0.15 since January 8 and never saw this issue in a single-worker setup. As soon as I tried multiple workers, I saw many errors of this kind within 24 hours.

I did experiment with multiple workers for a week a few months ago and never saw this issue then either (I think I was using mediasoup 0.12 at the time, which predates the FlatBuffers change).