tungstenite-rs: Sharing socket between threads / non-blocking read

Hello, I’m building a service which demands minimal latency.

I need to be able to send a message as fast as possible (latency wise), but still engage in potential ping/pong interactions.

My problem is that read_message seems to block until a message arrives, which won’t do.

I was thinking I could access the underlying stream, split it, then have two threads, one which blocks while waiting for new messages and then handles them, and the other which writes whenever it needs to according to its own independent logic.

Is this possible? I’ve heard about using mio to make some of the components async, I saw a set_nonblocking method mentioned in another issue regarding the blocking nature of read_message. I’m overall a bit confused, and can’t find an example of how I would achieve an async read (or something equivalent) using mio.

Thanks so much!

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 7
  • Comments: 20 (7 by maintainers)

Most upvoted comments

Hi, what you’re doing now is probably the least efficient way possible. Using multiple network connections neither simplifies things nor improves latency, since communication between threads is no easier than communication over the WebSocket itself; it has exactly the same problem.

In your case I’d rather use no threads at all. Use tokio and tokio-tungstenite: create the WebSocket and call split() on it. You’ll get separate sender and receiver halves, both asynchronous, and you can then spawn two separate tasks. A good example of doing so is available here: https://github.com/snapview/tokio-tungstenite/blob/master/examples/interval-server.rs

Hi,

first, here’s what set_nonblocking() does: it prevents read_message() from blocking. Instead, it returns Err immediately if no message is available. If your software does CPU-intensive computation all the time, without pauses and at 100% CPU load, then calling the non-blocking read_message somewhere inside that computation would do the trick. But this approach always causes 100% CPU load and thus should not be used directly. Instead, you generally combine the non-blocking read_message with your own blocking wait. This differs from the blocking read_message in that you have full control over the wait and are able to interrupt it, e.g. for sending.
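To make that pattern concrete, here is a minimal std-only sketch with a plain TcpStream standing in for the WebSocket. The names (run_loop, the byte-level reads, the channel of outgoing messages) are illustrative stand-ins, not tungstenite API: the non-blocking read plays the role of read_message, and recv_timeout on the channel is the “blocking wait you control”, which sending interrupts.

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::TcpStream;
use std::sync::mpsc;
use std::time::Duration;

/// Illustrative event loop: poll the socket without blocking, and spend the
/// idle time blocked on our own channel of outgoing messages instead.
fn run_loop(mut stream: TcpStream, outgoing: mpsc::Receiver<Vec<u8>>) -> std::io::Result<()> {
    stream.set_nonblocking(true)?;
    let mut buf = [0u8; 1024];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => return Ok(()), // peer closed the connection
            Ok(_n) => { /* a real client would feed these bytes to tungstenite here */ }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // No incoming data: block on *our* queue, with a timeout so we
                // keep re-checking the socket. This wait is interruptible by
                // any sender, which is the whole point of the pattern.
                match outgoing.recv_timeout(Duration::from_millis(10)) {
                    // NB: on a non-blocking socket, writes can also hit
                    // WouldBlock if the send buffer is full; real code must
                    // handle that too.
                    Ok(msg) => stream.write_all(&msg)?,
                    Err(mpsc::RecvTimeoutError::Timeout) => {}
                    Err(mpsc::RecvTimeoutError::Disconnected) => return Ok(()),
                }
            }
            Err(e) => return Err(e),
        }
    }
}
```

The timeout is a crude substitute for poll/select; mio (below) replaces it with a proper wait on both event sources at once.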

That’s what mio does. Using mio is nothing other than the standard “poll/select” approach known from UNIX textbooks: mio is a wrapper around poll/select. It waits on one or more sockets and tells you when any of them has data to read. With it you can send and receive within the same thread. That’s how we work with Tungstenite in our company.

Probably the best way nowadays is tokio with tokio-tungstenite. It ties mio and tungstenite together using the Future trait, so you can use async/await just like in JavaScript, or start background tasks (not threads!) that do some work in parallel. If unsure, try tokio. But if for some reason you need fine-grained control over low-level sockets and threads, use mio directly. That is generally harder to do and I don’t recommend it for general use.

Hi @agalakhov, sorry to resurrect this thread, but I’m trying to use tungstenite with mio as well.

Right now the issue I have is with the handshake when using the client function. For some reason I keep getting HandshakeError::Interrupted(...).

Any chance you have a simple example using mio as a client? I can post a simplified version of what I’m trying to do if that helps.

Much thanks!

You can also set the TcpStream to non-blocking mode. In that case a read will return WouldBlock immediately when no data is available. You can then use a third-party library (e.g. mio) to check whether a read would succeed before you call it, and thus manage your connections and tasks. Or you can just call read periodically, but that is not so nice since your CPU will always be active.
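A minimal std-only illustration of that behaviour (the function name and loopback setup are mine; a real client would go on to register the socket with mio rather than probing once):

```rust
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};

/// Open a loopback connection, switch it to non-blocking mode, and try to
/// read before the peer has sent anything. Returns true if the read failed
/// immediately with WouldBlock, i.e. it did not block.
fn read_would_block() -> std::io::Result<bool> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let mut stream = TcpStream::connect(listener.local_addr()?)?;
    let _peer = listener.accept()?; // keep the other end alive
    stream.set_nonblocking(true)?;
    let mut buf = [0u8; 16];
    match stream.read(&mut buf) {
        Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(true),
        _ => Ok(false),
    }
}
```

tungstenite surfaces the same condition: with a non-blocking stream underneath, read_message returns an Err wrapping WouldBlock instead of waiting, and the same is true for the handshake (that is where HandshakeError::Interrupted comes from).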

@application-developer-DA Good question. First, the project I’m using mio + tungstenite for is a passion project and learning how to use mio has been interesting.

Also, my project has two streams to listen to: an HTTP server, and also a WebSocket client that is both receiving and sending messages. In the examples where I’ve seen tokio used, there seems to be one loop where connections are accepted or messages received, but it’s not immediately clear how to have two loops. For sure let me know if this has an obvious answer! In the meantime I plan on spawning threads and using message passing.

Last, my project may have parts that are CPU-blocking, but tokio has an answer for that, so I guess it wasn’t as much of a problem as I thought 😅

I guess the hope is that by using mio and wrestling with the compiler I’ll learn a lot about solving concurrency problems. My background is in JavaScript, so this should be a great learning experience.

Hello

You can achieve this by using multi-producer, single-consumer (mpsc) channels.

So basically, when you create a WS connection, you create these channels and pass them to the shared server object, which owns them.

Together with the WsConnection entry point you spawn a new task that listens for incoming messages on the channel and writes them to the TCP stream of the WsConnection.

The WS connection should hold the sender half from the server (which you can clone as many times as you need).

If you want to read from multiple places, you can use a multi-producer, multi-consumer channel instead.
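A minimal sketch of that fan-in with std::sync::mpsc (Outgoing, fan_in_demo, and the field names are illustrative; in a real server the consumer loop would write each message to the WebSocket’s TCP stream instead of collecting them):

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical outgoing-message type; conn_id and payload are made-up names.
#[derive(Debug)]
pub struct Outgoing {
    pub conn_id: u32,
    pub payload: String,
}

/// Fan-in sketch: every connection task clones the Sender, and a single
/// consumer (the task that owns the TCP stream in a real server) drains
/// the Receiver.
pub fn fan_in_demo(n_connections: u32) -> Vec<Outgoing> {
    let (tx, rx) = mpsc::channel::<Outgoing>();
    let handles: Vec<_> = (0..n_connections)
        .map(|id| {
            let tx = tx.clone(); // each producer gets its own sender half
            thread::spawn(move || {
                tx.send(Outgoing {
                    conn_id: id,
                    payload: format!("hello from {id}"),
                })
                .unwrap();
            })
        })
        .collect();
    drop(tx); // drop the original so the receiver sees disconnect when all clones finish
    for h in handles {
        h.join().unwrap();
    }
    // In a real server this would be a loop writing each message to the socket.
    rx.iter().collect()
}
```

With tokio you would use tokio::sync::mpsc and tasks instead of threads, but the ownership shape is the same: many cloned senders, one consumer next to the stream.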

I put together a naive ASCII-art diagram for you: ws-workflow

I have created an example crate to show how you can achieve this with fully working code. It’s not fully compatible with the RFC spec yet and can probably be improved a lot, but it’s focused on explaining the simpler concepts of a multi-user server that handles connections etc.

Hope that makes sense in some way 😮 Please let me know if you have any further questions