starlette: Middleware request parse hangs forever
I’m currently trying to integrate some metrics tooling in a middleware where I need to inspect the request body. The problem I’m seeing is that this causes the request to hang forever. The following is a minimal reproduction:
    from starlette.applications import Starlette
    from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
    from starlette.requests import Request
    from starlette.responses import Response
    from starlette.types import Receive, Scope, Send

    app = Starlette()

    class ParseRequestMiddleware(BaseHTTPMiddleware):
        async def dispatch(self, request: Request, call_next: RequestResponseEndpoint):
            # payload = await request.json()  # This will cause test() to hang
            response = await call_next(request)
            payload = await request.json()  # Hangs forever
            return response

    async def test(scope: Scope, receive: Receive, send: Send):
        request = Request(scope=scope, receive=receive)
        data = await request.json()
        response = Response(status_code=405)
        await response(scope, receive, send)

    app.mount("/test", test)
    app.add_middleware(ParseRequestMiddleware)
I’m still pretty new to Starlette, but it seems to be an issue with the fact that the middleware and the ASGI app use different Request objects and therefore can’t share the Request._body cache. That means the first read exhausts the stream, and the second read doesn’t correctly detect that the stream is exhausted, so it gets stuck in that while True loop. I’m not sure, though; I haven’t dug deeper than the Request object, and I’m not sure where the code lives for the receive function that gets passed into the ASGI app.
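To make that failure mode concrete, here is a small self-contained model (plain asyncio, not Starlette’s actual code) of a per-instance body cache over a shared receive channel. The first wrapper drains the channel and caches the body; the second wrapper has its own empty cache, so its read loop awaits a receive() that never fires again:

```python
import asyncio

class FakeRequest:
    """Stand-in for starlette.requests.Request with a per-instance cache."""

    def __init__(self, receive):
        self._receive = receive
        self._body = None  # per-instance cache, like Request._body

    async def body(self):
        if self._body is None:
            chunks = []
            while True:  # the "while True loop" from the issue
                message = await self._receive()
                chunks.append(message.get("body", b""))
                if not message.get("more_body", False):
                    break
            self._body = b"".join(chunks)
        return self._body

def make_receive(payload):
    sent = False

    async def receive():
        nonlocal sent
        if not sent:
            sent = True
            return {"type": "http.request", "body": payload, "more_body": False}
        # A real ASGI server blocks here until disconnect; model that
        # with a future that never resolves.
        await asyncio.Future()

    return receive

async def main():
    receive = make_receive(b'{"k": 1}')
    first = FakeRequest(receive)
    second = FakeRequest(receive)  # separate instance, separate cache
    body = await first.body()
    assert await first.body() == body  # same instance: cache hit, fine
    # A second instance would hang forever; guard with a timeout:
    try:
        await asyncio.wait_for(second.body(), timeout=0.1)
        hung = False
    except asyncio.TimeoutError:
        hung = True
    return body, hung

body, hung = asyncio.run(main())
print(body, hung)
```

Running this prints the cached body and `True` for the timeout, mirroring the middleware/app situation above: two Request objects, one exhausted channel.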
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 16
- Comments: 15 (1 by maintainers)
If someone is experiencing the same issue and cannot wait until a solution is found, my current simple fix is to patch the Request. For example (the code is not tested, but you can see the idea):

So I confirmed that if I use the same Request object, it works. I added a middleware to store the request object in a ContextVar, and if I use that instead of the receive and send passed into the ASGI app, the request doesn’t hang anymore. This might be enough of a workaround for me right now, but the more apps I add (especially third-party ones), the more of a problem this will become.

Update: Running more tests today, I had hoped to reduce the number of places where I have to ignore the send/receive args, but it looks like BaseHTTPMiddleware also creates a new Request object. So basically, anyone who needs to share the request body data should get the request object from ContextVars to ensure the stream is only read once and doesn’t get exhausted.
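The ContextVar workaround described above can be sketched as follows. This is a self-contained model with a stand-in request class so it runs without Starlette; with Starlette you would store the real starlette.requests.Request the same way, and the names `request_var` and `SharedRequest` are illustrative, not library APIs:

```python
import asyncio
import contextvars

request_var: contextvars.ContextVar = contextvars.ContextVar("request")

class SharedRequest:
    """Stand-in Request; caches a single-chunk body for brevity."""

    def __init__(self, receive):
        self._receive = receive
        self._body = None

    async def body(self):
        if self._body is None:
            message = await self._receive()
            self._body = message.get("body", b"")
        return self._body

async def middleware(receive, call_next):
    # Build ONE request object and publish it for everything downstream.
    request = SharedRequest(receive)
    request_var.set(request)
    await request.body()  # middleware reads the body...
    return await call_next()

async def endpoint():
    # ...and the endpoint reuses the same object, hitting the cache
    # instead of the exhausted receive channel.
    request = request_var.get()
    return await request.body()

async def main():
    async def receive():
        return {"type": "http.request", "body": b"hello", "more_body": False}

    return await middleware(receive, endpoint)

result = asyncio.run(main())
print(result)
```

The key point is that both reads go through the same object’s cache, so receive is only drained once.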
I’m still unclear on whether this is all by design and the developer is expected to manage a cache of the data streams. At a minimum, it seems like it shouldn’t hang: it should raise an error instead of getting stuck in an infinite loop. I would greatly appreciate some insights here whenever you have a moment @tomchristie
This is my workaround right now. You basically have to replace everywhere that instantiates a Request with CachedRequest, which means for any third-party middleware or ASGI apps you’ll need to extend them and replace where they instantiate the Request. Below you can see how I did it with Route; I can use this new Route class in my app now. It’s incredibly painful to scale, but it is working.
For those looking for a workaround that doesn’t use ContextVars: I basically just took the new receive function from #848 and created a CachedRequest that I use in my middleware instead.
When the middleware executor finally calls the app, it passes the cached_request.receive function to the app, which will return the cached_receive function instead of the original self._receive.
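The receive-caching idea can be sketched like this. It is a self-contained model (stand-in classes, no Starlette import); the names `CachedReceive`, `fresh_receive`, and `read_body` are illustrative, not Starlette or #848 APIs. On first use it drains the real channel into a buffer, and every downstream consumer then gets a replaying receive instead of the exhausted original:

```python
import asyncio

class CachedReceive:
    """Drains the real receive channel once, then replays the messages."""

    def __init__(self, receive):
        self._receive = receive
        self._messages = None  # replay buffer

    async def _drain(self):
        if self._messages is None:
            self._messages = []
            while True:
                message = await self._receive()
                self._messages.append(message)
                if not message.get("more_body", False):
                    break

    def fresh_receive(self):
        # Each consumer gets its own cursor over the cached messages.
        index = 0

        async def replay():
            nonlocal index
            await self._drain()
            message = self._messages[min(index, len(self._messages) - 1)]
            index += 1
            return message

        return replay

async def read_body(receive):
    chunks = []
    while True:
        message = await receive()
        chunks.append(message.get("body", b""))
        if not message.get("more_body", False):
            break
    return b"".join(chunks)

async def main():
    sent = False

    async def receive():
        nonlocal sent
        if sent:
            await asyncio.Future()  # a real server blocks after the body
        sent = True
        return {"type": "http.request", "body": b"payload", "more_body": False}

    cached = CachedReceive(receive)
    first = await read_body(cached.fresh_receive())
    second = await read_body(cached.fresh_receive())  # replay, no hang
    return first, second

first, second = asyncio.run(main())
print(first, second)
```

Both reads return the full body because only the first one touches the real channel; the second is served entirely from the buffer, which is what passing a cached receive into the wrapped app achieves.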