tornado: BufferError: Existing exports of data: object cannot be re-sized
OS: Darwin 17.4.0 (Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64)
Python version: 3.6.0
tornado==4.5.3
So, I have a little websocket server that serves video streams, and I am getting this error:
```
$ python3 test.py
*** Websocket Server Started at 192.168.119.1***
websocket opened
message received: start data stream
spawning subprocess to generate data stream
ERROR:tornado.application:Uncaught exception GET /ws (127.0.0.1)
HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/ws', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Host': 'localhost:8888', 'Connection': 'Upgrade', 'Pragma': 'no-cache', 'Cache-Control': 'no-cache', 'Upgrade': 'websocket', 'Origin': 'http://localhost:8888', 'Sec-Websocket-Version': '13', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36', 'Dnt': '1', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.9', 'Sec-Websocket-Key': 'ZAug6WtdtAp7g1RYx2/m9Q==', 'Sec-Websocket-Extensions': 'permessage-deflate; client_max_window_bits'})
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tornado/web.py", line 1468, in _stack_context_handle_exception
    raise_exc_info((type, value, traceback))
  File "<string>", line 4, in raise_exc_info
  File "/usr/local/lib/python3.6/site-packages/tornado/stack_context.py", line 316, in wrapped
    ret = fn(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tornado/websocket.py", line 502, in <lambda>
    self.stream.io_loop.add_future(result, lambda f: f.result())
  File "/usr/local/lib/python3.6/site-packages/tornado/concurrent.py", line 238, in result
    raise_exc_info(self._exc_info)
  File "<string>", line 4, in raise_exc_info
  File "/usr/local/lib/python3.6/site-packages/tornado/gen.py", line 1063, in run
    yielded = self.gen.throw(*exc_info)
  File "test.py", line 87, in on_message
    yield executor.submit(self.stream_data)
  File "/usr/local/lib/python3.6/site-packages/tornado/gen.py", line 1055, in run
    value = future.result()
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 398, in result
    return self.__get_result()
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 357, in __get_result
    raise self._exception
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "test.py", line 79, in stream_data
    self.write_message(data)#, binary=True)
  File "/usr/local/lib/python3.6/site-packages/tornado/websocket.py", line 252, in write_message
    return self.ws_connection.write_message(message, binary=binary)
  File "/usr/local/lib/python3.6/site-packages/tornado/websocket.py", line 793, in write_message
    return self._write_frame(True, opcode, message, flags=flags)
  File "/usr/local/lib/python3.6/site-packages/tornado/websocket.py", line 776, in _write_frame
    return self.stream.write(frame)
  File "/usr/local/lib/python3.6/site-packages/tornado/iostream.py", line 395, in write
    self._write_buffer += data
BufferError: Existing exports of data: object cannot be re-sized
^CTraceback (most recent call last):
  File "test.py", line 105, in <module>
    ioloop.IOLoop.instance().start()
  File "/usr/local/lib/python3.6/site-packages/tornado/ioloop.py", line 863, in start
    event_pairs = self._impl.poll(poll_timeout)
  File "/usr/local/lib/python3.6/site-packages/tornado/platform/kqueue.py", line 66, in poll
    kevents = self._kqueue.control(None, 1000, timeout)
```
I have put together a little gist which reproduces the problem.
It seems to be related to the amount (rate?) of data being processed.
If I change line 72

```python
p = subprocess.Popen(['/usr/bin/tr', '-dc', 'A-Za-z0-9'], stdin=dev_rand.stdout, stdout=subprocess.PIPE, env=new_env)
```

to

```python
p = subprocess.Popen(['/usr/bin/tr', '-dc', 'A-Z'], stdin=dev_rand.stdout, stdout=subprocess.PIPE, env=new_env)
```

the problem seems to go away. I surmise that this change “fixes” the problem because it reduces the amount (rate?) of the data being generated by the subprocess.
I’m new to tornado, so any insight into this would be tremendously helpful. If I’m doing it wrong, please let me know the correct way to do what I’m after.
Thanks!
About this issue
- State: closed
- Created 6 years ago
- Comments: 17 (7 by maintainers)
If you’re new to Tornado and asynchronous programming, I would recommend avoiding threads. Threads are an advanced topic in any context, and the interactions between threads and the asynchronous model are best left until you have a solid grasp on the single-threaded case.
The problem here is that you’re calling `self.write_message` in the `ThreadPoolExecutor`. This is unsafe because (by design) very few Tornado methods are thread-safe. When using a `ThreadPoolExecutor`, you must return whatever data you need to use from the Tornado side.
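For illustration, here is a minimal sketch of that return-the-data pattern; the handler and the `blocking_read` helper are hypothetical, not from the original gist. The blocking call runs in the worker thread and only *returns* its result, and `write_message` is called back on the IOLoop thread:

```python
from concurrent.futures import ThreadPoolExecutor

from tornado import gen
from tornado.websocket import WebSocketHandler

executor = ThreadPoolExecutor(max_workers=4)

class EchoHandler(WebSocketHandler):  # hypothetical handler name
    @gen.coroutine
    def on_message(self, message):
        # The blocking call runs in a worker thread; yielding the
        # concurrent.futures.Future suspends this coroutine until it's done.
        data = yield executor.submit(self.blocking_read)
        # Back on the IOLoop thread here, so this call is safe.
        self.write_message(data, binary=True)

    def blocking_read(self):
        # Placeholder for a blocking read (file, pipe, socket, ...).
        # It must only *return* bytes, never touch handler I/O methods.
        return b'...'
```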
That doesn’t fit well with your streaming subprocess, but fortunately you can avoid threads here. Use `tornado.process` (and `tornado.process.Subprocess.STREAM`) instead of `subprocess` (and `subprocess.PIPE`), and then you can yield results from the subprocess in your coroutine directly, with no need for threads. This would look something like this:
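(The code sample from the original comment wasn’t preserved in this capture; the following is a minimal sketch of the approach under Tornado 4.x. The handler name, chunk size, and `/dev/urandom` plumbing are illustrative, and the `env` handling from the gist is omitted.)

```python
from tornado import gen
from tornado.iostream import StreamClosedError
from tornado.process import Subprocess
from tornado.websocket import WebSocketHandler

class DataStreamHandler(WebSocketHandler):  # hypothetical handler name
    @gen.coroutine
    def stream_data(self):
        # STREAM makes proc.stdout a PipeIOStream that can be read
        # asynchronously on the IOLoop thread -- no executor involved.
        proc = Subprocess(
            ['/usr/bin/tr', '-dc', 'A-Za-z0-9'],
            stdin=open('/dev/urandom', 'rb'),
            stdout=Subprocess.STREAM)
        try:
            while True:
                # partial=True returns as soon as any bytes arrive instead
                # of waiting for a full 64 KiB chunk.
                chunk = yield proc.stdout.read_bytes(64 * 1024, partial=True)
                # write_message returns a Future; yielding it gives flow
                # control so we never write faster than the socket drains.
                yield self.write_message(chunk, binary=True)
        except StreamClosedError:
            # The websocket (or the pipe) was closed; stop streaming.
            pass
        finally:
            proc.proc.terminate()  # Subprocess.proc is the underlying Popen
```

Because everything runs in the coroutine on the IOLoop thread, the `IOStream` write buffer is never modified from two threads at once, which is what was triggering the `BufferError` in the threaded version.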