pymeasure: ZMQ error
Hi,
While running PyMeasure on a newer Anaconda installation, I got an error that appears to come from a change in a newer version of ZMQ. Specifically, the error occurs when a worker tries to use its emit method: the failure is in the self.publisher.send_serialized call, which I'm guessing has changed behavior in a newer ZMQ release. A fix for now is to add TypeError to the following except statement, although some functionality (which it seems I don't use) is probably being lost along the way. The full error message is at the bottom of this post, and the versions of everything I'm using are below.
ZMQ version: 17.1.2
Python version: 3.6.6
Conda version: 4.5.11
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\Joseph\Anaconda3\lib\site-packages\zmq\sugar\socket.py", line 427, in send_multipart
memoryview(msg)
TypeError: memoryview: a bytes-like object is required, not 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Joseph\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\Joseph\Anaconda3\lib\site-packages\pymeasure-0.5.1-py3.6.egg\pymeasure\experiment\workers.py", line 161, in run
self.update_status(Procedure.RUNNING)
File "C:\Users\Joseph\Anaconda3\lib\site-packages\pymeasure-0.5.1-py3.6.egg\pymeasure\experiment\workers.py", line 115, in update_status
self.emit('status', status)
File "C:\Users\Joseph\Anaconda3\lib\site-packages\pymeasure-0.5.1-py3.6.egg\pymeasure\experiment\workers.py", line 95, in emit
self.publisher.send_serialized((topic, record), serialize=cloudpickle.dumps)
File "C:\Users\Joseph\Anaconda3\lib\site-packages\zmq\sugar\socket.py", line 512, in send_serialized
return self.send_multipart(frames, flags=flags, copy=copy, **kwargs)
File "C:\Users\Joseph\Anaconda3\lib\site-packages\zmq\sugar\socket.py", line 434, in send_multipart
i, rmsg,
TypeError: Frame 0 (128) does not support the buffer interface.
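My reading of the traceback (an assumption, not verified against pymeasure itself): send_serialized expects the serialize callable to return a sequence of frames, but cloudpickle.dumps returns a single bytes object. send_multipart then iterates over that object frame by frame, and in Python 3 iterating or indexing bytes yields ints; 128 is 0x80, the protocol opcode that opens every modern pickle stream, which would explain "Frame 0 (128)". A minimal stdlib sketch, using pickle in place of cloudpickle since dumps() returns a single bytes object either way:

```python
import pickle  # stands in for cloudpickle; dumps() returns one bytes object in both

# What serialize=cloudpickle.dumps would hand to send_multipart: a single
# bytes object, not a list of frames.
payload = pickle.dumps(("status", 1))

# Indexing (or iterating) bytes in Python 3 yields ints; 0x80 == 128 is the
# pickle protocol opcode at the start of the stream.
print(payload[0])  # 128

# send_multipart calls memoryview() on each "frame", which fails on an int:
try:
    memoryview(payload[0])
except TypeError as err:
    print(err)  # memoryview: a bytes-like object is required, not 'int'
```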
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 18 (10 by maintainers)
Commits related to this issue
- Implement test for #168. — committed to bilderbuchi/pymeasure by bilderbuchi 4 years ago
- Propose fix for #168. — committed to bilderbuchi/pymeasure by bilderbuchi 4 years ago
- Fix topic filtering as proposed in #168. Also add a test for that, which first needed the preceding fixes to work. — committed to bilderbuchi/pymeasure by bilderbuchi 3 years ago
In order for the built-in topic filtering to work, the topic as specified in socket.setsockopt(zmq.SUBSCRIBE, topic) must come at the beginning of the incoming message. At that point, however, the message is already serialized, i.e. in the current implementation it is the result of cloudpickle.dumps((topic, record)). If you substitute a string for topic (and anything for record) and inspect the returned result of this function, you will see that the topic string is in there somewhere, but not at the beginning, meaning the message will not pass the topic filter.

Hence one way to get the data is to pass an empty string to the filter (thus letting all messages through) and, once you have the deserialized result (topic, record), check whether the topic is of interest and correspondingly process or skip the record. A benefit of this method is that you can monitor multiple topics with one listener.

A possibly cleaner method that might work (I haven't tested it myself) is suggested in this SO article (a bit of discussion is added here). Apparently, if a multipart message is received and its first part matches (or perhaps begins with) the filter string, the message will pass through. So one can set the first part to be the topic and put the serialized record in a second part. If one still wants to use send_serialized/recv_serialized, that would amount to a serialize function that returns the topic and the serialized record as separate frames; but as these functions call send_multipart/recv_multipart anyway, it may make more sense to call those directly, as in the referenced example (replacing pickle with cloudpickle there).
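To make the two-frame idea concrete, here is an untested sketch of such serialize/deserialize helpers. The helper names are mine, and pickle stands in for cloudpickle (both expose the same dumps/loads interface), so the round trip below runs with the stdlib alone:

```python
import pickle  # stand-in for cloudpickle; same dumps()/loads() interface


def serialize(message):
    """Split (topic, record) into two frames so frame 0 is the plain topic bytes."""
    topic, record = message
    return [topic.encode(), pickle.dumps(record)]


def deserialize(frames):
    """Reassemble (topic, record) from the two frames."""
    topic, payload = frames
    return topic.decode(), pickle.loads(payload)


# On the sockets it would then look something like (untested sketch):
#   publisher.send_serialized(('status', record), serialize)
#   subscriber.setsockopt(zmq.SUBSCRIBE, b'status')  # prefix-matches frame 0
#   topic, record = subscriber.recv_serialized(deserialize)

# Round trip without sockets:
topic, record = deserialize(serialize(("status", {"value": 42})))
print(topic, record)
```

Because frame 0 is the bare topic bytes, ZMQ's built-in prefix matching on SUBSCRIBE should work again, and the record can still be an arbitrary picklable object.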
@jmittelstaedt and @lawa42, thanks for reporting this issue. It sounds like there is an incompatibility with cloudpickle, as @CasperSchippers brings up. I will take a look when I get the opportunity.