mongo-connector: mongo-connector stuck on fatal error
I have two replica sets in two different environments (dev and prod) with no issues. They run mongo-connector 2.0.3 with PyMongo 2.8. Prod runs Python 2.6, dev runs Python 2.7.
I recently tried to set up a third environment with dev data and ran into a fatal issue. It occurred with mongo-connector 2.0.3 and 2.1.0, and with PyMongo 2.8 and 2.8.1.
When I start the mongo-connector service, a constant number of 6000 documents is inserted into Elasticsearch, then the 6001st raises:
```
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector/util.py", line 85, in wrapped
    func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector/oplog_manager.py", line 185, in run
    for n, entry in enumerate(cursor):
  File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1090, in next
    if len(self.__data) or self._refresh():
  File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1039, in _refresh
    limit, self.__id))
  File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 958, in __send_message
    self.__compile_re)
  File "/usr/local/lib/python2.7/dist-packages/pymongo/helpers.py", line 121, in _unpack_response
    compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 537, in decode_all
    tz_aware, uuid_subtype, compile_re))
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 331, in _elements_to_dict
    data, position, as_class, tz_aware, uuid_subtype, compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 320, in _element_to_dict
    data, position, as_class, tz_aware, uuid_subtype, compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 159, in _get_object
    encoded, as_class, tz_aware, uuid_subtype, compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 331, in _elements_to_dict
    data, position, as_class, tz_aware, uuid_subtype, compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 320, in _element_to_dict
    data, position, as_class, tz_aware, uuid_subtype, compile_re)
  File "/usr/local/lib/python2.7/dist-packages/bson/__init__.py", line 232, in _get_date
    dt = EPOCH_AWARE + datetime.timedelta(seconds=seconds)
InvalidBSON: date value out of range
```
If I set timeZoneAware to false, the result is exactly the same, except that EPOCH_AWARE is replaced by EPOCH_NAIVE.
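For context, the last frame of the traceback shows where the decode fails: Python's `datetime` type only supports years 1 through 9999, so any BSON date outside that window makes the `EPOCH + timedelta(...)` step blow up. A minimal sketch of that conversion (the millisecond values are illustrative, not taken from my data):

```python
import datetime

# BSON stores dates as int64 milliseconds since the Unix epoch; old PyMongo
# converts them with: dt = EPOCH_AWARE + datetime.timedelta(seconds=seconds)
EPOCH_NAIVE = datetime.datetime.utcfromtimestamp(0)

def bson_millis_to_datetime(millis):
    """Sketch of bson's date decoding (naive variant)."""
    return EPOCH_NAIVE + datetime.timedelta(seconds=millis / 1000.0)

print(bson_millis_to_datetime(1430000000000))  # a normal 2015 date, decodes fine

try:
    # An ISODate past year 9999 cannot be represented by Python's datetime,
    # so the conversion overflows; PyMongo surfaces this as InvalidBSON.
    bson_millis_to_datetime(260000000000000)
except OverflowError as exc:
    print("OverflowError:", exc)
```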
So you think the 6001st record I imported is bad? Nope. The first time I launched mongo-connector, I had already imported 180,000+ records; the second time, I just dropped the collection and re-imported all of them, so that mongo-connector started inserting docs into ES at the moment I started the import. Same issue. Setting noDump=true had no effect. The proof that the 6001st doc is not bad? By splitting the import file into two parts (1: 6000 records, 2: the remainder), I got 6000 records and then 100 imported before the first error. The 6000 and 100 were constant. By splitting one more time (6000 + 6000 + remainder), I was able to import 6000 + 283 docs. Strange. So PyMongo is not detecting a genuinely bad BSON document; there is some kind of race condition underneath.
continueOnError and batchSize have no effect. Each time I restart mongo-connector without removing oplog-timestamp, even with a dropped collection, it tries to send the same docs to ES again and again. I expected batchSize=1 to force mongo-connector to keep an up-to-date timestamp, but that does not work: the last few records before the fatal error are always repeated.
The bad part is that I cannot launch mongo-connector just to refresh the timestamp and exit; it crashes very quickly and does not update anything. I am still looking for a workaround. My goal: start from an empty collection, start mongo-connector so that it stays quiet, then insert docs at a slow rate to see if it matters.
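As a stopgap on the import side, one option (a sketch only, assuming the offending field is a millisecond timestamp and that clamping it is acceptable for your data) is to clamp out-of-range dates before they ever reach MongoDB:

```python
import datetime

# Python's datetime (and therefore PyMongo 2.x decoding) is limited to
# years 1-9999; clamp anything outside that window before insertion.
EPOCH = datetime.datetime.utcfromtimestamp(0)
MIN_MS = int((datetime.datetime.min - EPOCH).total_seconds() * 1000)
MAX_MS = int((datetime.datetime.max - EPOCH).total_seconds() * 1000)

def clamp_millis(millis):
    """Clamp a BSON-style millisecond timestamp into the representable range.

    Hypothetical helper: the field name below and the choice to clamp
    (rather than drop or fix the source data) are assumptions, not part of
    mongo-connector's API.
    """
    return max(MIN_MS, min(millis, MAX_MS))

# Usage: sanitize documents before insert_many / mongoimport.
doc = {"name": "x", "updated_at_ms": 260000000000000}  # roughly year 10208
doc["updated_at_ms"] = clamp_millis(doc["updated_at_ms"])
```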
Please upgrade the PyMongo dependency if that solves the issue (PyMongo 3 is supposed to have better date support), or forward the issue to PyMongo if the latest version has the same problem. In any case, you should fix mongo-connector so that oplog-timestamp is kept up to date, to avoid replaying doc manager insertions.
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Comments: 25 (6 by maintainers)
We are experiencing this same issue. The problem is that the 'invalid' ISODate (year 9999+) is in a collection managed by another team/project, and mongo-connector is not even configured to follow that collection. This should not cause problems for the collection mongo-connector is following, since the two collections have nothing to do with each other.