spaCy: Segmentation fault on Ubuntu while training NER or TextCat

How to reproduce the behaviour

Hello! I’m running into a segmentation fault while training a model for NER and text classification. When I trained exclusively on my local machine, this problem did not (and still does not) occur; it only seems to appear on my VMs. Here is the context: I want to train a lot of models while slightly varying different parameters, such as the dropout, the batch size, the number of iterations, etc. For each set of parameters I train 10 models, to make sure a good score isn’t just luck. First of all, I create a new blank model for a language:

import spacy

# SENTENCIZER, TAGGER and PARSER are project constants, presumably holding the
# built-in pipe names ("sentencizer", "tagger", "parser")
nlp = spacy.blank("fr")
nlp.add_pipe(nlp.create_pipe(SENTENCIZER))
nlp.add_pipe(nlp.create_pipe(TAGGER))
nlp.add_pipe(nlp.create_pipe(PARSER))
nlp.begin_training()

I then add some word vectors to the newly created model:

from gensim.models import FastText

# Load the FastText vectors and L2-normalise them in place
gensim_model = FastText.load(vectors_dir)
gensim_model.init_sims(replace=True)

nr_dim = gensim_model.wv.vector_size
nlp.vocab.reset_vectors(width=nr_dim)
for word in gensim_model.wv.index2word:
    vector = gensim_model.wv.get_vector(word)
    nlp.vocab[word]  # touch the lexeme so it is added to the vocab
    nlp.vocab.set_vector(word, vector)
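
(A quick sanity check, not part of the original code: the loaded vectors can be inspected via the Vocab's vectors table in spaCy v2.)

# Should report roughly (number_of_vectors, nr_dim) and the number of keys added
print(nlp.vocab.vectors.shape)
print(nlp.vocab.vectors.n_keys)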

I then call another Python script to properly train the component I want, be it the NER or the TextCat pipe. In this project I have a custom “multi-ner” and “multi-textcat” that train each label as a separate submodel, as shown in the attached screenshot of the model directory. The training uses 5000 sentences for the NER and 2000 for the TextCat, and while it demands a bit of RAM, that is really nothing for machines with 16 Gigabytes. After that, I modify the meta.json file to incorporate some project-related info, and it’s done.
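
(For reference, a minimal sketch of this kind of spaCy v2 training loop; the component name, labels, dropout, batch size and iteration count below are illustrative placeholders, not the project's actual values.)

import random
import spacy
from spacy.util import minibatch, compounding

def train_component(nlp, train_data, n_iter=20, dropout=0.2):
    # train_data is a list of (text, annotations) tuples in spaCy v2 format,
    # e.g. ("Paris est belle", {"entities": [(0, 5, "LOC")]})
    optimizer = nlp.begin_training()
    for i in range(n_iter):
        random.shuffle(train_data)
        losses = {}
        batches = minibatch(train_data, size=compounding(4.0, 32.0, 1.001))
        for batch in batches:
            texts, annotations = zip(*batch)
            nlp.update(texts, annotations, sgd=optimizer, drop=dropout, losses=losses)
        print("Iteration", i, "losses:", losses)
    return nlp
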
As I said, I want to train a lot of models, so I train series of 10 models (each series with the same parameters). The 10 models aren’t trained simultaneously but one after the other. And here is the thing: while on my local machine I can train dozens of models without a single error, the behaviour is different on the VMs. After some training (usually I can train 2 models, so 2*20 iterations), I get a Segmentation Fault. It always happens when I try to load the model, either just before a training run or before the meta file changes. I don’t really know how to investigate this error or what I can do to solve it, so any help or tip is welcome! 😃

I was trying to be as exhaustive as possible, but as I am quite a newbie at writing issues I may have missed some important info; do not hesitate to ask for more details or a code preview!

Your Environment

I’m using 3 different VMs to train the models simultaneously, and the configuration differs slightly from one to another, but the bug is the same. Here is also my local configuration, the errorless one. Each machine (VM or local) has 16 Gigabytes of RAM, which appears to be more than enough.

Info about spaCy    Local                        VM 1                                                      VM 2 & 3
spaCy version:      2.2.4                        2.2.4                                                     2.2.4
Platform:           Windows-10-10.0.17134-SP0    Linux-4.4.0-62-generic-x86_64-with-Ubuntu-16.04-xenial    Linux-4.15.0-106-generic-x86_64-with-Ubuntu-18.04-bionic
Python version:     3.6.5                        3.6.5                                                     3.6.9

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17 (10 by maintainers)

Most upvoted comments

Thank you, it worked! Thanks for taking the time to help me find the problem.

About the serialization of custom components, the documentation provides more information here:

When spaCy loads a model via its meta.json, it will iterate over the “pipeline” setting, look up every component name in the internal factories and call nlp.create_pipe to initialize the individual components, like the tagger, parser or entity recognizer. If your model uses custom components, this won’t work – so you’ll have to tell spaCy where to find your component. You can do this by writing to the Language.factories:

from spacy.language import Language
Language.factories["my_component"] = lambda nlp, **cfg: MyComponent(nlp, **cfg)

That’s also pretty much what the error message says there. Have you tried implementing this so you can just save out the entire model in one go?
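
(For illustration, a minimal sketch of how a custom component could be registered via Language.factories so the whole pipeline can be saved and reloaded in one go; the MyComponent class, the "my_component" name and the path are hypothetical, not the actual project code.)

import spacy
from spacy.language import Language

class MyComponent(object):
    # Hypothetical custom pipeline component; the name must match the factory key
    name = "my_component"

    def __init__(self, nlp, **cfg):
        self.cfg = cfg

    def __call__(self, doc):
        # ... modify the Doc in place ...
        return doc

# Register the factory so nlp.create_pipe("my_component") works, both when
# building the pipeline and when loading the saved model from disk
Language.factories["my_component"] = lambda nlp, **cfg: MyComponent(nlp, **cfg)

nlp = spacy.blank("fr")
nlp.add_pipe(nlp.create_pipe("my_component"))
nlp.to_disk("/tmp/model")

# The factory must be registered (e.g. by importing this module)
# before calling spacy.load on the saved model
nlp2 = spacy.load("/tmp/model")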

Hello, I’ve run some tests, and I hope the results will help us find the cause of the error. First of all, these are the tests I ran:

Test                  Local              VM 1               VM 2                     VM 3
Components trained    NER and TextCat    NER and TextCat    NER and TextCat          NER and TextCat
Word vectors          Yes, all           No                 Yes, pruned to 100000    Yes, all

The word vectors, if not pruned, contain 962148 vectors and 947364 keys. The models and their components were created locally on each machine. And now for the results:

Local

It really surprised me, because for the first time I had a memory error on my local Windows machine. Here is the stacktrace:

  File "C:\Project\project-nlp\src\project_nlp\utils\model\model_custom.py", line 32, in load_model
    multi_textcat = MultiTextcat(path, nlp)
  File "C:\Project\project-nlp\src\project_nlp\utils\model\model_multi_textcat.py", line 18, in __init__
    textcat.from_disk(path + TEXTCAT_MODEL_PATH + "/" + model, vocab=False)
  File "pipes.pyx", line 228, in spacy.pipeline.pipes.Pipe.from_disk
  File "C:\Project\v_env\lib\site-packages\spacy\util.py", line 654, in from_disk
    reader(path / key)
  File "pipes.pyx", line 219, in spacy.pipeline.pipes.Pipe.from_disk.load_model
  File "C:\Project\v_env\lib\site-packages\thinc\neural\_classes\model.py", line 395, in from_bytes
    dest = getattr(layer, name)
  File "C:\Project\v_env\lib\site-packages\thinc\describe.py", line 44, in __get__
    self.init(data, obj.ops)
  File "C:\Project\v_env\lib\site-packages\thinc\neural\_classes\hash_embed.py", line 20, in wrapped
    copy_array(W, ops.xp.random.uniform(lo, hi, W.shape))
  File "mtrand.pyx", line 1307, in mtrand.RandomState.uniform
  File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
MemoryError

VM 1

I didn’t get any errors. It seems that creating the model and the components on the VM was a good call.

VM 2

I got another memory error (see attached screenshot: memory error stacktrace, NER + TextCat, vectors pruned to 100000). The stacktrace is full of the same fragment, repeated more than a hundred times, so much that I can’t scroll up to see what started it.

VM 3

Here I didn’t get Segmentation fault (core dumped) or a Memory Error, but just:

[CUSTOM_MODEL] Loading model: /path/to/model
Killed


For me the conclusion is that my word vectors are causing the error, because when they aren’t present everything seems to work perfectly (although I’ll run some other tests to be sure).
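
(For reference, pruning the vectors table, as was done for the VM 2 test, can be done in spaCy v2 with Vocab.prune_vectors; the row count below is just the value used in that test, and the output path is hypothetical.)

# Reduce the vectors table to 100000 rows; keys whose vectors are removed
# are remapped to the closest surviving vector
pruned = nlp.vocab.prune_vectors(100000)
nlp.to_disk("/path/to/pruned_model")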