spaCy: Problems and errors in German lemmatizer
How to reproduce the behaviour
import spacy
nlp = spacy.load('de')

test = nlp.tokenizer('die Versicherungen')  # The insuranceS
for t in test:
    print(t, t.lemma_)
# [output] die der
# [output] Versicherungen Versicherung

test = nlp.tokenizer('Die Versicherungen')  # The insuranceS
for t in test:
    print(t, t.lemma_)
# [output] Die Die
# [output] Versicherungen Versicherung

test = nlp.tokenizer('die versicherungen')  # The insuranceS
for t in test:
    print(t, t.lemma_)
# [output] die der
# [output] versicherungen versicherungen
Your Environment
- Python version: 3.5.2
- Models: de
- Platform: Linux-4.4.0-112-generic-x86_64-with-Ubuntu-16.04-xenial
- spaCy version: 2.0.11
Hi all,
I hope the code snippet exemplifies the problem clearly enough.
Basically, I fail to see how the German lemmatization should be used.
Nouns are only lemmatized if they are Capitalized, while all other text elements are only lemmatized if they are lower-case. So turning all words to lower() means throwing away all noun lemmas, and trusting the input to have proper capitalization means losing every case where a non-noun is at the beginning of a sentence (and hence not lower-case).
How do people actually use this in a real use-case?
Thanks for your help,
Andrea.
I think lookup with POS tag will solve the majority of the issues.
Btw, if you want to experiment with my lemmatizer design, here it is:
https://github.com/DuyguA/DEMorphy
You can find the list of accompanying morphological dictionaries in the repo as well.
If you need German language resources, you can always contact me and my colleagues at Parlamind. We’re more than happy to help.
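For what it’s worth, here is a rough illustration of what a POS-aware lookup could look like. This is not spaCy’s actual data format – the table shape, the coarse POS tags and the entries are made up for the example – but it shows how keying on (lower-cased form, POS) resolves the capitalization ambiguity discussed above:

import spacy  # only needed if you want to feed real tokens into the helper

# Hypothetical POS-keyed lookup: "Die"/"die" can map to "der" as a determiner
# without clobbering noun entries, and "leben"/"Leben" stay distinct via POS.
LOOKUP = {
    ('die', 'DET'): 'der',
    ('versicherungen', 'NOUN'): 'Versicherung',
    ('leben', 'VERB'): 'leben',
    ('leben', 'NOUN'): 'Leben',
}

def lemmatize(form, pos):
    """Return the lemma for (form, pos), falling back to the surface form."""
    return LOOKUP.get((form.lower(), pos), form)

print(lemmatize('Die', 'DET'))             # der
print(lemmatize('Versicherungen', 'NOUN')) # Versicherung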
Making this the master issue for everything related to the German lemmatizer, so copying over the other comments and test cases. We’re currently planning out various improvements to the rule-based lemmatizer, and strategies to replace the lookup tables with rules wherever possible.
#2368
#2120
The German lemmatizer currently only uses a lookup table – that’s fine for some cases, but obviously not as good as a solution that takes part-of-speech tags into account.
You might want to check out #2079, which discusses a solution for implementing a custom lemmatizer in French – either based on spaCy’s English lemmatization rules, or by implementing a third-party library via a custom pipeline component.
One quick note on the expected lemmatization / tokenization:
spaCy’s German tokenization rules currently don’t split contractions like “unterm”. One reason is that spaCy will never modify the original ORTH value of the tokens – so "unterm" would have to become ["unter", "m"], where the token “m” will have the NORM “dem”. Those single-letter tokens can easily lead to confusion, which is why we’ve opted not to produce them for now. But if your treebank or expected tokenization requires contractions to be split, you can easily add your own special case rules (see the sketch below).

We don’t have an immediate plan or timeline yet, but we’d definitely love to move from lookup lemmatization to rule-based or statistical lemmatization in the future. (Shipping the tables with spaCy really adds a lot of bloat and it comes with all kinds of other problems.)
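For reference, a minimal sketch of such a special case rule (spaCy 2.x API; the exact attributes you set are up to your use case):

import spacy
from spacy.attrs import ORTH, NORM

nlp = spacy.load('de')

# Split "unterm" into "unter" + "m" without modifying the original text;
# the single-letter token carries the norm "dem".
nlp.tokenizer.add_special_case('unterm', [
    {ORTH: 'unter'},
    {ORTH: 'm', NORM: 'dem'},
])

doc = nlp.tokenizer('unterm Tisch')
print([t.text for t in doc])  # ['unter', 'm', 'Tisch']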
Hey there. I hooked TreeTagger into the pipeline to shorten the waiting time until spaCy’s German lemmatizer catches up 😉. This is how:
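Roughly like this (a sketch assuming the treetaggerwrapper package and a local TreeTagger installation with the German parameter file; the attribute name tt_lemma is my own choice):

import treetaggerwrapper
from spacy.tokens import Token

# Custom attribute that will hold the TreeTagger lemma for each token.
Token.set_extension('tt_lemma', default=None)

class TreeTaggerLemmatizer(object):
    """Pipeline component that writes TreeTagger lemmas to token._.tt_lemma."""
    name = 'treetagger_lemmatizer'

    def __init__(self, lang='de'):
        # Requires a working TreeTagger installation (see treetaggerwrapper docs).
        self._tagger = treetaggerwrapper.TreeTagger(TAGLANG=lang)

    def __call__(self, doc):
        # Hand TreeTagger the already-tokenized words (tagonly=True) so the
        # two tokenizations stay aligned one-to-one.
        tags = treetaggerwrapper.make_tags(
            self._tagger.tag_text([t.text for t in doc], tagonly=True))
        for token, tag in zip(doc, tags):
            token._.tt_lemma = tag.lemma
        return doc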
Now we only need to add our custom lemmatizer to spaCy’s pipeline:
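Something along these lines (spaCy 2.x; the component name is arbitrary):

import spacy

nlp = spacy.load('de')
nlp.add_pipe(TreeTaggerLemmatizer(), name='treetagger_lemmatizer', last=True)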
Et voilà:
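For example (the lemmas shown in the comments are illustrative of typical TreeTagger output, not verified):

doc = nlp('Die Versicherungen wurden geprüft.')
for token in doc:
    print(token.text, token._.tt_lemma)
# Expected along the lines of:
#   Die            die
#   Versicherungen Versicherung
#   wurden         werden
#   geprüft        prüfen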
Yes, this should be no problem, so if you want to submit a PR, that would be cool 😃
The lookup lemmatizers aren’t great, and we’re hoping that we’ll be able to replace them with a rule-based lemmatizer like the English one soon. There have been a few other issues in that area as well (mostly with the rule-based lemmatizer and especially with German), so I’m worried that there might even be a subtle bug somewhere (see #2368 for example) 😩 So yeah, we can’t wait to give the lemmatizers an overhaul.
Hi @DuyguA,
indeed, my main problem is not with Versicherungen (which exists in the lemmatizer lookup table with the correct capitalization), but with the fact that “Die” is not recognized while “die” is. In general, whenever a verb/adjective/pronoun/article is at the beginning of a sentence, it will not be recognized, because the lemmatizer only knows it in lower-case.
And of course if I lower() everything, I lose all the nouns, as you pointed out. The same happens if I lower() only the first word of a sentence, since from time to time a noun will be there too…
It seems to me that the only correct solution compatible with the current lookup-based approach would be to add all verbs/pronouns/articles/adjectives to the lookup both with and without capitalization, and leave the nouns only with capitalization. Basically: every word in the present lookup that is not capitalized must be duplicated in its capitalized version; the corresponding lemmas can stay lower-case. Those that are already capitalized stay as they are. One may have to take care of words that are both verbs and nouns depending on capitalization (“leben” to live, “Leben” the life). Of course this would increase the size of the lookup, but better a larger lemmatizer that one can use than a smaller unusable one 😃
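A rough sketch of the duplication I mean (plain Python over a lookup dict; the helper name is mine):

def add_capitalized_variants(lookup):
    """Duplicate lower-case entries under their capitalized surface form.

    Keys that are already capitalized (mostly nouns) are left untouched, so
    an existing noun entry like 'Leben' keeps its lemma and is never
    overwritten by the capitalized copy of the verb entry 'leben'.
    """
    extra = {}
    for form, lemma in lookup.items():
        if form and form[0].islower():
            capitalized = form[0].upper() + form[1:]
            # Don't clobber entries that already exist (e.g. noun readings).
            if capitalized not in lookup:
                extra[capitalized] = lemma
    lookup.update(extra)
    return lookup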
Jm2c,
Andrea.