glow: [TextTranslator] Quantized text translation is broken

Looks like a FunctionConverter refactor broke quantized text translation.

Before (https://github.com/pytorch/glow/commit/96377be74fde8678a50ebf43e5b6f1bc62b40a9a):

$ ./bin/text-translator -m=en2gr -load_profile=en2gr.yaml --do_not_quantize_nodes=Add
Enter a sentence in English to translate to German: My name is Jordan .
mein Name ist Jordan .

After (https://github.com/pytorch/glow/commit/2a8f7d52246fe1014d0b95e11ce2effa0968139b):

$ ./bin/text-translator -m=en2gr -load_profile=en2gr.yaml --do_not_quantize_nodes=Add
Enter a sentence in English to translate to German: My name is Jordan .
" für , , der , . , .

Most upvoted comments

Essentially I think the EOS (end-of-sentence) token doesn’t go through. Why, I don’t know, but that’s work for another day 😃.

@qcolombet It’s working for me with https://github.com/pytorch/glow/pull/1999, thanks!! Now we should investigate why we’re seeing different output…

I think I found the bug. I believe this is another instance of #1832: we pick new quantization parameters for the result of an operation and apply them, whereas the execution expects the output parameters to be the same as the input parameters. This instance of the bug is particularly nasty because operators with such constraints usually check them in the verifier, whereas this one was not checked 😦.
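
To make the missing invariant concrete, here is a minimal C++ sketch of the kind of verifier check being described. This is not Glow’s actual verifier API; the struct and function names are hypothetical.

#include <cassert>
#include <cstdint>

// Hypothetical stand-in for a node's quantization parameters.
struct QuantParams {
  float scale;
  int32_t offset;
};

// A copy-only op (e.g. Slice) moves raw quantized values without
// rescaling them, so its output must carry exactly the same
// quantization parameters as its input. This assertion encodes the
// constraint that was missing from the verifier.
void verifyCopyOnlyQuantParams(const QuantParams &in,
                               const QuantParams &out) {
  assert(in.scale == out.scale && "copy-only op must preserve the scale");
  assert(in.offset == out.offset && "copy-only op must preserve the offset");
}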

The problematic node in this case is Slice. We assign a new scale to its result, but the generated code doesn’t perform any rescaling internally, so the result must keep the same scale and offset as the input. The fix is trivial (will post a PR soon), but I will double-check whether we have more nodes in that situation and add the proper verifier checks.
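
For illustration, a small self-contained C++ sketch of what goes wrong. The scales and offsets below are made-up example values, not taken from the en2gr profile: Slice copies the raw int8 data, so decoding it with freshly assigned output parameters yields different floating-point values than the ones the input encoded.

#include <cstdint>
#include <cstdio>

// Standard affine dequantization: real = scale * (q - offset).
static float dequantize(int8_t q, float scale, int32_t offset) {
  return scale * (static_cast<float>(q) - static_cast<float>(offset));
}

int main() {
  // An input element of 2.5 quantized with the input's parameters.
  const float inScale = 0.05f;
  const int32_t inOffset = 0;
  const int8_t raw = 50; // 2.5 / 0.05

  // Slice just copies `raw`. If quantization assigns the result new
  // parameters (example values below), the same bits now decode to a
  // different number, corrupting everything downstream.
  const float outScale = 0.02f;
  const int32_t outOffset = -10;

  std::printf("decoded with input params:  %f\n",
              dequantize(raw, inScale, inOffset));   // 2.500000
  std::printf("decoded with output params: %f\n",
              dequantize(raw, outScale, outOffset)); // 1.200000
  return 0;
}

Here every sliced element is silently rescaled by outScale/inScale and shifted by the offset mismatch, which is consistent with the garbled translation shown in the repro above.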