Multi30K dataset link is broken
The link to the Multi30K dataset, http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz,
is broken: https://github.com/pytorch/text/blob/73bf4fa8cedc12d910ab76190e446bd2e47a8325/torchtext/datasets/multi30k.py#L16
About this issue
- State: open
- Created 2 years ago
- Comments: 18 (4 by maintainers)
Commits related to this issue
- Temporarily disable the T5 tutorial Temporarily disable the T5 tutorial to fix the issue with the dataset that can't be downloaded because the website is down. More info: https://github.com/pytorch/t... — committed to pytorch/tutorials by deleted user a year ago
- Temporarily disable the T5 tutorial (#2511) Temporarily disable the T5 tutorial to fix the issue with the dataset that can't be downloaded because the website is down. More info: https://github.com/p... — committed to pytorch/tutorials by deleted user a year ago
Found a local copy of the dataset and uploaded it to GitHub (it’s rather small). For now it is available via this link: https://github.com/neychev/small_DL_repo/tree/master/datasets/Multi30k
Just in case: all rights belong to the original authors of the dataset; this is only a temporary copy for convenience.
Thanks, @Nayef211, @rrmina !
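One way to use this mirror without editing torchtext's sources is to override the download locations at runtime. A minimal sketch, with assumptions flagged: the module-level `URL`/`MD5` dicts keyed by split name exist in recent torchtext releases but may differ in yours, the `validation.tar.gz` filename is a guess, and the MD5 values would still need to be computed from the mirrored archives:

```python
# Sketch: redirect Multi30k downloads to the GitHub mirror shared above.
# Assumptions (verify against your installed torchtext version):
#   - torchtext.datasets.multi30k exposes module-level URL/MD5 dicts
#     keyed by "train"/"valid"/"test";
#   - the mirror keeps the original archive names ("validation.tar.gz"
#     is an assumed name for the valid split).
MIRROR = ("https://raw.githubusercontent.com/neychev/"
          "small_DL_repo/master/datasets/Multi30k")

mirror_urls = {
    "train": f"{MIRROR}/training.tar.gz",
    "valid": f"{MIRROR}/validation.tar.gz",  # assumed filename
    "test": f"{MIRROR}/mmt16_task1_test.tar.gz",
}

try:
    from torchtext.datasets import multi30k
    multi30k.URL.update(mirror_urls)
    # multi30k.MD5 would need the checksums of the mirrored archives too.
except ImportError:
    # torchtext not installed; mirror_urls above still documents the change.
    pass
```

If the dict names don't match your torchtext version, the same idea applies: find where the per-split URLs live in `torchtext/datasets/multi30k.py` and point them at the mirror before instantiating the dataset.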
No idea what exactly is wrong with the data; the files above were located in ~/.torchtext/cache/Multi30k on one of my students’ machines. I’ve tried simply renaming the archive (to match the name in the torchtext docs) and the files inside it, and changing the MD5 to the correct one, and it seems to work.
Incorporating the approach suggested by @Nayef211, which is way more elegant, the final procedure should be the following:
Test data has 1000 sentences, which seems correct.
Plus, besides commenting out the previous URL, you also need to change the MD5 in torchtext/datasets/multi30k.py. Please refer to the next answer for an updated example.
Example code to make it work (tested on Colab):

It wasn’t automatically extracted because the mmt16_task1_test.tar.gz archive contains Apple metadata files ._test.de, ._test.en, and ._test.fr that match the filter and get extracted instead. It would be good to fix the archive file itself, but meanwhile this patch for _filter_fn can help it pick the correct file from the archive:

Just wanted to mention another approach to get Multi30k working with the data you are hosting, @neychev. Rather than downloading the data directly using wget, we can programmatically modify the URLs that each split of the dataset is downloaded from, as follows:

As @rrmina mentioned earlier, this approach still doesn’t work with the test split. If I try to print the contents of the test split, I don’t get any output. @neychev, do you happen to know what the discrepancy is for mmt16_task1_test.tar.gz between the original test split and the one you host?

As a next step, I also plan to update our Multi30k dataset implementation so we can rely on the data stored in https://github.com/neychev/small_DL_repo/tree/master/datasets/Multi30k until the dataset on the original server is restored. This way we don’t need to rely on any of the above hacks to get this dataset working. 😄
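Since the exact signature of torchtext's internal _filter_fn varies across versions, here is only the core predicate such a patch needs, as a sketch with hypothetical names: it rejects the macOS AppleDouble entries while keeping the real file.

```python
import os

def keep_real_member(member_name, expected_name):
    """True only for the genuine data file inside the archive.

    A substring/containment match accepts macOS AppleDouble metadata
    such as "._test.de" because it contains "test.de"; comparing the
    exact basename (and explicitly rejecting the "._" prefix) avoids
    extracting the wrong member.
    """
    base = os.path.basename(member_name)
    return base == expected_name and not base.startswith("._")
```

Used as the archive-member filter, this keeps `test.de` but drops `._test.de`, so extraction picks up the correct file from mmt16_task1_test.tar.gz.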