Captions
========

The dev and test caption files follow the naming scheme
"img_XX-en.tok.lc.filtered.(dev|test)", where XX is a two-letter language
code (de = German, fr = French, ru = Russian).

The format of the parallel image-caption pairs is::

    imagename	foreign_caption ||| english_caption

Dev and Test Images
===================

The images have to be downloaded manually. We provide a simple bash script
for downloading, which makes use of the following Perl script:
https://commons.wikimedia.org/wiki/User:AzaToth/wikimgrab.pl

Simply enter the "images" directory and call the script
"download_dev_test.sh" from there, e.g.::

    $ cd images
    $ source ./download_dev_test.sh

Downloading the Full Corpus
===========================

We also provide a script for downloading the full corpus, i.e. the
(monolingual and multilingual) image annotations and their associated
images, directly from Wikimedia Commons. Note that this may take a very
long time (depending on your local bandwidth), so we recommend
distributing the task across several machines. The script can be found in
the "full_corpus" directory.

We have compiled an index of 3,816,940 images on Wikimedia Commons. To
download, for example, the first 20,000 images from the full corpus (and
their annotations), do the following::

    $ cd full_corpus
    $ ./download_full_corpus.sh 0 20000

The images will be placed in sequentially numbered directories, each
containing a maximum of 10,000 images. The annotations will be written to
files named XX.captions, where XX is a language identifier (en.captions
contains English captions, etc.). The files are tab-separated in the
following format::

    Corpus index	Filename	Annotated caption (including HTML "div" with metadata)

Please note that some images may have been deleted since the index was
created; they will be skipped.
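The parallel image-caption pairs described above can be split into their
three fields with a few lines of Python. This is a minimal sketch, assuming
a tab separates the image name from the caption pair and the literal
" ||| " marker separates the two captions; the example line and image name
are hypothetical.

```python
def parse_caption_pair(line):
    """Split one line of an img_XX-en.tok.lc.filtered.(dev|test) file.

    Assumes: imagename<TAB>foreign_caption ||| english_caption
    """
    imagename, captions = line.rstrip("\n").split("\t", 1)
    foreign, english = captions.split(" ||| ", 1)
    return imagename, foreign, english


# Hypothetical example line in the assumed format:
name, de, en = parse_caption_pair(
    "Some_image.jpg\tein kleiner hund ||| a small dog\n"
)
```

The `maxsplit` arguments keep the parse robust if a caption itself happens
to contain a tab or another " ||| " sequence.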
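The tab-separated XX.captions files from the full corpus can be processed
similarly. The sketch below splits a line into corpus index, filename, and
annotated caption, and uses a naive regular expression (not a full HTML
parser) to strip the surrounding "div" metadata; the example line, field
values, and div attributes are hypothetical.

```python
import re


def parse_captions_line(line):
    """Split a tab-separated XX.captions line into (index, filename, caption).

    maxsplit=2 keeps any tabs inside the annotated caption intact.
    """
    index, filename, caption = line.rstrip("\n").split("\t", 2)
    return int(index), filename, caption


def strip_div(caption):
    """Remove HTML "div" tags to get the bare caption text (regex sketch)."""
    return re.sub(r"</?div[^>]*>", "", caption).strip()


# Hypothetical example line in the assumed format:
idx, fname, cap = parse_captions_line(
    '42\tSome_image.jpg\t<div class="description">A small dog.</div>\n'
)
```

For real annotations, an HTML parser such as Python's html.parser module is
a more reliable way to extract the text and the metadata inside the div.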