---
license: other
license_name: igc-restricted-licence
license_link: >-
  https://repository.clarin.is/licenses/userlicense_igc_restricted_download_en.pdf
---

Texts from the Icelandic Web of Science and the European Web (JSONL format)

The dataset contains questions and answers from the Icelandic Web of Science (www.visindavefur.is) and the European Web (www.evropuvefur.is), both run by the University of Iceland. The corpus does not contain all the texts from the websites, only those authorized by their authors. The corpus is available in the CLARIN-IS repository, both in an unannotated version (http://hdl.handle.net/20.500.12537/361) and an annotated version (http://hdl.handle.net/20.500.12537/362).

In the original corpus, the texts were divided into four parts: 'question', 'long question' (if available), 'answer' and 'rest' (references, information about photos, footnotes, etc.). This dataset includes only the shorter version of the question and the answer. NOTE that in some cases it was not possible to remove all extra text (such as footnotes) from the answer.


LICENSE:

The corpus contained in this package is published with a restricted licence (https://repository.clarin.is/licenses/userlicense_igc_restricted_download_en.pdf).


THE HUGGING FACE DATASET:

Each line in the JSONL file contains one article (a question and an answer). A single line has the following structure:

```
{
    "document":
      {
        "question": "The question",
        "answer": "The answer"
      },

    "uuid": "a randomly generated ID for the JSON object",
    "metadata":
      {
        "author": "the original file's author, if available",
        "fetch_timestamp": "the date of the conversion",
        "xml_id": "the ID of the original XML file",
        "publish_timestamp": "the publishing date of the text in the original XML file",
        "question":
          {
            # the offset and length of each paragraph in document['question']
            "paragraphs": [{"offset": null, "length": null}, {"offset": null, "length": null}, ...],
            # the offset and length of each sentence in document['question']
            "sentences": [{"offset": null, "length": null}, {"offset": null, "length": null}, ...]
          },
        "answer":
          {
            # the offset and length of each paragraph in document['answer']
            "paragraphs": [{"offset": null, "length": null}, {"offset": null, "length": null}, ...],
            # the offset and length of each sentence in document['answer']
            "sentences": [{"offset": null, "length": null}, {"offset": null, "length": null}, ...]
          },
        "source": "the source of the original text, taken from the XML file"
      }
}
```
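Since each line of the JSONL file is an independent JSON object, the file can also be processed with the standard library alone. A minimal sketch (the file name and all field values below are made up for illustration, not taken from the dataset):

```python
import json

# Write a single made-up article to a file and read it back, to show the
# one-record-per-line JSONL format. The file name is hypothetical.
article = {
    "document": {"question": "Q?", "answer": "A."},
    "uuid": "00000000-0000-0000-0000-000000000000",
    "metadata": {"source": "example"},
}

with open("vv_ev_sample.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(article, ensure_ascii=False) + "\n")

with open("vv_ev_sample.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["document"]["question"], "->", record["document"]["answer"])
```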

USAGE:

You can load the dataset in Python with:

```python
from datasets import load_dataset

# load the dataset
ds = load_dataset("arnastofnun/VV_EV")
```

Then you can iterate through each article (each question–answer pair):

```python
# iterate through each article
for article in ds["train"]:
    pass
```

Below is a complete Python script that prints each paragraph of the question and the answer in every article:

```python
from datasets import load_dataset

# load the dataset
ds = load_dataset("arnastofnun/VV_EV")

# iterate through each article
for article in ds["train"]:

    print("## QUESTION ##")
    # iterate over the offset/length entries for each paragraph in 'question'
    for paragraph in article['metadata']['question']['paragraphs']:
        # print the corresponding substring of article['document']['question']
        print(article['document']['question'][paragraph['offset']:paragraph['offset'] + paragraph['length']])

    print("## ANSWER ##")
    # iterate over the offset/length entries for each paragraph in 'answer'
    for paragraph in article['metadata']['answer']['paragraphs']:
        # print the corresponding substring of article['document']['answer']
        print(article['document']['answer'][paragraph['offset']:paragraph['offset'] + paragraph['length']])
```
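Sentence offsets follow the same convention as paragraph offsets, so the same slicing works for the 'sentences' lists in metadata. As a sketch, here is a small helper (hypothetical, not part of the dataset) that extracts any list of offset/length spans; the example string and span values are made up:

```python
def iter_spans(text, spans):
    """Yield the substrings of `text` described by offset/length dicts,
    as used by the 'paragraphs' and 'sentences' lists in metadata."""
    for span in spans:
        yield text[span["offset"]:span["offset"] + span["length"]]

# Made-up example values (not taken from the dataset):
answer = "First sentence. Second sentence."
sentences = [{"offset": 0, "length": 15}, {"offset": 16, "length": 16}]
print(list(iter_spans(answer, sentences)))  # prints ['First sentence.', 'Second sentence.']
```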