# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We computed embeddings for `title + " " + text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot-product** as the similarity function: compare a query embedding against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-ar-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```
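The comment above recommends a vector database for large corpora. Short of that, a single streaming pass that keeps only a running top-k avoids materializing the full score matrix. A minimal sketch, assuming documents arrive as `(docid, embedding)` pairs (the `stream_topk` helper is our own illustration, not part of the Cohere or `datasets` APIs):

```python
import heapq

def stream_topk(query_emb, doc_iter, k=3):
    """Keep a running top-k of dot-product scores while streaming documents,
    so the full corpus never has to fit in memory."""
    heap = []  # min-heap of (score, docid); smallest kept score at heap[0]
    for docid, emb in doc_iter:
        score = sum(q * d for q, d in zip(query_emb, emb))
        if len(heap) < k:
            heapq.heappush(heap, (score, docid))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, docid))
    return sorted(heap, reverse=True)  # best score first

# Toy corpus with made-up 2-dim embeddings
docs = [("doc1", [0.2, 0.9]), ("doc2", [0.8, 0.1]), ("doc3", [0.6, 0.6])]
print(stream_topk([1.0, 0.0], docs, k=2))  # [(0.8, 'doc2'), (0.6, 'doc3')]
```

With the streaming loader shown earlier, `doc_iter` could be `((doc['docid'], doc['emb']) for doc in docs)`.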
You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere `multilingual-22-12` model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We report nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
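The hit@3 metric described above reduces to a few lines of code. A minimal sketch with made-up rankings (`hit_at_k` and `mean_hit_at_k` are illustrative helper names, not from any library):

```python
def hit_at_k(ranked_ids, relevant_ids, k=3):
    """True if at least one relevant document appears in the top-k results."""
    return any(doc_id in relevant_ids for doc_id in ranked_ids[:k])

def mean_hit_at_k(runs, k=3):
    """Percentage of queries with a relevant document in the top-k."""
    hits = sum(hit_at_k(ranked, relevant, k) for ranked, relevant in runs)
    return 100.0 * hits / len(runs)

# Two toy queries: ranked doc ids plus the set of relevant ids
runs = [(["d1", "d2", "d3", "d4"], {"d3"}),
        (["d5", "d6", "d7"], {"d9"})]
print(mean_hit_at_k(runs))  # 50.0
```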
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
Redistributed without modification from https://github.com/phelber/EuroSAT.
EuroSAT100 is a subset of EuroSATallBands containing only 100 images. It is intended for tutorials and demonstrations, not for benchmarking.
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains pairs of sentences with a next_sentence_label for NSP (next sentence prediction). The sentences were taken from a public dataset of Jira projects. The next sentence is either the following sentence within the same comment or a sentence from a reply to the comment.
### Supported Tasks and Leaderboards
NSP, MLM
### Languages
English
## Dataset Structure
sentence_a, sentence_b, next_sentence_label
### Source Data
https://zenodo.org/record/5901804#.Y_Xv4HZBxD9
# Urdu_DW-BBC-512
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: mubashir.munaaf@gmail.com**
### Dataset Summary
Urdu summarization dataset containing 76,637 records of article + summary pairs scraped from the BBC Urdu and DW Urdu news websites.
- Preprocessed version up to 512 tokens (~words); URLs, picture captions, etc. removed
### Supported Tasks and Leaderboards
Summarization
- Extractive and abstractive
- urT5 (monolingual Urdu vocabulary of 40k tokens), adapted from mT5 with its own vocabulary, was fine-tuned
- ROUGE-1 F score: 40.03 combined, 46.35 on BBC Urdu datapoints only, and 36.91 on DW Urdu datapoints only
- BERTScore: 75.1 combined, 77.0 on BBC Urdu datapoints only, and 74.16 on DW Urdu datapoints only
### Languages
Urdu.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- url: URL of the article from which it was scraped (BBC Urdu URLs have English topic text with a number & DW Urdu URLs have Urdu topic text)
  dtype: {string}
- Summary: short summary of the article written by its author, like highlights.
  dtype: {string}
- Text: complete text of the article, intelligently truncated to 512 tokens.
  dtype: {string}
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Considerations for Using the Data
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Knowledge graph (KG) dataset created using the spaCy PoS tagger and dependency parser.
### Supported Tasks and Leaderboards
Can be leveraged for token classification for detection of knowledge graph entities and relations.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Important fields for the token classification task are
* tokens - tokenized text
* tags - Tags for each token
{'SRC' - Source, 'REL' - Relation, 'TGT' - Target, 'O' - Others}
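A hypothetical instance (values invented for illustration) showing how `tokens` and `tags` line up one-to-one:

```python
# Made-up example; the real records come from the dataset file itself
example = {
    "tokens": ["Einstein", "developed", "relativity", "."],
    "tags":   ["SRC",      "REL",       "TGT",        "O"],
}
# Every token carries exactly one tag
assert len(example["tokens"]) == len(example["tags"])
```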
### Data Splits
A single data file with around 15k records
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
# DEplain-web-doc: A corpus for German Document Simplification
DEplain-web-doc is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for document simplification.
The corpus consists of 396 (199/50/147) parallel documents crawled from the web in standard German and plain German (or easy-to-read German). All documents are either published under an open license or the copyright holders gave us the permission to share the data.
If you are interested in a larger corpus, please check our paper and the provided web crawler to download more parallel documents with a closed license.
Human annotators also sentence-wise aligned the 147 documents of the test set to build a corpus for sentence simplification.
For the sentence-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
The documents of the training and development set were automatically aligned using MASSalign.
You can find this data here: [https://github.com/rstodden/DEPlain/](https://github.com/rstodden/DEPlain/tree/main/E__Sentence-level_Corpus/DEplain-web-sent/auto/open).
If you use the automatically aligned data, please use it cautiously, as the alignment quality might be error-prone.
# Dataset Card for DEplain-web-doc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain)
- **Paper:** ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939)
- **Point of Contact:** [Regina Stodden](regina.stodden@hhu.de)
### Dataset Summary
[DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts of this dataset were scraped from the web. All documents are licensed under an open license. The simple-complex sentence pairs are manually aligned.
This dataset only contains a test set. For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/).
### Supported Tasks and Leaderboards
The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
### Languages
The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache).
### Domains
The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts.
## Dataset Structure
### Data Access
- The dataset is licensed under different open licenses depending on the subcorpus.
### Data Instances
- `document-simplification` configuration: an instance consists of an original document and one reference simplification.
- `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification. Please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
- `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. In contrast to the sentence-simplification configuration, this configuration also contains sentence pairs in which the original and the simplified sentence are exactly the same. Please see [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain/tree/main/C__Alignment_Algorithms)
### Data Fields
| data field | data field description |
|-------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| `original` | an original text from the source dataset |
| `simplification` | a simplified text from the source dataset |
| `pair_id` | document pair id |
| `complex_document_id ` (on doc-level) | id of complex document (-1) |
| `simple_document_id ` (on doc-level) | id of simple document (-0) |
| `original_id ` (on sent-level) | id of sentence(s) of the original text |
| `simplification_id ` (on sent-level) | id of sentence(s) of the simplified text |
| `domain ` | text domain of the document pair |
| `corpus ` | subcorpus name |
| `simple_url ` | origin URL of the simplified document |
| `complex_url ` | origin URL of the original document |
| `simple_level ` or `language_level_simple ` | required CEFR language level to understand the simplified document |
| `complex_level ` or `language_level_original ` | required CEFR language level to understand the original document |
| `simple_location_html ` | location on hard disk where the HTML file of the simple document is stored |
| `complex_location_html ` | location on hard disk where the HTML file of the original document is stored |
| `simple_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `complex_location_txt ` | location on hard disk where the content extracted from the HTML file of the original document is stored |
| `alignment_location ` | location on hard disk where the alignment is stored |
| `simple_author ` | author (or copyright owner) of the simplified document |
| `complex_author ` | author (or copyright owner) of the original document |
| `simple_title ` | title of the simplified document |
| `complex_title ` | title of the original document |
| `license ` | license of the data |
| `last_access ` or `access_date` | data origin data or data when the HTML files were downloaded |
| `rater` | id of the rater who annotated the sentence pair |
| `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m |
### Data Splits
DEplain-web contains a training set, a development set and a test set.
The dataset was split based on the license of the data. All manually aligned sentence pairs with an open license are part of the test set. The document-level test set likewise contains only the manually aligned documents. For the document-level train and dev sets, the documents which are not manually aligned or not publicly available are used. For the sentence level, the alignment pairs can be produced by automatic alignment (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)).
Document-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 147 | 147 |
| DEplain-web-auto-open | 199 | 50 | - | 279 |
| DEplain-web-auto-closed | 288 | 72 | - | 360 |
| in total | 487 | 122 | 147 | 756 |
Sentence-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 1846 | 1846 |
| DEplain-web-auto-open | 514 | 138 | - | 652 |
| DEplain-web-auto-closed | 767 | 175 | - | 942 |
| in total | 1281 | 313 | 1846 | 3440 |
| **subcorpus** | **simple** | **complex** | **domain** | **description** | **# docs** |
|----------------------------------|------------------|------------------|------------------|-------------------------------------------------------------------------------|------------------|
| **EinfacheBücher** | Plain German | Standard German / Old German | fiction | Books in plain German | 15 |
| **EinfacheBücherPassanten** | Plain German | Standard German / Old German | fiction | Books in plain German | 4 |
| **ApothekenUmschau** | Plain German | Standard German | health | Health magazine in which diseases are explained in plain German | 71 |
| **BZFE** | Plain German | Standard German | health | Information of the German Federal Agency for Food on good nutrition | 18 |
| **Alumniportal** | Plain German | Plain German | language learner | Texts related to Germany and German traditions written for language learners. | 137 |
| **Lebenshilfe** | Easy-to-read German | Standard German | accessibility | | 49 |
| **Bibel** | Easy-to-read German | Standard German | bible | Bible texts in easy-to-read German | 221 |
| **NDR-Märchen** | Easy-to-read German | Standard German / Old German | fiction | Fairytales in easy-to-read German | 10 |
| **EinfachTeilhaben** | Easy-to-read German | Standard German | accessibility | | 67 |
| **StadtHamburg** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Hamburg | 79 |
| **StadtKöln** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Cologne | 85 |
: Documents per Domain in DEplain-web.
| domain | avg. | std. | interpretation | # sents | # docs |
|------------------|---------------|---------------|-------------------------|-------------------|------------------|
| bible | 0.7011 | 0.31 | moderate | 6903 | 3 |
| fiction | 0.6131 | 0.39 | moderate | 23289 | 3 |
| health | 0.5147 | 0.28 | weak | 13736 | 6 |
| language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 |
| all | 0.8505 | 0.23 | strong | 87645 | 87 |
: Inter-Annotator-Agreement per Domain in DEplain-web-manual.
| operation | documents | percentage |
|-----------|-------------|------------|
| rephrase | 863 | 11.73 |
| deletion | 3050 | 41.47 |
| addition | 1572 | 21.37 |
| identical | 887 | 12.06 |
| fusion | 110 | 1.5 |
| merge | 77 | 1.05 |
| split | 796 | 10.82 |
| in total | 7355 | 100 |
: Information regarding Simplification Operations in DEplain-web-manual.
## Dataset Creation
### Curation Rationale
Current German text simplification datasets are limited in their size or are only automatically evaluated.
We provide a manually aligned corpus to boost text simplification research in German.
### Source Data
#### Initial Data Collection and Normalization
The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification).
The texts of the documents were manually simplified by professional translators.
The data was split into sentences using a German model of SpaCy.
Two German native speakers have manually aligned the sentence pairs by using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/).
#### Who are the source language producers?
The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939).
### Annotations
#### Annotation process
The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
#### Who are the annotators?
The annotators are two German native speakers, who are trained in linguistics. Both were at least compensated with the minimum wage of their country of residence.
They are not part of any target group of text simplification.
### Personal and Sensitive Information
No sensitive data.
## Considerations for Using the Data
### Social Impact of Dataset
Many people do not understand texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can benefit the training of text simplification models.
### Discussion of Biases
No bias is known.
### Other Known Limitations
The dataset is provided under different open licenses depending on the license of each website where the data was scraped from. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
DEplain-web was developed by researchers at Heinrich Heine University Düsseldorf, Germany. This research is part of the PhD program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".
### Licensing Information
The corpus includes the following licenses: CC-BY-SA-3, CC-BY-4, and CC-BY-NC-ND-4. The corpus also includes documents under a "save_use_share" license; for these documents, the data providers permitted us to share the data for research purposes.
### Citation Information
```
@inproceedings{stodden-etal-2023-deplain,
title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
author = "Stodden, Regina and
Momen, Omar and
Kallmeyer, Laura",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
notes = "preprint: https://arxiv.org/abs/2305.18939",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).
# Sentiment fairness dataset
================================
This dataset measures gender fairness in the downstream task of sentiment analysis. It is a subset of the SST data, filtered to keep only the sentences that contain gender information. The Python code used to create this dataset can be found in the prepare_sst.ipyth file.
The filtered dataset was then labeled by 4 human annotators, who are the authors of this dataset. The annotation instructions are given below.
---
# Annotation Instructions
==============================
Each sentence has two existing labels:
* 'label' gives the sentiment score
* 'gender' gives the guessed gender of the target of the sentiment
The 'gender' label has two tags:
* 'masc' for masculine-gendered words, like 'he' or 'father'
* 'femm' for feminine-gendered words, like 'she' or 'mother'
For each sentence, you are to annotate if the sentence's **sentiment is directed toward a gendered person** i.e. the gender label is correct.
There are two primary ways the gender label can be incorrect: 1) the sentiment is not directed toward a gendered person/character, or 2) the sentiment is directed toward a gendered person/character but the gender is incorrect.
Please annotate **1** if the sentence is **correctly labeled** and **0** if not.
(The sentiment labels should be high quality, so mostly we're checking that the gender is correctly labeled.)
Some clarifying notes:
* If the sentiment is directed towards multiple people with different genders, mark as 0; in this case, the subject of the sentiment is not towards a single gender.
* If the sentiment is directed towards the movie or its topic, even if the movie or topic seems gendered, mark as 0; in this case, the subject of the sentiment isn't a person or character (it's a topic).
* If the sentiment is directed towards a named person or character, and you think you can infer the gender, don't! We are only marking as 1 sentences where the subject is gendered in the sentence itself.
## Positive examples (you'd annotate 1)
* sentence: She gave an excellent performance.
* label: .8
* gender: femm
Sentiment is directed at the 'she'.
---
* sentence: The director gets excellent performances out of his cast.
* label: .7
* gender: masc
Sentiment is directed at the male-gendered director.
---
* sentence: Davis the performer is plenty fetching enough, but she needs to shake up the mix, and work in something that doesn't feel like a half-baked stand-up routine.
* label: .4
* gender: femm
Sentiment is directed at Davis, who is gendered with the pronoun 'she'.
## Negative examples (you'd annotate 0)
* sentence: A near miss for this new director.
* label: .3
* gender: femm
This sentence was labeled 'femm' because it had the word 'miss' in it, but the sentiment is not actually directed towards a feminine person (we don't know the gender of the director).
---
* sentence: This terrible book-to-movie adaption must have the author turning in his grave.
* label: .2
* gender: masc
The sentiment is directed towards the movie, or maybe the director, but not the male-gendered author.
---
* sentence: Despite a typical mother-daughter drama, the excellent acting makes this movie a charmer.
* label: .8
* gender: femm
Sentiment is directed at the acting, not a person or character.
---
* sentence: The film's maudlin focus on the young woman's infirmity and her naive dreams play like the worst kind of Hollywood heart-string plucking.
* label: .8
* gender: femm
Similar to above, the sentiment is directed towards the movie's focus---though the focus may be gendered, we are only keeping sentences where the sentiment is directed towards a gendered person or character.
---
* sentence: Lohman adapts to the changes required of her, but the actress and director Peter Kosminsky never get the audience to break through the wall her character erects.
* label: .4
* gender: femm
The sentiment is directed towards both the actress and the director, who may have different genders.
---
# The final dataset
=====================
The final dataset contains the following columns:
* Sentences: the sentence that contains a sentiment.
* label: the sentiment label, indicating whether the sentence is positive or negative.
* gender: the gender of the target of the sentiment in the sentence.
* A1: the annotation of the first annotator ("1" means that the gender in the "gender" column is correctly the target of the sentence; "0" means otherwise).
* A2: the annotation of the second annotator (same encoding as A1).
* A3: the annotation of the third annotator (same encoding as A1).
* Keep: a boolean indicating whether to keep this sentence or not. "Keep" means that the gender of this sentence was labeled as correct by more than one annotator.
* agreement: the number of annotators who agreed on the label.
* correct: the number of annotators who gave the majority label.
* incorrect: the number of annotators who gave the minority label.
**This dataset is ready to use, as the majority of the human annotators agreed that the sentiment of these sentences is targeted at the gender mentioned in the "gender" column.**
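Assuming the column semantics described above, the per-sentence aggregates can be derived from the three annotations. A minimal sketch (`aggregate_votes` is our own illustrative helper, not part of the released code):

```python
def aggregate_votes(a1, a2, a3):
    """Derive Keep / agreement / correct / incorrect from three 0/1 annotations."""
    votes = [a1, a2, a3]
    ones, zeros = votes.count(1), votes.count(0)
    return {
        "Keep": ones > 1,               # gender judged correct by more than one annotator
        "agreement": max(ones, zeros),  # number of annotators agreeing on the majority label
        "correct": max(ones, zeros),    # annotators who gave the majority label
        "incorrect": min(ones, zeros),  # annotators who gave the minority label
    }

print(aggregate_votes(1, 1, 0))  # {'Keep': True, 'agreement': 2, 'correct': 2, 'incorrect': 1}
```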
---
# Citation
==============
@misc{sst-sentiment-fainress-dataset,
  title={A dataset to measure fairness in the sentiment analysis task},
  author={Gero, Katy and Butters, Nathan and Bethke, Anna and Elsafoury, Fatma},
  howpublished={https://github.com/efatmae/SST_sentiment_fairness_data},
  year={2023}
}
# Dataset Card for "fd_dialogue"
This dataset contains transcripts for famous movies and TV shows from https://transcripts.foreverdreaming.org/
The dataset contains **only a small portion of Forever Dreaming's data**, as only transcripts with a clear dialogue format are included, such as:
```
PERSON 1: Hello
PERSON 2: Hello Person 2!
(they are both talking)
Something else happens
PERSON 1: What happened?
```
Each row in the dataset is a single TV episode or movie (**5380** rows total), following the [OpenAssistant](https://open-assistant.io/) format.
The METADATA column contains the *type* (movie or series), *show* and *episode* ("" for movies) keys with string values, encoded as a JSON string.
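The METADATA field can be decoded with the standard `json` module. A minimal sketch, where the show and episode values are made up for illustration:

```python
import json

# Hypothetical METADATA value with the documented keys
row_metadata = '{"type": "series", "show": "FRIENDS", "episode": "10x17"}'
meta = json.loads(row_metadata)
print(meta["type"], meta["show"], meta["episode"])  # series FRIENDS 10x17
```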
| Show | Count |
|----|----|
| A Discovery of Witches | 6 |
| Agents of S.H.I.E.L.D. | 9 |
| Alias | 102 |
| Angel | 64 |
| Bones | 114 |
| Boy Meets World | 24 |
| Breaking Bad | 27 |
| Brooklyn Nine-Nine | 8 |
| Buffy the Vampire Slayer | 113 |
| CSI: Crime Scene Investigation | 164 |
| Charmed | 176 |
| Childrens Hospital | 18 |
| Chuck | 17 |
| Crossing Jordan | 23 |
| Dawson's Creek | 128 |
| Degrassi Next Generation | 113 |
| Doctor Who | 699 |
| Doctor Who Special | 21 |
| Doctor Who_ | 108 |
| Downton Abbey | 18 |
| Dragon Ball Z Kai | 57 |
| FRIENDS | 227 |
| Foyle's War | 28 |
| Friday Night Lights | 7 |
| Game of Thrones | 6 |
| Gilmore Girls | 149 |
| Gintama | 41 |
| Glee | 11 |
| Gossip Girl | 5 |
| Greek | 33 |
| Grey's Anatomy | 75 |
| Growing Pains | 116 |
| Hannibal | 4 |
| Heartland | 3 |
| Hell on Wheels | 3 |
| House | 153 |
| How I Met Your Mother | 133 |
| JoJo's Bizarre Adventure | 42 |
| Justified | 46 |
| Keeping Up With the Kardashians | 8 |
| Lego Ninjago: Masters of Spinjitzu | 12 |
| London Spy | 5 |
| Lost | 117 |
| Lucifer | 3 |
| Married | 9 |
| Mars | 6 |
| Merlin | 58 |
| My Little Pony: Friendship is Magic | 15 |
| NCIS | 91 |
| New Girl | 3 |
| Once Upon A Time | 79 |
| One Tree Hill | 163 |
| Open Heart | 8 |
| Pretty Little Liars | 4 |
| Prison Break | 23 |
| Queer As Folk | 38 |
| Reign | 9 |
| Roswell | 60 |
| Salem | 23 |
| Scandal | 7 |
| Schitt's Creek | 4 |
| Scrubs | 29 |
| Sex and the City | 4 |
| Sherlock | 8 |
| Skins | 20 |
| Smallville | 190 |
| Sons of Anarchy | 55 |
| South Park | 84 |
| Spy × Family | 12 |
| StarTalk | 6 |
| Sugar Apple Fairy Tale | 5 |
| Supernatural | 114 |
| Teen Wolf | 58 |
| That Time I Got Reincarnated As A Slime | 22 |
| The 100 | 3 |
| The 4400 | 16 |
| The Amazing World of Gumball | 4 |
| The Big Bang Theory | 183 |
| The L Word | 3 |
| The Mentalist | 38 |
| The Nanny | 8 |
| The O.C. | 92 |
| The Office | 195 |
| The Originals | 45 |
| The Secret Life of an American Teenager | 18 |
| The Simpsons | 14 |
| The Vampire Diaries | 121 |
| The Walking Dead | 12 |
| The X-Files | 3 |
| Torchwood | 31 |
| Trailer Park Boys | 10 |
| True Blood | 33 |
| Tyrant | 6 |
| Veronica Mars | 59 |
| Vikings | 7 |
An additional 36 movies with transcripts are also included:
```
Pokémon the Movie: Hoopa and the Clash of Ages (2015)
Frozen (2013)
Home Alone
Lego Batman Movie, The (2017)
Disenchanted (2022)
Nightmare Before Christmas, The
Goonies, The (1985)
Polar Express, The (2004)
Frosty the Snowman (1969)
The Truth About Christmas (2018)
A Miser Brothers' Christmas (2008)
Powerpuff Girls: 'Twas the Fight Before Christmas, The (2003)
Tis the Season (2015)
Jingle Hell (2000)
Corpse Party: Book of Shadows (2016)
Mummy, The (1999)
Knock Knock (2015)
Dungeons and Dragons, Honour Among Thieves (2023)
War of the Worlds (2005)
Harry Potter and the Sorcerer's Stone
Twilight Saga, The: Breaking Dawn Part 2
Twilight Saga, The: Breaking Dawn Part 1
Twilight Saga, The: Eclipse
Godfather, The (1972)
Transformers (2007)
Creed 3 (2023)
Creed (2015)
Lethal Weapon 3 (1992)
Spider-Man 2 (2004)
Spider-Man: No Way Home (2021)
Black Panther: Wakanda Forever (2022)
Money Train (1995)
Happys, The (2016)
Paris, Wine and Romance (2019)
Angel Guts: Red p*rn (1981)
Butterfly Crush (2010)
```
Note that there could be overlaps with the [TV dialogue dataset](https://huggingface.co/datasets/sedthh/tv_dialogue) for Friends, The Office, Doctor Who, South Park and some movies. |
true | |
false | |
false |
# Dataset Card for brain-tumor-m2pbp
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
brain-tumor-m2pbp
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
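As a quick illustration of the COCO box convention, here is a small sketch (not part of the dataset tooling) converting an `[x_min, y_min, width, height]` box to corner coordinates:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First bbox from the sample instance above:
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # → [302.0, 109.0, 375.0, 161.0]
```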
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
### Citation Information
```
@misc{ brain-tumor-m2pbp,
title = { brain tumor m2pbp Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/brain-tumor-m2pbp } },
url = { https://universe.roboflow.com/object-detection/brain-tumor-m2pbp },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for printed-circuit-board
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/printed-circuit-board
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
printed-circuit-board
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/printed-circuit-board
### Citation Information
```
@misc{ printed-circuit-board,
title = { printed circuit board Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/printed-circuit-board } },
url = { https://universe.roboflow.com/object-detection/printed-circuit-board },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false | # AutoTrain Dataset for project: treehk
## Dataset Description
This dataset has been automatically processed by AutoTrain for project treehk.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<245x358 RGB PIL image>",
"target": 0
},
{
"image": "<400x530 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Acacia auriculiformis \u8033\u679c\u76f8\u601d', 'Acacia confusa Merr. \u53f0\u7063\u76f8\u601d', 'Acacia mangium Willd. \u5927\u8449\u76f8\u601d', 'Acronychia pedunculata (L.) Miq.--\u5c71\u6cb9\u67d1', 'Archontophoenix alexandrae \u4e9e\u529b\u5c71\u5927\u6930\u5b50', 'Bauhinia purpurea L. \u7d05\u82b1\u7f8a\u8e44\u7532', 'Bauhinia variegata L \u5bae\u7c89\u7f8a\u8e44\u7532', 'Bauhinia \u6d0b\u7d2b\u834a', 'Bischofia javanica Blume \u79cb\u6953', 'Bischofia polycarpa \u91cd\u967d\u6728', 'Callistemon rigidus R. Br \u7d05\u5343\u5c64', 'Callistemon viminalis \u4e32\u9322\u67f3', 'Cinnamomum burmannii \u9670\u9999', 'Cinnamomum camphora \u6a1f\u6a39', 'Crateva trifoliata \u920d\u8449\u9b5a\u6728 ', 'Crateva unilocularis \u6a39\u982d\u83dc', 'Delonix regia \u9cf3\u51f0\u6728', 'Elaeocarpus hainanensis Oliv \u6c34\u77f3\u6995', 'Elaeocarpus sylvestris \u5c71\u675c\u82f1', 'Erythrina variegata L. \u523a\u6850', 'Ficus altissima Blume \u9ad8\u5c71\u6995', 'Ficus benjamina L. \u5782\u8449\u6995', 'Ficus elastica Roxb. ex Hornem \u5370\u5ea6\u6995', 'Ficus microcarpa L. f \u7d30\u8449\u6995', 'Ficus religiosa L. \u83e9\u63d0\u6a39', 'Ficus rumphii Blume \u5047\u83e9\u63d0\u6a39', 'Ficus subpisocarpa Gagnep. \u7b46\u7ba1\u6995', 'Ficus variegata Blume \u9752\u679c\u6995', 'Ficus virens Aiton \u5927\u8449\u6995', 'Koelreuteria bipinnata Franch. \u8907\u7fbd\u8449\u6b12\u6a39', 'Livistona chinensis \u84b2\u8475', 'Melaleuca Cajeput-tree \u767d\u5343\u5c64 ', 'Melia azedarach L \u695d', 'Peltophorum pterocarpum \u76fe\u67f1\u6728', 'Peltophorum tonkinense \u9280\u73e0', 'Roystonea regia \u5927\u738b\u6930\u5b50', 'Schefflera actinophylla \u8f3b\u8449\u9d5d\u638c\u67f4', 'Schefflera heptaphylla \u9d5d\u638c\u67f4', 'Spathodea campanulata\u706b\u7130\u6a39', 'Sterculia lanceolata Cav. \u5047\u860b\u5a46', 'Wodyetia bifurcata A.K.Irvine \u72d0\u5c3e\u68d5'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 385 |
| valid | 113 |
|
false | # Abalone
The [Abalone dataset](https://archive-beta.ics.uci.edu/dataset/1/abalone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict the age of the given abalone.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------|
| abalone | Regression | Predict the age of the abalone. |
| binary | Binary classification | Does the abalone have more than 9 rings?|
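The binary target can be derived directly from the ring count; a minimal sketch (the function name is illustrative, not part of the dataset):

```python
def has_more_than_nine_rings(number_of_rings: int) -> int:
    # Binary label per the table above: 1 if the abalone has more than 9 rings.
    return int(number_of_rings > 9)

print(has_more_than_nine_rings(10))  # → 1
print(has_more_than_nine_rings(9))   # → 0
```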
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/abalone")["train"]
```
# Features
Target feature in bold.
|**Feature** |**Type** |
|-----------------------|---------------|
| sex | `[string]` |
| length | `[float64]` |
| diameter | `[float64]` |
| height | `[float64]` |
| whole_weight | `[float64]` |
| shucked_weight | `[float64]` |
| viscera_weight | `[float64]` |
| shell_weight | `[float64]` |
| **number_of_rings** | `[int8]` | |
true |
# Dataset Card for BLiterature
*BLiterature is part of a bigger project that is not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
BLiterature is a raw dataset dump consisting of text from at most 260,261,224 blog posts (excluding categories and date-grouped posts) from blog.fc2.com.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Japanese
## Dataset Structure
All files are JSONL files that have been compressed into 7z archives.
### Data Instances
```json
["http://1kimono.blog49.fc2.com/blog-entry-50.html",
"<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n\t\t\"http://www.w3.org/TR/html4/loose.dtd\">\n<!--\n<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01//EN\"\n\t\t\"http://www.w3.org/T... (TRUNCATED)"]
```
### Data Fields
Each row is a list with only two fields: the URL and the retrieved content. The retrieved content may contain values for which the scraper ran into issues; if so, they are marked in XML as follows:
```<?xml version="1.0" encoding="utf-8"?><error>Specific Error</error>```
URLs may not match the final URL from which the page was retrieved, as redirects may have been present while scraping.
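A minimal sketch of filtering out rows whose content is a scraper error marker (the URLs and error text below are made up for illustration):

```python
import json

# Hypothetical rows in the two-field [url, content] format described above.
rows = [
    ["http://example.blog.fc2.com/blog-entry-1.html", "<!DOCTYPE HTML>..."],
    ["http://example.blog.fc2.com/blog-entry-2.html",
     '<?xml version="1.0" encoding="utf-8"?><error>Timeout</error>'],
]

def is_scrape_error(content: str) -> bool:
    # Error values are marked with an <error> XML element.
    return "<error>" in content

ok = [(url, content) for url, content in rows if not is_scrape_error(content)]
print(len(ok))  # → 1
```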
#### Q-Score Distribution
Not Applicable
### Data Splits
The JSONL files were split roughly every 2,500,000 posts. Allow for a slight deviation of up to 5,000 additional posts due to how the files were saved.
## Dataset Creation
### Curation Rationale
fc2 is a Japanese blog hosting website which offers a place for anyone to host their blog on. As a result, the language used compared to other more official sources is more informal and relaxed as anyone can post whatever they personally want.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed as this is a raw dump of the dataset.
#### Who are the source language producers?
The authors of each blog, which may include others to post on their blog domain as well.
### Annotations
#### Annotation process
No Annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
As this dataset contains information from individuals, it is more likely to contain personally identifiable information. However, we believe that the authors have vetted their posts in good faith to avoid such occurrences.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset contains real-life references and revolves around Japanese culture. As such, there will be a bias towards it.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0, for all parts of which KaraKaraWitch may be considered authors. All other material is distributed under fair use principles.
Ronsor Labs additionally is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{bliterature,
title = {BLiterature: fc2 blogs for the masses.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/KaraKaraWitch/BLiterature}},
}
```
### Name Etymology
[Literature (リテラチュア) - Reina Ueda (上田麗奈)](https://www.youtube.com/watch?v=Xo1g5HWgaRA)
`Blogs` > `B` + `Literature` > `BLiterature`
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
- [neggles (Github)](https://github.com/neggles) for providing compute for the gathering of dataset. |
true |
# Dataset Card for JSICK
## Table of Contents
- [Dataset Card for JSICK](#dataset-card-for-jsick)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset)
- [JSICK-stress Test set](#jsick-stress-test-set)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [stress](#stress)
- [Data Fields](#data-fields)
- [base](#base-1)
- [stress](#stress-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JSICK
- **Repository:** https://github.com/verypluming/JSICK
- **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
### Dataset Summary
From official [GitHub](https://github.com/verypluming/JSICK):
#### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
JSICK is the Japanese NLI and STS dataset by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese.
We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
#### JSICK-stress Test set
The JSICK-stress test set is a dataset to investigate whether models capture word order and case particles in Japanese.
The JSICK-stress test set is provided by transforming syntactic structures of sentence pairs in JSICK, where we analyze whether models are attentive to word order and case particles to predict entailment labels and similarity scores.
The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) include the basic word order involving
ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively.
We provide the JSICK-stress test set by transforming syntactic structures of these pairs by the following three ways:
- `scrum_ga_o`: a scrambled pair, where the word order of premise sentences A is scrambled into o-ga, ni-ga, and de-ga order, respectively.
- `ex_ga_o`: a rephrased pair, where the only case particles (ga, o, ni, de) in the premise A are swapped
- `del_ga_o`: a rephrased pair, where the only case particles (ga, o, ni) in the premise A are deleted
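As a toy illustration of the `ex_ga_o` particle swap (the real JSICK-stress pairs were produced from syntactic structures, not naive string replacement):

```python
def swap_ga_o(sentence: str) -> str:
    # Swap the case particles が (ga) and を (o) via a temporary placeholder,
    # so the first replacement is not undone by the second.
    return sentence.replace("が", "\x00").replace("を", "が").replace("\x00", "を")

print(swap_ga_o("犬が猫を追う"))  # → 犬を猫が追う
```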
### Languages
The language data in JSICK is in Japanese and English.
## Dataset Structure
### Data Instances
To load a specific configuration, pass its name via the `name` argument:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4500
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4927
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress")
print(dataset)
# DatasetDict({
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'],
# num_rows: 900
# })
# })
```
#### base
An example looks as follows:
```json
{
'id': 1,
'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている',
'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている',
'label': 1, // (neutral)
'score': 3.700000047683716,
'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background',
'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background',
'label_en': 1, // (neutral)
'score_en': 4.5,
'corr_entailment_labelAB_En': 'nan',
'corr_entailment_labelBA_En': 'nan',
'image_ID': '3155657768_b83a7831e5.jpg',
'original_caption': 'A group of children playing in a yard , a man in the background .',
'semtag_short': 'nan',
'semtag_long': 'nan',
}
```
#### stress
An example looks as follows:
```json
{
'id': '5818_de_d',
'premise': '女性火の近くダンスをしている',
'hypothesis': '火の近くでダンスをしている女性は一人もいない',
'label': 2, // (contradiction)
'score': 4.0,
'sentence_A_Ja_origin': '女性が火の近くでダンスをしている',
'entailment_label_origin': 2,
'relatedness_score_Ja_origin': 3.700000047683716,
'rephrase_type': 'd',
'case_particles': 'de'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| id | The ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese. |
| score | The relatedness score in the range [1-5] in Japanese. |
| premise_en | The first sentence in English. |
| hypothesis_en | The second sentence in English. |
| label_en | The original entailment label in English. |
| score_en | The original relatedness score in the range [1-5] in English. |
| semtag_short | The linguistic phenomena tags in Japanese. |
| semtag_long | The details of linguistic phenomena tags in Japanese. |
| image_ID | The original image in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| original_caption | The original caption in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
| corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
#### stress
| Name | Description |
| --------------------------- | ------------------------------------------------------------------------------------------------- |
| id | Ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese |
| score | The relatedness score in the range [1-5] in Japanese. |
| sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. |
| entailment_label_origin | The original entailment labels. |
| relatedness_score_Ja_origin | The original relatedness scores. |
| rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. |
| case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. |
### Data Splits
| name | train | validation | test |
| --------------- | ----: | ---------: | ----: |
| base | 4,500 | | 4,927 |
| original | 4,500 | | 4,927 |
| stress | | | 900 |
| stress-original | | | 900 |
### Annotations
To annotate the JSICK dataset, they used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores for JSICK.
They had six native Japanese speakers as annotators, who were randomly selected from the platform.
The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions.
For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example.
For similarity scores, they used the average of the annotation results as gold scores.
The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv).
The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65.
## Additional Information
- [verypluming/JSICK](https://github.com/verypluming/JSICK)
- [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual)
- [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@article{yanaka-mineshima-2022-compositional,
title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
author = "Yanaka, Hitomi and
Mineshima, Koji",
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.73",
doi = "10.1162/tacl_a_00518",
pages = "1266--1284",
}
@article{谷中 瞳2021,
title={JSICK: 日本語構成的推論・類似度データセットの構築},
author={谷中 瞳 and 峯島 宏次},
journal={人工知能学会全国大会論文集},
volume={JSAI2021},
number={ },
pages={4J3GS6f02-4J3GS6f02},
year={2021},
doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02}
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset. |
false |
This dataset is a redistribution of the following dataset.
https://github.com/suzuki256/dog-dataset
```
The dataset and its contents are made available on an "as is" basis and without warranties of any kind, including without limitation satisfactory quality and conformity, merchantability, fitness for a particular purpose, accuracy or completeness, or absence of errors.
```
|
false | # Dataset Card for CIFAR-10-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar10-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar10-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/cifar10-enriched", split="train")
```
Start exploring with a simple view:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['img'])
spotlight.show(df_show, port=8000, dtype={"img_path": spotlight.Image})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### CIFAR-10 Dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
Here is the list of classes in the CIFAR-10:
- airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-10).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x7FD19FABC1D0>,
'img_path': '/huggingface/datasets/downloads/extracted/7faec2e0fd4aa3236f838ed9b105fef08d1a6f2a6bdeee5c14051b64619286d5/0/0.png',
'label': 0,
'split': 'train'
}
```
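The integer `label` indexes into the standard CIFAR-10 class list shown above; a minimal sketch:

```python
# Standard CIFAR-10 class names in label order.
CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]

# The sample above has label 0:
print(CIFAR10_CLASSES[0])  # → airplane
```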
### Data Fields
| Feature | Data Type |
|---------------------------------|-----------------------------------------------|
| img | Image(decode=True, id=None) |
| img_path | Value(dtype='string', id=None) |
| label | ClassLabel(names=[...], id=None) |
| split | Value(dtype='string', id=None) |
### Data Splits
| Dataset Split | Number of Images in Split | Samples per Class |
| ------------- |---------------------------| -------------------------|
| Train | 50000 | 5000 |
| Test | 10000 | 1000 |
## Dataset Creation
### Curation Rationale
The CIFAR-10 and CIFAR-100 are labeled subsets of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this dataset, please cite the following paper:
```
@article{krizhevsky2009learning,
added-at = {2021-01-21T03:01:11.000+0100},
author = {Krizhevsky, Alex},
biburl = {https://www.bibsonomy.org/bibtex/2fe5248afe57647d9c85c50a98a12145c/s364315},
interhash = {cc2d42f2b7ef6a4e76e47d1a50c8cd86},
intrahash = {fe5248afe57647d9c85c50a98a12145c},
keywords = {},
pages = {32--33},
timestamp = {2021-01-21T03:01:11.000+0100},
title = {Learning Multiple Layers of Features from Tiny Images},
url = {https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf},
year = 2009
}
```
### Contributions
Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, and Renumics GmbH. |
false |
[MLQA (MultiLingual Question Answering)](https://github.com/facebookresearch/mlqa) is a Chinese–English bilingual question-answering dataset: the original MLQA dataset converted into Taiwanese Traditional Chinese, with matching items from the Chinese and English versions merged for convenient use with bilingual language models. (Acknowledgements: [BYVoid/OpenCC](https://github.com/BYVoid/OpenCC), [vinta/pangu.js](https://github.com/vinta/pangu.js))
It is divided into `dev` and `test` splits, with 302 and 2,986 items respectively.
Sample:
```json
[
{
"title": {
"en": "Curling at the 2014 Winter Olympics",
"zh_tw": "2014 年冬季奧林匹克運動會冰壺比賽"
},
"paragraphs": [
{
"context": {
"en": "Qualification to the curling tournaments at the Winter Olympics was determined through two methods. Nations could qualify teams by earning qualification points from performances at the 2012 and 2013 World Curling Championships. Teams could also qualify through an Olympic qualification event which was held in the autumn of 2013. Seven nations qualified teams via World Championship qualification points, while two nations qualified through the qualification event. As host nation, Russia qualified teams automatically, thus making a total of ten teams per gender in the curling tournaments.",
"zh_tw": "本屆冬奧會冰壺比賽參加資格有兩種辦法可以取得。各國家或地區可以透過 2012 年和 2013 年的世界冰壺錦標賽,也可以透過 2013 年 12 月舉辦的一次冬奧會資格賽來取得資格。七個國家透過兩屆世錦賽積分之和來獲得資格,兩個國家則透過冬奧會資格賽。作為主辦國,俄羅斯自動獲得參賽資格,這樣就確定了冬奧會冰壺比賽的男女各十支參賽隊伍。"
},
"qas": [
{
"id": "b08184972e38a79c47d01614aa08505bb3c9b680",
"question": {
"zh_tw": "俄羅斯有多少隊獲得參賽資格?",
"en": "How many teams did Russia qualify for?"
},
"answers": {
"en": [
{
"text": "ten teams",
"answer_start": 543
}
],
"zh_tw": [
{
"text": "十支",
"answer_start": 161
}
]
}
}
]
}
]
}
]
```
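The `answer_start` fields in the sample above are character offsets into the corresponding `context`. A minimal sketch (illustrative only, not part of the dataset tooling) for validating such offsets:

```python
# Sketch: check that each answer's `answer_start` offset actually points at
# the answer text inside the corresponding context. Illustrative only.

def validate_answers(article):
    """Yield (qa_id, lang, ok) for every answer span in one article."""
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            for lang, answers in qa["answers"].items():
                context = paragraph["context"][lang]
                for ans in answers:
                    start = ans["answer_start"]
                    ok = context[start:start + len(ans["text"])] == ans["text"]
                    yield qa["id"], lang, ok

article = {
    "paragraphs": [{
        "context": {"en": "Russia qualified ten teams in total."},
        "qas": [{
            "id": "demo",
            "question": {"en": "How many teams?"},
            "answers": {"en": [{"text": "ten teams", "answer_start": 17}]},
        }],
    }]
}
print(list(validate_answers(article)))  # [('demo', 'en', True)]
```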
For further information, see: https://github.com/facebookresearch/mlqa .
## Original dataset
https://github.com/facebookresearch/mlqa , taking the `context-zh-question-zh`, `context-zh-question-en`, and `context-en-question-zh` files from the `dev` and `test` splits respectively, six files in total.
## Conversion process
1. [OpenCC](https://github.com/BYVoid/OpenCC) with the `s2twp.json` configuration converts Simplified Chinese into Taiwanese Traditional Chinese with vocabulary commonly used in Taiwan.
2. The Python version of [pangu.js](https://github.com/vinta/pangu.js) inserts spaces between Chinese and English (full-width and half-width) text.
3. Matching items in the Chinese and English datasets are merged.
For details of the conversion process, see: https://github.com/zetavg/LLM-Research/blob/bba5ff7/MLQA_Dataset_Converter_(en_zh_tw).ipynb .
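The merging step can be sketched as follows; the merge is keyed on the QA `id`, and the helper name here is hypothetical (the linked notebook is the authoritative procedure):

```python
# Illustrative sketch of merging matching zh/en items by question id.

def merge_by_qa_id(items_en, items_zh_tw):
    """Merge two per-language lists of {id, question, answer} into
    {id, question: {en, zh_tw}, answer: {en, zh_tw}} records."""
    zh_index = {item["id"]: item for item in items_zh_tw}
    merged = []
    for en in items_en:
        zh = zh_index.get(en["id"], {})
        merged.append({
            "id": en["id"],
            "question": {"en": en["question"], "zh_tw": zh.get("question")},
            "answer": {"en": en["answer"], "zh_tw": zh.get("answer")},
        })
    return merged

en_items = [{"id": "q1", "question": "How many teams?", "answer": "ten teams"}]
zh_items = [{"id": "q1", "question": "俄羅斯有多少隊獲得參賽資格?", "answer": "十支"}]
print(merge_by_qa_id(en_items, zh_items)[0]["answer"]["zh_tw"])  # 十支
```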
## Known issues
* Some items may be missing one language's version of the `title`, the `paragraph` `context`, the question, or the answer.
* Some questions and answers may involve misunderstandings or ambiguity, for example the question "俄羅斯有多少隊獲得參賽資格?" and its answer in the "2014 年冬季奧林匹克運動會冰壺比賽" sample listed above.
* The `paragraph` `context` can differ greatly in length and content coverage between language versions. For example, for the item titled "Adobe Photoshop" in the development split:
  * `zh_tw` has only two sentences: 「Adobe Photoshop,簡稱 “PS”,是一個由 Adobe 開發和發行的影象處理軟體。該軟體釋出在 Windows 和 Mac OS 上。」
  * while `en` is a full paragraph: “Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc. for Windows and macOS. It was originally created in 1988 by Thomas and John Knoll. Since then, this software has become the industry standard not only in raster graphics editing, but in digital art as a whole. … (127 more characters omitted)” |
true |
100,772 texts with their corresponding labels:
- NOT_OFF_HATEFUL_TOXIC: 81,359 examples
- OFF_HATEFUL_TOXIC: 19,413 examples |
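The two classes above are imbalanced at roughly 4:1. When training a classifier on this data, inverse-frequency ("balanced") class weights are a common remedy; this generic sketch is not part of the dataset itself:

```python
# Inverse-frequency ("balanced") class weights for the label counts above.
counts = {"NOT_OFF_HATEFUL_TOXIC": 81_359, "OFF_HATEFUL_TOXIC": 19_413}
total = sum(counts.values())  # 100,772 texts overall

# weight_c = total / (n_classes * count_c): rarer classes get larger weights
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
for label, w in sorted(weights.items()):
    print(f"{label}: {w:.3f}")
```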
false | # AutoTrain Dataset for project: teste
## Dataset Description
This dataset has been automatically processed by AutoTrain for project teste.
### Languages
The BCP-47 code for the dataset's language is pt.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Sherlock Holmes \u00e9 um personagem de fic\u00e7\u00e3o criado pelo escritor brit\u00e2nico Sir Arthur Conan Doyle. Ele \u00e9 um detetive famoso por sua habilidade em resolver mist\u00e9rios e crimes complexos.",
"question": "Pergunta 268: Qual \u00e9 o nome do irm\u00e3o mais velho de Sherlock Holmes que trabalha para o servi\u00e7o secreto brit\u00e2nico?",
"answers.text": [
"Mycroft Holmes"
],
"answers.answer_start": [
0
]
},
{
"context": "Sherlock Holmes \u00e9 um personagem de fic\u00e7\u00e3o criado pelo escritor brit\u00e2nico Sir Arthur Conan Doyle. Ele \u00e9 um detetive famoso por sua habilidade em resolver mist\u00e9rios e crimes complexos.",
"question": "Pergunta 52: Qual \u00e9 o nome do irm\u00e3o mais velho de Sherlock Holmes que trabalha para o servi\u00e7o secreto brit\u00e2nico?",
"answers.text": [
"Mycroft Holmes"
],
"answers.answer_start": [
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
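Tooling that expects the nested SQuAD-style `answers` dict can re-nest the flattened `answers.text` / `answers.answer_start` columns; an illustrative helper (not part of AutoTrain itself):

```python
# Re-nest AutoTrain's flattened answer columns into the SQuAD layout.

def unflatten(sample):
    nested = {k: v for k, v in sample.items() if "." not in k}
    nested["answers"] = {
        "text": sample["answers.text"],
        "answer_start": sample["answers.answer_start"],
    }
    return nested

sample = {
    "context": "Sherlock Holmes e um personagem de ficcao.",
    "question": "Quem criou Sherlock Holmes?",
    "answers.text": ["Arthur Conan Doyle"],
    "answers.answer_start": [0],
}
print(unflatten(sample)["answers"]["text"])  # ['Arthur Conan Doyle']
```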
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 720 |
| valid | 180 | |
true |
This is the same dataset as [`ag_news`](https://huggingface.co/datasets/ag_news).
The only differences are:
1. Addition of a unique identifier, `uid`
2. Addition of 3 index columns containing the embeddings from 3 different sentence-transformers models:
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
3. Renaming of the `label` column to `labels` for easier compatibility with the transformers library |
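Because the embeddings are precomputed and stored as columns, semantic search over the corpus reduces to a similarity computation. A toy sketch with 3-dimensional vectors (the real columns hold the models' full-dimensional embeddings, e.g. 768 for `all-mpnet-base-v2`):

```python
# Nearest-neighbor lookup over precomputed embedding columns via cosine
# similarity. Toy vectors stand in for the real embedding columns.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

corpus = {
    "uid-1": [1.0, 0.0, 0.0],
    "uid-2": [0.0, 1.0, 0.0],
}
query = [0.9, 0.1, 0.0]
best = max(corpus, key=lambda uid: cosine(query, corpus[uid]))
print(best)  # uid-1
```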
false | |
false |
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multilingual fine-tuning, uncomment the following line
# minds_14 = load_dataset("PolyAI/minds14", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate, and path to the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
### Data Splits
Every config has only the `"train"` split, containing *ca.* 600 examples.
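Since only a `train` split ships, evaluation requires carving out a held-out set. One option is `datasets.Dataset.train_test_split`; a deterministic hash-based assignment, sketched below with hypothetical example ids, is an alternative that stays stable across runs:

```python
# Deterministic train/test assignment by hashing a stable per-example id.
# The ids below are hypothetical; in practice a file path or uid works.
import hashlib

def assign_split(example_id, test_fraction=0.1):
    digest = hashlib.sha256(example_id.encode()).digest()
    return "test" if digest[0] / 256 < test_fraction else "train"

ids = [f"fr-FR/response_{i}.wav" for i in range(600)]
splits = [assign_split(i) for i in ids]
print(splits.count("test"))  # roughly 10% of 600
```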
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
|
false | This dataset was created to test two different things:
First, to check an LLM's capability to augment data in a coherent way.
Second, to create a dataset for fine-tuning LLMs on the QA task.
The dataset contains the frequently asked questions and their answers of a made-up online fashion marketplace called Nels Marketplace. |
false |
# Dataset Card for "alpaca-gpt4-cleaned"
This dataset contains Ukrainian instruction-following data translated by facebook/nllb-200-3.3B.
The dataset was originally shared in this repository: https://github.com/tloen/alpaca-lora
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
false | |
false | # Dataset Card for "instructional_code-search-net-javascript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-javascript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for JavaScript.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-javascript
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
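The template-based generation can be sketched roughly as follows; the template strings and column names here are hypothetical, and the linked notebook is the authoritative procedure:

```python
# Sketch: build instruction/response pairs for both task directions
# (code -> description, description -> code) from hypothetical templates.
import random

CODE_TO_TEXT = [
    "Describe what the following JavaScript function does:\n{code}",
    "Explain the purpose of this JavaScript snippet:\n{code}",
]
TEXT_TO_CODE = [
    "Write a JavaScript function that {summary}",
    "Generate JavaScript code that {summary}",
]

def make_pairs(code, summary, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    return [
        {"INSTRUCTION": rng.choice(CODE_TO_TEXT).format(code=code),
         "RESPONSE": summary},
        {"INSTRUCTION": rng.choice(TEXT_TO_CODE).format(summary=summary),
         "RESPONSE": code},
    ]

pairs = make_pairs("function add(a, b) { return a + b; }",
                   "adds two numbers and returns the result")
print(pairs[1]["RESPONSE"])  # function add(a, b) { return a + b; }
```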
### Licensing Information
Apache 2.0 |
false | |
false |
Source: https://github.com/liucongg/NLPDataSet
* Data collected from the web: 22 datasets (CMeEE, IMCS21_task1, CCKS2017_task2, CCKS2018_task1, CCKS2019_task1, CLUENER2020, MSRA, NLPCC2018_task4, CCFBDCI, MMC, WanChuang, PeopleDairy1998, PeopleDairy2004, GAIIC2022_task2, WeiBo, ECommerce, FinanceSina, BoSon, Resume, Bank, FNED, and DLNER) were cleaned and combined to build a fairly complete Chinese NER dataset.
* Only simple rule-based cleaning was performed, and the formats were unified; the tagging scheme is "BIO".
* For detailed information about the processed dataset, see the [dataset description](https://zhuanlan.zhihu.com/p/529541521).
* The dataset was compiled together with [NJUST-TB](https://github.com/Swag-tb).
* Because some data contains nested entities, long entities override short ones when converting to BIO tags.
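The long-entity-overrides-short rule can be sketched as a span-to-BIO conversion in which longer spans are written last (illustrative only; this is not the actual cleaning code):

```python
# Sketch: convert character spans to BIO tags, letting longer entities
# override shorter nested ones.

def spans_to_bio(text, spans):
    """spans: list of (start, end, label); longer spans win over nested ones."""
    tags = ["O"] * len(text)
    # Sort shortest first so longer spans are written last and overwrite.
    for start, end, label in sorted(spans, key=lambda s: s[1] - s[0]):
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

text = "北京大学"
spans = [(0, 2, "LOC"), (0, 4, "ORG")]  # nested: 北京 (LOC) inside 北京大学 (ORG)
print(spans_to_bio(text, spans))  # ['B-ORG', 'I-ORG', 'I-ORG', 'I-ORG']
```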
| Dataset | Original data / project link | Samples | Classes | Original data description |
| ------ | ------ | ------ | ------ | ------ |
| CMeEE | [link](http://www.cips-chip.org.cn/2021/CBLUE) | 20,000 | sym, dep, dru, pro, equ, dis, mic, ite, bod | Medical entity recognition dataset from the CBLUE Chinese medical information processing benchmark |
| IMCS21_task1 | [link](http://www.fudan-disc.com/sharedtask/imcs21/index.html?spm=5176.12282016.0.0.140e6d92ypyW1r) | 98,452 | Operation, Drug_Category, Medical_Examination, Symptom, Drug | Named entity recognition dataset from the CCL2021 1st Intelligent Medical Dialogue shared task |
| CCKS2017_task2 | [link](https://www.biendata.xyz/competition/CCKS2017_2/) | 2,229 | symp, dise, chec, body, cure | CCKS2017 named entity recognition dataset for electronic medical records |
| CCKS2018_task1 | [link](https://www.biendata.xyz/competition/CCKS2018_1/) | 797 | 症状和体征, 检查和检验, 治疗, 疾病和诊断, 身体部位 | CCKS2018 named entity recognition dataset for Chinese electronic medical records |
| CCKS2019_task1 | [link](http://openkg.cn/dataset/yidu-s4k) | 1,379 | 解剖部位, 手术, 疾病和诊断, 药物, 实验室检验, 影像检查 | CCKS2019 named entity recognition dataset for Chinese electronic medical records |
| CLUENER2020 | [link](https://github.com/CLUEbenchmark/CLUENER2020) | 12,091 | game, organization, government, movie, name, book, company, scene, position, address | The CLUENER2020 dataset |
| MSRA | [link](https://www.msra.cn/) | 48,442 | LOC, ORG, PER | Open-source named entity recognition dataset from Microsoft Research Asia |
| NLPCC2018_task4 | [link](http://tcci.ccf.org.cn/conference/2018/taskdata.php) | 21,352 | language, origin, theme, custom_destination, style, phone_num, destination, contact_name, age, singer, song, instrument, toplist, scene, emotion | Task-oriented dialogue system dataset |
| CCFBDCI | [link](https://www.datafountain.cn/competitions/510) | 15,723 | LOC, GPE, ORG, PER | Robustness evaluation dataset for Chinese named entity recognition algorithms |
| MMC | [link](https://tianchi.aliyun.com/competition/entrance/231687/information) | 3,498 | Level, Method, Disease, Drug, Frequency, Amount, Operation, Pathogenesis, Test_items, Anatomy, Symptom, Duration, Treatment, Test_Value, ADE, Class, Test, Reason | Dataset from the Ruijin Hospital MMC AI-assisted knowledge graph construction competition |
| WanChuang | [link](https://tianchi.aliyun.com/competition/entrance/531827/introduction) | 1,255 | 药物剂型, 疾病分组, 人群, 药品分组, 中药功效, 症状, 疾病, 药物成分, 药物性味, 食物分组, 食物, 证候, 药品 | Dataset from the "Wanchuang Cup" Traditional Chinese Medicine Tianchi Big Data Competition (Smart TCM Application Innovation Challenge) |
| PeopleDairy1998 | [link]() | 27,818 | LOC, ORG, PER | People's Daily 1998 dataset |
| PeopleDairy2004 | [link]() | 286,268 | LOC, ORG, PER, T | People's Daily 2004 dataset |
| GAIIC2022_task2 | [link](https://www.heywhale.com/home/competition/620b34ed28270b0017b823ad/content/2) | 40,000 | 52 classes in total | Product title entity recognition dataset |
| WeiBo | [link](https://github.com/hltcoe/golden-horse) | 1,890 | LOC.NAM, LOC.NOM, PER.NAM, ORG.NOM, ORG.NAM, GPE.NAM, PER.NOM | Chinese named entity recognition dataset for social media |
| ECommerce | [link](https://github.com/allanj/ner_incomplete_annotation) | 7,998 | MISC, XH, HPPX, HCCX | Named entity recognition dataset for e-commerce |
| FinanceSina | [link](https://github.com/jiesutd/LatticeLSTM) | 1,579 | LOC, GPE, ORG, PER | Chinese named entity recognition dataset crawled from Sina Finance |
| BoSon | [link](https://github.com/bosondata) | 2,000 | time, product_name, person_name, location, org_name, company_name | BosonNLP Chinese named entity recognition dataset |
| Resume | [link](https://github.com/jiesutd/LatticeLSTM/tree/master/ResumeNER) | 4,761 | NAME, EDU, LOC, ORG, PRO, TITLE, CONT, RACE | Resumes of senior executives of companies listed on the Chinese stock market |
| Bank | [link](https://www.heywhale.com/mw/dataset/617969ec768f3b0017862990/file) | 10,000 | BANK, COMMENTS_ADJ, COMMENTS_N, PRODUCT | Bank lending dataset |
| FNED | [link](https://www.datafountain.cn/competitions/561/datasets) | 10,500 | LOC, GPE, ORG, EQU, TIME, FAC, PER | Domain event detection dataset under high robustness requirements |
| DLNER | [link](https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset) | 28,897 | Location, Thing, Abstract, Organization, Metric, Time, Physical, Person, Term | Discourse-level named entity recognition dataset |
- Download link for the cleaned and format-converted data: [Baidu Cloud](https://pan.baidu.com/s/1VvbvWPv3eM4MXsv_nlDSSA) / extraction code: 4sea
- Note: for data with nested entities, long entities override short ones; if you need nested entities, please use the original data.
|
false | # Dataset Card for "product_ads"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for Erhu Playing Technique Database (11-class)
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/erhu_playing_tech_11>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This is an audio dataset containing 927 audio clips recorded by the China Conservatory of Music, each demonstrating an erhu playing technique. It is part of DCMI [1], a database of Chinese musical instruments. We divide all the recorded techniques into the following 11 categories according to [2] [3] [4] [5]:
```
+ detache 分弓 (72)
+ forte (8)
+ medium (8)
+ piano (56)
+ diangong 垫弓 (28)
+ harmonic 泛音 (18)
+ natural 自然泛音 (6)
+ artificial 人工泛音 (12)
+ legato&slide&glissando 连弓&滑音&大滑音 (114)
+ glissando_down 大滑音 下行 (4)
+ glissando_up 大滑音 上行 (4)
+ huihuayin_down 下回滑音 (18)
+ huihuayin_long_down 后下回滑音 (12)
+ legato&slide_up 向上连弓 包含滑音 (24)
+ forte (8)
+ medium (8)
+ piano (8)
+ slide_dianzhi 垫指滑音 (4)
+ slide_down 向下滑音 (16)
+ slide_legato 连线滑音 (16)
+ slide_up 向上滑音 (16)
+ percussive 打击类音效 (21)
+ dajigong 大击弓 (11)
+ horse 马嘶 (2)
+ stick 敲击弓 (8)
+ pizzicato 拨弦 (96)
+ forte (30)
+ medium (29)
+ piano (30)
+ left 左手勾弦 (6)
+ ricochet 抛弓 (36)
+ staccato 顿弓 (141)
+ forte (47)
+ medium (46)
+ piano (48)
+ tremolo 颤弓 (144)
+ forte (48)
+ medium (48)
+ piano (48)
+ trill 颤音 (202)
+ long 长颤音 (141)
+ forte (46)
+ medium (47)
+ piano (48)
+ short 短颤音 (61)
+ down 下颤音 (30)
+ up 上颤音 (31)
+ vibrato 揉弦 (56)
+ late (13)
+ press 压揉 (6)
+ roll 滚揉 (28)
+ slide 滑揉 (9)
```
### Supported Tasks and Leaderboards
Erhu Playing Technique Classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.wav
### Data Fields
```
vibrato
trill
tremolo
staccato
ricochet
pizzicato
percussive
legato_slide_glissando
harmonic
diangong
detache
```
### Data Splits
trainset, testset
## Dataset Creation
### Curation Rationale
Lack of a dataset for Erhu playing tech
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
This dataset is an audio dataset containing 927 audio clips recorded by China Conservatory of Music, each with a performance technique of erhu.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Advancing the Digitization Process of Traditional Chinese Instruments
### Discussion of Biases
Only for Erhu
### Other Known Limitations
Not Specific Enough in Categorization
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
[1] Zijin Li, Xiaojing Liang, Jingyu Liu, Wei Li, Jiaxing Zhu, Baoqiang Han, DCMI: A Database of Chinese Musical Instruments, DLfM ’18, Sep 2018, Paris, France<br>
[2] [Chapter 9. Erhu of Bowed Stringed Instruments](https://www.atlasensemble.nl/assets/files/instruments/Erhu/Erhu%20by%20Samuel%20Wong%20Shengmiao.pdf)<br>
[3] 梁广程, 潘永璋. 乐器法手册(增订本)[M]. 人民音乐出版社, 1996.<br>
[4] [Erhu, info for composers](https://www.lantungmusic.com/erhu/for-composers)<br>
[5] 权吉浩, 中西乐器法 [M]. 人民音乐出版社, 2016.
### Contributions
Provide a dataset for Erhu playing tech |
false |
# Dataset Card for Chest voice and Falsetto Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 1,280 monophonic singing audio clips (.wav format) of chest and falsetto voices, with the chest voice tagged as _chest_ and the falsetto voice tagged as _falsetto_.
### Supported Tasks and Leaderboards
Audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.wav
### Data Fields
m_chest, f_chest, m_falsetto, f_falsetto
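The four labels jointly encode gender (`m`/`f`) and register (`chest`/`falsetto`); splitting them back out is straightforward. A small illustrative sketch:

```python
# Decode the joint label into its gender and register components.

def decode_label(label):
    gender_code, register = label.split("_")
    gender = {"m": "male", "f": "female"}[gender_code]
    return gender, register

print(decode_label("m_chest"))     # ('male', 'chest')
print(decode_label("f_falsetto"))  # ('female', 'falsetto')
```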
### Data Splits
train, validation, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for Chest voice and Falsetto
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
1280 monophonic singing audio (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for chest and falsetto voices
### Other Known Limitations
Recordings are cut into slices that are too short
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing chest and falsetto voices |
false |
# Dataset Card for Bel Canto and Chinese Folk Song Singing Tech Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/bel_folk>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains hundreds of a cappella singing clips sung in two styles, Bel Canto and the Chinese national singing style, all performed by professional vocalists and recorded in professional commercial recording studios.
### Supported Tasks and Leaderboards
Audio classification
### Languages
Chinese
## Dataset Structure
### Data Instances
.wav
### Data Fields
m_bel, f_bel, m_folk, f_folk
### Data Splits
train, validation, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for Bel Canto and Chinese folk song singing tech
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
All of them are sung by professional vocalists and were recorded in professional commercial recording studios.
#### Who are the annotators?
professional vocalists
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for Chinese songs
### Other Known Limitations
Some singers may not have enough professional training in classical or ethnic vocal techniques.
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing Bel Canto and Chinese folk song singing tech |
false | ```python
from datasets import load_dataset

data_files = {"data": "data.csv"}
data = load_dataset("theothertom/text_emotion_speech", data_files=data_files)
``` |
true | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
true | # The Adversarial Natural Language Inference (ANLI)
- Source: https://huggingface.co/datasets/anli
- Num examples:
- 100,459 (train)
- 1,200 (validation)
- 1,200 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/anli_r3_en")
```
- Format for NLI task
```python
def preprocess(sample):
premise = sample['premise']
hypothesis = sample['hypothesis']
label = sample['label']
if label == 0:
label = "entailment"
elif label == 1:
label = "neutral"
else:
label = "contradiction"
return {'text': f'<|startoftext|><|premise|> {premise} <|hypothesis|> {hypothesis} <|label|> {label} <|endoftext|>'}
"""
<|startoftext|><|premise|> TOKYO, Dec 18 (Reuters) - Japan’s Shionogi & Co said on Tuesday that it has applied to health regulators in the United States, Canada and Europe for approval of its HIV drug Dolutegravir. Shionogi developed Dolutegravir with a Viiv Healthcare, an AIDS drug joint venture between GlaxoSmithKline and Pfizer, in exchange for its rights to the drug. <|hypothesis|> The article was written on December 18th. <|label|> entailment <|endoftext|>
"""
```
- Format for Rationale task
```python
def preprocess_rationale(sample):
premise = sample['premise']
hypothesis = sample['hypothesis']
rationale = sample['reason']
return {'text': f'<|startoftext|><|premise|> {premise} <|hypothesis|> {hypothesis} <|rationale|> {rationale} <|endoftext|>'}
"""
<|startoftext|><|premise|> TOKYO, Dec 18 (Reuters) - Japan’s Shionogi & Co said on Tuesday that it has applied to health regulators in the United States, Canada and Europe for approval of its HIV drug Dolutegravir. Shionogi developed Dolutegravir with a Viiv Healthcare, an AIDS drug joint venture between GlaxoSmithKline and Pfizer, in exchange for its rights to the drug. <|hypothesis|> The article was written on December 18th. <|rationale|> TOKYO, Dec 18 (Reuters) is when the article was written as it states in the first words of the sentence <|endoftext|>
"""
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
premise = sample['premise']
hypothesis = sample['hypothesis']
label = sample['label']
if label == 0:
output = f'\n<|correct|> True\n<|incorrect|> False\n<|incorrect|> Neither'
elif label == 1:
output = f'\n<|correct|> Neither\n<|incorrect|> True\n<|incorrect|> False'
else:
output = f'\n<|correct|> False\n<|incorrect|> True\n<|incorrect|> Neither'
return {'text': f'<|startoftext|> anli 2: {premise} <|question|> {hypothesis}\nTrue, False, or Neither? <|answer|> {output} <|endoftext|>'}
"""
<|startoftext|> anli 2: TOKYO, Dec 18 (Reuters) - Japan’s Shionogi & Co said on Tuesday that it has applied to health regulators in the United States, Canada and Europe for approval of its HIV drug Dolutegravir. Shionogi developed Dolutegravir with a Viiv Healthcare, an AIDS drug joint venture between GlaxoSmithKline and Pfizer, in exchange for its rights to the drug. <|question|> The article was written on December 18th.
True, False, or Neither? <|answer|>
<|correct|> True
<|incorrect|> False
<|incorrect|> Neither <|endoftext|>
"""
``` |
true | # COPA
- Source: https://huggingface.co/datasets/super_glue
- Num examples:
- 400 (train)
- 100 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/copa_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
premise = sample['premise']
choice1 = sample['choice1']
choice2 = sample['choice2']
label = sample['label']
if label == 0:
output = f'\n<|correct|> {choice1}\n<|incorrect|> {choice2}'
elif label == 1:
output = f'\n<|correct|> {choice2}\n<|incorrect|> {choice1}'
return {'text': f'<|startoftext|><|context|> {premise} <|answer|> {output} <|endoftext|>'}
"""
<|startoftext|><|context|> My body cast a shadow over the grass. <|answer|>
<|correct|> The sun was rising.
<|incorrect|> The grass was cut. <|endoftext|>
"""
``` |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** kin.naver.com/qna
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** mjypark1212@gmail.com
### Dataset Summary
Data from Knowledge iN (kin.naver.com), the most active Korean Q&A site, in instruction + response format. Created for language model training.
## Dataset Structure
[Instruction, Response, Source, Metadata]
|
false |
Collection of wing images for conservation of honey bees (Apis mellifera) biodiversity in Europe
https://zenodo.org/record/7244070
Small version (10%) of the original dataset bee-wings-large |
true | # Dataset Card for nli-zh-all
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [zh NLI](https://huggingface.co/datasets/shibing624/nli-zh-all)
- **Size of downloaded dataset files:** 4.7 GB
- **Total amount of disk used:** 4.7 GB
### Dataset Summary
A collection of Chinese natural language inference (NLI) datasets (nli-zh-all).
It merges 8.2 million high-quality examples from textual inference, similarity, summarization, question answering and instruction-tuning tasks, converted into a text-matching format.
### Supported Tasks and Leaderboards
Supported Tasks: Chinese text matching, text similarity computation and related tasks.
Results for Chinese matching tasks rarely appear in top-conference papers at present, so I list a result from my own training:
**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
### Languages
All datasets contain Simplified Chinese text.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{"text1":"借款后多长时间给打电话","text2":"借款后多久打电话啊","label":1}
{"text1":"没看到微粒贷","text2":"我借那么久也没有提升啊","label":0}
```
- `label` has 2 values: 1 means similar, 0 means not similar.
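The instance format above is plain JSON lines; a minimal sketch of parsing it with the standard library (the two rows are copied from the example above):

```python
import json

rows = [
    '{"text1":"借款后多长时间给打电话","text2":"借款后多久打电话啊","label":1}',
    '{"text1":"没看到微粒贷","text2":"我借那么久也没有提升啊","label":0}',
]
pairs = [json.loads(row) for row in rows]
# Keep only the pairs labeled as similar (label == 1).
similar = [p for p in pairs if p['label'] == 1]
print(len(similar))
```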
### Data Fields
The data fields are the same among all splits.
- `text1`: a `string` feature.
- `text2`: a `string` feature.
- `label`: a classification label, with possible values including entailment (1) and contradiction (0).
### Data Splits
After removing `None` entries and texts shorter than one character:
```shell
$ wc -l nli-zh-all/*
48818 nli-zh-all/alpaca_gpt4-train.jsonl
5000 nli-zh-all/amazon_reviews-train.jsonl
519255 nli-zh-all/belle-train.jsonl
16000 nli-zh-all/cblue_chip_sts-train.jsonl
549326 nli-zh-all/chatmed_consult-train.jsonl
10142 nli-zh-all/cmrc2018-train.jsonl
395927 nli-zh-all/csl-train.jsonl
50000 nli-zh-all/dureader_robust-train.jsonl
709761 nli-zh-all/firefly-train.jsonl
9568 nli-zh-all/mlqa-train.jsonl
455875 nli-zh-all/nli_zh-train.jsonl
50486 nli-zh-all/ocnli-train.jsonl
2678694 nli-zh-all/simclue-train.jsonl
419402 nli-zh-all/snli_zh-train.jsonl
3024 nli-zh-all/webqa-train.jsonl
1213780 nli-zh-all/wiki_atomic_edits-train.jsonl
93404 nli-zh-all/xlsum-train.jsonl
1006218 nli-zh-all/zhihu_kol-train.jsonl
8234680 total
```
### Data Length

Script for counting text lengths: https://github.com/shibing624/text2vec/blob/master/examples/data/count_text_length.py
## Dataset Creation
### Curation Rationale
Inspired by [m3e-base](https://huggingface.co/moka-ai/m3e-base#M3E%E6%95%B0%E6%8D%AE%E9%9B%86), this collection merges high-quality Chinese NLI (natural language inference) datasets.
It is uploaded here to Hugging Face Datasets so that everyone can use it easily.
### Source Data
#### Initial Data Collection and Normalization
If you want to see how the dataset was built, the script that generates the nli-zh-all dataset is available at [https://github.com/shibing624/text2vec/blob/master/examples/data/build_zh_nli_dataset.py](https://github.com/shibing624/text2vec/blob/master/examples/data/build_zh_nli_dataset.py); all data has been uploaded to Hugging Face Datasets.
| Dataset | Domain | Size | Task type | Prompt | Quality | Data provider | Notes | Open for research use | Commercial use | Script | Done | URL | Homogeneous |
|:---------------------| :---- |:-----------|:---------------- |:------ |:----|:-----------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------- |:------|:---- |:---- |:---------------------------------------------------------------------------------------------|:------|
| cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 |
| belle_0.5m | 百科 | 500,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/ | 否 |
| firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 |
| alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 |
| zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 |
| amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 |
| mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 |
| xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 |
| ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 |
| BQ | 金融 | 60,000 | 文本分类 | 相似 | 优 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 |
| lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 优 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 |
| paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 |
| wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 |
| chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 |
| webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 |
| dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 |
| csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 |
| snli-zh | 口语 | 419,402 | 文本分类 | 推理 | 优 | liuhuanyong | 中文SNLI数据集,翻译自英文SNLI | 是 | 否 | 是 | 是 | https://github.com/liuhuanyong/ChineseTextualInference/ | 是 |
| SimCLUE | 百科 | 2,678,694 | 平行语义 | 相似 | 优 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 |
#### Who are the source language producers?
Copyright of each dataset belongs to its original authors; please respect the original datasets' copyright when using them.
SNLI:
```bibtex
@inproceedings{snli:emnlp2015,
    Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
    Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    Publisher = {Association for Computational Linguistics},
    Title = {A large annotated corpus for learning natural language inference},
    Year = {2015}
}
```
#### Who are the annotators?
The original authors.
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context.
Systems that are successful at such a task may be more successful in modeling semantic representations.
### Licensing Information
For academic research use only.
### Contributions
[shibing624](https://github.com/shibing624) add this dataset.
|
false | |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/timpal0l/ScandiSent
- **Paper:** https://arxiv.org/pdf/2104.10441.pdf
- **Leaderboard:**
- **Point of Contact:** Tim Isbister
### Dataset Summary
ScandiSent is a sentiment classification dataset for the Scandinavian languages, released alongside the paper linked above; see the repository for details.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# **Open Australian Legal Corpus ⚖️**
The Open Australian Legal Corpus is the first and only multijurisdictional open corpus of Australian legislative and judicial documents.
Comprising 97,750 texts, the Corpus includes almost every in force statute and regulation in the Commonwealth, New South Wales, Queensland, Western Australia, South Australia, Tasmania and Norfolk Island, in addition to thousands of bills and tens of thousands of court and tribunal decisions.
As the largest free and open dataset of its kind to date, the Corpus is intended to progress the burgeoning field of legal AI research in Australia by allowing researchers to pretrain and finetune machine learning models for downstream natural language processing tasks applied to the Australian legal domain such as document classification, summarisation, information retrieval and question answering.
To ensure its accessibility to as wide an audience as possible, the Corpus and all of its documents are distributed under permissive licences that allow for both commercial and non-commercial usage (see the [Licence 📄](LICENCE.md)).
## Data Structure 🗂️
The Corpus is stored in [corpus.jsonl](corpus.jsonl), a json lines file where each line represents a document consisting of four keys:
| Key | Description |
| --- | --- |
| text | The UTF-8 encoded text of the document. |
| type | The type of the document. Possible values are `primary_legislation`, `secondary_legislation`, `bill` and `decision`. |
| url | A hyperlink to the document. |
| source | The source of the document. Possible values are `federal_register_of_legislation`, `federal_court_of_australia`, `nsw_legislation`, `queensland_legislation`, `western_australian_legislation`, `south_australian_legislation` and `tasmanian_legislation`. |
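Given the schema above, the Corpus can be streamed one JSON line at a time; a minimal sketch tallying document types (the file handle here wraps two illustrative inline records rather than the real `corpus.jsonl`):

```python
import io
import json
from collections import Counter

# Inline stand-in for open('corpus.jsonl'); the texts and URLs are made up.
corpus = io.StringIO(
    '{"text": "An Act ...", "type": "primary_legislation", '
    '"url": "https://example.gov.au/1", "source": "nsw_legislation"}\n'
    '{"text": "Reasons for judgment ...", "type": "decision", '
    '"url": "https://example.gov.au/2", "source": "federal_court_of_australia"}\n'
)
# Count documents by their `type` key, streaming line by line.
counts = Counter(json.loads(line)['type'] for line in corpus)
print(counts)
```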
## Data Collection 📥
Documents were sourced from the [Federal Register of Legislation](https://www.legislation.gov.au/), [Federal Court of Australia](https://www.fedcourt.gov.au/digital-law-library/judgments/search), [NSW Legislation](https://legislation.nsw.gov.au/), [Queensland Legislation](https://www.legislation.qld.gov.au/), [Western Australian Legislation](https://www.legislation.wa.gov.au/), [South Australian Legislation](https://www.legislation.sa.gov.au/) and [Tasmanian Legislation](https://www.legislation.tas.gov.au/). Unfortunately, due to copyright restrictions as well as refusals to grant permission to scrape their websites, the [High Court of Australia](https://eresources.hcourt.gov.au/), [Victorian Legislation](https://www.legislation.vic.gov.au/copyright), [ACT Legislation](https://www.legislation.act.gov.au) and [NT Legislation](https://legislation.nt.gov.au/) databases could not be incorporated into the Corpus.
The texts of these documents were extracted using [inscriptis](https://github.com/weblyzard/inscriptis) or, in the case of the [South Australian Legislation](https://www.legislation.sa.gov.au/) database, which was provided as an archive of rtf files, [striprtf](https://github.com/joshy/striprtf). No post-processing was applied.
The below table shows the types of documents taken from each source and the date upon which they were collected (for the [South Australian Legislation](https://www.legislation.sa.gov.au/) database, the date provided is when the database was archived):
| Source | Date | Documents |
| --- | --- | --- |
| Federal Register of Legislation | 25 June 2023 | <ul><li>The most recent versions of all in force acts and the Constitution (primary legislation);</li> <li>The most recent versions of all in force legislative instruments, notifiable instruments, administrative arrangements orders and prerogative instruments (secondary legislation); and</li> <li>The as made versions of all bills.</li></ul> |
| Federal Court of Australia | 25 June 2023 | <ul><li>All decisions of the Federal Court of Australia, Industrial Relations Court of Australia, Australian Competition Tribunal, Copyright Tribunal, Defence Force Discipline Appeal Tribunal, Federal Police Disciplinary Tribunal, Trade Practices Tribunal and Supreme Court of Norfolk Island.</li></ul> |
| NSW Legislation | 25 June 2023 | <ul><li>The most recent versions of all in force public and private acts (primary legislation); and</li> <li>The most recent versions of all in force statutory instruments and environmental planning instruments (secondary legislation).</li></ul> |
| Queensland Legislation | 25 June 2023 | <ul><li>The most recent versions of all in force acts (primary legislation);</li> <li>The most recent versions of all in force statutory instruments (secondary legislation); and</li> <li>The as introduced versions of all bills.</li></ul> |
| Western Australian Legislation | 25 June 2023 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force subsidiary legislation (secondary legislation).</li></ul> |
| South Australian Legislation | 30 November 2022 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force proclamations, policies and regulations (secondary legislation).</li></ul> |
| Tasmanian Legislation | 25 June 2023 | <ul><li>The most recent versions of all in force acts (primary legislation); and</li> <li>The most recent versions of all in force statutory rules (secondary legislation).</li></ul> |
The code used to collect these documents and create the Corpus can be found [here](https://github.com/umarbutler/open-australian-legal-corpus-creator).
## Licence 📄
The Corpus itself is licensed under a [Creative Commons Attribution 4.0 International Licence](https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, on the condition that you give appropriate credit to the original author and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Documents sourced from the [Federal Register of Legislation](https://perma.cc/YSW9-ADNG), [NSW Legislation](https://perma.cc/PJ99-TNCS), [Queensland Legislation](https://perma.cc/5688-WGAL), [Western Australian Legislation](https://perma.cc/UAM5-FXNB), [South Australian Legislation](https://perma.cc/U4ET-K5Y6) and [Tasmanian Legislation](https://perma.cc/YRC4-WVCY) were also all licensed under a [Creative Commons Attribution 4.0 International Licence](https://creativecommons.org/licenses/by/4.0/) at the time of their inclusion in the Corpus. They require specific wordings to be used for attribution, which are provided [here](LICENCE.md).
With regard to documents from the [Federal Court of Australia](https://perma.cc/4GE4-FPFV), at the time of scraping, their licence permitted users to download, display, print and reproduce material in an unaltered form for personal, non-commercial use or use within their organisation. It also permitted judgements and decisions or excerpts thereof to be reproduced or published in an unaltered form (including for commercial use), provided that they are acknowledged to be judgements or decisions of the Court or Tribunal, any commentary, head notes or additional information added is clearly attributed to the publisher or organisation and not the Court or Tribunals, and the source from which the judgement was copied is acknowledged.
The unabridged version of this licence is available [here](LICENCE.md).
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the [Federal Register of Legislation](https://www.legislation.gov.au/), [Federal Court of Australia](https://www.fedcourt.gov.au/digital-law-library/judgments/search), [NSW Legislation](https://legislation.nsw.gov.au/), [Queensland Legislation](https://www.legislation.qld.gov.au/), [Western Australian Legislation](https://www.legislation.wa.gov.au/) and [Tasmanian Legislation](https://www.legislation.tas.gov.au/) for all granting him permission to scrape their websites, as well as [South Australian Legislation](https://www.legislation.sa.gov.au/) for providing him with a copy of their legislative database.
The author also thanks the makers of [Visual Studio Code](https://github.com/microsoft/vscode), [Python](https://github.com/python/cpython), [Jupyter Notebook](https://github.com/jupyter/notebook), [urllib3](https://github.com/urllib3/urllib3), [certifi](https://github.com/certifi/python-certifi), [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/), [lxml](https://github.com/lxml/lxml), [inscriptis](https://github.com/weblyzard/inscriptis), [striprtf](https://github.com/joshy/striprtf) and [pytz](https://github.com/stub42/pytz), as well as the creators of the [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) corpus which served as a large source of inspiration for this project.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.
## Citation 🔖
If you rely on the Corpus, please cite:
```bibtex
@misc{butler-2023-open-australian-legal-corpus,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal Corpus},
publisher = {Hugging Face},
version = {1.0.0},
doi = {10.57967/hf/0812},
url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus}
}
``` |
false | # Dataset Card for BabelCode HumanEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To quickly evaluate BC-HumanEval predictions, save the `qid` and `language` keys along with the postprocessed prediction code in a JSON lines file. Then follow the install instructions for [BabelCode](https://github.com/google-research/babelcode), and you can evaluate your predictions.
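A minimal sketch of the prediction file described above (the `qid` and `language` keys come from the card; the `code` key name and the prediction values are illustrative assumptions):

```python
import json

# One record per prediction; `code` holds the postprocessed prediction.
predictions = [
    {'qid': 0, 'language': 'Python', 'code': 'def add(a, b):\n    return a + b'},
    {'qid': 0, 'language': 'C++', 'code': 'int add(int a, int b) { return a + b; }'},
]
# JSON lines: one JSON object per line.
jsonl = '\n'.join(json.dumps(p) for p in predictions)
print(jsonl)
```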
### Dataset Summary
The BabelCode-HumanEval (BC-HumanEval) dataset converts the [HumanEval dataset released by OpenAI](https://github.com/openai/human-eval) to 16 programming languages.
### Supported Tasks and Leaderboards
### Languages
BC-HumanEval supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-humaneval")
DatasetDict({
test: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'entry_fn_name', 'entry_cls_name', 'test_code'],
num_rows: 2576
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `text`: The description of the problem.
- `signature`: The signature for the problem.
- `signature_with_docstring`: The signature with the adequately formatted docstring for the given problem.
- `arguments`: The arguments of the problem.
- `entry_fn_name`: The name of the function to use as the entry point.
- `entry_cls_name`: The name of the class to use as the entry point.
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
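The placeholder substitution described for `test_code` can be sketched with plain string replacement (the test template and prediction below are illustrative, not actual BC-HumanEval rows):

```python
# Illustrative stand-in for a `test_code` value.
template = (
    'PLACEHOLDER_CODE_BODY\n'
    '\n'
    'assert PLACEHOLDER_FN_NAME(2, 3) == 5\n'
)
# Postprocessed prediction and its entry point name.
prediction = 'def add(a, b):\n    return a + b'
runnable = (template
            .replace('PLACEHOLDER_FN_NAME', 'add')
            .replace('PLACEHOLDER_CODE_BODY', prediction))
exec(runnable)  # raises AssertionError if the prediction is wrong
```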
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{chen2021codex,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
false |
# Dataset Card for ICC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [Jón Friðrik Daðason](mailto:jond19@ru.is)
### Dataset Summary
The Icelandic Crawled Corpus (ICC) contains approximately 930M tokens which have been scraped from a selection of Icelandic websites, including news sites, government websites and forums. The scraped text is presented in its original form, unannotated, untokenized and without deduplication.
### Supported Tasks and Leaderboards
The ICC is primarily intended for use in training language models. It can be combined with other corpora, such as the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) and the Icelandic portion of the [mC4](https://huggingface.co/datasets/mc4) corpus.
### Languages
This corpus contains text in Icelandic, scraped from a variety of online sources.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each scraped item consists of two fields:
* **url**: The source URL of the scraped text.
* **text**: The scraped text.
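Since each item is a flat record with `url` and `text`, records can be grouped by source domain; a minimal sketch over illustrative records (the URLs and texts here are made up, not taken from the ICC):

```python
from collections import defaultdict
from urllib.parse import urlparse

# Made-up records in the ICC schema.
records = [
    {'url': 'https://www.ruv.is/frett/123', 'text': 'Frétt dagsins ...'},
    {'url': 'https://www.ruv.is/frett/124', 'text': 'Önnur frétt ...'},
    {'url': 'https://blogg.example.is/post', 'text': 'Bloggfærsla ...'},
]
# Group scraped texts by the domain of their source URL.
by_domain = defaultdict(list)
for rec in records:
    by_domain[urlparse(rec['url']).netloc].append(rec['text'])
print({domain: len(texts) for domain, texts in by_domain.items()})
```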
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Although this corpus consists entirely of text collected from publicly available websites, it may contain some examples of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This corpus was created by Jón Friðrik Daðason, during work done at the [Language and Voice Lab](https://lvl.ru.is/) at [Reykjavik University](https://www.ru.is/).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0
International License. Any text, HTML page links, information, metadata or
other materials in this work may be subject to separate terms and
conditions between you and the owners of such content.
If you are a copyright owner or an agent thereof and believe that any
content in this work infringes upon your copyrights, you may submit a
notification with the following information:
* Your full name and information reasonably sufficient to permit us to
contact you, such as mailing address, phone number and an email address.
* Identification of the copyrighted work you claim has been infringed.
* Identification of the material you claim is infringing and should be
removed, and information reasonably sufficient to permit us to locate
the material.
### Citation Information
N/A
### Contributions
Thanks to [@jonfd](https://github.com/jonfd) for adding this dataset.
|
false |
# mammut-corpus-venezuela
HuggingFace Dataset for testing purposes. The train dataset is `mammut/mammut-corpus-venezuela`.
## 1. How to use
How to load this dataset directly with the datasets library:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")
```
## 2. Dataset Summary
**mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was gathered by scraping various web portals, downloading Telegram group chat histories, and selecting Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and some of the texts may be fake or contain misleading or offensive language.
Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text entered in the corpus, the text which was automatically tokenized at sentence level for sources other than conversations, the source of the text, the title of the text, the number of tokens (excluding punctuation marks) of the text, and the linguistic register of the text.
This is the test set for `mammut/mammut-corpus-venezuela` dataset.
## 3. Supported Tasks and Leaderboards
This dataset can be used for language modeling testing.
## 4. Languages
The dataset contains Venezuelan and Latin-American Spanish.
## 5. Dataset Structure
Dataset structure features.
### 5.1 Data Instances
An example from the dataset:
"AUTHOR":"author in title",
"TITLE":"Luis Alberto Buttó: Hecho en socialismo",
"SENTENCE":"Históricamente, siempre fue así.",
"DATE":"2021-07-04 07:18:46.918253",
"SOURCE":"la patilla",
"TOKENS":"4",
"TYPE":"opinion/news",
The total token count is provided below:
### 5.2 Total of tokens (no spelling marks)
Test: 4,876,739.
### 5.3 Data Fields
The data has the following fields:
- AUTHOR: author of the text, anonymized for conversation authors.
- DATE: date on which the text was entered into the corpus.
- SENTENCE: the text itself; automatically tokenized at sentence level for sources other than conversations.
- SOURCE: source of the text.
- TITLE: title of the text from which SENTENCE originates.
- TOKENS: number of tokens (excluding punctuation marks) in SENTENCE.
- TYPE: linguistic register of the text.
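As an illustration (a sketch, not an official loader), the string-typed fields above can be coerced into richer Python types. The record below mirrors the example shown in section 5.1:

```python
from datetime import datetime

# Example record, mirroring the instance shown in section 5.1.
record = {
    "AUTHOR": "author in title",
    "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
    "SENTENCE": "Históricamente, siempre fue así.",
    "DATE": "2021-07-04 07:18:46.918253",
    "SOURCE": "la patilla",
    "TOKENS": "4",
    "TYPE": "opinion/news",
}

def parse_record(rec: dict) -> dict:
    """Coerce TOKENS to int and DATE to a datetime; other fields stay strings."""
    out = dict(rec)
    out["TOKENS"] = int(rec["TOKENS"])
    out["DATE"] = datetime.fromisoformat(rec["DATE"])
    return out

parsed = parse_record(record)
```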
### 5.4 Data Splits
The mammut-corpus-venezuela dataset has 2 splits: train and test. Below are the statistics:
Number of instances per split:
Test: 157,011.
## 6. Dataset Creation
### 6.1 Curation Rationale
The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning on another pre-trained model.
### 6.2 Source Data
**6.2.1 Initial Data Collection and Normalization**
The data consists of opinion articles and text messages. It was collected by web scraping several portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora already available online.
The text obtained from web scraping was split into sentences and automatically tokenized for sources other than conversations.
The resulting corpus was stored as an Apache Arrow/Parquet file.
Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movies subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats.
**6.2.2 Who are the source language producers?**
The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.
## 6.3 Annotations
**6.3.1 Annotation process**
At the moment the dataset does not contain any additional annotations.
**6.3.2 Who are the annotators?**
Not applicable.
### 6.4 Personal and Sensitive Information
The data is partially anonymized. The corpus also includes messages from Telegram selling chats; some of these messages may be fake or contain misleading or offensive language.
## 7. Considerations for Using the Data
### 7.1 Social Impact of Dataset
The purpose of this dataset is to help the development of language modeling models (pre-training or fine-tuning) in Venezuelan Spanish.
### 7.2 Discussion of Biases
Most of the content comes from political, economic, and sociological opinion articles, so social biases may be present.
### 7.3 Other Known Limitations
Not applicable.
## 8. Additional Information
### 8.1 Dataset Curators
The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.
### 8.2 Licensing Information
Not applicable.
### 8.3 Citation Information
Not applicable.
### 8.4 Contributions
Not applicable.
|
false |
# mammut-corpus-venezuela
HuggingFace Dataset
## 1. How to use
How to load this dataset directly with the datasets library:
`>>> from datasets import load_dataset`
`>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")`
## 2. Dataset Summary
**mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was gathered by web scraping several portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora already available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language.
Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text was entered into the corpus, the text itself (automatically tokenized at sentence level for sources other than conversations), the source of the text, the title of the text, the number of tokens (excluding punctuation marks), and the linguistic register of the text.
The dataset has a train split and a test split.
## 3. Supported Tasks and Leaderboards
This dataset can be used for language modeling.
## 4. Languages
The dataset contains Venezuelan and Latin-American Spanish.
## 5. Dataset Structure
Dataset structure features.
### 5.1 Data Instances
An example from the dataset:
{
  "AUTHOR": "author in title",
  "TITLE": "Luis Alberto Buttó: Hecho en socialismo",
  "SENTENCE": "Históricamente, siempre fue así.",
  "DATE": "2021-07-04 07:18:46.918253",
  "SOURCE": "la patilla",
  "TOKENS": "4",
  "TYPE": "opinion/news",
}
The token counts are provided below.
### 5.2 Total number of tokens (excluding punctuation marks)
Train: 92,431,194.
Test: 4,876,739 (in another file).
### 5.3 Data Fields
The data has the following fields:
- AUTHOR: author of the text, anonymized for conversation authors.
- DATE: date on which the text was entered into the corpus.
- SENTENCE: the text itself; automatically tokenized at sentence level for sources other than conversations.
- SOURCE: source of the text.
- TITLE: title of the text from which SENTENCE originates.
- TOKENS: number of tokens (excluding punctuation marks) in SENTENCE.
- TYPE: linguistic register of the text.
### 5.4 Data Splits
The mammut-corpus-venezuela dataset has 2 splits: train and test. Below are the statistics:
Number of instances per split:
Train: 2,983,302.
Test: 157,011.
## 6. Dataset Creation
### 6.1 Curation Rationale
The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning on another pre-trained model.
### 6.2 Source Data
**6.2.1 Initial Data Collection and Normalization**
The data consists of opinion articles and text messages. It was collected by web scraping several portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora already available online.
The text obtained from web scraping was split into sentences and automatically tokenized for sources other than conversations.
The resulting corpus was stored as an Apache Arrow/Parquet file.
Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movies subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats.
**6.2.2 Who are the source language producers?**
The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.
## 6.3 Annotations
**6.3.1 Annotation process**
At the moment the dataset does not contain any additional annotations.
**6.3.2 Who are the annotators?**
Not applicable.
### 6.4 Personal and Sensitive Information
The data is partially anonymized. The corpus also includes messages from Telegram selling chats; some of these messages may be fake or contain misleading or offensive language.
## 7. Considerations for Using the Data
### 7.1 Social Impact of Dataset
The purpose of this dataset is to help the development of language modeling models (pre-training or fine-tuning) in Venezuelan Spanish.
### 7.2 Discussion of Biases
Most of the content comes from political, economic, and sociological opinion articles, so social biases may be present.
### 7.3 Other Known Limitations
Not applicable.
## 8. Additional Information
### 8.1 Dataset Curators
The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.
### 8.2 Licensing Information
Not applicable.
### 8.3 Citation Information
Not applicable.
### 8.4 Contributions
Not applicable.
|
true | # AutoNLP Dataset for project: Doctor_DE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project Doctor_DE.
### Languages
The BCP-47 code for the dataset's language is de.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Ich bin nun seit ca 12 Jahren Patientin in dieser Praxis und kann einige der Kommentare hier ehrlich gesagt \u00fcberhaupt nicht nachvollziehen.<br />\nFr. Dr. Gr\u00f6ber Pohl ist in meinen Augen eine unglaublich nette und kompetente \u00c4rztin. Ich kenne in meinem Familien- und Bekanntenkreis viele die bei ihr in Behandlung sind, und alle sind sehr zufrieden!<br />\nSie nimmt sich immer viel Zeit und auch in meiner Schwangerschaft habe ich mich bei ihr immer gut versorgt gef\u00fchlt, und musste daf\u00fcr kein einziges Mal in die Tasche greifen!<br />\nDas einzig negative ist die lange Wartezeit in der Praxis. Daf\u00fcr nimmt sie sich aber auch Zeit und arbeitet nicht wie andere \u00c4rzte wie am Flie\u00dfband.<br />\nIch kann sie nur weiter empfehlen!",
"target": 1.0
},
{
"text": "Ich hatte nie den Eindruck \"Der N\u00e4chste bitte\" Er hatte sofort meine Beschwerden erkannt und Abhilfe geschafft.",
"target": 1.0
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Data Splits
This dataset is split into train and validation sets. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 280191 |
| valid | 70050 |
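For reference, the sizes above correspond to roughly an 80/20 train/validation partition; a quick arithmetic check:

```python
# Split sizes taken from the table above.
train_size, valid_size = 280_191, 70_050
total = train_size + valid_size
train_fraction = train_size / total  # roughly 0.80
```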
|
false |
# Dataset Card for Clean(maybe) Indonesia mC4
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Indonesia split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4). Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
You can load any subset like this:
```python
from datasets import load_dataset
mc4_id_tiny = load_dataset("munggok/mc4-id", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_id_full_stream = load_dataset("munggok/mc4-id", "full", split='train', streaming=True)
print(next(iter(mc4_id_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus.
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
false |
# nateraw/auto-cats-and-dogs
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
    with open(path, 'rb') as f:
        im = Image.open(f)
        return im.convert('RGB')

def image_loader(example_batch):
    example_batch['image'] = [
        pil_loader(f) for f in example_batch['file']
    ]
    return example_batch
ds = load_dataset('nateraw/auto-cats-and-dogs')
ds = ds.with_transform(image_loader)
```
|
false |
# nateraw/auto-exp-2
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
    with open(path, 'rb') as f:
        im = Image.open(f)
        return im.convert('RGB')

def image_loader(example_batch):
    example_batch['image'] = [
        pil_loader(f) for f in example_batch['file']
    ]
    return example_batch
ds = load_dataset('nateraw/auto-exp-2')
ds = ds.with_transform(image_loader)
```
|
false | ## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard)
- **Paper:** Pending
### Dataset Summary
EpiSet4NER is a bronze-standard dataset for epidemiological entity recognition of location, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in [the National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods. This weakly-supervised teaching method allowed us to construct this imprecise dataset with minimal manual effort and achieve satisfactory performance on a multi-type token classification problem. The test set was manually corrected by 3 NCATS researchers and a GARD curator (genetic and rare disease expert). It was used to train [EpiExtract4GARD](https://huggingface.co/ncats/EpiExtract4GARD), a BioBERT-based model fine-tuned for NER.
An [example](https://pubmed.ncbi.nlm.nih.gov/24237863/) of 'train' looks as follows.
```
{
"id": "333",
"tokens": ['Conclusions', 'The', 'birth', 'prevalence', 'of', 'CLD', 'in', 'the', 'northern', 'Netherlands', 'was', '21.1/10,000', 'births', '.'],
"ner_tags": [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0],
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature that indicates sentence number.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4), `B-STAT` (5), `I-STAT` (6).
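As a sketch (not part of the dataset tooling), the integer `ner_tags` can be mapped back to label strings using the id order listed above; the example reuses the 'train' instance shown earlier:

```python
# Label order matches the id mapping given in the field description.
LABELS = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

example = {
    "tokens": ["Conclusions", "The", "birth", "prevalence", "of", "CLD", "in",
               "the", "northern", "Netherlands", "was", "21.1/10,000",
               "births", "."],
    "ner_tags": [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0],
}

def tag_names(ner_tags):
    """Translate integer tag ids into their string labels."""
    return [LABELS[t] for t in ner_tags]

named = tag_names(example["ner_tags"])
```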
### Data Splits
|name |train |validation|test|
|---------|-----:|----:|----:|
|EpiSet \# of abstracts|456|114|50|
|EpiSet \# tokens |117888|31262|13910|
## Dataset Creation

*Figure 1:* Creation of EpiSet4NER by NIH/NCATS
Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
*Table 1:* Programmatic labeling of EpiSet4NER
| Evaluation Level | Entity | Precision | Recall | F1 |
|:----------------:|:------------------------:|:---------:|:------:|:-----:|
| Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
| | Location | 0.597 | 0.661 | 0.627 |
| | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
| Token-Level | Overall | 0.805 | 0.710 | 0.755 |
| | Location | 0.868 | 0.713 | 0.783 |
| | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
| | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
An example of the text labeling:

*Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [\[Figure citation\]](https://pubmed.ncbi.nlm.nih.gov/33649778/)
### Curation Rationale
To train ML/DL models that automate the process of rare disease epidemiological curation. This information is crucial to patients and families, researchers, grantors, and policy makers, primarily for funding purposes.
### Source Data
620 rare disease abstracts, covering 488 diseases, classified as epidemiological by an LSTM RNN rare disease epi classifier. See Figure 1.
#### Initial Data Collection and Normalization
A random sample of 500 disease names was gathered from a list of ~6,061 rare diseases tracked by GARD, until ≥50 abstracts had been returned for each disease or the EBI RESTful API results were exhausted. Although we requested ~25,000 abstracts from PubMed's database, only 7,699 unique abstracts were returned, covering 488 diseases. Of those 7,699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
### Annotations
#### Annotation process
Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
#### Who are the annotators?
Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
The test set was manually corrected by 2 more NCATS researchers and a GARD curator (genetic and rare disease expert).
### Personal and Sensitive Information
None. These are freely available abstracts from PubMed.
## Considerations for Using the Data
### Social Impact of Dataset
Assisting the 25-30 million Americans with rare diseases. It may also be useful for Orphanet or CDC researchers and curators.
### Discussion of Biases and Limitations
- There were errors in the source file that contained rare disease synonyms of names, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
- The abstracts were gathered through the EBI API and are thus subject to any biases the EBI API has. The NCBI API returns very different results, as shown by an API analysis here.
- The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 (which was compared against a GARD curator who used full-text articles to determine truth-value of epidemiological abstract) of 0.701. With 620 epi abstracts filtered from 7699 original rare disease abstracts, there are likely several false positives and false negative epi abstracts.
- Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
- The programmatic labeling was very imprecise as seen by Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. Because identifying epidemiological information is difficult even for non-expert humans, this set (and especially a possible future gold-standard dataset) represents a challenging benchmark for NLP systems, especially those focusing on numeracy.
## Additional Information
### Dataset Curators
[NIH GARD](https://rarediseases.info.nih.gov/about-gard/pages/23/about-gard)
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset. |
true |
# Dataset Card for GitHub Issues
## Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. |
false |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [enwiki_el](https://github.com/GaaH/enwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.
### Languages
- English
## Dataset Structure
```
{
"title": "Title of the page",
"qid": "QID of the corresponding Wikidata entity",
"words": ["tokens"],
"wikipedia": ["Wikipedia description of each entity"],
"labels": ["NER labels"],
"titles": ["Wikipedia title of each entity"],
"qids": ["QID of each entity"],
}
```
The `words` field contains the article's text split on whitespace. The other fields are lists with the same length as `words` and contain data only where the corresponding token in `words` is the __start of an entity__. For instance, if the _i_-th token in `words` starts an entity, then the _i_-th element of `wikipedia` contains a description of this entity extracted from Wikipedia. The same applies to the other fields. If an entity spans multiple words, only the position of its first word contains data.
The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`. |
false |
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for Axolotl-Spanish-Nahuatl](#dataset-card-for-axolotl-spanish-nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/article/10.1007/s10579-014-9287-y
- **Repository:1** https://github.com/ElotlMX/py-elotl
- **Repository:2** https://github.com/christos-c/bible-corpus/blob/master/bibles/Nahuatl-NT.xml
- **Paper:** https://aclanthology.org/N15-2021.pdf
## Dataset Collection
In order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available: Axolotl, collected by an expert team at UNAM, and the Bible UEDIN Nahuatl-Spanish corpus, crawled by Christos Christodoulopoulos and Mark Steedman from the Bible Gateway site.
After cleaning, we were left with 12,207 samples from Axolotl (after removing misalignments and texts duplicated in Spanish in both the original and Nahuatl columns) and 7,821 samples from Bible UEDIN, for a total of 20,028 utterances.
## Team members
- Emilio Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in ([t5-small-spanish-nahuatl](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl))
- DEMO: Spanish Nahuatl Translation in ([Spanish-nahuatl](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)) |
false |
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity recognition and relation extraction, used in RuNNE (2022)
competition (https://github.com/dialogue-evaluation/RuNNE).
Entities may be nested (see https://arxiv.org/abs/2108.13112).
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']`
you can download the list of entity types: a `Dataset` with a single `type` feature and 29 rows.
Using
`load_dataset('MalakhovIlya/RuNNE', 'data')` or `load_dataset('MalakhovIlya/RuNNE')`
you can download the data itself (DatasetDict)
The dataset consists of 3 splits: "train", "test" and "dev". Each of them contains text documents. The "train" and "test" splits also contain annotated entities; "dev" does not.
Each entity is represented by a string of the form "\<start> \<stop> \<type>", where \<start> is the position of the entity's first character in the text, \<stop> is the position of its last character, and \<type> is one of the types listed above.
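A minimal parsing sketch for this annotation format (the document text and annotation below are hypothetical, though CITY is one of the types listed above):

```python
def parse_entity(annotation: str):
    """Split "<start> <stop> <type>" into (int, int, str)."""
    start, stop, ent_type = annotation.split(maxsplit=2)
    return int(start), int(stop), ent_type

# Hypothetical document and annotation for illustration only.
text = "Moscow is the capital of Russia"
start, stop, ent_type = parse_entity("0 5 CITY")
surface = text[start:stop + 1]  # <stop> is the last character, so the slice is inclusive
```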
P.S.
Original NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\_(ツ)_/¯
## Citation Information
@article{Artemova2022runne,
title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
year={2022}
}
|
true |
# Dataset Card for DanFEVER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/danfever](https://github.com/StrombergNLP/danfever)
- **Repository:** [https://stromberg.ai/publication/danfever/](https://stromberg.ai/publication/danfever/)
- **Paper:** [https://aclanthology.org/2021.nodalida-main.47/](https://aclanthology.org/2021.nodalida-main.47/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)
- **Size of downloaded dataset files:** 2.82 MiB
- **Size of the generated dataset:** 2.80 MiB
- **Total amount of disk used:** 5.62 MiB
### Dataset Summary
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.
### Supported Tasks and Leaderboards
This dataset supports the FEVER task, but in Danish.
* PwC leaderboard: [Fact Verification on DanFEVER](https://paperswithcode.com/sota/fact-verification-on-danfever)
### Languages
This dataset is in Danish; the BCP-47 code is `da-DK`.
## Dataset Structure
### Data Instances
```
{
'id': '0',
'claim': 'Den 31. oktober 1920 opdagede Walter Baade kometen (944) Hidalgo i det ydre solsystem.',
'label': 0,
'evidence_extract': '(944) Hidalgo (oprindeligt midlertidigt navn: 1920 HZ) er en mørk småplanet med en diameter på ca. 50 km, der befinder sig i det ydre solsystem. Objektet blev opdaget den 31. oktober 1920 af Walter Baade. En asteroide (småplanet, planetoide) er et fast himmellegeme, hvis bane går rundt om Solen (eller en anden stjerne). Pr. 5. maj 2017 kendes mere end 729.626 asteroider og de fleste befinder sig i asteroidebæltet mellem Mars og Jupiter.',
'verifiable': 1,
'evidence': 'wiki_26366, wiki_12289',
'original_id': '1'
}
```
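For illustration, the comma-separated `evidence` field in the instance above can be split into individual wiki page identifiers (a sketch, not an official helper):

```python
# Partial instance, mirroring the example above.
instance = {
    "id": "0",
    "verifiable": 1,
    "evidence": "wiki_26366, wiki_12289",
}

# Split the comma-separated evidence string into a list of page ids.
evidence_ids = [e.strip() for e in instance["evidence"].split(",")]
```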
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
A dump of the Danish Wikipedia from 13 February 2020 was stored, along with the relevant articles from Den Store Danske (excerpts only, to comply with copyright law). Two teams of two people independently sampled evidence and created and annotated claims from these two sources.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
The source language is produced by Wikipedia contributors and editors, and by dictionary contributors and editors.
### Annotations
#### Annotation process
Detailed in [this paper](http://www.derczynski.com/papers/danfever.pdf).
#### Who are the annotators?
The annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.
### Discussion of Biases
The data is drawn from relatively formal topics, and so may perform poorly outside these areas.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.
### Citation Information
Refer to this work as:
> Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).
Bibliographic reference:
```
@inproceedings{norregaard-derczynski-2021-danfever,
title = "{D}an{FEVER}: claim verification dataset for {D}anish",
author = "N{\o}rregaard, Jeppe and Derczynski, Leon",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
year = "2021",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.47",
pages = "422--428"
}
```
|
false |
# Dataset Card for "ipm-nel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [http://www.derczynski.com/papers/ner_single.pdf](http://www.derczynski.com/papers/ner_single.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 120 KB
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises
the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities
and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface
forms; for example, this means linking "Paris" to the correct instance of a city named that (e.g. Paris,
France vs. Paris, Texas).
The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical
artist, person, product, sports team, TV show, and other.
The file is tab separated, in CoNLL format, with line breaks between tweets.
* Data preserves the tokenisation used in the Ritter datasets.
* PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.
* In cases where a URI could not be agreed, or was not present in DBpedia, the linking URI is `NIL`.
See the paper, [Analysis of Named Entity Recognition and Linking for Tweets](http://www.derczynski.com/papers/ner_single.pdf) for a full description of the methodology.
### Supported Tasks and Leaderboards
* Dataset leaderboard on PWC: [Entity Linking on Derczynski](https://paperswithcode.com/sota/entity-linking-on-derczynski-1)
### Languages
English of unknown region (`bcp47:en`)
## Dataset Structure
### Data Instances
#### ipm_nel
- **Size of downloaded dataset files:** 120 KB
- **Size of the generated dataset:**
- **Total amount of disk used:**
An example of 'train' looks as follows.
```
{
'id': '0',
'tokens': ['#Astros', 'lineup', 'for', 'tonight', '.', 'Keppinger', 'sits', ',', 'Downs', 'plays', '2B', ',', 'CJ', 'bats', '5th', '.', '@alysonfooter', 'http://bit.ly/bHvgCS'],
'ner_tags': [9, 0, 0, 0, 0, 7, 0, 0, 7, 0, 0, 0, 7, 0, 0, 0, 0, 0],
'uris': "['http://dbpedia.org/resource/Houston_Astros', '', '', '', '', 'http://dbpedia.org/resource/Jeff_Keppinger', '', '', 'http://dbpedia.org/resource/Brodie_Downs', '', '', '', 'NIL', '', '', '', '', '']"
}
```
### Data Fields
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`), covering the ten entity types listed in the summary above.
- `uris`: a `list` of URIs (`string`) that disambiguate entities. Set to `NIL` when an entity has no DBpedia entry, or blank for outside-of-entity tokens.
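Note that `uris` is stored as the string representation of a Python list, so it needs parsing before the URIs can be aligned with `tokens`. A minimal sketch using the example instance above (truncated to four tokens for brevity; `ast.literal_eval` is one safe way to parse it):

```python
import ast

# Fields from the example instance above, truncated for brevity
tokens = ['#Astros', 'lineup', 'for', 'tonight']
uris_raw = "['http://dbpedia.org/resource/Houston_Astros', '', '', '']"

# Parse the stringified list, then pair each token with its URI,
# skipping outside-of-entity tokens ('') and unlinkable entities ('NIL')
uris = ast.literal_eval(uris_raw)
linked = [(tok, uri) for tok, uri in zip(tokens, uris) if uri and uri != 'NIL']
# linked -> [('#Astros', 'http://dbpedia.org/resource/Houston_Astros')]
```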
### Data Splits
| name |train|
|---------|----:|
|ipm_nel|183 sentences|
## Dataset Creation
### Curation Rationale
To gather a social media benchmark for named entity linking that is sufficiently different from newswire data.
### Source Data
#### Initial Data Collection and Normalization
The data is partly harvested from that distributed by [Ritter / Named Entity Recognition in Tweets: An Experimental Study](https://aclanthology.org/D11-1141/),
and partly taken from Twitter by the authors.
#### Who are the source language producers?
English-speaking Twitter users, between October 2011 and September 2013.
### Annotations
#### Annotation process
The authors were allocated documents and marked them for named entities (where these were not already present) and then attempted to find
the best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.
The annotation task was mediated using Crowdflower (Biewald, 2012). The interface showed each volunteer the text of the tweet, any URL links contained
therein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the
tweet, to gain additional context and thus ensure that the correct DBpedia URI was chosen. Candidate entities were
shown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia
URI otherwise. In addition, the options "none of the above", "not an entity" and "cannot decide" were offered, allowing the
volunteers to indicate that an entity mention has no corresponding DBpedia URI, that the highlighted text
is not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.
#### Who are the annotators?
The annotators are 10 volunteer NLP researchers, from the authors and the authors' institutions.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The data is annotated by NLP researchers; we know that this group has high agreement but low recall on English twitter text [C16-1111](https://aclanthology.org/C16-1111/).
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0. You must
acknowledge the author if you use this data, but apart from that, you're quite
free to do most things. See https://creativecommons.org/licenses/by/4.0/legalcode .
### Citation Information
```
@article{derczynski2015analysis,
title={Analysis of named entity recognition and linking for tweets},
author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},
journal={Information Processing \& Management},
volume={51},
number={2},
pages={32--49},
year={2015},
publisher={Elsevier}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
true |
# Dataset Card for "dkstance / DAST"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/jointrumourstanceandveracity/](https://stromberg.ai/publication/jointrumourstanceandveracity/)
- **Repository:** [https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137](https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137)
- **Paper:** [https://aclanthology.org/W19-6122/](https://aclanthology.org/W19-6122/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is applicable for supervised stance classification and rumour veracity prediction.
### Supported Tasks and Leaderboards
* Stance prediction
### Languages
Danish (`bcp47:da`)
## Dataset Structure
### Data Instances
#### DAST / dkstance
- **Size of downloaded dataset files:** 4.72 MiB
- **Size of the generated dataset:** 3.69 MiB
- **Total amount of disk used:** 8.41 MiB
An example of 'train' looks as follows.
```
{
'id': '1',
'native_id': 'ebwjq5z',
'text': 'Med de udfordringer som daginstitutionerne har med normeringer, og økonomi i det hele taget, synes jeg det er en vanvittig beslutning at prioritere skattebetalt vegansk kost i daginstitutionerne. Brug dog pengene på noget mere personale, og lad folk selv betale for deres individuelle kostønsker.',
'parent_id': 'a6o3us',
'parent_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'parent_stance': 0,
'source_id': 'a6o3us',
'source_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'source_stance': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `native_id`: a `string` feature representing the native ID of the entry.
- `text`: a `string` of the comment text in which stance is annotated.
- `parent_id`: the `native_id` of this comment's parent.
- `parent_text`: a `string` of the parent comment's text.
- `parent_stance`: the label of the stance in the comment towards its parent comment.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
- `source_id`: the `native_id` of this comment's source / post.
- `source_text`: a `string` of the source / post text.
- `source_stance`: the label of the stance in the comment towards the original source post.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
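The integer labels in `parent_stance` and `source_stance` share the SDQC tagset above and can be decoded with a plain mapping; a minimal sketch using the values from the example instance shown earlier:

```python
# SDQC label mapping shared by parent_stance and source_stance
SDQC = {0: "Supporting", 1: "Denying", 2: "Querying", 3: "Commenting"}

# Stance fields from the example instance above
example = {"parent_stance": 0, "source_stance": 0}

print(SDQC[example["parent_stance"]])  # Supporting
print(SDQC[example["source_stance"]])  # Supporting
```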
### Data Splits
| name |size|
|---------|----:|
|train|3122|
|validation|1066|
|test|1060|
These splits were specified after the original research was reported. They add an extra level of rigour, in that no source post's comment tree is spread over more than one partition.
## Dataset Creation
### Curation Rationale
Comments around rumourous claims to enable rumour and stance analysis in Danish
### Source Data
#### Initial Data Collection and Normalization
The data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.
#### Who are the source language producers?
Danish-speaking Reddit users.
### Annotations
#### Annotation process
There was a multi-user annotation process, mediated through a purpose-built interface for annotating stance in Reddit threads.
#### Who are the annotators?
* Age: 20-30.
* Gender: male.
* Race/ethnicity: white northern European.
* Native language: Danish.
* Socioeconomic status: higher education student.
### Personal and Sensitive Information
The data was public at the time of collection. User names are not preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The source of the text has a strong demographic bias, being mostly young white men who are vocal about their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
An NLP data statement is included in the paper describing the work, [https://aclanthology.org/W19-6122.pdf](https://aclanthology.org/W19-6122.pdf)
### Citation Information
```
@inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
true |
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/politicalstanceindanish/](https://stromberg.ai/publication/politicalstanceindanish/)
- **Repository:** [https://github.com/StrombergNLP/Political-Stance-in-Danish/](https://github.com/StrombergNLP/Political-Stance-in-Danish/)
- **Paper:** [https://aclanthology.org/W19-6121/](https://aclanthology.org/W19-6121/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 548 KB
- **Size of the generated dataset:** 222 KB
- **Total amount of disk used:** 770 KB
### Dataset Summary
Political stance in Danish. Examples represent statements by
politicians and are annotated as for, against, or neutral towards a given topic/article.
### Supported Tasks and Leaderboards
* Stance detection
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### polstance
An example of 'train' looks as follows.
```
{
'id': '0',
'topic': 'integration',
'quote': 'Der kunne jeg godt tænke mig, at der stod mere eksplicit, at de (landene, red.) skal bekæmpe menneskesmuglere og tage imod deres egne borgere',
'label': 2,
'quoteID': '516',
'party': 'Det Konservative Folkeparti',
'politician': 'Naser Khader',
}
```
### Data Fields
- `id`: a `string` feature.
- `topic`: a `string` expressing a topic.
- `quote`: a `string` to be classified for its stance to the topic.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "against",
1: "neutral",
2: "for",
```
- `quoteID`: a `string` of the internal quote ID.
- `party`: a `string` describing the party affiliation of the quote utterer at the time of utterance.
- `politician`: a `string` naming the politician who uttered the quote.
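With the `label` mapping above, a per-party stance distribution is a one-liner with `collections.Counter`; a minimal sketch over hypothetical rows in the card's format (only `party` and `label` are used here):

```python
from collections import Counter

LABELS = {0: "against", 1: "neutral", 2: "for"}

# Hypothetical rows for illustration; real rows come from the dataset itself
rows = [
    {"party": "Det Konservative Folkeparti", "label": 2},
    {"party": "Det Konservative Folkeparti", "label": 0},
    {"party": "Socialdemokratiet", "label": 1},
]

# Count (party, stance-name) pairs
by_party = Counter((r["party"], LABELS[r["label"]]) for r in rows)
print(by_party[("Det Konservative Folkeparti", "for")])  # 1
```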
### Data Splits
| name |train|
|---------|----:|
|polstance|900 sentences|
## Dataset Creation
### Curation Rationale
Collection of quotes from politicians to allow detecting how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - [ft.dk](https://ft.dk).
#### Who are the source language producers?
Danish politicians.
### Annotations
#### Annotation process
Annotators labelled comments as against, neutral, or for a specified topic.
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain open public record by law in Denmark.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/StrombergNLP/bornholmsk
- **Repository:** https://github.com/StrombergNLP/bornholmsk
- **Paper:** https://aclanthology.org/W19-6138/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This corpus introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian.
Sammenfattnijng på borrijnholmst: Dæjnna artikkelijn introduserer natursprågsresurser å varktoi for borrijnholmst, ed språg a dær snakkes på ön Borrijnholm me rødder i danst å i nær familia me skånst.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm. bcp47: `da-bornholm`
## Dataset Structure
### Data Instances
13,169 lines, 175,167 words, 801 KB
### Data Fields
`id`: the sentence ID, `int`
`text`: the Bornholmsk text, `string`
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather as much digital Bornholmsk together as possible
### Source Data
#### Initial Data Collection and Normalization
From many places - see paper for details. Sources include poems, songs, translations from Danish, folk stories, dictionary entries.
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
No annotations
### Personal and Sensitive Information
Unknown, but low risk of presence, given the source material
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to capture Bornholmsk digitally and provide a way for NLP systems to interact with it, and perhaps even spark interest in dealing with the language.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
@inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
}
``` |
false |
# Dataset Card for Bingsu/arcalive_220506
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/Bingsu/arcalive_220506
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset was collected from the [Arca Live "Best Live" channel](https://arca.live/b/live), covering 16 August 2021 through 6 May 2022, keeping only the comments.
Given the nature of the community, the data contains a lot of sensitive content, so use it with caution.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
ko
## Dataset Structure
### Data Instances
- Size of downloaded dataset files: 21.3 MB
### Data Fields
- text: `string`
### Data Splits
| | train |
| ---------- | ------ |
| # of texts | 195323 |
```pycon
>>> from datasets import load_dataset
>>>
>>> data = load_dataset("Bingsu/arcalive_220506")
>>> data["train"].features
{'text': Value(dtype='string', id=None)}
```
```pycon
>>> data["train"][0]
{'text': '오오오오...'}
```
|
false |
Token classification dataset developed from dataset by Katarina Nimas Kusumawati's undergraduate thesis:
**"Identifikasi Entitas Bernama dalam Domain Medis pada Layanan Konsultasi Kesehatan Berbahasa Menggunakan Algoritme Bidirectional-LSTM-CRF"**
Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia - 2022
I performed a stratified train/validation/test split on the original dataset.
Compatible with the HuggingFace token-classification script (tested with transformers v4.17):
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/token-classification |
true |
# Dataset Card for "rustance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/dataset_csv/7151906](https://figshare.com/articles/dataset/dataset_csv/7151906)
- **Repository:** [https://github.com/StrombergNLP/rustance](https://github.com/StrombergNLP/rustance)
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16](https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16), [https://arxiv.org/abs/1809.01574](https://arxiv.org/abs/1809.01574)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
### Dataset Summary
This is a stance prediction dataset in Russian. The dataset contains comments on news articles,
and rows are a comment, the title of the news article it responds to, and the stance of the comment
towards the article.
Stance detection is a critical component of rumour and fake news identification. It involves the extraction of the stance a particular author takes related to a given claim, both expressed in text. This paper investigates stance classification for Russian. It introduces a new dataset, RuStance, of Russian tweets and news comments from multiple sources, covering multiple stories, as well as text classification approaches to stance detection as benchmarks over this data in this language. As well as presenting this openly-available dataset, the first of its kind for Russian, the paper presents a baseline for stance prediction in the language.
### Supported Tasks and Leaderboards
* Stance Detection: [Stance Detection on RuStance](https://paperswithcode.com/sota/stance-detection-on-rustance)
### Languages
Russian, as spoken on the Meduza website (i.e. from multiple countries) (`bcp47:ru`)
## Dataset Structure
### Data Instances
#### rustance
- **Size of downloaded dataset files:** 349.79 KiB
- **Size of the generated dataset:** 366.11 KiB
- **Total amount of disk used:** 715.90 KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'Волки, волки!!',
'title': 'Минобороны обвинило «гражданского сотрудника» в публикации скриншота из игры вместо фото террористов. И показало новое «неоспоримое подтверждение»',
'stance': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `title`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "support",
1: "deny",
2: "query",
3: "comment",
```
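As a quick sanity check before training, a majority-class baseline over the SDQC labels gives an accuracy floor for any classifier; a minimal sketch with hypothetical, comment-heavy labels (the real label distribution comes from the dataset itself):

```python
from collections import Counter

STANCE = {0: "support", 1: "deny", 2: "query", 3: "comment"}

# Hypothetical gold labels for illustration; SDQC corpora tend to be comment-heavy
gold = [3, 3, 0, 3, 1, 3, 2, 3]

# Majority-class baseline: always predict the most frequent stance
majority, _ = Counter(gold).most_common(1)[0]
accuracy = sum(label == majority for label in gold) / len(gold)
print(STANCE[majority], accuracy)  # comment 0.625
```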
### Data Splits
| name |train|
|---------|----:|
|rustance|958 sentences|
## Dataset Creation
### Curation Rationale
Toy data for training and especially evaluating stance prediction in Russian
### Source Data
#### Initial Data Collection and Normalization
The data is comments scraped from a Russian news site not situated in Russia, [Meduza](https://meduza.io/), in 2018.
#### Who are the source language producers?
Russian speakers including from the Russian diaspora, especially Latvia
### Annotations
#### Annotation process
Annotators labelled comments for supporting, denying, querying or just commenting on a news article.
#### Who are the annotators?
Russian native speakers, IT education, male, 20s.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of misinformative content being in this data. The data has NOT been vetted for any content.
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lozhnikov2018stance,
title={Stance prediction for russian: data and analysis},
author={Lozhnikov, Nikita and Derczynski, Leon and Mazzara, Manuel},
booktitle={International Conference in Software Engineering for Defence Applications},
pages={176--186},
year={2018},
organization={Springer}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
true |
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
The dataset is pre-tokenized with the `roberta-base` tokenizer.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` tags provide the start (`B-PANEL_START`) of these segments and allow training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines.
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the variables measured and the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' gene products, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc.).
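The IOB2 tag sequences described above can be decoded back into labeled spans. A minimal sketch (not part of the SourceData tooling; it applies to any of the tag sequences in this dataset):

```python
def iob2_spans(tags):
    """Decode a sequence of IOB2 tags into (label, start, end) spans.

    `end` is exclusive. A minimal sketch for illustration only.
    """
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any span still open
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is not None and label == tag[2:]:
            continue                       # span continues
        else:                              # "O" or an inconsistent I- tag
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:                  # flush a span ending at sequence end
        spans.append((label, start, len(tags)))
    return spans

# Example on a fragment shaped like the entity_types sequences below:
iob2_spans(["O", "B-GENEPROD", "I-GENEPROD", "O", "B-ORGANISM"])
# → [("GENEPROD", 1, 3), ("ORGANISM", 4, 5)]
```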
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{
"tokens": [
"<s>", "Figure", "\u01205", ".", "\u0120Figure", "\u01205", ".", "A", "\u0120ER", "p", "57", "fl", "ox", "/", "fl", "ox", "\u0120mice", "\u0120were", "\u0120crossed", "\u0120with", "\u0120Nest", "in", "\u0120Cre", "\u0120trans", "genic", "\u0120mice", "\u0120to", "\u0120generate", "\u0120nervous", "\u0120system", "\u0120specific", "\u0120ER", "p", "57", "\u0120deficient", "\u0120animals", ".", "\u0120The", "\u0120levels", "\u0120of", "\u0120ER", "p", "57", "\u0120protein", "\u0120in", "\u0120the", "\u0120spinal", "\u0120cord", "\u0120were", "\u0120monitored", "\u0120by", "\u0120Western", "\u0120blot", ".", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "=", "4", "),", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "=", "5", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120(", "n", "=", "4", ")", "\u0120mice", ".", "\u0120H", "SP", "90", "\u0120levels", "\u0120were", "\u0120determined", "\u0120as", "\u0120a", "\u0120loading", "\u0120control", ".", "\u0120Right", "\u0120panel", ":", "\u0120Quant", "ification", "\u0120of", "\u0120ER", "p", "57", "\u0120levels", "\u0120was", "\u0120performed", "\u0120relative", "\u0120to", "\u0120H", "sp", "90", "\u0120levels", ".", "\u0120B", "\u0120Body", "\u0120weight", "\u0120measurements", "\u0120were", "\u0120performed", "\u0120for", "\u0120indicated", "\u0120time", "\u0120points", "\u0120in", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "=", "50", "),", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "=", "32", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120(", "n", "=", "19", ")", "\u0120mice", ".", "\u0120C", "\u0120Rot", "ar", "od", "\u0120performance", "\u0120was", "\u0120performed", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "=", "20", "),", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "=", "15", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120(", "n", "=", "8", ")", "\u0120mice", ".", "\u0120D", "\u0120H", "anging", 
"\u0120test", "\u0120performance", "\u0120was", "\u0120assessed", "\u0120in", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "=", "41", "),", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "=", "32", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120(", "n", "=", "12", ")", "\u0120mice", ".", "\u0120E", "\u0120Kaplan", "-", "Me", "ier", "\u0120survival", "\u0120curve", "\u0120for", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120mice", "\u0120(", "N", "\u0120=", "\u012019", ")", "\u0120that", "\u0120prematurely", "\u0120died", "\u0120or", "\u0120had", "\u0120to", "\u0120be", "\u0120sacrificed", "\u0120because", "\u0120of", "\u0120health", "\u0120reasons", "\u0120between", "\u0120the", "\u0120ages", "\u012022", "\u0120and", "\u012073", "\u0120days", ".", "\u0120Mean", "\u0120survival", "\u0120of", "\u0120this", "\u0120sub", "group", "\u0120of", "\u0120animals", "\u0120was", "\u012057", "\u0120days", ".", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "=", "50", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "=", "32", ")", "\u0120mice", "\u0120are", "\u0120shown", "\u0120as", "\u0120a", "\u0120reference", ".", "\u0120F", "\u0120Hist", "ological", "\u0120analysis", "\u0120of", "\u0120Ne", "u", "N", "\u0120and", "\u0120GF", "AP", "\u0120st", "aining", "\u0120was", "\u0120performed", "\u0120in", "\u0120spinal", "\u0120cord", "\u0120tissue", "\u0120from", "\u0120ER", "p", "57", "WT", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120mice", "\u0120in", "\u0120three", "\u0120animals", "\u0120per", "\u0120group", "\u0120using", "\u0120indirect", "\u0120immun", "of", "lu", "orescence", ".", "\u0120The", "\u0120nucleus", "\u0120was", "\u0120stained", "\u0120with", "\u0120H", "oe", "ch", "st", ".", "\u0120Representative", "\u0120images", "\u0120from", "\u0120one", "\u0120mouse", "\u0120per", "\u0120group", "\u0120are", "\u0120shown", ".", "\u0120Scale", "\u0120bar", ":", "\u012050", 
"\u0120\u00ce\u00bc", "m", ".", "\u0120G", "\u0120St", "ere", "ological", "\u0120analysis", "\u0120of", "\u0120the", "\u0120spinal", "\u0120cord", "\u0120from", "\u0120ER", "p", "57", "WT", "\u0120(", "n", "\u0120=", "\u01204", "),", "\u0120ER", "p", "57", "N", "es", "+", "/-", "\u0120(", "n", "\u0120=", "\u01204", ")", "\u0120and", "\u0120ER", "p", "57", "N", "es", "-", "/-", "\u0120(", "n", "\u0120=", "\u01204", ")", "\u0120mice", ".", "\u0120Alternate", "\u0120series", "\u0120of", "\u0120sections", "\u0120from", "\u0120the", "\u0120spinal", "\u0120cord", "\u0120of", "\u0120the", "\u0120mice", "\u0120were", "\u0120either", "\u0120stained", "\u0120for", "\u0120N", "iss", "l", "\u0120(", "top", "\u0120row", "\u0120images", ")", "\u0120or", "\u0120processed", "\u0120for", "\u0120immun", "oh", "ist", "ochemistry", "\u0120for", "\u0120the", "\u0120ch", "olin", "ergic", "\u0120cell", "\u0120marker", "\u0120Ch", "oline", "\u0120Ac", "et", "yl", "\u0120Transfer", "ase", "\u0120(", "Ch", "AT", ",", "\u0120bottom", "\u0120row", "\u0120images", ").", "\u0120The", "\u0120nucle", "oli", "\u0120of", "\u0120the", "</s>"
],
"input_ids": [
0, 40683, 195, 4, 17965, 195, 4, 250, 13895, 642, 4390, 4825, 4325, 73, 4825, 4325, 15540, 58, 7344, 19, 12786, 179, 12022, 6214, 44131, 15540, 7, 5368, 7464, 467, 2167, 13895, 642, 4390, 38396, 3122, 4, 20, 1389, 9, 13895, 642, 4390, 8276, 11, 5, 21431, 13051, 58, 14316, 30, 2027, 39144, 4, 13895, 642, 4390, 25982, 36, 282, 5214, 306, 238, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5214, 245, 43, 8, 13895, 642, 4390, 487, 293, 12, 40398, 36, 282, 5214, 306, 43, 15540, 4, 289, 4186, 3248, 1389, 58, 3030, 25, 10, 16761, 797, 4, 5143, 2798, 35, 28256, 5000, 9, 13895, 642, 4390, 1389, 21, 3744, 5407, 7, 289, 4182, 3248, 1389, 4, 163, 13048, 2408, 19851, 58, 3744, 13, 4658, 86, 332, 11, 13895, 642, 4390, 25982, 36, 282, 5214, 1096, 238, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5214, 2881, 43, 8, 13895, 642, 4390, 487, 293, 12, 40398, 36, 282, 5214, 1646, 43, 15540, 4, 230, 9104, 271, 1630, 819, 21, 3744, 13895, 642, 4390, 25982, 36, 282, 5214, 844, 238, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5214, 996, 43, 8, 13895, 642, 4390, 487, 293, 12, 40398, 36, 282, 5214, 398, 43, 15540, 4, 211, 289, 23786, 1296, 819, 21, 11852, 11, 13895, 642, 4390, 25982, 36, 282, 5214, 4006, 238, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5214, 2881, 43, 8, 13895, 642, 4390, 487, 293, 12, 40398, 36, 282, 5214, 1092, 43, 15540, 4, 381, 25353, 12, 5096, 906, 7967, 9158, 13, 13895, 642, 4390, 487, 293, 12, 40398, 15540, 36, 487, 5457, 753, 43, 14, 30088, 962, 50, 56, 7, 28, 26936, 142, 9, 474, 2188, 227, 5, 4864, 820, 8, 6521, 360, 4, 30750, 7967, 9, 42, 2849, 13839, 9, 3122, 21, 4981, 360, 4, 13895, 642, 4390, 25982, 36, 282, 5214, 1096, 43, 8, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5214, 2881, 43, 15540, 32, 2343, 25, 10, 5135, 4, 274, 31862, 9779, 1966, 9, 3864, 257, 487, 8, 32727, 591, 1690, 8173, 21, 3744, 11, 21431, 13051, 11576, 31, 13895, 642, 4390, 25982, 8, 13895, 642, 4390, 487, 293, 12, 40398, 15540, 11, 130, 3122, 228, 333, 634, 18677, 
13998, 1116, 6487, 45094, 4, 20, 38531, 21, 31789, 19, 289, 3540, 611, 620, 4, 10308, 3156, 31, 65, 18292, 228, 333, 32, 2343, 4, 33256, 2003, 35, 654, 46911, 119, 4, 272, 312, 2816, 9779, 1966, 9, 5, 21431, 13051, 31, 13895, 642, 4390, 25982, 36, 282, 5457, 204, 238, 13895, 642, 4390, 487, 293, 2744, 40398, 36, 282, 5457, 204, 43, 8, 13895, 642, 4390, 487, 293, 12, 40398, 36, 282, 5457, 204, 43, 15540, 4, 43510, 651, 9, 9042, 31, 5, 21431, 13051, 9, 5, 15540, 58, 1169, 31789, 13, 234, 3006, 462, 36, 8766, 3236, 3156, 43, 50, 12069, 13, 13998, 2678, 661, 39917, 13, 5, 1855, 21716, 44858, 3551, 17540, 732, 18675, 6208, 594, 4360, 18853, 3175, 36, 4771, 2571, 6, 2576, 3236, 3156, 322, 20, 38898, 6483, 9, 5, 2
],
"label_ids": {
"entity_types": [
"O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "B-GENEPROD", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "B-TISSUE", "I-TISSUE", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "B-GENEPROD", 
"I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "B-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "B-TISSUE", "I-TISSUE", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "B-SUBCELLULAR", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "B-TISSUE", "I-TISSUE", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "B-TISSUE", "I-TISSUE", "O", "O", "B-ORGANISM", "O", "O", "O", "O", "B-SUBCELLULAR", "I-SUBCELLULAR", "I-SUBCELLULAR", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "I-GENEPROD", "I-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "B-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "O", "B-SUBCELLULAR", "I-SUBCELLULAR", "O", "O", "O"
],
"geneprod_roles": [
"O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "I-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"boring": [
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "B-BORING", "I-BORING", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "I-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "B-BORING", "I-BORING", "I-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "I-BORING", "I-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "I-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "I-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "I-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"panel_start": [
"O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
]
}
}
```
### Data Fields
- `input_ids`: token ids in the `roberta-base` tokenizer's vocabulary, provided as a `list` of `int`
- `label_ids`:
  - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `geneprod_roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]`
- `boring`: `list` of `strings` for IOB2 tags for entities unrelated to causal design; values in `["O", "I-BORING", "B-BORING"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
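As a quick sanity check on these fields, a minimal sketch (field names taken from this card; `example` is one record shaped like the instance shown above) that verifies each label sequence aligns 1:1 with `input_ids`:

```python
def check_alignment(example):
    """Verify that every label sequence has one tag per input id.

    A hypothetical helper for illustration, not part of the dataset tooling.
    """
    n = len(example["input_ids"])
    for field in ("entity_types", "geneprod_roles", "boring", "panel_start"):
        tags = example["label_ids"][field]
        assert len(tags) == n, f"{field}: {len(tags)} tags for {n} ids"
    return True
```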
### Data Splits
- train:
- features: ['input_ids', 'labels', 'tag_mask'],
- num_rows: 48_771
- test:
- features: ['input_ids', 'labels', 'tag_mask'],
- num_rows: 13_801
- validation:
- features: ['input_ids', 'labels', 'tag_mask'],
- num_rows: 7_178
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. It can be used to train models for text segmentation, named entity recognition and semantic role labeling.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize entities with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
### Licensing Information
CC BY 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) for adding this dataset.
|
false |
# Polish-Political-Advertising
## Info
Political campaigns are full of political ads posted by candidates on social media. Political advertising constitutes a basic form of campaigning, subject to various social requirements. We present the first publicly open dataset for detecting specific text chunks and categories of political advertising in the Polish language. It contains 1,705 human-annotated tweets tagged with nine categories, which constitute campaigning under Polish electoral law.
> We achieved a 0.65 inter-annotator agreement (Cohen's kappa score). An additional annotator resolved the mismatches between the first two annotators improving the consistency and complexity of the annotation process.
## Tasks (input, output and metrics)
Political Advertising Detection
**Input** (*tokens* column): sequence of tokens
**Output** (*tags* column): sequence of tags
**Domain**: politics
**Measurements**: F1-Score (seqeval)
**Example:**
Input: `['@k_mizera', '@rdrozd', 'Problemem', 'jest', 'mała', 'produkcja', 'dlatego', 'takie', 'ceny', '.', '10', '000', 'mikrofirm', 'zamknęło', 'się', 'w', 'poprzednim', 'tygodniu', 'w', 'obawie', 'przed', 'ZUS', 'a', 'wystarczyło', 'zlecić', 'tym', 'co', 'chcą', 'np', '.', 'szycie', 'masek', 'czy', 'drukowanie', 'przyłbic', 'to', 'nie', 'wymaga', 'super', 'sprzętu', ',', 'umiejętności', '.', 'nie', 'będzie', 'pit', ',', 'vat', 'i', 'zus', 'będą', 'bezrobotni']`
Input (translated by DeepL): `@k_mizera @rdrozd The problem is small production that's why such prices . 10,000 micro businesses closed down last week for fear of ZUS and all they had to do was outsource to those who want e.g . sewing masks or printing visors it doesn't require super equipment , skills . there will be no pit , vat and zus will be unemployed`
Output: `['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-WELFARE', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-WELFARE', 'O', 'B-WELFARE', 'O', 'B-WELFARE', 'O', 'B-WELFARE']`
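seqeval scores at the entity level rather than the token level: a predicted span counts only on an exact label+boundary match. A minimal pure-Python sketch of that idea for the IOB2 scheme used here (an approximation for illustration, not a drop-in replacement for seqeval):

```python
def spans(tags):
    """Decode IOB2 tags into a set of (label, start, end) spans."""
    out, start, label = set(), None, None
    for i, t in enumerate(tags + ["O"]):   # "O" sentinel flushes the last span
        if t.startswith("B-") or t == "O" or (label and t[2:] != label):
            if start is not None:
                out.add((label, start, i))
            start, label = (i, t[2:]) if t.startswith("B-") else (None, None)
    return out

def entity_f1(true_seqs, pred_seqs):
    """Micro-averaged entity-level F1 over parallel tag sequences."""
    tp = fp = fn = 0
    for t, p in zip(true_seqs, pred_seqs):
        ts, ps = spans(t), spans(p)
        tp += len(ts & ps)                 # exact span+label matches
        fp += len(ps - ts)
        fn += len(ts - ps)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

With the gold tags used as both reference and prediction, `entity_f1` returns 1.0; any boundary or label mismatch lowers precision or recall.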
## Data splits
| Subset | Cardinality |
|:-----------|--------------:|
| train | 1020 |
| test | 341 |
| validation | 340 |
## Class distribution
| Class | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| B-HEALHCARE | 0.237 | 0.226 | 0.233 |
| B-WELFARE | 0.210 | 0.232 | 0.183 |
| B-SOCIETY | 0.156 | 0.153 | 0.149 |
| B-POLITICAL_AND_LEGAL_SYSTEM | 0.137 | 0.143 | 0.149 |
| B-INFRASTRUCTURE_AND_ENVIROMENT | 0.110 | 0.104 | 0.133 |
| B-EDUCATION | 0.062 | 0.060 | 0.080 |
| B-FOREIGN_POLICY | 0.040 | 0.039 | 0.028 |
| B-IMMIGRATION | 0.028 | 0.017 | 0.018 |
| B-DEFENSE_AND_SECURITY | 0.020 | 0.025 | 0.028 |
## License
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Links
[HuggingFace](https://huggingface.co/datasets/laugustyniak/political-advertising-pl)
[Paper](https://aclanthology.org/2020.winlp-1.28/)
## Citing
> ACL WiNLP 2020 Paper
```bibtex
@inproceedings{augustyniak-etal-2020-political,
title = "Political Advertising Dataset: the use case of the Polish 2020 Presidential Elections",
author = "Augustyniak, Lukasz and Rajda, Krzysztof and Kajdanowicz, Tomasz and Bernaczyk, Micha{\l}",
booktitle = "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
month = jul,
year = "2020",
address = "Seattle, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.winlp-1.28",
pages = "110--114"
}
```
> Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Datasets and Benchmarks Track
```bibtex
@inproceedings{NEURIPS2022_890b206e,
author = {Augustyniak, Lukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Janz, Arkadiusz and Szyma\'{n}ski, Piotr and W\k{a}troba, Marcin and Morzy, Miko\l aj and Kajdanowicz, Tomasz and Piasecki, Maciej},
booktitle = {Advances in Neural Information Processing Systems},
editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
pages = {21805--21818},
publisher = {Curran Associates, Inc.},
title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/890b206ebb79e550f3988cb8db936f42-Paper-Datasets_and_Benchmarks.pdf},
volume = {35},
year = {2022}
}
``` |
false |
# Dataset Card for TGIF
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://raingo.github.io/TGIF-Release/
- **Repository:** https://github.com/raingo/TGIF-Release
- **Paper:** https://arxiv.org/abs/1604.02748
- **Point of Contact:** yli@cs.rochester.edu
### Dataset Summary
The Tumblr GIF (TGIF) dataset contains 100K animated GIFs and 120K sentences describing the visual content of the animated GIFs. The animated GIFs were collected from Tumblr, from randomly selected posts published between May and June of 2015. We provide the URLs of the animated GIFs in this release. The sentences were collected via crowdsourcing, with a carefully designed annotation interface that ensures a high-quality dataset. We provide one sentence per animated GIF for the training and validation splits, and three sentences per GIF for the test split. The dataset shall be used to evaluate animated GIF/video description techniques.
### Languages
The captions in the dataset are in English.
## Dataset Structure
### Data Fields
- `video_path`: `str` "https://31.media.tumblr.com/001a8b092b9752d260ffec73c0bc29cd/tumblr_ndotjhRiX51t8n92fo1_500.gif"
- `video_bytes`: `large_bytes` video file in bytes format
- `en_global_captions`: `list_str` List of English captions describing the entire video
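A minimal sketch of a schema check against the fields documented above (the record layout follows this card; the sample values are hypothetical, and actually decoding `video_bytes` would additionally require a GIF library such as Pillow):

```python
def validate_record(rec):
    """Check one TGIF record against the fields documented in this card."""
    assert isinstance(rec["video_path"], str) and rec["video_path"].startswith("http")
    assert isinstance(rec["video_bytes"], (bytes, bytearray))
    caps = rec["en_global_captions"]
    assert isinstance(caps, list) and caps and all(isinstance(c, str) for c in caps)
    return True
```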
### Data Splits
| |train |validation| test | Overall |
|-------------|------:|---------:|------:|------:|
|# of GIFs|80,000 |10,708 |11,360 |102,068 |
### Annotations
Quoting the [TGIF paper](https://arxiv.org/abs/1604.02748):
> We annotated animated GIFs with natural language descriptions using the crowdsourcing service CrowdFlower. We carefully designed our annotation task with various quality control mechanisms to ensure the sentences are both syntactically and semantically of high quality. A total of 931 workers participated in our annotation task. We allowed workers only from Australia, Canada, New Zealand, UK and USA in an effort to collect fluent descriptions from native English speakers. Figure 2 shows the instructions given to the workers. Each task showed 5 animated GIFs and asked the worker to describe each with one sentence. To promote language style diversity, each worker could rate no more than 800 images (0.7% of our corpus). We paid 0.02 USD per sentence; the entire crowdsourcing cost less than 4K USD. We provide details of our annotation task in the supplementary material.
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
This dataset is provided to be used for approved non-commercial research purposes. No personally identifying information is available in this dataset.
### Citation Information
```bibtex
@InProceedings{tgif-cvpr2016,
author = {Li, Yuncheng and Song, Yale and Cao, Liangliang and Tetreault, Joel and Goldberg, Larry and Jaimes, Alejandro and Luo, Jiebo},
title = "{TGIF: A New Dataset and Benchmark on Animated GIF Description}",
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}
```
### Contributions
Thanks to [@leot13](https://github.com/leot13) for adding this dataset. |
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Repository:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Paper:** [https://aclanthology.org/W19-6138/](https://aclanthology.org/W19-6138/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 490 KB
- **Size of the generated dataset:** 582 KB
- **Total amount of disk used:** 1072 KB
### Dataset Summary
This dataset is parallel text for Bornholmsk and Danish.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/).
### Supported Tasks and Leaderboards
*
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm, and Danish. bcp47: `da-bornholm` and `da-DK`
## Dataset Structure
### Data Instances
### Data Fields
`id`: the sentence ID, `int`
`da-bornholm`: the Bornholmsk text, `string`
`da`: the Danish translation, `string`
### Data Splits
* Train: 5785 sentence pairs
* Validation: 500 sentence pairs
* Test: 500 sentence pairs
## Dataset Creation
### Curation Rationale
To gather as much parallel Bornholmsk together as possible
### Source Data
#### Initial Data Collection and Normalization
From a translation of Kuhre's Sansager, a selection of colloquial resources, and a prototype Bornholmsk/Danish dictionary
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Native speakers of Bornholmsk, mostly aged 60+.
### Personal and Sensitive Information
Unknown, but low risk of presence, given the source material
## Considerations for Using the Data
### Social Impact of Dataset
The hope behind this data is to enable people to learn and use Bornholmsk
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
@inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
}
``` |
false |
# Dataset Card for "lmqg/qg_subjqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
Modified version of [SubjQA](https://github.com/megagonlabs/SubjQA) for question generation (QG) task.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "How is book?",
"paragraph": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"answer": "any book that takes me 3 months and 20 different tries to read is not worth 3 stars",
"sentence": "In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect.",
"paragraph_sentence": "I am giving "Gone Girl" 3 stars, but only begrudgingly. <hl> In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect. <hl> And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read. Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought. The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes. But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared. Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"paragraph_answer": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl>, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"sentence_answer": "In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl> , especially a book written by an author I already respect.",
"paragraph_id": "1b7cc3db9ec681edd253a41a2785b5a9",
"question_subj_level": 1,
"answer_subj_level": 1,
"domain": "books"
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
`paragraph_sentence` feature is for sentence-aware question generation.
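As a rough sketch of how a highlighted field like `paragraph_answer` can be produced (this mirrors the format of the example above, not necessarily the authors' exact preprocessing; `highlight_span` is an illustrative helper, not part of the dataset's codebase):

```python
HL = "<hl>"

def highlight_span(text: str, span: str) -> str:
    """Wrap the first occurrence of `span` in `text` with <hl> tokens.

    Raises ValueError if `span` is not found, mirroring str.index.
    """
    start = text.index(span)
    end = start + len(span)
    return f"{text[:start]}{HL} {span} {HL}{text[end:]}"

paragraph = "In my mind, any book that takes 20 tries to read is not worth 3 stars."
answer = "not worth 3 stars"
print(highlight_span(paragraph, answer))
# In my mind, any book that takes 20 tries to read is <hl> not worth 3 stars <hl>.
```

The same helper applied to the answer sentence instead of the full paragraph would yield a `sentence_answer`-style string.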
### Data Splits
| name |train|validation|test |
|-------------|----:|---------:|----:|
|default (all)|4437 | 659 |1489 |
| books |636 | 91 |190 |
| electronics |696 | 98 |237 |
| movies |723 | 100 |153 |
| grocery |686 | 100 |378 |
| restaurants |822 | 128 |135 |
| tripadvisor |874 | 142 |396 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false |
# Danish Gigaword Corpus, Reddit (filtered)
*Version*: 1.0.0
*License*: See the respective dataset
This dataset is a variant of the Danish Gigaword [3] which excludes the sections containing
tweets and the modified news section danavis20.
Twitter was excluded as it was a sample of a dataset which was available only to the authors.
DanAvis20 (or danavis) was excluded due to the preprocessing described in [3] (version 1 on
[arxiv](https://arxiv.org/pdf/2005.03521v1.pdf)), which includes shuffling of sentences,
pseudonymization of proper nouns, and the replacement of infrequent content words with
statistical cognates; this preprocessing could lead to sentences such as *"Der er skilsmissesager i
forsikringsselskabet"*.
Additionally this dataset includes the [reddit-da](https://huggingface.co/datasets/DDSC/reddit-da) dataset, which includes
1,908,887 documents. This dataset has had low-quality text removed using a series
of heuristic filters. Following filtering,
DAGW$_{DFM}$ is deduplicated to remove exact and near-duplicates. For more on data
cleaning, see the section on post-processing.
This dataset contained 1,310,789,818 tokens before filtering and 833,664,528 tokens (64%) after.
# Dataset information
This is a composite dataset consisting of Danish Gigaword and
[reddit-da](https://huggingface.co/datasets/DDSC/reddit-da). Thus it does not contain its own documentation. For more information, we recommend checking the documentation of the
respective datasets.
### Motivation:
**For what purpose was the dataset created? Who created the dataset? Who funded the
creation of the dataset?**
This dataset was created with the purpose of pre-training Danish language models. It was created by a team of
researchers at the Center for Humanities Computing Aarhus (CHCAA) using a codebase jointly
developed with partners from industry and academia, e.g. KMD, Ekstra Bladet, deepdivr,
and Bristol University. For more on collaborators on this project see
the [GitHub repository](https://github.com/centre-for-humanities-computing/danish-foundation-models).
## Processing
### Quality Filter:
DAGW$_{DFM}$ applies a filter akin to [2]. It keeps documents that:
- Contain at least 2 Danish stopwords. For the stopword list, we use the one used in
SpaCy v.3.1.4.
- Have a mean word length between 3 and 10.
- Have a token length between 50 and 100,000.
- Contain fewer than 5,000,000 characters.
- Among all words, at least 60% have at least one alphabetic character.
- Have a symbol-to-word ratio lower than 10% for hashtags and ellipsis.
- Have fewer than 90% of lines starting with a bullet point.
- Have fewer than 30% of lines ending with an ellipsis.
- Have a low degree of repetitious text:
- Fewer than 30% duplicate lines.
- Fewer than 30% duplicate paragraphs.
- Fewer than 30% of characters are contained within duplicate lines.
- The most frequent 2-, 3-, and 4-grams constitute less than 20%, 18%, and 16% of characters, respectively.
- For each document, duplicate 5- to 10-grams constitute less than 15%, 14%, 13%, 12%, 11%, and 10% of the characters, respectively.
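A minimal sketch of how a few of these heuristics might be implemented (illustrative only: the stopword set below is a tiny stand-in for the SpaCy v3.1.4 list, and the real pipeline applies all of the rules above, not just these three):

```python
# Tiny illustrative subset; the actual filter uses SpaCy's Danish stopword list.
DANISH_STOPWORDS = {"og", "i", "det", "at", "en", "den", "til", "er"}

def passes_filters(doc: str) -> bool:
    """Apply three of the quality heuristics described above."""
    words = doc.split()
    if not words:
        return False
    # Contain at least 2 Danish stopwords.
    if sum(w.lower() in DANISH_STOPWORDS for w in words) < 2:
        return False
    # Mean word length between 3 and 10.
    mean_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_len <= 10:
        return False
    # At least 60% of words contain an alphabetic character.
    alphabetic = sum(any(c.isalpha() for c in w) for w in words)
    return alphabetic / len(words) >= 0.6
```

Each rule is a cheap, per-document pass, so the full filter scales linearly with corpus size.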
### Deduplication
The deduplication removed all documents with a 13-gram similarity higher than 80%
following the MinHash algorithm [1] using 128 permutations. The MinHash algorithm is a
probabilistic data structure for approximating the Jaccard similarity between two sets.
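The deduplication step can be sketched as follows (a simplified illustration: production pipelines typically query an LSH index over the signatures rather than comparing pairs, and use a tuned hash family rather than seeded MD5):

```python
import hashlib

NUM_PERM = 128  # number of hash permutations, as in the description above

def shingles(tokens, n=13):
    """The set of 13-gram shingles of a token sequence."""
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def minhash(shingle_set, num_perm=NUM_PERM):
    """MinHash signature: per seeded hash function, the minimum hash
    over all shingles. Assumes a non-empty shingle set."""
    return [
        min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        )
        for seed in range(num_perm)
    ]

def approx_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated similarity exceeds 0.8 would then be treated as near-duplicates and removed.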
# References:
- [1] Broder, Andrei Z. "On the resemblance and containment of documents."
Proceedings. Compression and Complexity of SEQUENCES 1997
(Cat. No. 97TB100171). IEEE, 1997.
- [2] Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F.,
Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan,
T., Menick, J., Cassirer, A., Powell, R., Driessche, G. van den, Hendricks,
L. A., Rauh, M., Huang, P.-S., … Irving, G. (2021).
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
https://arxiv.org/abs/2112.11446v2
- [3] Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H.,
Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A.,
Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L.,
Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword corpus. Proceedings of the
23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421.
https://aclanthology.org/2021.nodalida-main.46
### Citation
If you wish to cite this work, please see the GitHub page for an up-to-date citation:
https://github.com/centre-for-humanities-computing/danish-foundation-models
|
false |
# Dataset Card for named_timexes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://aclanthology.org/R13-1015/](https://aclanthology.org/R13-1015/)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is a dataset annotated for _named temporal expression_ chunks.

The commonest temporal expressions typically contain date and time words, like April or hours. Research into recognising and interpreting these typical expressions is mature in many languages. However, there is a class of expressions that are less typical, very varied, and difficult to automatically interpret. These indicate dates and times, but are harder to detect because they often do not contain time words and are not used frequently enough to appear in conventional temporally-annotated corpora – for example *Michaelmas* or *Vasant Panchami*.

For more details see [Recognising and Interpreting Named Temporal Expressions](https://aclanthology.org/R13-1015.pdf)
### Supported Tasks and Leaderboards
* Task: Named Entity Recognition (temporal expressions)
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of timex chunk flags.
- `id`: a `string` feature.
- `tokens`: a `list` of `strings` .
- `ntimex_tags`: a `list` of class IDs (`int`s) for whether a token is out-of-timex or in a timex chunk.
```
0: O
1: T
```
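With this two-tag scheme, timex chunks can be recovered as maximal runs of `T`-tagged tokens. A small sketch (note the dataset has no B-/I- distinction, so directly adjacent timexes would merge):

```python
def timex_chunks(tokens, ntimex_tags):
    """Collect maximal runs of tokens tagged 1 (T) as timex chunks."""
    chunks, current = [], []
    for tok, tag in zip(tokens, ntimex_tags):
        if tag == 1:
            current.append(tok)
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:  # flush a chunk that runs to the end of the tweet
        chunks.append(" ".join(current))
    return chunks

tokens = ["see", "you", "at", "Michaelmas", "!"]
tags = [0, 0, 0, 1, 0]
print(timex_chunks(tokens, tags))  # ['Michaelmas']
```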
### Data Splits
Section|Token count
---|---:
train|87,050
test|30,010
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{brucato-etal-2013-recognising,
title = "Recognising and Interpreting Named Temporal Expressions",
author = "Brucato, Matteo and
Derczynski, Leon and
Llorens, Hector and
Bontcheva, Kalina and
Jensen, Christian S.",
booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing {RANLP} 2013",
month = sep,
year = "2013",
address = "Hissar, Bulgaria",
publisher = "INCOMA Ltd. Shoumen, BULGARIA",
url = "https://aclanthology.org/R13-1015",
pages = "113--121",
}
```
### Contributions
Added by the dataset author, [@leondz](https://github.com/leondz)
|
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2206.08727](https://arxiv.org/abs/2206.08727)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is a native-speaker-generated parallel corpus of Faroese and Danish
### Supported Tasks and Leaderboards
*
### Languages
* Danish
* Faroese
## Dataset Structure
### Data Instances
3995 parallel sentences
### Data Fields
* `id`: the sentence pair ID, `string`
* `origin`: the original sentence identifier text, `string`
* `fo`: the Faroese text, `string`
* `da`: the Danish text, `string`
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather a broad range of topics about the Faroes and the rest of the world, to enable a general-purpose Faroese-Danish translation system
### Source Data
#### Initial Data Collection and Normalization
* EUROparl Danish
* Dimmalætting, a Faroese newspaper
* Tatoeba Danish / Faroese
#### Who are the source language producers?
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Two Faroese native speakers, one male and one female, in their 20s, with master's degrees, living in Denmark
### Personal and Sensitive Information
None due to the sources used
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Faroese is curated by Leon Derczynski
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
``` |
false | # AutoTrain Dataset for project: code_summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project code_summarization.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "def read(self, table, columns, keyset, index=\"\", limit=0, partition=None):\n \"\"\"Perform a ``St[...]",
"target": "Perform a ``StreamingRead`` API request for rows in a table.\n\n :type table: str\n :para[...]"
},
{
"text": "def maf_somatic_variant_stats(variant, variant_metadata):\n \"\"\"\n Parse out the variant calling [...]",
"target": "Parse out the variant calling statistics for a given variant from a MAF file\n\n Assumes the MAF fo[...]"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 800 |
| valid | 200 |
|
true |
# Dataset Card for AraStance
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/latynt/ans](https://github.com/latynt/ans)
- **Paper:** [https://arxiv.org/abs/2005.10410](https://arxiv.org/abs/2005.10410)
- **Point of Contact:** [Jude Khouja](jude@latynt.com)
### Dataset Summary
The dataset is a collection of news titles in Arabic, along with paraphrased and corrupted titles. The stance prediction version is a 3-class classification task. The data contains three columns: s1, s2, stance.
### Languages
Arabic
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'id': '0',
's1': 'هجوم صاروخي يستهدف مطار في طرابلس ويجبر ليبيا على تغيير مسار الرحلات الجوية',
's2': 'هدوء الاشتباكات فى طرابلس',
'stance': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `s1`: a `string` expressing a claim/topic.
- `s2`: a `string` to be classified for its stance to the source.
- `stance`: a class label representing the stance the article expresses towards the claim. Full tagset with indices:
```
0: "disagree",
1: "agree",
2: "other",
```
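As a minimal illustration (plain Python, mirroring the tagset above), the integer stance labels can be decoded like this:

```python
# Mapping of stance class indices to names, as listed in the tagset above.
STANCE_LABELS = {0: "disagree", 1: "agree", 2: "other"}

def decode_stance(index: int) -> str:
    """Return the stance name for a class index."""
    return STANCE_LABELS[index]

# The example instance above has stance 0:
print(decode_stance(0))  # disagree
```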
### Data Splits
|name|instances|
|----|----:|
|train|2652|
|validation|755|
|test|379|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under the Apache License, Version 2.0.
### Citation Information
```
@inproceedings{,
title = "Stance Prediction and Claim Verification: An {A}rabic Perspective",
author = "Khouja, Jude",
booktitle = "Proceedings of the Third Workshop on Fact Extraction and {VER}ification ({FEVER})",
year = "2020",
address = "Seattle, USA",
publisher = "Association for Computational Linguistics",
}
```
### Contributions
Thanks to [mkonxd](https://github.com/mkonxd) for adding this dataset. |
true |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large movie-review dataset mixed with some reviews from a hotel-review dataset. The training and validation sets are drawn purely from the movie-review dataset, while the production set is mixed. Additional features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` recording when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
The text is mainly written in English.
## Dataset Structure
### Data Instances
#### default
An example of `training` looks as follows:
```json
{
'prediction_ts': 1650092416.0,
'age': 44,
'gender': 'female',
'context': 'movies',
'text': "An interesting premise, and Billy Drago is always good as a dangerous nut-bag (side note: I'd love to see Drago, Stephen McHattie and Lance Hendrikson in a flick together; talk about raging cheekbones!). The soundtrack wasn't terrible, either.<br /><br />But the acting--even that of such professionals as Drago and Debbie Rochon--was terrible, the directing worse (perhaps contributory to the former), the dialog chimp-like, and the camera work, barely tolerable. Still, it was the SETS that got a big 10 on my oy-vey scale. I don't know where this was filmed, but were I to hazard a guess, it would be either an open-air museum, or one of those re-enactment villages, where everything is just a bit too well-kept to do more than suggest the real Old West. Okay, so it was shot on a college kid's budget. That said, I could have forgiven one or two of the aforementioned faults. But taken all together, and being generous, I could not see giving it more than three stars.",
'label': 0
}
```
### Data Fields
#### default
The data fields are the same across all splits:
- `prediction_ts`: a `float` feature.
- `age`: an `int` feature.
- `gender`: a `string` feature.
- `context`: a `string` feature.
- `text`: a `string` feature.
- `label`: a `ClassLabel` feature with two possible values: `negative` (0) and `positive` (1).
### Data Splits
| name |training|validation|production |
|----------|-------:|---------:|----------:|
| default | 9916 | 2479 | 40079 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
false |
# rumi-jawi
Notebooks to gather the dataset at https://github.com/huseinzol05/malay-dataset/tree/master/normalization/rumi-jawi |
true |
# Dataset Card for AraStance
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/Tariq60/arastance](https://github.com/Tariq60/arastance)
- **Paper:** [https://arxiv.org/abs/2104.13559](https://arxiv.org/abs/2104.13559)
- **Point of Contact:** [Tariq Alhindi](tariq@cs.columbia.edu)
### Dataset Summary
The AraStance dataset contains true and false claims, where each claim is paired with one or more documents. Each claim–article pair has a stance label: agree, disagree, discuss, or unrelated.
### Languages
Arabic
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'id': '0',
'claim': 'تم رفع صورة السيسي في ملعب ليفربول',
'article': 'خطفت مكة محمد صلاح نجلة نجم ليفربول الإنجليزي الأنظار في ظهورها بملعب آنفيلد عقب مباراة والدها أمام برايتون في ختام الدوري الإنجليزي والتي انتهت بفوز الأول برباعية نظيفة. وأوضحت صحيفة "ميرور" البريطانية أن مكة محمد صلاح أضفت حالة من المرح في ملعب آنفيلد أثناء مداعبة الكرة بعد تتويج نجم منتخب مصر بجائزة هداف الدوري الإنجليزي. وأشارت إلى أن مكة أظهرت بعضًا من مهاراتها بمداعبة الكرة ونجحت في خطف قلوب مشجعي الريدز.',
'stance': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `claim`: a `string` feature expressing a claim/topic.
- `article`: a `string` feature to be classified for its stance toward the claim.
- `stance`: a class label representing the stance the article expresses towards the claim. Full tagset with indices:
```
0: "Agree",
1: "Disagree",
2: "Discuss",
3: "Unrelated",
```
### Data Splits
|name|instances|
|----|----:|
|train|2848|
|validation|569|
|test|646|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under the Creative Commons Attribution 4.0 International license (CC BY 4.0).
### Citation Information
```
@article{arastance,
url = {https://arxiv.org/abs/2104.13559},
author = {Alhindi, Tariq and Alabdulkarim, Amal and Alshehri, Ali and Abdul-Mageed, Muhammad and Nakov, Preslav},
title = {AraStance: A Multi-Country and Multi-Domain Dataset of Arabic Stance Detection for Fact Checking},
year = {2021},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Contributions
Thanks to [mkonxd](https://github.com/mkonxd) for adding this dataset. |
false | # AutoTrain Dataset for project: test-auto
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-auto.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "aen: {R.C. Sproul: Holy, Holy, Holy}{65%}{85%}{blessing for Isaiah. Here we find a prophet doing som[...]",
"target": "Instead of announcing God's curse upon the sinful nations who were in rebellion against Him, Isaiah [...]"
},
{
"text": "aen: {Data Connector for Salesforce}{52%}{100%}{to point out is that we do have a SOQL editor availa[...]",
"target": "This will allow you to customize the query further than is available in our graphic interface. Now t[...]"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 408041 |
| valid | 102011 |
|
false |
# Citations
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
```
|
false |
# Citations
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
``` |
false |
# Dataset for evaluation of (zero-shot) discourse marker prediction with language models
This is the Big-Bench version of our discourse marker prediction dataset, [Discovery](https://huggingface.co/datasets/discovery)
Design considerations:
<https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/discourse_marker_prediction>
GPT2 reaches 15% zero-shot accuracy on this multiple-choice task when candidates are scored by language-modeling perplexity. As a comparison, a fully supervised RoBERTa model, trained for one epoch with default hyperparameters on 10k examples per marker, reaches an accuracy of 30% over the 174 possible markers. This shows that the task is hard for GPT2 and that the model did not simply memorize the discourse markers, while higher accuracies are still achievable.
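Perplexity-based multiple-choice selection can be sketched in a few lines; the per-token log-probabilities below are made-up stand-ins for what a language model such as GPT2 would return for each completed sentence:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities (natural log)."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

def pick_marker(logprobs_per_candidate):
    """Choose the candidate marker whose completed sentence has the lowest perplexity."""
    return min(logprobs_per_candidate, key=lambda m: perplexity(logprobs_per_candidate[m]))

# Toy per-token log-probs for two candidate markers (illustrative numbers only):
scores = {
    "however": [-1.2, -0.8, -1.0],
    "therefore": [-2.5, -2.1, -1.9],
}
print(pick_marker(scores))  # however
```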
# Citation
```
@inproceedings{sileo-etal-2019-mining,
title = "Mining Discourse Markers for Unsupervised Sentence Representation Learning",
author = "Sileo, Damien and
Van De Cruys, Tim and
Pradel, Camille and
Muller, Philippe",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1351",
doi = "10.18653/v1/N19-1351",
pages = "3477--3486",
}
``` |
true | ### Dataset Summary
The dataset contains user reviews about medical institutions.
In total it contains 12,036 reviews. Each review is tagged with a <em>general</em> sentiment and with sentiments on 5 aspects: <em>quality, service, equipment, food, location</em>.
### Data Fields
Each sample contains the following fields:
- **review_id**;
- **content**: review text;
- **general**;
- **quality**;
- **service**;
- **equipment**;
- **food**;
- **location**.
### Python
```python
import pandas as pd
df = pd.read_json('medical_institutions_reviews.jsonl', lines=True)
df.sample(5)
```
|
true | # AutoTrain Dataset for project: Poem_Rawiy_detection
## Dataset Description
We used the APCD dataset cited hereafter for pretraining the model. The dataset has been cleaned and only the main text and the Qafiyah columns were kept:
```
@Article{Yousef2019LearningMetersArabicEnglish-arxiv,
author = {Yousef, Waleed A. and Ibrahime, Omar M. and Madbouly, Taha M. and Mahmoud,
Moustafa A.},
title = {Learning Meters of Arabic and English Poems With Recurrent Neural Networks: a Step
Forward for Language Understanding and Synthesis},
journal = {arXiv preprint arXiv:1905.05700},
year = 2019,
url = {https://github.com/hci-lab/LearningMetersPoems}
}
```
### Languages
The BCP-47 code for the dataset's language is ar.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0643\u0644\u0651\u064c \u064a\u064e\u0632\u0648\u0644\u064f \u0633\u064e\u0631\u064a\u0639\u0627\u064b \u0644\u0627 \u062b\u064e\u0628\u0627\u062a\u064e \u0644\u0647\u064f \u0641\u0643\u064f\u0646 \u0644\u0650\u0648\u064e\u0642\u062a\u0643\u064e \u064a\u0627 \u0645\u0650\u0633\u0643\u064a\u0646\u064f \u0645\u064f\u063a\u062a\u064e\u0646\u0650\u0645\u0627",
"target": 27
},
{
"text": "\u0648\u0642\u062f \u0623\u0628\u0631\u0632\u064e \u0627\u0644\u0631\u0651\u064f\u0645\u0651\u064e\u0627\u0646\u064f \u0644\u0644\u0637\u0631\u0641\u0650 \u063a\u064f\u0635\u0652\u0646\u064e\u0647\u064f \u0646\u0647\u0648\u062f\u0627\u064b \u062a\u064f\u0635\u0627\u0646\u064f \u0627\u0644\u0644\u0645\u0633\u064e \u0639\u0646 \u0643\u0641\u0651\u0650 \u0623\u062d\u0645\u0642\u0650",
"target": 23
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=35, names=['\u0621', '\u0624', '\u0627', '\u0628', '\u062a', '\u062b', '\u062c', '\u062d', '\u062e', '\u062f', '\u0630', '\u0631', '\u0632', '\u0633', '\u0634', '\u0635', '\u0636', '\u0637', '\u0637\u0646', '\u0638', '\u0639', '\u063a', '\u0641', '\u0642', '\u0643', '\u0644', '\u0644\u0627', '\u0645', '\u0646', '\u0647', '\u0647\u0640', '\u0647\u0646', '\u0648', '\u0649', '\u064a'], id=None)"
}
```
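As an illustration, the integer `target` can be decoded back to its rawiy (rhyme letter) using the `names` list of the `ClassLabel` shown above:

```python
# Class names from the ClassLabel feature above (35 rawiy classes).
RAWIY_NAMES = ['ء', 'ؤ', 'ا', 'ب', 'ت', 'ث', 'ج', 'ح', 'خ', 'د', 'ذ', 'ر',
               'ز', 'س', 'ش', 'ص', 'ض', 'ط', 'طن', 'ظ', 'ع', 'غ', 'ف', 'ق',
               'ك', 'ل', 'لا', 'م', 'ن', 'ه', 'هـ', 'هن', 'و', 'ى', 'ي']

def decode_rawiy(target: int) -> str:
    """Map an integer target back to its rawiy (rhyme letter)."""
    return RAWIY_NAMES[target]

# The two samples above have targets 27 and 23:
print(decode_rawiy(27), decode_rawiy(23))  # م ق
```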
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1347718 |
| valid | 336950 |
|
false |
# Dataset Card for "lmqg/qg_squadshifts"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
A modified version of [SQuADShifts](https://modestyachts.github.io/squadshifts-website/index.html) for the question generation (QG) task.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "has there ever been a legal challange?",
"paragraph": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church".",
"answer": "Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church",
"sentence": "Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church",
"paragraph_sentence": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. <hl> Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>",
"paragraph_answer": "The status of the Armenian Apostolic Church within the Republic of Armenia is defined in the country's constitution. Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." <hl> Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>",
"sentence_answer": "Article 8.1 of the Constitution of Armenia states: "The Republic of Armenia recognizes the exclusive historical mission of the Armenian Apostolic Holy Church as a national church, in the spiritual life, development of the national culture and preservation of the national identity of the people of Armenia." <hl> Among others, ethnographer Hranush Kharatyan has questioned the constitutionality of the phrase "national church". <hl>"
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, each providing different information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while `paragraph_sentence` is for sentence-aware question generation.
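As a minimal sketch (not necessarily the authors' exact preprocessing), such highlighted fields can be built by wrapping the target span with the `<hl>` token, assuming the span occurs verbatim in the text:

```python
HL = "<hl>"

def highlight(text: str, span: str) -> str:
    """Wrap the first occurrence of `span` in `text` with <hl> tokens."""
    i = text.find(span)
    if i < 0:
        raise ValueError("span not found in text")
    return f"{text[:i]}{HL} {span} {HL}{text[i + len(span):]}"

# Toy example (not from the dataset):
paragraph = "Beethoven was born in Bonn. He later moved to Vienna."
answer = "Bonn"
print(highlight(paragraph, answer))
# Beethoven was born in <hl> Bonn <hl>. He later moved to Vienna.
```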
### Data Splits
| name |train | valid | test |
|-------------|------:|------:|-----:|
|default (all)|9209|6283|18844|
| amazon |3295|1648|4942|
| new_wiki |2646|1323|3969|
| nyt |3355|1678|5032|
| reddit |3268|1634|4901|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false |
# Dataset Card for "lmqg/qg_esquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SQuAD-es](https://huggingface.co/datasets/squad_es) for the question generation (QG) task.
Since the original dataset only contains training and validation sets, we manually sampled a test set from the training
set, ensuring it has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Spanish (es)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'comedia musical',
'question': '¿Qué género de película protagonizó Beyonce con Cuba Gooding, Jr?',
'sentence': 'en la comedia musical ',
'paragraph': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr., en la comedia musical The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
'sentence_answer': 'en la <hl> comedia musical <hl> ',
'paragraph_answer': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr., en la <hl> comedia musical <hl> The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
'paragraph_sentence': 'En julio de 2002, Beyoncé continuó su carrera como actriz interpretando a Foxxy Cleopatra junto a Mike Myers en la película de comedia, Austin Powers in Goldmember, que pasó su primer fin de semana en la cima de la taquilla de Estados Unidos. Beyoncé lanzó "Work It Out" como el primer sencillo de su álbum de banda sonora que entró en el top ten en el Reino Unido, Noruega y Bélgica. En 2003, Knowles protagonizó junto a Cuba Gooding, Jr. , <hl> en la comedia musical <hl> The Fighting Temptations como Lilly, una madre soltera de quien el personaje de Gooding se enamora. Beyoncé lanzó "Fighting Temptation" como el primer sencillo de la banda sonora de la película, con Missy Elliott, MC Lyte y Free que también se utilizó para promocionar la película. Otra de las contribuciones de Beyoncé a la banda sonora, "Summertime", fue mejor en las listas de Estados Unidos.',
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, each providing different information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while `paragraph_sentence` is for sentence-aware question generation.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|77025| 10570 |10570|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false |
# Dataset Card for "lmqg/qg_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SberQuAD](https://huggingface.co/datasets/sberquad) for the question generation (QG) task.
Since the original dataset only contains training and validation sets, we manually sampled a test set from the training
set, ensuring it has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'известковыми выделениями сине-зелёных водорослей',
'question': 'чем представлены органические остатки?',
'sentence': 'Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных.',
'paragraph': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены...",
'sentence_answer': "Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ход...",
'paragraph_answer': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены <hl> известковыми выделениям...",
'paragraph_sentence': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. <hl> Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных. <hl> Кроме..."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, each providing different information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while `paragraph_sentence` is for sentence-aware question generation.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|45327| 5036 |23936|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false |
# Dataset Card for "lmqg/qg_dequad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [GermanQuAD](https://huggingface.co/datasets/deepset/germanquad) for the question generation (QG) task.
Since the original dataset only contains training and validation sets, we manually sampled a test set from the training
set, ensuring it has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
German (de)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'elektromagnetischer Linearführungen',
'question': 'Was kann den Verschleiß des seillosen Aufzuges minimieren?',
'sentence': 'Im Rahmen der Forschungen an dem seillosen Aufzug wird ebenfalls an der Entwicklung elektromagnetischer Linearführungen gearbeitet, um den Verschleiß der seillosen Aufzugsanlage bei hohem Fahrkomfort zu minimieren.',
'paragraph': "Aufzugsanlage\n\n=== Seilloser Aufzug ===\nAn der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei durch z...",
'sentence_answer': "Im Rahmen der Forschungen an dem seillosen Aufzug wird ebenfalls an der Entwicklung <hl> elektromagnetischer Linearführungen <hl> gearbeitet, um den Verschleiß der seillosen Aufzugsanlage bei...",
'paragraph_answer': "Aufzugsanlage === Seilloser Aufzug === An der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei durc...",
'paragraph_sentence': "Aufzugsanlage === Seilloser Aufzug === An der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei du..."
}
```
## Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is assumed to be used to train a question generation model,
each providing different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the
`paragraph_sentence` feature is for sentence-aware question generation.
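The highlighted variants can be reproduced from the raw fields; a minimal sketch (the `highlight` helper is our own illustration, not part of the dataset tooling):

```python
# Hypothetical helper: wrap the first occurrence of a span in the special
# <hl> token, as used by the paragraph_answer / sentence_answer features.
def highlight(text: str, span: str) -> str:
    """Wrap the first occurrence of `span` in <hl> tokens."""
    return text.replace(span, f"<hl> {span} <hl>", 1)

paragraph = "An der RWTH Aachen wurde ein seilloser Aufzug entwickelt."
answer = "seilloser Aufzug"
paragraph_answer = highlight(paragraph, answer)
```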
### Data Splits
|train|validation|test |
|----:|---------:|----:|
|9314 | 2204 | 2204|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
true |
# Dataset Card for Fewshot Table Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/JunShern/few-shot-pretraining
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The Fewshot Table dataset consists of tables that naturally occur on the web, that are formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. The dataset consists of approximately 413K tables that are extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/) 2015, which is released under the Apache-2.0 license. The WDC Web Table Corpora "contains vast amounts of HTML tables. [...] The Web Data Commons project extracts relational Web tables from the [Common Crawl](https://commoncrawl.org/), the largest and most up-to-date Web corpus that is currently available to the public."
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide: it contains thousands of tasks, each with only a few examples, whereas most current NLP datasets are deep, i.e. tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g. multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning or pretraining on it.
### Languages
English
## Dataset Structure
### Data Instances
Each table, i.e. task, is represented as a json-lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target that represents an individual column of the same row. Each task contains several such examples, which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
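For illustration, a single example might look like the following (all field values are invented, not taken from the corpus):

```python
# Hypothetical example record; values are invented for illustration only.
example = {
    "task": "web_table_0001",             # task identifier
    "input": "Name: Albert Einstein Born: 1879",
    "options": ["Physicist", "Chemist"],  # classes for multiple-choice tasks
    "output": "Physicist",                # target column element
    "pageTitle": "Famous scientists",
    "outputColName": "Profession",
    "url": "https://example.com/scientists",
    "wdcFile": "...",                     # source file within the WDC corpus
}
```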
### Data Fields
- 'task': the task identifier.
- 'input': column elements of a specific row in the table.
- 'options': for multiple-choice classification, the options to choose from.
- 'output': the target column element of the same row as the input.
- 'pageTitle': the title of the page containing the table.
- 'outputColName': the name of the output column (potentially to be removed from the data).
- 'url': the URL of the website containing the table.
- 'wdcFile': the source file in the WDC corpus (potentially to be removed from the data).
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
How do we convert tables to few-shot tasks?
Unlike unstructured text, structured data in the form of tables lends itself easily to the few-shot task format. Given a table where each row is an instance of a similar class and the columns describe the attributes of each instance, we can turn each row into a task example to predict one attribute given the others. When the table has more than one row, we instantly have multiple examples of this task by using each row as a single example, and thus each table becomes a few-shot dataset for a particular task.
The few-shot setting here is significant: tables often do not come with clear instructions for each field, so tasks may be underspecified if prompted in a zero-shot manner, but the intended task becomes clearer when examples are provided. This makes a good two-way match: the few-shot format is a perfect setup for table learning, and tables provide a natural dataset for few-shot training.
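The row-to-example conversion described above can be sketched as follows (our own illustration, not the project's actual code): each column in turn becomes the output, and the remaining columns are verbalized into the input.

```python
# Sketch: turn a small table into one task per column, where each row
# becomes a single example predicting that column from the others.
def table_to_tasks(header, rows):
    tasks = {}
    for out_idx, out_col in enumerate(header):
        examples = []
        for row in rows:
            # Verbalize all non-output columns into a single input string.
            inp = " ".join(
                f"{h}: {v}"
                for i, (h, v) in enumerate(zip(header, row))
                if i != out_idx
            )
            examples.append({"input": inp, "output": row[out_idx]})
        tasks[out_col] = examples
    return tasks

header = ["City", "Country"]
rows = [["Paris", "France"], ["Rome", "Italy"]]
tasks = table_to_tasks(header, rows)
```

A two-column table thus yields two tasks, one per predictable column, each with as many examples as the table has rows.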
### Source Data
#### Initial Data Collection and Normalization
We downloaded the [WDC Web Table Corpora](http://webdatacommons.org/webtables/) 2015 dataset and focus on relational tables. In the following, we describe the steps we executed to filter the WDC Web Table Corpora and create our task dataset. Given a set of relation tables, we apply defined preprocessing steps to ensure all the tables can be handled consistently. Each table can then spawn one or more tasks using a simple predict-one-column approach. Finally, all tasks produced in this manner undergo simple rule-based checks, i.e. any candidates that do not meet some defined minimum requirements for a well-formed task are rejected. Following this approach, we start with 50 million tables in the initial corpus and produce a longlist of 400K tasks.
1. We select only relational tables.
2. We make sure all tables are vertical (horizontal tables are simply transposed) and remove duplicate rows.
3. To create tasks, we use what the literature refers to as verbalizers. For example, a table with 3 columns may be cast as three different tasks: predict column A given B and C, predict column B given A and C, and predict column C given A and B.
4. Rule-based-checks to reject tables:
a) We reject 25M tables that have fewer than 6 rows (so we can do at least k=5-shot learning)
b) We reject tables with > 20% non-English text as measured by [SpaCy](https://spacy.io/)
c) Given 2 Million passing tables we consider each table column as a potential output column, and concatenate all other columns to form the input (which produces 5.6 M candidate tasks)
5. Rule-based-checks to reject tasks
a) We reject a task if it has fewer than 6 rows. Note that tasks may have fewer rows than their origin tables, since we remove rows where the output column is empty.
b) We reject tasks if any input maps to multiple outputs.
c) We reject a task if it has fewer than 2 output classes.
d) We reject a task if the output column alone has >20% non-English text.
e) We reject a task if the classes are heavily imbalanced.
6. Lastly we apply domain-level filtering. Initial iterations of our dataset found a significant imbalance in terms of the website of origin for our generated tasks. In particular, we found that the most-frequent domain in the WDC corpus, Cappex.com, was emphasized by our export criteria such that this website alone represented 41% of our total tasks. Since we want our dataset to represent the diversity of all the tables available on the web, we apply a hard fix for this imbalance by limiting the number of tasks per domain. Starting from the initial corpus of 50M tables from 323,160 web domains, our resulting longlist comprises more than X for a total of 413,350 tasks.
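The task-level checks in step 5 can be sketched as a single filter function (thresholds such as the imbalance cutoff are our own guesses, not taken from the paper):

```python
def passes_checks(examples, min_rows=6, min_classes=2, max_class_frac=0.8):
    """Illustrative task-level filters; `max_class_frac` is an assumed threshold."""
    if len(examples) < min_rows:          # 5a: at least 6 rows
        return False
    outputs = [ex["output"] for ex in examples]
    if len(set(outputs)) < min_classes:   # 5c: at least 2 output classes
        return False
    seen = {}
    for ex in examples:                   # 5b: no input maps to multiple outputs
        if seen.setdefault(ex["input"], ex["output"]) != ex["output"]:
            return False
    top = max(outputs.count(o) for o in set(outputs))
    if top / len(outputs) > max_class_frac:  # 5e: reject heavy class imbalance
        return False
    return True
```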
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
No annotation process was used.
#### Who are the annotators?
-
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g. data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that are better at few-shot learning and have higher few-shot performance by fine-tuning few-shot tasks extracted from tables.
While tables have a similar structure to few-shot tasks and we do see an improved performance on few-shot tasks in our paper, we want to make clear that finetuning on tables also has its risks. First of all, since the tables are extracted from the web, they may contain user identities or otherwise sensitive information which a model might reveal at inference, or which could influence the learning process of a model in a negative way. Second, since tables are very diverse in nature, the model also trains on low-quality data or data with an unusual structure. While it is interesting that training on such data improves few-shot performance on downstream tasks, this could also imply that the model learns concepts that are very dissimilar to human concepts that would be useful for a certain downstream task. In other words, it is possible that the model learns weird things that are helpful on the evaluated downstream tasks, but might lead to bad out-of-distribution behavior.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content for toxic content.
This implies that a model trained on our dataset will reinforce harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Apache 2.0
### Citation Information
[Needs More Information] |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
from datasets import load_dataset  # assumes the BeIR mirrors on the Hugging Face Hub
corpus = load_dataset("BeIR/scifact", "corpus")  # e.g. the SciFact corpus
```
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models; performance is typically reported with nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
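These files can be parsed with the standard library alone; a minimal sketch, with inline strings standing in for the actual `corpus.jsonl` and `qrels.tsv` files:

```python
import json

# Inline strings stand in for the corpus.jsonl and qrels.tsv file contents.
corpus_lines = [
    '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born..."}',
]
qrels_lines = ["query-id\tcorpus-id\tscore", "q1\tdoc1\t1"]

# Corpus: one JSON document per line, keyed by _id.
corpus = {d["_id"]: d for d in map(json.loads, corpus_lines)}

# Qrels: tab-separated query-id / corpus-id / score, with a header row.
qrels = {}
for line in qrels_lines[1:]:  # skip the header row
    qid, did, score = line.split("\t")
    qrels.setdefault(qid, {})[did] = int(score)
```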
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
from datasets import load_dataset  # assumes the BeIR mirrors on the Hugging Face Hub
corpus = load_dataset("BeIR/scifact", "corpus")  # e.g. the SciFact corpus
```
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models; performance is typically reported with nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
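These files can be parsed with the standard library alone; a minimal sketch, with inline strings standing in for the actual `corpus.jsonl` and `qrels.tsv` files:

```python
import json

# Inline strings stand in for the corpus.jsonl and qrels.tsv file contents.
corpus_lines = [
    '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born..."}',
]
qrels_lines = ["query-id\tcorpus-id\tscore", "q1\tdoc1\t1"]

# Corpus: one JSON document per line, keyed by _id.
corpus = {d["_id"]: d for d in map(json.loads, corpus_lines)}

# Qrels: tab-separated query-id / corpus-id / score, with a header row.
qrels = {}
for line in qrels_lines[1:]:  # skip the header row
    qid, did, score = line.split("\t")
    qrels.setdefault(qid, {})[did] = int(score)
```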
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
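Each archive in the table is accompanied by an md5 checksum, which can be verified before unzipping to catch truncated or corrupted downloads. A minimal sketch (the file path and the commented assertion are illustrative):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large archives need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: the checksum listed for scifact in the table above.
expected = "5f7d1de60b170fc8027bb7898e2efca1"
# assert md5_of("scifact.zip") == expected, "corrupted download"
```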
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
true |
# Dataset Card for "Non-Parallel MultiEURLEX (incl. Translations)"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Repository:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Paper:** TBA
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Documents**
MultiEURLEX of Chalkidis et al. (2021) comprises 65k EU laws in 23 official EU languages. Each EU law has been annotated with EUROVOC concepts (labels) by the Publications Office of the EU. Each EUROVOC label ID is associated with a *label descriptor*, e.g., [60, agri-foodstuffs], [6006, plant product], [1115, fruit]. The descriptors are also available in the 23 languages. Chalkidis et al. (2019) published a monolingual (English) version of this dataset, called EUR-LEX, comprising 57k EU laws with the originally assigned gold labels.
In this new version, dubbed "Non-Parallel MultiEURLEX (incl. Translations)", MultiEURLEX comprises non-parallel documents across 5 languages (English, German, French, Greek, and Slovak), i.e., 11,000 different documents per language, along with translations from English into the other four languages.
### Supported Tasks and Leaderboards
MultiEURLEX can be used for legal topic classification, a multi-label classification task where legal documents need to be assigned concepts (in our case, from EUROVOC) reflecting their topics. Unlike EUR-LEX, however, MultiEURLEX supports labels from three different granularities (EUROVOC levels). More importantly, apart from monolingual (*one-to-one*) experiments, it can be used to study cross-lingual transfer scenarios, including *one-to-many* (systems trained in one language and used in other languages with no training data), and *many-to-one* or *many-to-many* (systems jointly trained in multiple languages and used in one or more other languages).
The dataset is not yet part of an established benchmark.
### Languages
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, unless their languages are already official. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages, except Irish, for resource-related reasons (read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, among others, spoken by approx. 40 million people, but these additional languages are not official in EU terms, and EU laws are not translated into them.
This version of MultiEURLEX covers 5 EU languages (English, German, French, Greek, and Slovak). It also includes machine-translated versions of the documents using the EasyNMT framework (https://github.com/UKPLab/EasyNMT) utilizing the many-to-many M2M_100_418M model of Fan et al. (2020) for el-to-en and el-to-de pairs and the OPUS-MT (Tiedemann et al., 2020) models for the rest.
## Dataset Structure
### Data Instances
**Multilingual use of the dataset**
When the dataset is used in a multilingual setting, select the 'all_languages' flag:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'all_languages')
```
```json
{
"celex_id": "31979D0509",
"text": {"en": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close 
cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"en2fr": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"en2de": "...",
"en2el": "...",
"en2sk": "..."
},
"labels": [
1,
13,
47
]
}
```
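In the multilingual configuration, `text` is a dict keyed by language code (plus `src2trg` keys for machine translations), so the language is selected at access time. A minimal sketch with a stub sample mirroring the shape shown above:

```python
# Stub sample mirroring the structure of the 'all_languages' configuration.
sample = {
    "celex_id": "31979D0509",
    "text": {
        "en": "COUNCIL DECISION of 24 May 1979 ...",
        "en2fr": "DU CONSEIL du 24 mai 1979 ...",
    },
    "labels": [1, 13, 47],
}

english = sample["text"]["en"]       # original English text
french_mt = sample["text"]["en2fr"]  # machine translation en -> fr
print(english.startswith("COUNCIL"))  # True
```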
**Monolingual use of the dataset**
When the dataset is used in a monolingual setting, select the ISO code of one of the 5 supported languages, or a supported translation pair in the form src2trg, where src and trg are ISO language codes (e.g., en2fr for English translated to French). For example:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en2fr')
```
```json
{
"celex_id": "31979D0509",
"text": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"labels": [
1,
13,
47
]
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (dict[**str**]) A dictionary with the 23 languages as keys and the full content of each document as values.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (**str**) The full content of the document in the selected language.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of the EUROVOC concepts, similar to [Chalkidis et al. (2020)](https://aclanthology.org/2020.emnlp-main.607/), please download the relevant JSON file [here](https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json).
Then you may load it and use it:
```python
import json
from datasets import load_dataset

# Load the English part of the dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en', split='train')

# Load (label_id, descriptor) mapping
with open('./eurovoc_descriptors.json') as json_file:
    eurovoc_concepts = json.load(json_file)

# Get feature map info
classlabel = dataset.features["labels"].feature

# Retrieve IDs and descriptors from dataset
for sample in dataset:
    print(f'DOCUMENT: {sample["celex_id"]}')
    # DOCUMENT: 32006D0213
    for label_id in sample['labels']:
        print(f'LABEL: id:{label_id}, eurovoc_id: {classlabel.int2str(label_id)}, '
              f'eurovoc_desc:{eurovoc_concepts[classlabel.int2str(label_id)]}')
        # LABEL: id: 1, eurovoc_id: '100160', eurovoc_desc: 'industry'
```
### Data Splits
<table>
<tr><td> Language </td> <td> ISO code </td> <td> Member Countries where official </td> <td> EU Speakers [1] </td> <td> Number of Documents [2] </td> </tr>
<tr><td> English </td> <td> <b>en</b> </td> <td> United Kingdom (1973-2020), Ireland (1973), Malta (2004) </td> <td> 13/51% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> Germany (1958), Belgium (1958), Luxembourg (1958) </td> <td> 16/32% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> France (1958), Belgium(1958), Luxembourg (1958) </td> <td> 12/26% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> Greece (1981), Cyprus (2008) </td> <td> 3/4% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> Slovakia (2004) </td> <td> 1/1% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
</table>
[1] Native and Total EU speakers percentage (%) \
[2] Training / Development / Test Splits
## Dataset Creation
### Curation Rationale
The original dataset was curated by Chalkidis et al. (2021).\
The new version of the dataset was curated by Xenouleas et al. (2022).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the EUR-LEX portal (https://eur-lex.europa.eu) in unprocessed formats (HTML, XML, RDF). The documents were downloaded from the EUR-LEX portal in HTML. The relevant EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
Chalkidis et al. (2021) stripped HTML mark-up to provide the documents in plain text format and inferred the labels for EUROVOC levels 1--3, by backtracking the EUROVOC hierarchy branches, from the originally assigned labels to their ancestors in levels 1--3, respectively.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
Chalkidis et al. (2021) augmented the annotation with three alternative sets of labels per document, replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively.
Thus, Chalkidis et al. (2021) provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; hence many documents would be mislabeled if level 3 were discarded.
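The level-1/2/3 label sets are derived mechanically: each originally assigned concept is replaced by its ancestor at the target level. A sketch of that backtracking with a hypothetical child-to-parent map (the concept ids echo the descriptor example earlier; the real EUROVOC hierarchy is not reproduced here):

```python
# Hypothetical fragment of a concept -> parent map.
# A level-1 (top-level) concept has no parent (None).
parent = {
    "1115": "6006",  # fruit         -> plant product   (level 3 -> 2)
    "6006": "60",    # plant product -> agri-foodstuffs (level 2 -> 1)
    "60": None,      # agri-foodstuffs is a top-level concept
}

def ancestor_at_level(concept, target_level):
    """Walk up the hierarchy and return the ancestor at target_level."""
    # Build the chain child -> ... -> root.
    chain = [concept]
    while parent.get(chain[-1]) is not None:
        chain.append(parent[chain[-1]])
    # The root sits at level 1, so a concept's level is its distance
    # from the end of the chain plus one.
    level_of = {c: len(chain) - i for i, c in enumerate(chain)}
    for c in chain:
        if level_of[c] == target_level:
            return c
    return None

print(ancestor_at_level("1115", 1))  # '60'
```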
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset contains publicly available EU laws that do not include personal or sensitive information with the exception of trivial information presented by consent, e.g., the names of the current presidents of the European Parliament and European Council, and other administration bodies.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Xenouleas et al. (2022)
### Licensing Information
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Stratos Xenouleas, Alexia Tsoukara, Giannis Panagiotakis, Ilias Chalkidis, and Ion Androutsopoulos.*
*Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification.*
*Proceedings of the 12th Hellenic Conference on Artificial Intelligence (SETN 2022). Corfu, Greece. 2022.*
```
@InProceedings{xenouleas-etal-2022-realistic-multieurlex,
author = {Xenouleas, Stratos
and Tsoukara, Alexia
and Panagiotakis, Giannis
and Chalkidis, Ilias
and Androutsopoulos, Ion},
title = {Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification},
booktitle = {Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022)},
year = {2022},
publisher = {Association for Computing Machinery},
location = {Corfu, Greece},
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
true | # AutoTrain Dataset for project: quality-customer-reviews
## Dataset Description
This dataset has been automatically processed by AutoTrain for project quality-customer-reviews.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " Love this truck, I think it is light years better than the competition. I have driven or owned all [...]",
"target": 1
},
{
"text": " I purchased this to haul our 4 horse trailer since the standard iterations of the domestic vehicles[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=5, names=['good', 'great', 'ok', 'poor', 'terrible'], id=None)"
}
```
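The `target` integers index into the alphabetically sorted `names` list of the `ClassLabel` feature, so `"target": 1` in the sample above corresponds to `'great'`. A minimal decoding sketch using a plain list (avoiding a dependency on the `datasets` library):

```python
# Names as declared in the ClassLabel feature above (order matters).
names = ["good", "great", "ok", "poor", "terrible"]

samples = [
    {"text": "Love this truck ...", "target": 1},
    {"text": "I purchased this to haul our 4 horse trailer ...", "target": 0},
]

for s in samples:
    print(names[s["target"]])  # 'great', then 'good'
```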
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 9166 |
| valid | 2295 |
|
true | # AutoTrain Dataset for project: qa-team-car-review-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project qa-team-car-review-project.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " ",
"target": 1
},
{
"text": " Mazda truck costs less than the sister look-a-like Ford; Mazda is a \"A\" series of the Ford Ranger, [...]",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['great', 'ok', 'poor'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 19731 |
| valid | 4935 |
|
true | # AutoTrain Dataset for project: car-review-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project car-review-project. It contains consumer car ratings and reviews from [Edmunds website](https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews)
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " ",
"target": 1
},
{
"text": " Mazda truck costs less than the sister look-a-like Ford; Mazda is a \"A\" series of the Ford Ranger, [...]",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['great', 'ok', 'poor'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 19731 |
| valid | 4935 |
|
true | |
false |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_el](https://github.com/GaaH/frwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
This dataset contains articles from the French Wikipedia.
It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities.
The dataset `frwiki` contains the sentences of each Wikipedia page.
The dataset `entities` contains a description of each Wikipedia page.
### Languages
- French
## Dataset Structure
### frwiki
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"sentences" : [
{
"text": "text of the current sentence",
"ner": ["list", "of", "ner", "labels"],
"mention_mappings": [
(start_of_first_mention, end_of_first_mention),
(start_of_second_mention, end_of_second_mention)
],
"el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"],
"el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention],
"el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"]
}
],
"words": ["words", "in", "the", "sentence"],
"ner": ["ner", "labels", "of", "each", "words"],
"el": ["el", "labels", "of", "each", "words"]
}
```
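The `mention_mappings` offsets index into the sentence's `text`, so the surface form of each linked mention can be recovered by slicing. A minimal sketch with a made-up sentence (field names as in the schema above; the offsets are assumed to be character-based and end-exclusive, which should be checked against the actual data):

```python
# Toy sentence in the shape of the `frwiki` schema.
sentence = {
    "text": "Paris est la capitale de la France.",
    "mention_mappings": [(0, 5), (28, 34)],
    "el_wikipedia_title": ["Paris", "France"],
}

# Slice each (start, end) span out of the sentence text.
mentions = [
    sentence["text"][start:end]
    for start, end in sentence["mention_mappings"]
]
print(mentions)  # ['Paris', 'France']
```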
### entities
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"description": "Description of the entity"
}
``` |