false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
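For example, a dataset can be downloaded and loaded with the [beir](https://github.com/UKPLab/beir) Python package. This is a minimal sketch using the SciFact download link from the [Data Splits](#data-splits) table below; the output directory `datasets` is an arbitrary choice:
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets (SciFact as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text},
# qrels: {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```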
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models, which are compared using retrieval metrics such as nDCG@10 and Recall@100 on each dataset.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header row. For example: `q1 doc1 1`
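Under these conventions, the three files can be read into plain Python dictionaries with a few lines of code (a minimal sketch; the file paths are placeholders for a concrete dataset). This produces exactly the `corpus`, `queries` and `qrels` dictionaries shown in the next section:
```python
import csv
import json

corpus, queries, qrels = {}, {}, {}

with open("corpus.jsonl", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

with open("queries.jsonl", encoding="utf-8") as f:
    for line in f:
        query = json.loads(line)
        queries[query["_id"]] = query["text"]

with open("qrels/test.tsv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # the first row is a header
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```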
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
#### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
#### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
#### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id.
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website | BEIR-Name | Splits | Queries | Corpus | Avg. Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
true |
# Dataset Card for Adult_Content_Detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
850 article descriptions classified into two categories: Adult and Non_Adult.
## Languages
The text in the dataset is in English.
## Dataset Structure
The dataset consists of two columns: Description and Category.
The Description column contains the overview of the article, and the Category column contains the class each article belongs to.
## Source Data
The dataset was scraped from different platforms.
|
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
```
id - article id
articleBody - article main content
description - short version of the article, description of the article
headline - headline of the article
title - title of the article
```
|
false |
# IndoLVCSR
TITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) is collected and proposed by the authors of "A Large Vocabulary Continuous Speech Recognition System for Indonesian Language". The text transcriptions are obtained from newspaper and magazine articles. The speech is recorded from 20 speakers (11 males and 9 females).
# How to cite
If you use this dataset, please cite this paper:
```
@inproceedings{lestari2006titmlidn,
title={A large vocabulary continuous speech recognition system for Indonesian language},
author={Lestari, Dessi Puji and Iwano, Koji and Furui, Sadaoki},
booktitle={15th Indonesian Scientific Conference in Japan Proceedings},
pages={17--22},
year={2006}
}
``` |
false |
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set to the maximum number of documents seen across examples in this dataset, in this case `k=25` (see the sketch after this list)
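A minimal PyTerrier sketch of this pipeline follows. The MS^2 field names (`background`, `review_id`, `pmid`, `title`, `abstract`), the index path and the query cleanup are assumptions made for illustration only:
```python
import os
import re

import pandas as pd
import pyterrier as pt
from datasets import load_dataset

if not pt.started():
    pt.init()

ms2 = load_dataset("allenai/mslr2022", "ms2")

def iter_docs():
    """Yield each unique document (title + abstract concatenated) across all splits."""
    seen = set()
    for split in ("train", "validation", "test"):
        for example in ms2[split]:
            for doc_id, title, abstract in zip(example["pmid"], example["title"], example["abstract"]):
                if doc_id not in seen:
                    seen.add(doc_id)
                    yield {"docno": str(doc_id), "text": f"{title} {abstract}"}

index_ref = pt.IterDictIndexer(os.path.abspath("./ms2_bm25_index")).index(iter_docs())

# BM25 with default settings; k = 25 documents retrieved per query ("max" strategy).
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25", num_results=25)

queries = pd.DataFrame(
    {
        "qid": [ex["review_id"] for ex in ms2["validation"]],
        # Strip punctuation that the Terrier query parser does not accept.
        "query": [re.sub(r"[^\w\s]", " ", ex["background"]) for ex in ms2["validation"]],
    }
)
run = bm25.transform(queries)  # columns include qid, docno, rank, score
```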
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.1746 | 0.2636 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.378 | 0.1827 | 0.1559 | 0.2188 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1672 | 0.2208 | |
false |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
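For illustration, the two fields can be inspected by loading one language subset with the `datasets` library. This is only a sketch: it assumes the per-language folders in this repository can be selected via `data_dir` (here `en`); adjust it to the layout you actually find.
```python
from datasets import load_dataset

# Stream to avoid downloading the full (large) subset for a quick look.
xp3_en = load_dataset("bigscience/xP3", data_dir="en", split="train", streaming=True)

example = next(iter(xp3_en))
print(example["inputs"])   # natural language input fed to the model
print(example["targets"])  # natural language target the model has to generate
```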
### Data Splits
The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense Disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
false | # Dataset Card for NER_Quechua_IIC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** The original datasets come from the paper [Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua](https://aclanthology.org/2022.deeplo-1.1.pdf) by Rodolfo Zevallos et al. (2022).
- **Point of Contact:** [Rodolfo Zevallos](mailto:rodolfojoel.zevallos@upf.edu)
### Dataset Summary
NER_Quechua_IIC is a named entity recognition dataset consisting of dictionary texts provided by the Peruvian Ministry of Education, annotated with LOC (location), PER (person) and ORG (organization) tags in the IOB2 format.
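In the IOB2 scheme, `B-` marks the beginning of an entity span, `I-` marks tokens inside it, and `O` marks tokens outside any entity. The toy example below is invented for clarity and is not taken from the corpus:
```python
# Hypothetical example (not from the corpus) showing the IOB2 annotation scheme.
tokens = ["Rodolfo", "Zevallos", "works", "in", "Lima"]
ner_tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
```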
### Supported Tasks and Leaderboards
- `named-entity-recognition`: The dataset can be used to train a model for named entity recognition in Quechua languages.
|
false |
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by [Allison Parrish](https://www.decontextualize.com/). Please visit Allison's fantastic [accompanying GitHub repository](https://github.com/aparrish/gutenberg-poetry-corpus) for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding `gutenberg_id` (1191 unique values) from Project Gutenberg.
```python
Dataset({
features: ['line', 'gutenberg_id'],
num_rows: 3085117
})
```
A row of data looks like this:
```python
{'line': 'And retreated, baffled, beaten,', 'gutenberg_id': 19}
```
|
false | Datasets for the HEARTHSTONE card game, taken from [this source](https://github.com/deepmind/card2code/tree/master/third_party/hearthstone).
|
true | # Dataset Card for KLAID
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Other Inquiries](#other_inquiries)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://klaid.net](https://klaid.net)
- **Leaderboard:** [https://klaid.net](https://klaid.net)
- **Point of Contact:** [klaid@lawcompany.co.kr](mailto:klaid@lawcompany.co.kr)
### Dataset Summary
Korean Legal Artificial Intelligence Datasets (KLAID) is a dataset for the development of Korean legal artificial intelligence technology. At this time we offer one task: legal judgment prediction (LJP).
### Supported Tasks and Leaderboards
Legal Judgment Prediction(LJP)
### Languages
`korean`
### How to use
```python
from datasets import load_dataset
# legal judgment prediction
dataset = load_dataset("lawcompany/KLAID", 'ljp')
```
## Dataset Structure
### Data Instances
#### ljp
An example of 'train' looks as follows.
```
{
'fact': '피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.',
'laws_service': '도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항',
'laws_service_id': 7
}
```
#### Other References
You can refer to each label's 'laws service content' [here](https://storage.googleapis.com/klaid/ljp/dataset/ljp_laws_service_content.json).
The 'laws service content' is the statute ([source](https://www.law.go.kr/)) corresponding to each label.
### Data Fields
#### ljp
+ "fact": a `string` feature
+ "laws_service": a `string` feature
+ "laws_service_id": a classification label, with 177 legal judgment values
[More Information Needed](https://klaid.net/tasks-1)
### Data Splits
#### ljp
+ train: 161,192
## Dataset Creation
### Curation Rationale
The legal domain is arguably one of the fields that most requires expert knowledge to comprehend. Progress in natural language processing depends on many ingredients, and here we focus on the data requirements: as a gold standard is necessary for training and testing neural models, we hope that our dataset release will help advance natural language processing in the legal domain, especially for the Korean legal system.
### Source Data
These are datasets based on Korean legal case data.
### Personal and Sensitive Information
Due to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may arise from personal and sensitive information, we de-identified the legal cases.
## Considerations for Using the Data
### Other Known Limitations
We plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.
## Additional Information
### Other Inquiries
[klaid@lawcompany.co.kr](mailto:klaid@lawcompany.co.kr)
### Licensing Information
Copyright 2022-present [Law&Company Co. Ltd.](https://career.lawcompany.co.kr/)
Licensed under the CC-BY-NC-ND-4.0
### Contributions
[More Information Needed] |
false |
# Dataset Card for althingi_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data](#data)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Other Known Limitations](#other-known-limitations)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Althingi Parliamentary Speech
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2021S01)
- **Paper:** [Building an ASR corpus using Althingi’s Parliamentary Speeches](https://www.researchgate.net/profile/Jon-Gudnason/publication/319185185_Building_an_ASR_Corpus_Using_Althingi's_Parliamentary_Speeches/links/5d1dbdd3a6fdcc2462bdda0f/Building-an-ASR-Corpus-Using-Althingis-Parliamentary-Speeches.pdf)
- **Point of Contact:** [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
Althingi Parliamentary Speech consists of approximately 542 hours of recorded speech from Althingi, the Icelandic Parliament, along with corresponding transcripts, a pronunciation dictionary and two language models. Speeches date from 2005-2016.
This dataset was collected in 2016 by the ASR for Althingi project at [Reykjavik University](https://en.ru.is/) in collaboration with the Althingi speech department. The purpose of that project was to develop an ASR (automatic speech recognition) system for parliamentary speech to replace the procedure of manually transcribing performed speeches.
### Data
The mean speech length is six minutes, with speeches ranging from under one minute to around thirty minutes. The corpus features 197 speakers (105 male, 92 female) and is split into training, development and evaluation sets. The language models are of two types: a pruned trigram model, used in decoding, and an unpruned constant ARPA 5-gram model, used for re-scoring decoding results.
Audio data is presented as single channel 16-bit mp3 files; the majority of these files have a sample rate of 44.1 kHz. Transcripts and other text data are plain text encoded in UTF-8.
### Example Usage
The Althingi Corpus is divided into 3 splits: train, validation and test. To load the full dataset:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr")
```
To load a specific split (for example, the validation split), do:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### Languages
The audio is in Icelandic.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'rad20160602T000219_00083',
'audio': {
'path': '/home/inga/.cache/HuggingFace/datasets/downloads/extracted/52607f9db9e3394263070575d29323213b99a06a996c43d4fe75bca115827d12/dev/EyH/rad20160602T000219/rad20160602T000219_00083.flac',
'array': array([-0.01098633, -0.01489258, -0.01040649, ..., 0.00314331,
0.00186157, 0.00527954], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'rad20160602T000219',
'duration': 12.67199993133545,
'normalized_text': 'og má svo sannarlega segja að landslagið sé nokkuð breytt frá því þrjú komma tvö prósent þjóðarinnar töldust vera innflytjendur árið tvö þúsund en nú teljast tíu prósent þjóðarinnar vera fyrsta og önnur kynslóð innflytjenda'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
### Data Splits
The corpus is split into train, evaluation, and test portions. The lengths of the portions are: train = 514h29m, test = 13h52m, evaluation = 14h02m.
To load a specific portion, please see the section "Example Usage" above.
## Additional Information
### Other Known Limitations
"Althingi Parliamentary Speech" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{helgadottiralthingi2021,
title={Althingi Parliamentary Speech},
ldc_catalog_no={LDC2021S01},
DOI={https://doi.org/10.35111/695b-6697},
author={Helgadóttir, Inga Rún and Kjaran, Róbert and Nikulásdóttir, Anna Björk and Guðnason, Jón},
publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2021},
url={https://catalog.ldc.upenn.edu/LDC2021S01},
}
```
### Contributions
This project was made possible through the support of Althingi’s information and publications departments. The authors would like to thank Solveig K. Jónsdóttir, Þorbjörg Árnadóttir and Ingvi Stígsson for their valuable help.
|
false |
# Dataset Card for `trec-robust04`
The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=528,155
- `queries` (i.e., topics); count=250
- `qrels`: (relevance assessments); count=311,410
This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-robust04', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
queries = load_dataset('irds/trec-robust04', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-robust04', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Voorhees2004Robust,
title={Overview of the TREC 2004 Robust Retrieval Track},
author={Ellen Voorhees},
booktitle={TREC},
year={2004}
}
```
|
true | # Dataset Card for IMDb Movie Reviews
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Total amount of disk used:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This is a custom train/test/validation split of the IMDb Large Movie Review Dataset available from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
#### IMDb_movie_reviews
An example of 'train':
```
{
"text": "Beautifully photographed and ably acted, generally, but the writing is very slipshod. There are scenes of such unbelievability that there is no joy in the watching. The fact that the young lover has a twin brother, for instance, is so contrived that I groaned out loud. And the "emotion-light bulb connection" seems gimmicky, too.<br /><br />I don\'t know, though. If you have a few glasses of wine and feel like relaxing with something pretty to look at with a few flaccid comedic scenes, this is a pretty good movie. No major effort on the part of the viewer required. But Italian film, especially Italian comedy, is usually much, much better than this."
"label": 0,
}
```
### Data Fields
The data fields are the same among all splits.
#### IMDb_movie_reviews
- `text`: a `string` feature.
- `label`: a classification label, with values `neg` (0), `pos` (1).
### Data Splits
| name | train | validation | test |
|------------------|------:|-----------:|------:|
|IMDb_movie_reviews| 36000 | 4000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
[More Information Needed] |
false |
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embeddings against the corpus embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
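To retrieve documents for such a freshly embedded query, the corpus embeddings loaded in the full search example above can be reused. A minimal sketch, assuming `docs` and `doc_embeddings` are still in memory as defined earlier:
```python
import torch

# query_embedding comes from the co.embed() call above
query_tensor = torch.tensor([query_embedding])  # shape: (1, embedding_dim)

# Dot-product scores against the corpus embeddings loaded earlier
dot_scores = torch.mm(query_tensor, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```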
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find hit@3 easier to interpret, as it reflects the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
false |
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
Have a look at [miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embeddings against the corpus embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-id-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find hit@3 easier to interpret, as it reflects the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
false |
# Translations for Instruction dataset
Translations were generated by [M2M 12B](https://huggingface.co/facebook/m2m100-12B-avg-5-ckpt); the output generations were limited to 512 tokens due to the VRAM limit (40 GB).
|
false |
# Dataset Card for activity-diagrams-qdobr
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/activity-diagrams-qdobr
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
activity-diagrams-qdobr
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
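Since the boxes use the COCO `[x_min, y_min, width, height]` convention, a small helper can convert them to corner coordinates when a pipeline expects `[x_min, y_min, x_max, y_max]`. This is a minimal sketch for illustration only; it is not shipped with the dataset:
```python
def coco_to_corners(bbox):
    """Convert a COCO box [x_min, y_min, width, height] to [x_min, y_min, x_max, y_max]."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

# First annotation from the sample instance above
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```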
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/activity-diagrams-qdobr
### Citation Information
```
@misc{ activity-diagrams-qdobr,
title = { activity diagrams qdobr Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/activity-diagrams-qdobr } },
url = { https://universe.roboflow.com/object-detection/activity-diagrams-qdobr },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for animals-ij5d2
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/animals-ij5d2
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
animals-ij5d2
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/animals-ij5d2
### Citation Information
```
@misc{ animals-ij5d2,
title = { animals ij5d2 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/animals-ij5d2 } },
url = { https://universe.roboflow.com/object-detection/animals-ij5d2 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for UWB-ATCC corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [UWB-ATCC corpus homepage](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development](https://link.springer.com/article/10.1007/s10579-019-09449-5)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The UWB-ATCC Corpus is provided by the University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots. The speech is manually transcribed and labeled with information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours) but we plan to search for additional data next year. The audio data format is: 8kHz, 16bit PCM, mono.
Importantly, you can obtain the speaker roles from the `id (string)` field (a short parsing sketch follows the list below). For instance:
- `_PI`: segment with only pilot speech
- `_AT`: segment with only ATCO speech
- `PIAT`: segment with both, ATCO and pilot speech
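A minimal sketch of how those suffixes could be mapped to a role label (the helper name and the returned strings are illustrative, not part of the corpus):
```python
def speaker_role(segment_id: str) -> str:
    """Infer the speaker role from the suffix of the `id` field."""
    if segment_id.endswith("PIAT"):
        return "pilot+atco"
    if segment_id.endswith("_PI"):
        return "pilot"
    if segment_id.endswith("_AT"):
        return "atco"
    return "unknown"

print(speaker_role("example_recording_PI"))  # pilot
```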
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The authors took advantage of the fact that one of their industrial partners develops complex IT solutions for several ATC authorities and airports and, as such, has access to the ATC communication recordings collected in the Czech airspace. This partner was able to secure the following data:
- Ground control—communication before takeoff and after landing—19.2 h of data.
- Tower control—communication during takeoff, landing and landing standby—22.5 h.
- Approach control—communication during landing approach—25.5 h.
- Area control—communication during overflights and cruises—71.3 h.
(Not all data is released. Check their website [here](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0))
## Dataset Structure
### Data Fields
- `id (string)`: a unique recording identifier for each example.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time`
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [UWB-ATCC corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) creators.
They used [Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) licensing.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@article{vsmidl2019air,
title={Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development},
author={{\v{S}}m{\'\i}dl, Lubo{\v{s}} and {\v{S}}vec, Jan and Tihelka, Daniel and Matou{\v{s}}ek, Jind{\v{r}}ich and Romportl, Jan and Ircing, Pavel},
journal={Language Resources and Evaluation},
volume={53},
number={3},
pages={449--464},
year={2019},
publisher={Springer}
}
``` |
false |
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The AI Society dataset is composed of 25K conversations between two gpt-3.5-turbo agents. This dataset is obtained by running role-playing for a combination of 50 user roles and 50 assistant roles, with each combination running over 10 tasks.
We provide two formats: a "chat" format, the `ai_society_chat.tar.gz` file containing the conversations in an instruction-following format, and an "instruction" format, the `ai_society_instructions.json` file.
## Data Fields
**The data fields for instructions format (`ai_society_instructions.json`) are as follows:**
* `id`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}, for example 001_002_003 refers to assistant role 1, user role 2, and task 3 from our text assistant role names, user role names and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specifier, this task is more specific than the original task.
* `role_1_response`: user response text before the instruction.
* `role_1_message_id`: message ID in the full raw conversation.
* `instruction`: describes the task the assistant is supposed to perform.
* `input`: provides further context or information for the requested instruction.
* `output`: the answer to the instruction as generated by 'gpt-3.5-turbo'
* `termination_reason`: refers to the reason of termination of the chat.
**The data fields for chat format (`ai_society_chat.tar.gz`) are as follows:**
* `input`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}, for example 001_002_003 refers to assistant role 1, user role 2, and task 3 from our text assistant role names, user role names and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specifier, this task is more specific than the original task.
* `message_k`: refers to the k<sup>_th_</sup> message of the conversation.
* `role_type`: refers to whether the agent is an assistant or a user.
* `role_name`: refers to the assigned assistant/user role.
* `role`: refers to the role of the agent during the message for openai api. [usually not needed]
* `content`: refers to the content of the message.
* `termination_reason`: refers to the reason of termination of the chat.
* `num_messages`: refers to the total number of messages in the chat.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset", filename="ai_society_chat.tar.gz",
local_dir="datasets/", local_dir_use_symlinks=False)
```
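A minimal sketch for fetching the instruction file and unpacking the chat archive after the download above; the exact structure of the parsed JSON is not specified here and should be checked after loading:
```python
import json
import tarfile

from huggingface_hub import hf_hub_download

# Instruction format: a single JSON file
instructions_path = hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset",
                                    filename="ai_society_instructions.json", local_dir="datasets/")
with open(instructions_path) as f:
    instructions = json.load(f)

# Chat format: unpack the archive downloaded above
with tarfile.open("datasets/ai_society_chat.tar.gz") as tar:
    tar.extractall("datasets/ai_society_chat")
```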
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by gpt-3.5-turbo and might contain incorrect information. The dataset is there only for research purposes.
---
license: cc-by-nc-4.0
---
|
false |
# ELI5 paired
This is a processed version of the [`eli5`](https://huggingface.co/datasets/eli5) dataset.
Compared to ["eli5_rlhf"](https://huggingface.co/datasets/vincentmin/eli5_rlhf), this dataset contains only QA pairs from the train split of the eli5 dataset and only from the subreddit explainlikeimfive.
Furthermore, the function
```
def get_question(example):
    title = example["title"]
    selftext = example["selftext"]
    if selftext:
        if selftext[-1] not in [".", "?", "!"]:
            separator = ". "
        else:
            separator = " "
        question = title + separator + selftext
    else:
        question = title
    example["question"] = question
    return example
```
was applied to get the "question" column and the "title" and "selftext" columns were removed.
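A sketch of how that function could be applied with 🤗 Datasets, assuming the upstream `eli5` loading script is still available and exposes `title`/`selftext` columns on a `train_eli5` split:
```python
from datasets import load_dataset

# Hypothetical reproduction of the preprocessing step described above
eli5 = load_dataset("eli5", split="train_eli5")
eli5 = eli5.map(get_question, remove_columns=["title", "selftext"])
print(eli5[0]["question"])
```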
The dataset was created following very closely the steps in the [`stack-exchange-paired`](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) dataset.
The following steps were applied:
- The "question" field is a concatenation of "title" with "selftext".
- Create pairs `(response_j, response_k)` where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning. The processing notebook is in the repository as well. |
false | |
false |
# Dataset Card for Alpaca-Cleaned
- **Repository:** https://github.com/gururise/AlpacaDataCleaned
## Dataset Description
This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
```
"instruction":"Summarize the given article in 200 Words.",
"input": "https://www.bbc.com/news/world-51461830",
"output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
"instruction": "Classify the following shape according to simple geometry.",
"input": "<Picture Attached>",
"output": "Triangle"
"instruction": "Given a recent news article, write a headline for it.",
"input": "https://www.news.example.com/article-1",
"output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
```
2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.
```
"instruction": "Should I invest in stocks?",
"input": "",
"output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple"
```
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
```
"instruction": "Calculate the area of the following shape in square centimeters.",
"input": "rectangle of size 4 cm x 5 cm",
"output": ""
```
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.
```
"instruction": "Create a graphic or logo that visually represents the word \"courage\".",
"input": "",
"output": "<No Output>"
```
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
```
"input":"<no input>"
"input":"No input"
"input":"noinput"
"input":"<noinput>"
```
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
```
"instruction": "Calculate the median of the following data set.",
"input": "1, 2, 4, 5, 8, 9",
"output": "5"
"instruction": "Convert 25m to km.",
"input": "",
"output": "25km"
```
9. **Nonsensical/Unclear instructions:** Many instructions are unclear; we try to clarify (or rewrite) them if they are nonsensical. Instructions that are slightly unclear, but whose meaning can still be deduced, are not altered.
```
"instruction": "Freeze the following sample of yogurt for 10 minutes.",
"input": "Yogurt sample",
"output": "<noinput>"
"instruction": "Increase the font size to 12 points.",
"input": "",
"output": "The font size has been increased to 12 points."
```
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
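For reference, a minimal sketch of how the `text` field can be reconstructed from the other three fields; the with-input template matches the example above, while the no-input variant follows the linked Stanford Alpaca template and should be verified against that file:
```python
def build_text(example):
    """Rebuild the `text` field from instruction/input/output using the Alpaca prompt template."""
    if example["input"]:
        return {"text": (
            "Below is an instruction that describes a task, paired with an input that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )}
    return {"text": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )}
```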
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
false |
## Every Prompt
Every Prompt is a data-driven approach to mining instructions from the web.
It contains over a million FAQs and HowTos from around the world in a structured format.
It also has basic pre-processing to calculate the length of the useful text and identify the language of that text with the help of [GCLD3](https://github.com/google/cld3)
It relies on the [Web Data Commons](http://webdatacommons.org) dataset (from October 2022) to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items.
The general pipeline looks like this:
* Download 1.6TB of structured data from webdatacommons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
* Crawl the seed pages and try to extract structured data using the [extruct](https://pypi.org/project/extruct/#description) package. That leaves around 1,358,638 pages which are alive and well-formed.
* Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath. That boils down to 1,266,926 JSON documents.
* Extract the textual information out of the structure to identify the text's language, the length of the textual data, and the text/data ratio.
You can use the resulting dataset by filtering for the language and amount of the text. You need to convert the structured data into instructions yourself.
You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap.
**Caveat emptor**: the format of the FAQs and HowTo's in the dataset might vary greatly. Account for that. To understand potential pitfalls, look at the jmespath expression at the `export_structured_data.py`.
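As a starting point, a filtering pass over the exported jsonlines file might look like the sketch below; the field names (`language`, `text_length`) are assumptions and should be checked against `export_structured_data.py` before use:
```python
import json

import smart_open

kept = []
with smart_open.open("extruct_out.jsonlines.bz2") as fh:
    for line in fh:
        item = json.loads(line)
        # Keep reasonably long English items; adjust thresholds to taste
        if item.get("language") == "en" and item.get("text_length", 0) > 500:
            kept.append(item)

print(len(kept), "items kept")
```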
## Detailed stats (with breakdown by language and data type)
| language | FAQPage count | FAQPage text length | HowTo count | HowTo text length | items count | text length |
| --- | --- | --- | --- | --- | --- | --- |
| en | 592730 | 1186748927 | 29017 | 77135350 | 621747 | 1263884277 |
| de | 83184 | 213931486 | 3370 | 13905977 | 86554 | 227837463 |
| es | 63237 | 113906536 | 6466 | 30517773 | 69703 | 144424309 |
| fr | 65081 | 141638675 | 3672 | 21632272 | 68753 | 163270947 |
| ja | 55439 | 46231152 | 1402 | 1678468 | 56841 | 47909620 |
| ru | 41271 | 70947161 | 2403 | 12805308 | 43674 | 83752469 |
| nl | 34066 | 102719276 | 2007 | 11078079 | 36073 | 113797355 |
| it | 23076 | 43968063 | 2465 | 13696136 | 25541 | 57664199 |
| vi | 23115 | 38603954 | 720 | 3224051 | 23835 | 41828005 |
| zh | 22496 | 21111729 | 1112 | 1513344 | 23608 | 22625073 |
| pl | 19424 | 41446645 | 306 | 419787 | 19730 | 41866432 |
| fa | 17263 | 31294557 | 1819 | 1915117 | 19082 | 33209674 |
| tr | 13619 | 20040069 | 722 | 418695 | 14341 | 20458764 |
| und | 12256 | 1032156 | 322 | 8941 | 12578 | 1041097 |
| pt | 10784 | 26163387 | 1775 | 8295306 | 12559 | 34458693 |
| ro | 10536 | 16405628 | 75 | 89946 | 10611 | 16495574 |
| id | 8256 | 14353165 | 1871 | 13055561 | 10127 | 27408726 |
| ko | 8348 | 7624222 | 616 | 1533830 | 8964 | 9158052 |
| sv | 8007 | 15926376 | 390 | 638054 | 8397 | 16564430 |
| ar | 6950 | 10240266 | 1241 | 7517175 | 8191 | 17757441 |
| da | 7691 | 15277244 | 408 | 450176 | 8099 | 15727420 |
| cs | 7546 | 13201121 | 480 | 2471544 | 8026 | 15672665 |
| fi | 7767 | 14468764 | 199 | 170138 | 7966 | 14638902 |
| hi | 4517 | 4307716 | 683 | 4294129 | 5200 | 8601845 |
| hu | 4866 | 10639836 | 125 | 61118 | 4991 | 10700954 |
| el | 4600 | 10555382 | 103 | 55576 | 4703 | 10610958 |
| no | 4357 | 8426887 | 179 | 354796 | 4536 | 8781683 |
| uk | 4401 | 6925331 | 90 | 37285 | 4491 | 6962616 |
| iw | 4056 | 7723904 | 36 | 35305 | 4092 | 7759209 |
| bg | 3620 | 10154727 | 41 | 31268 | 3661 | 10185995 |
| sk | 2639 | 4394140 | 65 | 32527 | 2704 | 4426667 |
| th | 1877 | 3823867 | 613 | 3171583 | 2490 | 6995450 |
| mr | 2002 | 2274197 | 57 | 75906 | 2059 | 2350103 |
| mt | 1886 | 3761332 | 14 | 5443 | 1900 | 3766775 |
| cy | 1524 | 3171667 | 25 | 11641 | 1549 | 3183308 |
| bs | 1366 | 2031881 | 34 | 23298 | 1400 | 2055179 |
| et | 1299 | 1694117 | 5 | 2005 | 1304 | 1696122 |
| ms | 989 | 1927545 | 174 | 720492 | 1163 | 2648037 |
| ca | 1068 | 1614073 | 62 | 34072 | 1130 | 1648145 |
| lt | 1056 | 2272916 | 44 | 57169 | 1100 | 2330085 |
| ne | 966 | 771410 | 29 | 28569 | 995 | 799979 |
| hr | 796 | 1394174 | 15 | 10191 | 811 | 1404365 |
| fy | 743 | 633705 | 24 | 5823 | 767 | 639528 |
| lb | 703 | 1133527 | 18 | 3985 | 721 | 1137512 |
| gl | 628 | 1159618 | 34 | 9049 | 662 | 1168667 |
| mn | 644 | 1174921 | 11 | 3592 | 655 | 1178513 |
| la | 635 | 363380 | 13 | 2009 | 648 | 365389 |
| af | 577 | 444351 | 38 | 14403 | 615 | 458754 |
| sl | 451 | 1708497 | 50 | 50361 | 501 | 1758858 |
| ht | 455 | 223768 | 13 | 4406 | 468 | 228174 |
| lv | 317 | 1017694 | 32 | 31983 | 349 | 1049677 |
| gd | 273 | 295170 | 52 | 20374 | 325 | 315544 |
| sr | 287 | 367782 | 23 | 5177 | 310 | 372959 |
| co | 288 | 284629 | 12 | 3530 | 300 | 288159 |
| az | 268 | 273548 | 9 | 13011 | 277 | 286559 |
| fil | 210 | 165520 | 63 | 77100 | 273 | 242620 |
| jv | 244 | 153411 | 14 | 75932 | 258 | 229343 |
| sn | 239 | 175459 | 10 | 8890 | 249 | 184349 |
| bn | 190 | 301199 | 42 | 23451 | 232 | 324650 |
| ga | 198 | 263174 | 30 | 12905 | 228 | 276079 |
| mg | 201 | 53082 | 18 | 6141 | 219 | 59223 |
| hi-Latn | 194 | 250495 | 4 | 33091 | 198 | 283586 |
| hmn | 173 | 793850 | 16 | 5902 | 189 | 799752 |
| ka | 162 | 262305 | 8 | 3427 | 170 | 265732 |
| ig | 136 | 129243 | 10 | 2941 | 146 | 132184 |
| is | 139 | 236415 | 4 | 1277 | 143 | 237692 |
| ta | 129 | 155042 | 12 | 4079 | 141 | 159121 |
| kk | 102 | 152629 | 28 | 11885 | 130 | 164514 |
| eu | 118 | 130847 | 10 | 3522 | 128 | 134369 |
| eo | 121 | 69071 | 6 | 1885 | 127 | 70956 |
| ur | 93 | 259680 | 33 | 20499 | 126 | 280179 |
| so | 112 | 203877 | 6 | 2151 | 118 | 206028 |
| tg | 99 | 73437 | 16 | 5539 | 115 | 78976 |
| mk | 29 | 62730 | 84 | 391780 | 113 | 454510 |
| be | 100 | 88386 | 8 | 2193 | 108 | 90579 |
| sm | 100 | 1309239 | 8 | 2778 | 108 | 1312017 |
| uz | 93 | 116820 | 7 | 2987 | 100 | 119807 |
| zu | 84 | 136023 | 9 | 2744 | 93 | 138767 |
| haw | 81 | 59685 | 6 | 822 | 87 | 60507 |
| sq | 74 | 120593 | 12 | 6205 | 86 | 126798 |
| ny | 78 | 19403 | 6 | 2046 | 84 | 21449 |
| hy | 66 | 81675 | 10 | 3613 | 76 | 85288 |
| ha | 44 | 84457 | 19 | 68032 | 63 | 152489 |
| ru-Latn | 60 | 40266 | 1 | 61 | 61 | 40327 |
| el-Latn | 57 | 55657 | 4 | 342 | 61 | 55999 |
| zh-Latn | 58 | 27522 | 1 | 66 | 59 | 27588 |
| sd | 52 | 51341 | 7 | 2044 | 59 | 53385 |
| su | 50 | 17291 | 7 | 2358 | 57 | 19649 |
| ku | 47 | 23147 | 6 | 1998 | 53 | 25145 |
| bg-Latn | 48 | 15419 | 1 | 414 | 49 | 15833 |
| st | 25 | 65162 | 19 | 6346 | 44 | 71508 |
| yo | 37 | 103685 | 6 | 1790 | 43 | 105475 |
| ceb | 41 | 72950 | 1 | 107 | 42 | 73057 |
| ky | 30 | 23062 | 10 | 3679 | 40 | 26741 |
| te | 32 | 42803 | 7 | 2558 | 39 | 45361 |
| yi | 32 | 227267 | 7 | 2443 | 39 | 229710 |
| mi | 26 | 10132 | 11 | 2915 | 37 | 13047 |
| gu | 25 | 37857 | 10 | 4608 | 35 | 42465 |
| ja-Latn | 33 | 17560 | 2 | 88 | 35 | 17648 |
| sw | 26 | 17579 | 8 | 2726 | 34 | 20305 |
| xh | 28 | 46466 | 4 | 1409 | 32 | 47875 |
| ml | 16 | 33198 | 6 | 2721 | 22 | 35919 |
| ps | 10 | 7671 | 12 | 2642 | 22 | 10313 |
| am | 6 | 8017 | 8 | 1987 | 14 | 10004 |
| kn | 5 | 22197 | 9 | 3523 | 14 | 25720 |
| km | 7 | 8936 | 6 | 1879 | 13 | 10815 |
| pa | 10 | 26617 | 3 | 1100 | 13 | 27717 |
| si | 5 | 24000 | 5 | 1722 | 10 | 25722 |
| lo | 1 | 6204 | 7 | 2115 | 8 | 8319 |
| my | 3 | 14663 | 3 | 1179 | 6 | 15842 |
## Recreating the results
1. Clone the repo without the LFS files.
2. Install requirements from `requirements.txt`.
3. Install `pv` and `parallel`.
4. Run `bin/get_seed_urls.sh` to filter URLs of interest out of 1.6TB of compressed data. Don't worry about disk space. Worry about the traffic. That will take around 5 hours on a decent connection.
5. Run scrapy spider like this `scrapy crawl webdatacommons_org -s WEB_DATA_COMMONS=web_data_commons_urls_sample.txt -L INFO -o webdatacommons.jsonlines` with `WEB_DATA_COMMONS` pointing to the list of seed URLs from step 4. That might take up to a few weeks.
6. Run `python bin/extract_relevant_structured_data.py --num-threads 12 webdatacommons.jsonlines relevant.jsonlines.bz2`. That's fast, probably around 30 minutes.
7. Run `python bin/export_structured_data.py relevant.jsonlines.bz2 extruct_out.jsonlines.bz2` to obtain the final version of the dataset.
8. Optionally you can calculate the resulting stats like that: `python bin/get_stats.py extruct_out.jsonlines.bz2 every_prompt_stats.csv`
## Advice
If you want to recreate the results:
* Get yourself a server or VPS with enough space (80GB should be enough).
* Look at the code. You'd probably want to make changes here and there.
* All the python scripts have extra parameters to control the number of threads and the chunk size. Both accept compressed input and output files with the help of smart_open lib.
## License
**Code** of the project has an MIT license.
Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2023 |
false |
# Dataset Card for CIFAR-100-LT (Long Tail)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Paper imbalanced example](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf)
- **Leaderboard:** [r-10](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-10) [r-100](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100)
### Dataset Summary
The CIFAR-100-LT imbalanced dataset comprises under 60,000 color images, each measuring 32x32 pixels,
distributed across 100 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 100 images per class,
and fewer than 50,000 training images.
These 100 classes are further organized into 20 overarching superclasses.
Each image is assigned two labels: a fine label denoting the specific class,
and a coarse label representing the associated superclass.
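As a rough illustration of that exponential decay, long-tailed CIFAR variants are commonly built by keeping a geometrically decreasing number of training images per class (e.g. as in Cui et al., 2019). The sketch below shows the usual construction rule, not the exact recipe used for this upload:
```python
def long_tail_counts(n_max=500, num_classes=100, imbalance_ratio=100):
    """Per-class training-set sizes decaying exponentially from n_max down to n_max / imbalance_ratio."""
    mu = (1.0 / imbalance_ratio) ** (1.0 / (num_classes - 1))
    return [round(n_max * mu ** i) for i in range(num_classes)]

counts = long_tail_counts()
print(counts[0], counts[-1], sum(counts))  # 500 images for the head class, 5 for the tail class
```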
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column, *i.e.* `dataset[0]["img"]`, the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- `fine_label`: an `int` classification label with the following mapping:
`0`: apple
`1`: aquarium_fish
`2`: baby
`3`: bear
`4`: beaver
`5`: bed
`6`: bee
`7`: beetle
`8`: bicycle
`9`: bottle
`10`: bowl
`11`: boy
`12`: bridge
`13`: bus
`14`: butterfly
`15`: camel
`16`: can
`17`: castle
`18`: caterpillar
`19`: cattle
`20`: chair
`21`: chimpanzee
`22`: clock
`23`: cloud
`24`: cockroach
`25`: couch
`26`: crab
`27`: crocodile
`28`: cup
`29`: dinosaur
`30`: dolphin
`31`: elephant
`32`: flatfish
`33`: forest
`34`: fox
`35`: girl
`36`: hamster
`37`: house
`38`: kangaroo
`39`: keyboard
`40`: lamp
`41`: lawn_mower
`42`: leopard
`43`: lion
`44`: lizard
`45`: lobster
`46`: man
`47`: maple_tree
`48`: motorcycle
`49`: mountain
`50`: mouse
`51`: mushroom
`52`: oak_tree
`53`: orange
`54`: orchid
`55`: otter
`56`: palm_tree
`57`: pear
`58`: pickup_truck
`59`: pine_tree
`60`: plain
`61`: plate
`62`: poppy
`63`: porcupine
`64`: possum
`65`: rabbit
`66`: raccoon
`67`: ray
`68`: road
`69`: rocket
`70`: rose
`71`: sea
`72`: seal
`73`: shark
`74`: shrew
`75`: skunk
`76`: skyscraper
`77`: snail
`78`: snake
`79`: spider
`80`: squirrel
`81`: streetcar
`82`: sunflower
`83`: sweet_pepper
`84`: table
`85`: tank
`86`: telephone
`87`: television
`88`: tiger
`89`: tractor
`90`: train
`91`: trout
`92`: tulip
`93`: turtle
`94`: wardrobe
`95`: whale
`96`: willow_tree
`97`: wolf
`98`: woman
`99`: worm
- `coarse_label`: an `int` coarse classification label with following mapping:
`0`: aquatic_mammals
`1`: fish
`2`: flowers
`3`: food_containers
`4`: fruit_and_vegetables
`5`: household_electrical_devices
`6`: household_furniture
`7`: insects
`8`: large_carnivores
`9`: large_man-made_outdoor_things
`10`: large_natural_outdoor_scenes
`11`: large_omnivores_and_herbivores
`12`: medium_mammals
`13`: non-insect_invertebrates
`14`: people
`15`: reptiles
`16`: small_mammals
`17`: trees
`18`: vehicles_1
`19`: vehicles_2
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|<50000| 10000|
### Licensing Information
Apache License 2.0
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and all contributors for adding the original balanced cifar100 dataset. |
false |
# Dataset Card for ParlamentoPT
### Dataset Summary
The ParlamentoPT is a **Portuguese** language data set obtained by collecting publicly available documents containing transcriptions of debates in the Portuguese Parliament.
The data was collected from the Portuguese Parliament portal in accordance with its [open data policy](https://www.parlamento.pt/Cidadania/Paginas/DadosAbertos.aspx).
This dataset was collected with the purpose of creating the [Albertina-PT*](https://huggingface.co/PORTULAN/albertina-ptpt) language model, and it serves as training data for model development.
The development of the model is a collaborative effort between the University of Lisbon and the University of Porto in Portugal.
<br>
# Citation
When using or citing this data set, kindly cite the following [publication](https://arxiv.org/abs/2305.06721):
``` latex
@misc{albertina-pt,
title={Advancing Neural Encoding of Portuguese
with Transformer Albertina PT-*},
author={João Rodrigues and Luís Gomes and João Silva and
António Branco and Rodrigo Santos and
Henrique Lopes Cardoso and Tomás Osório},
year={2023},
eprint={2305.06721},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<br>
# Acknowledgments
The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020. |
false | # A Named Entity Recognition Dataset for Kazakh
- This is a modified version of the dataset provided in the [LREC 2022](https://lrec2022.lrec-conf.org/en/) paper [*KazNERD: Kazakh Named Entity Recognition Dataset*](https://aclanthology.org/2022.lrec-1.44).
- The original repository for the paper can be found at *https://github.com/IS2AI/KazNERD*.
- Tokens denoting speech disfluencies and hesitations (parenthesised) and background noise [bracketed] were removed.
- A total of 2,027 duplicate sentences were removed.
### Statistics for training (Train), validation (Valid), and test (Test) sets
| Unit | Train | Valid | Test | Total |
| :---: | :---: | :---: | :---: | :---: |
| Sentence | 88,540 (80.00%) | 11,067 (10.00%) | 11,068 (10.00%) | 110,675 (100%) |
| Token | 1,088,461 (80.04%) | 136,021 (10.00%) | 135,426 (9.96%) | 1,359,908 (100%) |
| NE | 106,148 (80.17%) | 13,189 (9.96%) | 13,072 (9.87%) | 132,409 (100%) |
### 80 / 10 / 10 split
|Representation| Train | Valid | Test | Total |
| :---: | :---: | :---: | :---: | :---: |
| **AID** | 67,582 (79.99%) | 8,439 (9.99%) | 8,467 (10.02%)| 84,488 (100%) |
| **BID** | 19,006 (80.11%) | 2,380 (10.03%) | 2,338 (9.85%)| 23,724 (100%) |
| **CID** | 1,050 (78.89%) | 138 (10.37%) | 143 ( 10.74%) | 1,331 (100%) |
| **DID** | 633 (79.22%) | 82 (10.26%) | 84 (10.51%) | 799 (100%) |
| **EID** | 260 (81.00%) | 27 (8.41%) | 34 (10.59%)| 321 (100%) |
| **FID** | 9 (75.00%) | 1 (8.33%)| 2 (16.67%)| 12 (100%) |
|**Total**| **88,540 (80.00%)** | **11,067 (10.00%)** | **11,068 (10.00%)** | **110,675 (100%)** |
### Distribution of representations across sets
|Representation| Train | Valid | Test | Total |
| :---: | :---: | :---: | :---: | :---: |
| **AID** | 67,582 (76.33%) | 8,439 (76.25%) | 8,467 (76.50%)| 84,488 (76.34%) |
| **BID** | 19,006 (21.47%) | 2,380 (21.51%) | 2,338 (21.12%)| 23,724 (21.44%) |
| **CID** | 1,050 (1.19%) | 138 (1.25%) | 143 ( 1.29%) | 1,331 (1.20%) |
| **DID** | 633 (0.71%) | 82 (0.74%) | 84 (0.76%) | 799 (0.72%) |
| **EID** | 260 (0.29%) | 27 (0.24%) | 34 (0.31%)| 321 (0.29%) |
| **FID** | 9 (0.01%) | 1 (0.01%)| 2 (0.02%)| 12 (0.01%) |
|**Total**| **88,540 (100.00%)** | **11,067 (100.00%)** | **11,068 (100.00%)** | **110,675 (100%)** |
### Distribution of NEs across sets
| **NE Class** | **Train** | **Valid** | **Test** | **Total** |
|:---:| :---: | :---: | :---: | :---: |
| **ADAGE** | 153 (0.14%) | 19 (0.14%) | 17 (0.13%) | 189 (0.14%) |
| **ART** | 1,533 (1.44%) | 155 (1.18%) | 161 (1.23%) | 1,849 (1.40%) |
| **CARDINAL** | 23,135 (21.8%) | 2,878 (21.82%) | 2,789 (21.34%) | 28,802 (21.75%) |
| **CONTACT** | 159 (0.15%) | 18 (0.14%) | 20 (0.15%) | 197 (0.15%) |
| **DATE** | 20,006 (18.85%) | 2,603 (19.74%) | 2,584 (19.77%) | 25,193 (19.03%) |
| **DISEASE** | 1,022 (0.96%) | 121 (0.92%) | 119 (0.91%) | 1,262 (0.95%) |
| **EVENT** | 1,331 (1.25%) | 154 (1.17%) | 154 (1.18%) | 1,639 (1.24%) |
| **FACILITY** | 1,723 (1.62%) | 178 (1.35%) | 197 (1.51%) | 2,098 (1.58%) |
| **GPE** | 13,625 (12.84%) | 1,656 (12.56%) | 1,691 (12.94%) | 16,972 (12.82%) |
| **LANGUAGE** | 350 (0.33%) | 47 (0.36%) | 41 (0.31%) | 438 (0.33%) |
| **LAW** | 419 (0.39%) | 56 (0.42%) | 55 (0.42%) | 530 (0.40%) |
| **LOCATION** | 1,736 (1.64%) | 210 (1.59%) | 208 (1.59%) | 2,154 (1.63%) |
| **MISCELLANEOUS** | 191 (0.18%) | 26 (0.2%) | 26 (0.2%) | 243 (0.18%) |
| **MONEY** | 3,652 (3.44%) | 455 (3.45%) | 427 (3.27%) | 4,534 (3.42%) |
| **NON_HUMAN** | 6 (0.01%) | 1 (0.01%) | 1 (0.01%) | 8 (0.01%) |
| **NORP** | 2,929 (2.76%) | 374 (2.84%) | 368 (2.82%) | 3,671 (2.77%) |
| **ORDINAL** | 3,054 (2.88%) | 385 (2.92%) | 382 (2.92%) | 3,821 (2.89%) |
| **ORGANISATION** | 5,956 (5.61%) | 753 (5.71%) | 718 (5.49%) | 7,427 (5.61%) |
| **PERCENTAGE** | 3,357 (3.16%) | 437 (3.31%) | 462 (3.53%) | 4,256 (3.21%) |
| **PERSON** | 9,817 (9.25%) | 1,175 (8.91%) | 1,151 (8.81%) | 12,143 (9.17%) |
| **POSITION** | 4,844 (4.56%) | 587 (4.45%) | 597 (4.57%) | 6,028 (4.55%) |
| **PRODUCT** | 586 (0.55%) | 73 (0.55%) | 75 (0.57%) | 734 (0.55%) |
| **PROJECT** | 1,681 (1.58%) | 209 (1.58%) | 206 (1.58%) | 2,096 (1.58%) |
| **QUANTITY** | 3,063 (2.89%) | 411 (3.12%) | 403 (3.08%) | 3,877 (2.93%) |
| **TIME** | 1,820 (1.71%) | 208 (1.58%) | 220 (1.68%) | 2,248 (1.70%) |
| **Total** | **106,148 (100%)** | **13,189 (100%)** | **13,072 (100%)** | **132,409 (100%)** | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MaroonSum dataset is a combination of Indonesian summarization datasets (IDLiputan6, IndoSum, XLSum-Indo), <br>
preprocessed by removing boilerplate text such as "Liputan6 ....", author information, etc., <br>
and keeping only the article and summary features.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Summary
`databricks-dolly-15k-uk` is an open source dataset based on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) instruction-following dataset, machine translated using the [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) model.
Tasks covered include brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Expect this dataset not to be grammatically correct and to have the obvious pitfalls of machine translation.
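As noted in the original summary below, reference texts in the `context` field may contain bracketed Wikipedia citation numbers (e.g. `[42]`). A minimal sketch of stripping them before downstream use; the field name is taken from the original schema and the repository ID placeholder is hypothetical:
```python
import re

# Bracketed Wikipedia-style citation numbers such as [42].
CITATION_PATTERN = re.compile(r"\[\d+\]")

def strip_citations(record):
    # `context` holds the reference text in the dolly schema; assumed to be
    # preserved unchanged in this translated version.
    if record.get("context"):
        record["context"] = CITATION_PATTERN.sub("", record["context"])
    return record

# Hypothetical usage (substitute the actual repository ID of this dataset):
# from datasets import load_dataset
# ds = load_dataset("<this-repository-id>", split="train").map(strip_citations)
```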
<details>
<summary>Original Summary</summary>
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Ukrainian
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
</details> |
false |
# StackOverflow Posts Markdown

## Dataset Summary
This dataset contains all posts submitted to StackOverflow before the 14th of June 2023 formatted as **Markdown text**.<br>
The dataset contains ~60 Million posts, totaling ~35GB in size and ~65 billion characters of text.<br>
The data is sourced from [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).
## Dataset Structure
Each record corresponds to one post of a particular type.
Original ordering from the data dump is not exactly preserved due to parallelism in the script used to process the data dump.
The markdown content of each post is contained in the `Body` field. The license for a particular post is contained in the `ContentLicense` field.
### Data Fields
```typescript
{
Id: long,
PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki Placeholder, 8=Privilege Wiki
AcceptedAnswerId: long | null, // only present if PostTypeId=1
ParentId: long | null, // only present if PostTypeId=2
Score: long,
ViewCount: long | null,
Body: string | null,
Title: string | null,
ContentLicense: string | null,
FavoriteCount: long | null,
CreationDate: string | null,
LastActivityDate: string | null,
LastEditDate: string | null,
LastEditorUserId: long | null,
OwnerUserId: long | null,
Tags: array<string> | null
}
```
Also consider the [StackExchange Datadump Schema Documentation](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede), as all fields
have analogs in the original dump format.
## How to use?
```python
from datasets import load_dataset
# predownload full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')
# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)
for sample in iter(ds): print(sample["Body"])
```
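Building on the snippet above, a minimal sketch that streams only questions (`PostTypeId == 1`) with an accepted answer; the field names follow the schema under Data Fields, and the specific filter is just an illustrative example:
```python
from datasets import load_dataset

ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

# Keep only questions that reference an accepted answer.
questions = ds.filter(lambda post: post["PostTypeId"] == 1 and post["AcceptedAnswerId"] is not None)

for post in questions.take(5):
    print(post["Title"])
```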
## How is the text stored?
The original Data Dump formats the "Body" field as HTML, using tags such as `<code>`, `<h1>`, `<ul>`, etc.
This HTML format has been converted to Markdown.
### Markdown format
For reference, [this post on StackOverflow](https://stackoverflow.com/questions/53253940/make-react-useeffect-hook-not-run-on-initial-render) is formatted as follows:
#### Title: Make React useEffect hook not run on initial render
```markdown
According to the docs:
> `componentDidUpdate()` is invoked immediately after updating occurs. This method is not called for the initial render.
We can use the new `useEffect()` hook to simulate `componentDidUpdate()`, but it seems like `useEffect()` is being ran after every render, even the first time. How do I get it to not run on initial render?
As you can see in the example below, `componentDidUpdateFunction` is printed during the initial render but `componentDidUpdateClass` was not printed during the initial render.
```
function ComponentDidUpdateFunction() {
const [count, setCount] = React.useState(0);
React.useEffect(() => {
console.log(""componentDidUpdateFunction"");
});
return (
<div>
<p>componentDidUpdateFunction: {count} times</p>
<button
onClick={() => {
setCount(count + 1);
}}
>
Click Me
</button>
</div>
);
}
```
rest of the post omitted for brevity
```
## Details on the HTML to Markdown conversion
Using Jsoup, the original Body field was converted into a Jsoup Document. The child **nodes** (the term has a special meaning in the context of Jsoup) of this document were recursively traversed in depth-first order.
Jsoup defines `.text()` as follows:
> ... the normalized, combined text of this element and all its children. Whitespace is normalized and trimmed. For example, given HTML <code><p>Hello <b>there</b> now! </p></code>, p.text() returns "Hello there now!"
Jsoup defines a `Node` as follows:
> The base, abstract Node model. Elements, Documents, Comments etc are all Node instances.
Additionally the existence of the `TextNode` should be noted, which represents floating text inside an HTML document that is not itself an HTML element.
Thus this text tag `<p>Hello<code>World</code></p>` would have two Jsoup child nodes `TextNode(value="Hello")` and `Element(tag="code", value="World")`.
The `value` field of a `TextNode` contains the free-standing text without any further treatment (no whitespace stripping, etc.).
### Traversing Rules
- When encountering an HTML tag for which a rule exists, children are not traversed further, **unless explicitly stated otherwise**.
- When encountering an `<a>` tag, `[${element.text()}](${element.attr("href")})` is emitted.
- When encountering an `<h1>` tag, `\n# ${element.text()}\n\n` is emitted.
- When encountering an `<h2>` tag, `\n## ${element.text()}\n\n` is emitted.
- When encountering an `<h3>` tag, `\n### ${element.text()}\n\n` is emitted.
- When encountering an `<h4>` tag, `\n#### ${element.text()}\n\n` is emitted.
- When encountering an `<h5>` tag, `\n##### ${element.text()}\n\n` is emitted.
- When encountering an `<h6>` tag, `\n###### ${element.text()}\n\n` is emitted.
- When encountering a `<code>` tag, `` `${element.text()}` `` is emitted.
- When encountering a `<pre>` tag and said element **has** a `<code>` child tag, `` ```\n${element.text()}\n```\n`` is emitted.
- When encountering a `<pre>` tag and said element **does not** have a `<code>` child tag, **children are traversed further**.
- When encountering an `<li>` tag, `- ` is emitted and **children are traversed further**.
- When encountering a `<blockquote>` tag, `> ` is emitted and **children are traversed further**.
- When encountering an `<hr>` tag, `\n---\n\n` is emitted.
- When encountering an `<img>` tag, `![${element.attr("alt")}](${element.attr("src")})` is emitted.
- When encountering a `<table>` tag
- `\n| ` is emitted
- For each element of `element.select("th")`
- `${element.text()} | ` is emitted
- After the loop `\n| ` is emitted
- For each element of `element.select("th")`
- For each character of the `th.text()`
- `-` is emitted
- After the loop over each character of the `<th>`, ` | ` is emitted
- `\n` is emitted
- For each element of `element.select("tr")` with more than one children of tag type `td`
- `| ` is emitted
- For each element of `element.select("td")`
- `${td.text()} | ` is emitted
- After the loop over `<td>` elements, `\n` is emitted
- After the loop over `<tr>` elements, `\n` is emitted
- When encountering a jsoup `TextNode`, `${node.attr(node.nodeName())}` (which is equivalent to accessing the private field `node.value`) is emitted. |
false |
# Dataset Card for [Malayalam Wiki - common crawl malayalam]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/qburst/common-crawl-malayalam**
- **Paper: None**
- **Leaderboard:**
- **Point of Contact: [@RRaajjesshh](https://twitter.com/RRaajjesshh)**
### Dataset Summary
Created from the files extracted using the tools for extracting Malayalam text from the Common Crawl dataset:
https://github.com/qburst/common-crawl-malayalam
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[QBurst](https://github.com/qburst) ran scripts on several months of the Common Crawl archives and made the output publicly available. This dataset is the cleaned-up corpus from [QBurst common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/qburst/common-crawl-malayalam contains the tools used to extract Malayalam text from the Common Crawl datasets.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{qburst,
  title={Common Crawl - Malayalam},
  journal={arXiv preprint arXiv:2005.00085},
  year={2020}
}
```
### Contributions
Thanks to [rajeshradhakrishnanmvk](https://github.com/rajeshradhakrishnanmvk) for adding this dataset.
|
false |
# Dataset Card for cantonese-mandarin-translations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@lhr0909](https://github.com/lhr0909) for adding this dataset. |
false |
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of the IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
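A minimal sketch of loading a single language configuration with the `datasets` library; the repository ID (`ai4bharat/IndicSentenceSummarization`) and the per-language configuration and split names are assumptions based on this card:
```python
from datasets import load_dataset

# Assumed repository ID and configuration name (one per language code listed above).
ds = load_dataset("ai4bharat/IndicSentenceSummarization", "hi")

sample = ds["train"][0]
print(sample["input"])
print(sample["target"])
```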
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (string)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) |
true |
# Dataset Card for PMC Open Access XML
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets.
This version takes the XML version as source, benefiting from the structured text
to split the articles into parts, namely the introduction, methods, results,
discussion and conclusion, and to link keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of the
references (e.g. for PMID, by joining the referred article abstract from the pubmed dataset), but it aims more broadly to provide
a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionnary of each "tag":"text" for the reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The licence of the article
- "retracted": If the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
In the text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chem", "supplementary", "footnote", "graphic" and "media".
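A minimal sketch of extracting these markers with a regular expression; the exact delimiter layout is inferred from the format described above, so treat the pattern as an assumption to validate against the data:
```python
import re

# Pattern inferred from the "##KEYWORD##IDX_REF##OLD_TEXT##" format described above.
REF_PATTERN = re.compile(
    r"##(REF|UREF|FIG|TAB|FORMU|BOX|CODE|QUOTE|CHEM|SUPPL|FOOTN|GRAPH|MEDIA)"
    r"##(\d+)##(.*?)##",
)

def extract_references(text):
    """Return (keyword, index, original_text) tuples for every in-text reference."""
    return [(m.group(1), int(m.group(2)), m.group(3)) for m in REF_PATTERN.finditer(text)]

def restore_original_text(text):
    """Replace each marker by the original text it stood for."""
    return REF_PATTERN.sub(lambda m: m.group(3), text)
```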
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation
for the different kinds of possible usage.
Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in titles were used. Because there are no rules
in this XML to tag those sections, finding the keywords seemed like the most reliable approach. A drawback is that many sections do not have those
keywords in their titles even though they could be assimilated to those sections. However, the huge diversity of titles makes it harder to label such sections. This could be the
work of future versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
Additional content for individual articles (graphics, media) can be obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The articles' XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as
well annotated as others. This concerns all the sections (intro, methods, ...), the external references (pmids) and the internal references (tables, figures, ...).
To illustrate that, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a
future version.
### Other Known Limitations
[Needs More Information]
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, and replace either by the "references_text" or by the reference content itself.
- Unescape HTML special characters: `import html; html.unescape(my_text)`
- Remove superfluous line break in text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), replace by special tokens?
- Join the items of the contents' lists.
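A minimal sketch combining a few of these recommendations, assuming the reference-marker format described under Data Fields; this is an illustrative cleanup, not the curators' own preprocessing:
```python
import html
import re

XML_TAG = re.compile(r"</?[a-zA-Z][^>]*>")          # e.g. <italic>, <sup>, <sub>
REF_MARKER = re.compile(r"##[A-Z]+##\d+##(.*?)##")  # in-text reference markers

def clean_part(paragraphs):
    """Join and lightly clean one content list (e.g. the `introduction` field)."""
    text = "\n".join(p for p in paragraphs if p)       # drop empty items, join the list
    text = html.unescape(text)                         # unescape HTML special characters
    text = REF_MARKER.sub(lambda m: m.group(1), text)  # keep the original reference text
    text = XML_TAG.sub("", text)                       # strip residual XML tags
    return re.sub(r"\n{2,}", "\n", text).strip()       # remove superfluous line breaks
```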
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[Needs More Information] |
false |
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using its metadata which provides context for each image.
For example : cats with cars or cats in bathroom.
The main advantage is the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift :
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters :
1. To generate the MetaShift Dataset :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes) :
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information :
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
### Dataset Meta-Graphs
From the MetaShift Github Repo :
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the metagraphs for the default classes, these have been generated using the `generate_full_MetaShift.py` file.
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: Unique numeric ID of the image in Base Visual Genome dataset.
- `image`: A PIL.Image.Image object containing the image.
- `label`: an int classification label.
- `context`: represents the context in which the label is seen. A given label could have multiple contexts.
Image Metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html) and a sample above has been provided for reference.
### Data Splits
All the data is contained in training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper :
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper :
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper :
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. |
false |
# RuREBus dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
RuREBus dataset (https://github.com/dialogue-evaluation/RuREBus) is
a Russian dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
`load_dataset('MalakhovIlya/RuREBus')`
you can download annotated data (DatasetDict) for named entity recognition task and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
`load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']`
you can download a large corpus (~3 GB, as a single Dataset) of raw texts from the same subject
area, but without any annotations.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is the position of the first symbol of the entity in the text,
`<stop>` is the position of the entity's last symbol in the text plus one.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
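A minimal sketch of parsing these strings in Python, following the two formats above (the example values in the `print` calls are hypothetical):
```python
def parse_entity(line):
    r"""Parse "<id>\t<type> <start> <stop>\t<text>" into its components."""
    ent_id, span, text = line.split("\t", 2)
    ent_type, start, stop = span.split(" ")
    return ent_id, ent_type, int(start), int(stop), text

def parse_relation(line):
    r"""Parse "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>" into its components."""
    rel_id, rest = line.split("\t", 1)
    rel_type, arg1, arg2 = rest.split(" ")
    return rel_id, rel_type, arg1.removeprefix("Arg1:"), arg2.removeprefix("Arg2:")

# Hypothetical examples, just to illustrate the expected shapes.
print(parse_entity("T1\tECO 10 25\tэкономическая зона"))
print(parse_relation("R1\tTSK Arg1:T1 Arg2:T2"))
```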
## Citation Information
```
@inproceedings{rurebus,
  Address = {Moscow, Russia},
  Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
  Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
  Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
  Year = {2020}
}
```
|
false |
# Dataset Card for C4
## Table of Contents
- [Dataset Card for C4](#dataset-card-for-c4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
It comes in four variants:
- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format
The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
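Given the size of these variants, streaming is usually the practical way to inspect them. A minimal sketch, using the `allenai/c4` repository named above and the `en` variant:
```python
from datasets import load_dataset

# Stream the `en` variant so nothing is downloaded up front.
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)

for example in ds.take(3):
    print(example["url"], example["timestamp"])
```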
### Supported Tasks and Leaderboards
C4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{
'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
'timestamp': '2019-04-25T12:57:54Z'
}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
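A minimal usage sketch, assuming the AllenAI Hub repository `allenai/c4` and the `en` configuration named above; streaming avoids downloading the full corpus:
```python
from datasets import load_dataset

# Stream the `en` variant so the ~305GB corpus is not downloaded up front.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

for example in c4:
    print(example["url"])
    print(example["timestamp"])
    print(example["text"][:200])  # first 200 characters of the document
    break
```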
### Data Splits
| name | train |validation|
|----------------|--------:|---------:|
| en |364868892| 364608|
| en.noblocklist |393391519| 393226|
| en.noclean | ?| ?|
| realnewslike | 13799838| 13863|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It uses heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that was used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) in TensorFlow Datasets.
The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.
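A minimal sketch of this kind of language filtering, assuming the `langdetect` package's `detect_langs` API; this is an illustration, not the exact C4 pipeline code:
```python
from langdetect import detect_langs

def is_english(text, threshold=0.99):
    """Return True only if langdetect assigns English a probability of at least `threshold`."""
    try:
        return any(l.lang == "en" and l.prob >= threshold for l in detect_langs(text))
    except Exception:  # langdetect raises on empty or undetectable input
        return False

pages = ["This page is written in English.", "Cette page est en français."]
english_pages = [p for p in pages if is_english(p)]
print(english_pages)
```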
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
true | # AutoTrain Dataset for project: Tweets
## Dataset Description
This dataset has been automatically processed by AutoTrain for project Tweets.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "So the mask mandate goes away the day after #Furnal2022 ends, and you know what will happen after th[...]",
"target": 0
},
{
"text": "@EwanMacKenna Also does anyone know whether Margaret Buttimer of Bandon is still in prison for the '[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['1', '2', '3'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1679 |
| valid | 420 |
|
true |
# Citation
If you use the dataset, please cite the paper:
```
@article{10.1007/s10579-021-09568-y,
  year = {2022},
  title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}},
  author = {Baykara, Batuhan and Güngör, Tunga},
  journal = {Language Resources and Evaluation},
  issn = {1574-020X},
  doi = {10.1007/s10579-021-09568-y},
  pages = {1--35}
}
```
|
false |
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": 1,
"question": "Is this a test?",
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: an `int32` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: an `int32` feature.
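As a brief usage sketch (assuming the canonical `squad` dataset on the Hugging Face Hub), the fields above can be accessed as follows:
```python
from datasets import load_dataset

squad = load_dataset("squad")  # plain_text configuration
example = squad["train"][0]

print(example["question"])
print(example["context"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```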
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|---| ---|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
true |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the evidence is related to the claim. |
true |
# Dataset Card for Corpus of Diverse Styles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Disclaimer
I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.
## Dataset Description
- **Homepage:** http://style.cs.umass.edu/
- **Repository:** https://github.com/martiansideofthemoon/style-transfer-paraphrase
- **Paper:** https://arxiv.org/abs/2010.05700
### Dataset Summary
A new benchmark dataset that contains 15M sentences from 11 diverse styles.
To create CDS, we obtain data from existing academic research datasets and public APIs or online collections like Project Gutenberg. We choose styles that are easy for human readers to identify at a sentence level (e.g., Tweets or Biblical text). While prior benchmarks involve a transfer between two styles, CDS has 110 potential transfer directions.
### Citation Information
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
true |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the claim is related to the evidence. |
true |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the evidence is related to the claim. |
false |
# Dataset Card for Mostly Basic Python Problems (mbpp)
## Table of Contents
- [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-(mbpp))
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/google-research/tree/master/mbpp
- **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732)
### Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us.
Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et. al., 2021](https://arxiv.org/abs/2108.07732).
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generations.
### Languages
English - Python code
## Dataset Structure
```python
dataset_full = load_dataset("mbpp")
DatasetDict({
test: Dataset({
features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'],
num_rows: 974
})
})
dataset_sanitized = load_dataset("mbpp", "sanitized")
DatasetDict({
test: Dataset({
features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'],
num_rows: 427
})
})
```
### Data Instances
#### mbpp - full
```
{
'task_id': 1,
'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].',
'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]',
'test_list': [
'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8',
'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12',
'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'],
'test_setup_code': '',
'challenge_test_list': []
}
```
#### mbpp - sanitized
```
{
'source_file': 'Benchmark Questions Verification V2.ipynb',
'task_id': 2,
'prompt': 'Write a function to find the shared elements from the given two lists.',
'code': 'def similar_elements(test_tup1, test_tup2):\n res = tuple(set(test_tup1) & set(test_tup2))\n return (res) ',
'test_imports': [],
'test_list': [
'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))',
'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))',
'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))'
]
}
```
### Data Fields
- `source_file`: unknown
- `text`/`prompt`: description of programming task
- `code`: solution for programming task
- `test_setup_code`/`test_imports`: necessary code imports to execute tests
- `test_list`: list of tests to verify solution
- `challenge_test_list`: list of more challenging test to further probe solution
### Data Splits
There are two versions of the dataset (full and sanitized), each with only one split (test).
## Dataset Creation
See section 2.1 of original [paper](https://arxiv.org/abs/2108.07732).
### Curation Rationale
In order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides.
### Source Data
#### Initial Data Collection and Normalization
The dataset was manually created from scratch.
#### Who are the source language producers?
The dataset was created with an internal crowdsourcing effort at Google.
### Annotations
#### Annotation process
The full dataset was created first and a subset then underwent a second round to improve the task descriptions.
#### Who are the annotators?
The dataset was created with an internal crowdsourcing effort at Google.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
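A hedged sketch of how a candidate solution could be checked against `test_list` (illustrative only; the helper name is an assumption, and such code must be run in a sandbox as noted above):
```python
from datasets import load_dataset

def passes_tests(candidate_code, example):
    """Run candidate code plus the example's asserts in a fresh namespace.
    WARNING: `exec` runs arbitrary code; isolate this in a sandboxed process or container."""
    namespace = {}
    try:
        exec(example.get("test_setup_code") or "", namespace)
        exec(candidate_code, namespace)
        for test in example["test_list"]:
            exec(test, namespace)
        return True
    except Exception:
        return False

mbpp = load_dataset("mbpp", split="test")
example = mbpp[0]
print(passes_tests(example["code"], example))  # the reference solution should pass
```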
### Social Impact of Dataset
With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.
### Discussion of Biases
### Other Known Limitations
The task descriptions might not be expressive enough to solve the task. The `sanitized` split aims at addressing this issue by having a second round of annotators improve the dataset.
## Additional Information
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.07732},
  year={2021}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
|
false |
# Dataset Card for Imagewoof
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette#imagewoof
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof
### Dataset Summary
A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'Beagle',
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
### Data Splits
| |train|validation|
|---------|----:|---------:|
|imagewoof| 9025| 3929|
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
|
false |
# Dataset Card for `BanglaNMT`
## Table of Contents
- [Dataset Card for `BanglaNMT`](#dataset-card-for-BanglaNMT)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglanmt](https://github.com/csebuetnlp/banglanmt)
- **Paper:** [**"Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation"**](https://www.aclweb.org/anthology/2020.emnlp-main.207)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is the largest Machine Translation (MT) dataset for Bengali-English, curated using novel sentence alignment methods introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
**Note:** This is a filtered version of the original dataset that the authors used for NMT training. For the complete set, refer to the official [repository](https://github.com/csebuetnlp/banglanmt).
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Languages
- `Bengali`
- `English`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/BanglaNMT")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'bn': 'বিমানবন্দরে যুক্তরাজ্যে নিযুক্ত বাংলাদেশ হাইকমিশনার সাঈদা মুনা তাসনীম ও লন্ডনে বাংলাদেশ মিশনের জ্যেষ্ঠ কর্মকর্তারা তাকে বিদায় জানান।',
'en': 'Bangladesh High Commissioner to the United Kingdom Saida Muna Tasneen and senior officials of Bangladesh Mission in London saw him off at the airport.'
}
```
### Data Fields
The data fields are as follows:
- `bn`: a `string` feature indicating the Bengali sentence.
- `en`: a `string` feature indicating the English translation.
### Data Splits
| split |count |
|----------|--------|
|`train`| 2379749 |
|`validation`| 597 |
|`test`| 1000 |
## Dataset Creation
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Source Data
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. |
true |
# Dataset Card for "EnglishNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/EnglishTextClassificationDataset]
- **Repository:** [https://github.com/BihterDass/EnglishTextClassificationDataset]
- **Size of downloaded dataset files:** 8.71 MB
- **Size of the generated dataset:** 8.71 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data are available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### english-dataset-v1
- **Size of downloaded dataset files:** 8.71 MB
- **Size of the generated dataset:** 8.71 MB
### Data Fields
The data fields are the same among all splits.
#### english-dataset-v-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 80000 | 10000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. |
false |
# School Notebooks Dataset
The images of school notebooks with handwritten notes in English.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with a categories info (categotiy names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images, each dictionary must contain fields:
- `file_name` - name of the image file.
- `id` - image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find text translation for the line.
- `segmentation` - the coordinates of the polygon, a flat list of numbers which are x and y coordinate pairs.
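A minimal reading sketch using only the standard library; the COCO-style `id`/`name` keys and the file path are assumptions made for the example:
```python
import json

# Path is an assumption for the example; adjust to where annotation.json lives.
with open("annotation.json", encoding="utf-8") as f:
    annotation = json.load(f)

# Map category and image ids to human-readable values (COCO-style `id`/`name` keys assumed).
categories = {c["id"]: c["name"] for c in annotation["categories"]}
images = {im["id"]: im["file_name"] for im in annotation["images"]}

for ann in annotation["annotations"]:
    file_name = images[ann["image_id"]]
    category = categories[ann["category_id"]]
    text = ann["attributes"].get("translation")  # text translation for the line
    polygon = ann["segmentation"]                # [x1, y1, x2, y2, ...]
    print(file_name, category, text, polygon[:4])
```
|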
false | # SpellGram
## Dataset consisting of grammatical and spelling errors
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[train.csv]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="https://huggingface.co/datasets/keremberke/table-extraction/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 342 images.
Data tables are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
false | # Dataset for project: csgo-weapon-classification
## Dataset Description
This dataset for the project csgo-weapon-classification was collected with the help of a bulk Google image downloader.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1768x718 RGB PIL image>",
"target": 0
},
{
"image": "<716x375 RGBA PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['AK-47', 'AWP', 'Famas', 'Galil-AR', 'Glock', 'M4A1', 'M4A4', 'P-90', 'SG-553', 'UMP', 'USP'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1100 |
| valid | 275 |
|
false | # Dataset Card for "tlcv2.0_oa"
Thai Literature Corpora (TLC): Corpora of machine-ingestible Thai classical literature texts by Jitkapat Sawatphol (Faculty of Arts, Chulalongkorn University).
This project uses [Thai Literature Corpora (TLC) v2.0](https://attapol.github.io/tlc.html). All texts are from old Thai books that are out of copyright (in the public domain).
This dataset was built for [Open Assistant](https://github.com/LAION-AI/Open-Assistant/).
## Columns
The dataset has the following columns:
1. **TEXT** (string)
2. **SOURCE** (string)
3. **METADATA** (JSON string, optional) |
false |
# RuNews dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of news from several sources:
* [Lenta.ru by yutkin](https://github.com/yutkin/Lenta.Ru-News-Dataset)
* [Several sources by buriy](https://github.com/buriy/russian-nlp-datasets/releases)
* [ODS Newsviz Tass](https://github.com/newsviz/newsviz)
* [Taiga fontanka](https://tatianashavrina.github.io/taiga_site/)
* [News from Telegram contest](https://github.com/IlyaGusev/tgcontest)
**Script:** [create_ru_news.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_ru_news.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_news', split="train", streaming=True)
for example in dataset:
print(example["text"])
```
## Data Instances
```
{
"title": "Заместитель главы района в Якутии пожаловался на пьянство начальника",
"text": "Заместитель главы Нерюнгринского района Якутии Геннадий Ленц пожаловался руководителю республики Егору Борисову на своего начальника. Как рассказал Ленц 'Интерфаксу', Андрей Фитисов пьет на рабочем месте и 'уходит в многодневные загулы'...",
"timestamp": 1346284800,
"url": "https://lenta.ru/news/2012/08/30/alco/",
"source": "lenta"
}
```
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible. |
false | # Dataset Card for Instruct
Based on Alpaca's instruction finetuning.
```
"Below is an instruction that describes a task, paired with an input that provides further context.\n"
"Write a response that appropriately completes the request\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
```
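A small sketch of filling this template in Python; the variable and example values are assumptions made for illustration:
```python
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides further context.\n"
    "Write a response that appropriately completes the request\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

prompt = PROMPT_TEMPLATE.format(
    instruction="Translate the sentence to French.",
    input="Good morning!",
)
print(prompt)
```
|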
false | |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | |
false |
# TyDi-AS2
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [TyDi-AS2](#tydi-as2)
- [Xtr-TyDi-AS2](#xtr-tydi-as2)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages)
- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://arxiv.org/abs/2305.16302)
- **Point of Contact:** [Yoshitomo Matsubara](yomtsub@amazon.com)
### Dataset Summary
***TyDi-AS2*** and ***Xtr-TyDi-AS2*** are multilingual Answer Sentence Selection (AS2) datasets comprising 8 diverse languages, proposed in our paper accepted at ACL 2023 (Findings): **Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**.
Both the datasets were created from [TyDi-QA](https://ai.google.com/research/tydiqa), a multilingual question-answering dataset. TyDi-AS2 was created by converting the QA instances in TyDi-QA to AS2 instances (see [Dataset Creation](#dataset-creation) for details). Xtr-TyDi-AS2 was created by translating the non-English TyDi-AS2 instances to English and vice versa.
For translations, we used [Amazon Translate](https://aws.amazon.com/translate/).
### Languages
#### TyDi-AS2 (original)
- `bn`: Bengali
- `en`: English
- `fi`: Finnish
- `id`: Indonesian
- `ja`: Japanese
- `ko`: Korean
- `ru`: Russian
- `sw`: Swahili
File location: [`jsonl/original/`](https://huggingface.co/datasets/AmazonScience/tydi-as2/tree/main/jsonl/original/)
For non-English sets, we also have English-translated samples used for the cross-lingual knowledge distillation (CLKD) experiments in our paper.
File location: [`jsonl/x-to-en/`](https://huggingface.co/datasets/AmazonScience/tydi-as2/tree/main/jsonl/x-to-en/)
#### Xtr-TyDi-AS2 (translationese)
Xtr-TyDi-AS2 (X-translated TyDi-AS2) dataset consists of non-English AS2 instances translated from the English set of TyDi-AS2.
- `bn`: Bengali
- `fi`: Finnish
- `id`: Indonesian
- `ja`: Japanese
- `ko`: Korean
- `ru`: Russian
- `sw`: Swahili
File location: [`jsonl/en-to-x/`](https://huggingface.co/datasets/AmazonScience/tydi-as2/tree/main/jsonl/en-to-x/)
## Dataset Structure
### Data Instances
This is an example instance from the English training split of TyDi-AS2 dataset.
```
{
"Question": "When was the Argentine Basketball Federation formed?",
"Title": "History of the Argentina national basketball team",
"Sentence": "The Argentina national basketball team represents Argentina in basketball international competitions, and is controlled by the Argentine Basketball Federation.",
"Label": 0
}
```
For the English-translated TyDi-AS2 dataset and the Xtr-TyDi-AS2 dataset, the translated instances in the JSONL files are listed in the same order as the corresponding original (native) instances in the TyDi-AS2 dataset.
For example, the 2nd instance in [`jsonl/x-to-en/en_from_bn-train.jsonl`](jsonl/x-to-en/en_from_bn-train.jsonl) (English-translated from Bengali) corresponds to the 2nd instance in [`jsonl/original/bn-train.jsonl`](jsonl/original/bn-train.jsonl) (Bengali).
Similarly, the 2nd instance in [`jsonl/en-to-x/bn_from_en-train.jsonl`](jsonl/en-to-x/bn_from_en-train.jsonl) (Bengali-translated from English) corresponds to the 2nd instance in [`jsonl/original/en-train.jsonl`](jsonl/original/en-train.jsonl) (English).
### Data Fields
Each instance (a QA pair) consists of the following fields:
- `Question`: Question to be answered (str)
- `Title`: Document title (str)
- `Sentence`: Answer sentence in the document (str)
- `Label`: Label that indicates the answer sentence correctly answers the question (int, 1: correct, 0: incorrect)
### Data Splits
| | | **#Questions** | | | | **#Sentences** | |
|---------------------|----------:|---------------:|---------:|---|----------:|---------------:|---------:|
| | **train** | **dev** | **test** | | **train** | **dev** | **test** |
| **Bengali (bn)** | 7,978 | 2,056 | 316 | | 1,376,432 | 351,186 | 37,465 |
| **English (en)** | 6,730 | 1,686 | 918 | | 1,643,702 | 420,899 | 249,513 |
| **Finnish (fi)** | 10,859 | 2,731 | 1,870 | | 1,567,695 | 408,205 | 298,093 |
| **Indonesian (id)** | 9,310 | 2,339 | 1,355 | | 960,270 | 236,076 | 97,057 |
| **Japanese (ja)** | 11,848 | 2,981 | 1,504 | | 3,183,037 | 822,654 | 444,106 |
| **Korean (ko)** | 7,354 | 1,943 | 1,389 | | 1,558,191 | 392,361 | 199,043 |
| **Russian (ru)** | 9,187 | 2,294 | 1,395 | | 3,190,650 | 820,668 | 367,595 |
| **Swahili (sw)** | 8,350 | 2,850 | 1,896 | | 1,048,303 | 269,894 | 74,775 |
See [our paper](#citation-information) for more details about the statistics of the datasets.
## Dataset Creation
### Source Data
The source of TyDi-AS2 dataset is [TyDi QA](https://ai.google.com/research/tydiqa), which is a question answering dataset.
### Annotations
#### Annotation process
TyDi QA is a QA dataset spanning questions from 11 typologically diverse languages.
Each instance comprises a human-generated question, a single Wikipedia document as context, and one or more spans from the document containing the answer.
To convert each instance into AS2 instances, we split the context document into sentences and heuristically identify the correct answer sentences using the annotated answer spans (a toy sketch of this conversion follows the tokenizer list below).
To split documents, we use multiple different sentence tokenizers for the diverse languages and omit languages for which we could not find a suitable sentence tokenizer:
1. [bltk](https://github.com/saimoncse19/bltk) for Bengali
2. [blingfire](https://github.com/microsoft/BlingFire) for Swahili, Indonesian, and Korean
3. [pysbd](https://github.com/nipunsadvilkar/pySBD) for English and Russian
4. [nltk](https://www.nltk.org/) for Finnish
5. [Konoha](https://github.com/himkt/konoha) for Japanese
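A toy sketch of that conversion heuristic (illustrative only; it uses a naive regex splitter instead of the language-specific tokenizers listed above, and the example values are invented):
```python
import re

def to_as2_instances(question, title, context, answer_spans):
    """Toy sketch of the QA -> AS2 conversion described above.
    A naive regex splitter stands in for the language-specific sentence tokenizers."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", context) if s.strip()]
    instances = []
    for sentence in sentences:
        # Label a sentence as correct (1) if it contains one of the annotated answer spans.
        label = int(any(span in sentence for span in answer_spans))
        instances.append({"Question": question, "Title": title, "Sentence": sentence, "Label": label})
    return instances

context = "The team was founded in 1929. It is controlled by the Argentine Basketball Federation."
print(to_as2_instances("When was the team founded?", "History", context, ["1929"]))
```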
#### Who are the annotators?
[Shivanshu Gupta](https://huggingface.co/shivanshu) converted TyDi QA to TyDi-AS2.
[Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara) translated the non-English samples to English and vice versa for the Xtr-TyDi-AS2 dataset.
Since sentence tokenization and identifying answer sentences can introduce errors, we conducted a manual validation of the AS2 datasets. For each language, we randomly selected 50 instances and verified the accuracy of the answer sentences through manual inspection. Our findings revealed that the answer sentences were accurate in 98% of the cases.
## Additional Information
### Dataset Curators
Shivanshu Gupta (@shivanshu)
### Licensing Information
[CDLA-Permissive-2.0](LICENSE.md)
### Citation Information
```
@article{gupta2023cross-lingual,
title={Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages},
author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro},
journal={arXiv preprint arXiv:2305.16302},
year={2023}
}
```
### Contributions
- [Shivanshu Gupta](https://huggingface.co/shivanshu)
- [Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara)
- Ankit Chadha
- Alessandro Moschitti |
true | # Dataset Card for "womens-clothing-ecommerce-reviews"
Processed version of [this dataset](https://github.com/ya-stack/Women-s-Ecommerce-Clothing-Reviews). |
true | WikiQA dataset with answers grouped together for each question. |
false | |
false |
# Dataset Card for VQA-RAD
## Dataset Description
VQA-RAD is a dataset of question-answer pairs on radiology images. The dataset is intended to be used for training and testing
Medical Visual Question Answering (VQA) systems. The dataset includes both open-ended questions and binary "yes/no" questions.
The dataset is built from [MedPix](https://medpix.nlm.nih.gov/), which is a free open-access online database of medical images.
The question-answer pairs were manually generated by a team of clinicians.
**Homepage:** [Open Science Framework Homepage](https://osf.io/89kps/)<br>
**Paper:** [A dataset of clinically generated visual questions and answers about radiology images](https://www.nature.com/articles/sdata2018251)<br>
**Leaderboard:** [Papers with Code Leaderboard](https://paperswithcode.com/sota/medical-visual-question-answering-on-vqa-rad)
### Dataset Summary
The dataset was downloaded from the [Open Science Framework Homepage](https://osf.io/89kps/) on June 3, 2023. The dataset contains
2,248 question-answer pairs and 315 images. Out of the 315 images, 314 images are referenced by a question-answer pair, while 1 image
is not used. The training set contains 3 duplicate image-question-answer triplets. The training set also has 1 image-question-answer
triplet in common with the test set. After dropping these 4 image-question-answer triplets from the training set, the dataset contains
2,244 question-answer pairs on 314 images.
#### Supported Tasks and Leaderboards
This dataset has an active leaderboard on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-vqa-rad)
where models are ranked based on three metrics: "Close-ended Accuracy", "Open-ended accuracy" and "Overall accuracy". "Close-ended Accuracy" is
the accuracy of a model's generated answers for the subset of binary "yes/no" questions. "Open-ended accuracy" is the accuracy
of a model's generated answers for the subset of open-ended questions. "Overall accuracy" is the accuracy of a model's generated
answers across all questions.
#### Languages
The question-answer pairs are in English.
## Dataset Structure
### Data Instances
Each instance consists of an image-question-answer triplet.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=566x555>,
'question': 'are regions of the brain infarcted?',
'answer': 'yes'
}
```
### Data Fields
- `'image'`: the image referenced by the question-answer pair.
- `'question'`: the question about the image.
- `'answer'`: the expected answer.
### Data Splits
The dataset is split into training and test. The split is provided directly by the authors.
| | Training Set | Test Set |
|-------------------------|:------------:|:---------:|
| QAs |1,793 |451 |
| Images |313 |203 |
## Additional Information
### Licensing Information
The authors have released the dataset under the CC0 1.0 Universal License.
### Citation Information
```
@article{lau2018dataset,
title={A dataset of clinically generated visual questions and answers about radiology images},
author={Lau, Jason J and Gayen, Soumya and Ben Abacha, Asma and Demner-Fushman, Dina},
journal={Scientific data},
volume={5},
number={1},
pages={1--10},
year={2018},
publisher={Nature Publishing Group}
}
``` |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage: https://cobra.xuhuiz.com/**
- **Paper: https://arxiv.org/abs/2306.01985**
### Dataset Summary
This dataset contains COBRACORPUS and COBRACORPUS-counterfactual from this [paper](https://arxiv.org/abs/2306.01985)
### Data Splits
* `advContexts_explanations.csv` is `COBRACorpus-CF`
* `toxigen_explanations.csv` is the full `COBRACorpus`
* `toxigen_explanations_train.csv` is the training split of `COBRACorpus`
* `toxigen_explanations_val.csv` is the validation split of `COBRACorpus`
### Data Entries
For `COBRACorpus`, the relevant entries in the `csv` files are
*`situationalContext (string)`, `speakerIdentity (string)`, `listenerIdentity (string)`, `statement (string)`,
`intent (string)`, `targetGroup (string)`, `relevantPowerDynamics (string)`, `implication (string)`,
`targetGroupEmotionalReaction (string)`, `targetGroupCognitiveReaction (string)`, `offensiveness (string)`*
Please refer to the [paper](https://arxiv.org/abs/2306.01985) for the specific explanations of these entries.
The *`examples`* entry is the few-shot prompt that we used to generate explanations.
All other entries are from the [ToxiGen](https://arxiv.org/abs/2203.09509) dataset; they are not directly relevant to this
work, but we leave them there as metadata in case they are useful for future work.
### Citation Information
If you find this dataset useful, please cite:
```
@inproceedings{zhou2023cobra,
title = {COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements},
author = {Zhou, Xuhui and Zhu, Hao and Yerukola, Akhila and Davidson, Thomas and D. Hwang, Jena and Swayamdipta, Swabha and Sap, Maarten},
year = {2023},
booktitle = {Findings of ACL}
}
``` |
false |
English to Hinglish Dataset aggregated from publicly available data sources.
Sources:
1. Hinglish TOP Dataset
2. CMU English Dog
3. HinGE
4. PHINC
source = 1: Human Annotated
source = 0: Synthetically Generated |
true |
# Dataset Card for rtGender
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `source`: a `string` feature.
- `op_gender`: a `string` feature.
- `post_text`: a `string` feature.
- `response_text`: a `string` feature.
- `sentiment`: a `string` feature.
- `relevance`: a `string` feature.
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
false |
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e., the task of adding spaces between the words of an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters (the sketch below checks this invariant). Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
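A minimal check of the whitespace-only invariant mentioned above, using the example instance from this card:
```python
def strips_to_identifier(identifier: str, segmentation: str) -> bool:
    """Return True if removing whitespace from the gold segmentation
    reproduces the original identifier exactly."""
    return "".join(segmentation.split()) == identifier

# Example instance from above:
assert strips_to_identifier("::CreateProcess", ":: Create Process")
```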
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for BT11
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e., the task of adding spaces between the words of an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 20170,
"identifier": "currentLineHighlight",
"segmentation": "current Line Highlight"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{butler2011improving,
title={Improving the tokenisation of identifier names},
author={Butler, Simon and Wermelinger, Michel and Yu, Yijun and Sharp, Helen},
booktitle={European Conference on Object-Oriented Programming},
pages={130--154},
year={2011},
organization={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for "IndicWikiBio"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
The WikiBio dataset was released as part of the IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox, and summary. We created this dataset in nine
languages: as, bn, hi, kn, ml, or, pa, ta, and te. The total
size of the dataset is 57,426 examples.
### Supported Tasks and Leaderboards
**Tasks:** WikiBio
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 26,
"infobox": "name_1:सी॰\tname_2:एल॰\tname_3:रुआला\toffice_1:सांसद\toffice_2:-\toffice_3:मिजोरम\toffice_4:लोक\toffice_5:सभा\toffice_6:निर्वाचन\toffice_7:क्षेत्र\toffice_8:।\toffice_9:मिजोरम\tterm_1:2014\tterm_2:से\tterm_3:2019\tnationality_1:भारतीय",
"serialized_infobox": "<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय",
"summary": "सी॰ एल॰ रुआला भारत की सोलहवीं लोक सभा के सांसद हैं।"
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `infobox (string)`: Raw Infobox.
- `serialized_infobox (string)`: Serialized Infobox as input.
- `summary (string)`: Summary of Infobox/First line of Wikipedia page.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Test | Val |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 1,300 | 391 | 381 |
Bengali | bn | 4,615 | 1,521 | 1,567 |
Hindi | hi | 5,684 | 1,919 | 1,853 |
Kannada | kn | 1,188 | 389 | 383 |
Malayalam | ml | 5,620 | 1,835 | 1,896 |
Oriya | or | 1,687 | 558 | 515 |
Punjabi | pa | 3,796 | 1,227 | 1,331 |
Tamil | ta | 8,169 | 2,701 | 2,632 |
Telugu | te | 2,594 | 854 | 820 |
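A minimal loading sketch; the Hub id below is a placeholder (this card does not state it) and the config names are assumed to match the ISO codes listed above:
```python
import datasets

# "<indic-wikibio-dataset-id>" is a placeholder, not a confirmed Hub id.
ds = datasets.load_dataset("<indic-wikibio-dataset-id>", "hi", split="train")

example = ds[0]
print(example["serialized_infobox"])
print(example["summary"])
```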
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
None
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
|
true | # AutoNLP Dataset for project: pruebapoems
## Dataset Description
This dataset has been automatically processed by AutoNLP for project pruebapoems.
### Languages
The BCP-47 code for the dataset's language is es.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "When I was fair and young, then favor graced me.\r\nOf many was I sought their mistress for to be.\r\nBu[...]",
"target": 1
},
{
"text": "Sigh no more, ladies, sigh no more.\r\n Men were deceivers ever,\r\nOne foot in sea, and one on shore[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['Love', 'Mythology & Folklore', 'Nature'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 457 |
| valid | 116 |
|
false | # BIOASQ 2022 Spanish
This is an automatically translated version of the BioASQ dataset, a dataset used for question answering in the biomedical domain.
The questions, answers and contexts were translated with [MarianMT English-Spanish](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). As the translation process may return answers that are not 100% present in the context, we developed an algorithm based on sentence tokenization and on the intersection of the words present in the answer and in the portion of the context being evaluated, and then extracted the paragraph from the context that matches the answer.
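A simplified sketch of that alignment idea (not the exact implementation): split the translated context into sentences and keep the one with the largest word overlap with the translated answer.
```python
import re

def words(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def best_matching_sentence(context: str, answer: str) -> str:
    """Return the sentence of the context sharing the most words with the answer."""
    sentences = re.split(r"(?<=[.!?])\s+", context)
    answer_words = words(answer)
    return max(sentences, key=lambda sentence: len(answer_words & words(sentence)))

# Toy example with made-up Spanish strings:
context = "La insulina es una hormona. Se produce en el páncreas. Regula la glucosa en sangre."
print(best_matching_sentence(context, "el páncreas produce insulina"))
# "Se produce en el páncreas."
```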
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. |
true | # NLI-TR for Supervised SimCSE
This dataset is a modified version of [NLI-TR](https://huggingface.co/datasets/nli_tr) dataset. Its intended use is to train Supervised [SimCSE](https://github.com/princeton-nlp/SimCSE) models for sentence-embeddings. Steps followed to produce this dataset are listed below:
1. Merge train split of snli_tr and multinli_tr subsets.
2. Find every premise that has an entailment hypothesis **and** a contradiction hypothesis.
3. Write the found triplets in sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format. A rough sketch of these steps is given below.
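A rough sketch of the three steps, assuming the `nli_tr` configs are named `snli_tr` and `multinli_tr` and that label 0 is entailment and label 2 is contradiction (as in the original SNLI/MultiNLI):
```python
from collections import defaultdict
import csv
import datasets

snli = datasets.load_dataset("nli_tr", "snli_tr", split="train")
mnli = datasets.load_dataset("nli_tr", "multinli_tr", split="train")

# Group entailment and contradiction hypotheses by premise.
by_premise = defaultdict(lambda: {"entailment": [], "contradiction": []})
for example in list(snli) + list(mnli):
    if example["label"] == 0:
        by_premise[example["premise"]]["entailment"].append(example["hypothesis"])
    elif example["label"] == 2:
        by_premise[example["premise"]]["contradiction"].append(example["hypothesis"])

# Write one triplet per premise that has both kinds of hypotheses.
with open("nli_tr_for_simcse.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["sent0", "sent1", "hard_neg"])
    for premise, hyps in by_premise.items():
        if hyps["entailment"] and hyps["contradiction"]:
            writer.writerow([premise, hyps["entailment"][0], hyps["contradiction"][0]])
```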
See this [Colab Notebook](https://colab.research.google.com/drive/1Ysq1SpFOa7n1X79x2HxyWjfKzuR_gDQV?usp=sharing) for training and evaluation on Turkish sentences. |
false | # MSMARCO_ES
This is an automatically translated version of the [MS MARCO v1 dataset](https://huggingface.co/datasets/ms_marco), a dataset used for text similarity tasks.
The queries and passages were translated with [MarianMT English-Spanish](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). A post-processing step was required to sample the queries, because some of them had more or fewer positive and negative labels than recommended (4 negatives and 1 positive).
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. |
false | # Dataset Card for Multi-Document
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization)
- **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
### Dataset Summary
Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
{"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."}
### Data Fields
- `id`: unique identifier of the instance
- `docs`: the list of source documents; each document has an `id` and a `text` field
- `summary`: the summary text
### Data Splits
The data is split into training, validation and test sets.
| train | validation | test |
|------:|-----------:|-----:|
| 50 | 10 | 5 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{lu2020multi,
title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Arka Das, India},
journal={arXiv preprint arXiv:2010.14235},
year={2022}
}
```
### Contributions
Thanks to [@arka0821](https://github.com/arka0821/multi_document_summarization) for adding this dataset.
|
true | |
false | German validation dataset from WECHSEL to evaluate LLM perplexity.
JSON-line files (one JSON object per line; see the reading sketch below the list):
- `valid.json.gz`: Gzipped validation set as generated by the paper (163,698 docs)
- `valid.random_1636.json.gz`: Random 1% (1636 docs) of the validation set
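A minimal reading sketch in pure Python; the per-document field names are not documented here, so inspect the keys of the first document:
```python
import gzip
import json

docs = []
with gzip.open("valid.random_1636.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        docs.append(json.loads(line))

print(len(docs))             # expected: 1636 for the 1% sample
print(list(docs[0].keys()))  # inspect the available fields
```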
|
false |
# Mutopia Guitar Dataset
## Table of Contents
- [Dataset Card Creation Guide](#mutopia-guitar-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** [Mutopia Project](https://www.mutopiaproject.org/)
- **Repository implementation of the paper:** [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset](https://github.com/AI-Guru/MMM-JSB)
- **Based on Paper:** [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048)
- **Point of Contact:** [Juan Carlos Piñeros](https://www.linkedin.com/in/juancarlospinerosp/)
### Dataset Summary
Mutopia guitar dataset consists of the soloist guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/). I encoded the MIDI files into text tokens using the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) of Dr. Tristan Beheren of the paper: [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).
The dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.
### Supported Tasks and Leaderboards
Anyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. Then, one can implement a generative model using NLP techniques, such as Transformers.
## Dataset Structure
### Data Instances
Each guitar piece is represented as a line of text that contains a series of tokens, for instance:
- `PIECE_START`: where the piece begins
- `PIECE_ENDS`: where the piece ends
- `TIME_SIGNATURE`: time signature of the piece
- `BPM`: tempo of the piece
- `BAR_START`: beginning of a new bar
- `NOTE_ON`: start of a new musical note, specifying its MIDI note number
- `TIME_DELTA`: duration until the next event
- `NOTE_OFF`: end of a musical note, specifying its MIDI note number
```
{
'text': PIECE_START TIME_SIGNATURE=2_4 BPM=74 TRACK_START INST=0 DENSITY=4 BAR_START NOTE_ON=52 TIME_DELTA=2.0 NOTE_OFF=52 NOTE_ON=45 NOTE_ON=49 TIME_DELTA=2.0 NOTE_OFF=49 NOTE_ON=52 TIME_DELTA=2.0 NOTE_OFF=45 NOTE_ON=47 NOTE_OFF=52 NOTE_ON=44 TIME_DELTA=2.0,
...
}
```
### Data Fields
- `text`: Sequence of tokens that represent the guitar piece as explained in the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).
### Data Splits
There are, at this moment, 395 MIDI guitar files in the Mutopia Project. I removed files of pieces that were not music for soloist guitar. After this removal, there were 372 MIDI files.
I used an 80/20 split and augmented the training dataset by transposing the piece 1 octave above and below (24 semitones). The final result is then:
**Train dataset:** 7325 pieces
**Test dataset:** 74 pieces |
false |
# SynTran-fa
Syntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers. You can use this dataset by the code below:
```python
import datasets
data = datasets.load_dataset('SLPL/syntran-fa', split="train")
```
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL)
- **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
### Dataset Summary
Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to enlarge Farsi datasets. SynTran-fa is a question-answering dataset that accumulates the short answers of former Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair.
This dataset contains nearly 50,000 question-answer entries. The datasets used as our sources are listed in the [Source Data section](#source-data).
The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf) where they used a "parser + syntactic rules" module to make different fluent answers from a pair of question and a short answer using a parser and some syntactic rules. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser to parse the question and generate a response according to it using the short (sentences without verbs - up to ~4 words) answers. One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence for an answer) or training a seq2seq model which does what we do with our rule-based system (by defining a new text-to-text task).
### Supported Tasks and Leaderboards
This dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf).
### Languages
+ Persian (fa)
## Dataset Structure
Each row of the dataset will look like something like the below:
```json
{
'id': 0,
'question': 'باشگاه هاکی ساوتهمپتون چه نام دارد؟',
'short_answer': 'باشگاه هاکی ساوتهمپتون',
'fluent_answer': 'باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.',
'bert_loss': 1.110097069682014
}
```
+ `id` : the entry id in dataset
+ `question` : the question
+ `short_answer` : the short answer corresponding to the `question` (the primary answer)
+ `fluent_answer` : fluent (long) answer generated from both `question` and the `short_answer` (the secondary answer)
+ `bert_loss` : the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) gives when inputting the `fluent_answer` to it. As it increases the sentence is more likely to be influent.
Note: the dataset is sorted increasingly by the `bert_loss`, so first sentences are more likely to be fluent.
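For example, one way to keep only the answers that ParsBERT considers relatively fluent is to filter on `bert_loss`; the threshold below is arbitrary and only illustrative.
```python
import datasets

data = datasets.load_dataset("SLPL/syntran-fa", split="train")

# Keep rows whose fluent answer has a low ParsBERT loss (threshold chosen arbitrarily).
fluent = data.filter(lambda example: example["bert_loss"] < 2.0)
print(len(data), "->", len(fluent))
```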
### Data Splits
Currently, the dataset just provided the `train` split. There would be a `test` split soon.
## Dataset Creation
### Source Data
The source datasets that we used are as follows:
+ [PersianQA](https://github.com/sajjjadayobi/PersianQA)
+ [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
#### Initial Data Collection and Normalization
We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source QA datasets in Farsi and used rules based on the question's parse tree to generate long (fluent) answers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset is entirely a subset of known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we do not take responsibility for any of that content.
## Additional Information
### Dataset Curators
The dataset was gathered entirely during a summer internship at the Asr Gooyesh Pardaz company, under the supervision of Soroush Gooran and Prof. Hossein Sameti and with the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.
### Licensing Information
MIT
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset. |
false |
# Dataset Card for Swedish CNN Dailymail Dataset
The Swedish CNN/DailyMail dataset has only been machine-translated, with the aim of improving downstream fine-tuning on Swedish summarization tasks.
## Dataset Summary
Read about the full details at original English version: https://huggingface.co/datasets/cnn_dailymail
### Data Fields
- `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL where the story was retrieved from
- `article`: a string containing the body of the news article
- `highlights`: a string containing the highlight of the article as written by the article author
### Data Splits
The Swedish CNN/DailyMail dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 287,113 |
| Validation | 13,368 |
| Test | 11,490 |
|
false |
### Dataset Summary
KoPI (Korpus Perayapan Indonesia)-CC_News is an Indonesian-only extract of the CC-News portion of Common Crawl from 2016 to July 2022. Each snapshot was extracted with warcio and trafilatura and filtered with fastText.
More details will follow.
|
true |
# Dataset Card for Clinical Trials's Reason to Stop
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.opentargets.org
- **Repository:** https://github.com/LesyaR/stopReasons
- **Paper:**
- **Point of Contact:** data@opentargets.org
### Dataset Summary
This dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.
The text has been extracted from clinicaltrials.gov, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.
All 17 possible classes have been carefully defined:
- Business_Administrative
- Another_Study
- Negative
- Study_Design
- Invalid_Reason
- Ethical_Reason
- Insufficient_Data
- Insufficient_Enrollment
- Study_Staff_Moved
- Endpoint_Met
- Regulatory
- Logistics_Resources
- Safety_Sideeffects
- No_Context
- Success
- Interim_Analysis
- Covid19
### Supported Tasks and Leaderboards
Multi class classification
### Languages
English
## Dataset Structure
### Data Instances
```json
{'text': 'Due to company decision to focus resources on a larger, controlled study in this patient population."',
'label': 'Another_Study'}
```
### Data Fields
`text`: contains the reason for the CT early stop
`label`: contains one of the 17 defined classes
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset has an Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ireneisdoomed](https://github.com/ireneisdoomed) for adding this dataset. |
true | # Dataset Card for "MultiTACRED"
## Dataset Description
- **Homepage:** [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Paper:** [MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset](https://arxiv.org/abs/2305.04582)
- **Point of Contact:** See [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Size of downloaded dataset files:** 15.4KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
### Dataset Summary
MultiTACRED is a multilingual version of the large-scale [TAC Relation Extraction Dataset](https://nlp.stanford.edu/projects/tacred).
It covers 12 typologically diverse languages from 9 language families, and was created by the
[Speech & Language Technology group of DFKI](https://www.dfki.de/slt) by machine-translating the instances of the
original TACRED dataset and automatically projecting their entity annotations. For details of the original TACRED's
data collection and annotation process, see the [Stanford paper](https://aclanthology.org/D17-1004/). Translations are
syntactically validated by checking the correctness of the XML tag markup. Any translations with an invalid tag
structure, e.g. missing or invalid head or tail tag pairs, are discarded (on average, 2.3% of the instances).
Languages covered are: Arabic, Chinese, Finnish, French, German, Hindi, Hungarian, Japanese, Polish,
Russian, Spanish, Turkish. Intended use is supervised relation classification. Audience - researchers.
Please see [our ACL paper](https://arxiv.org/abs/2305.04582) for full details.
NOTE: This DatasetReader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is that we want to support additional languages, for which these fields were not required
or available. The reader expects the specification of a language-specific configuration specifying the variant
(original, revisited or retacred) and the language (as a two-letter iso code).
The DatasetReader changes the offsets of the following fields, to conform with standard Python usage (see
_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
NOTE 2: The MultiTACRED dataset offers an additional 'split', namely the backtranslated test data (translated to a
target language and then back to English). To access this split, use dataset['backtranslated_test'].
You can find the TACRED dataset reader for the English version of the dataset at
[https://huggingface.co/datasets/DFKI-SLT/tacred](https://huggingface.co/datasets/DFKI-SLT/tacred).
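A minimal loading sketch. The exact config string format and the way local LDC files are passed in are assumptions here; consult the dataset loader for the precise arguments.
```python
import datasets

ds = datasets.load_dataset(
    "DFKI-SLT/MultiTACRED",               # assumed Hub id
    name="de-original",                   # two-letter ISO code + variant (assumed naming format)
    data_dir="/path/to/ldc/MultiTACRED",  # data files obtained from the LDC (assumed argument)
)
print(ds["backtranslated_test"][0]["relation"])
```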
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-multitacred](https://paperswithcode.com/sota/relation-extraction-on-multitacred)
### Languages
The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
All languages except English are machine-translated using either DeepL's or Google's translation APIs.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 15.4KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"token": ["Tom", "Thabane", "trat", "im", "Oktober", "letzten", "Jahres", "zurück", ",", "um", "die", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", "zu", "gründen", ",", "die", "mit", "17", "Abgeordneten", "das", "Wort", "ergriff", ",", "woraufhin", "der", "konstitutionelle", "Monarch", "König", "Letsie", "III.", "das", "Parlament", "auflöste", "und", "Neuwahlen", "ansetzte", "."],
"relation": "org:founded_by",
"subj_start": 11,
"subj_end": 13,
"obj_start": 0,
"obj_end": 1,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON"
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `ìnt` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `ìnt` feature.
- `subj_type`: the NER type of the subject mention, among the types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `ìnt` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `ìnt` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
### Data Splits
To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run.
Language statistics for the splits differ because not all instances could be translated with the
subject and object entity markup still intact; these instances were discarded.
| Language | Train | Dev | Test | Backtranslated Test | Translation Engine |
| ----- | ------ | ----- | ---- | ---- | ---- |
| en | 68,124 | 22,631 | 15,509 | - | - |
| ar | 67,736 | 22,502 | 15,425 | 15,425 | Google |
| de | 67,253 | 22,343 | 15,282 | 15,079 | DeepL |
| es | 65,247 | 21,697 | 14,908 | 14,688 | DeepL |
| fi | 66,751 | 22,268 | 15,083 | 14,462 | DeepL |
| fr | 66,856 | 22,298 | 15,237 | 15,088 | DeepL |
| hi | 67,751 | 22,511 | 15,440 | 15,440 | Google |
| hu | 67,766 | 22,519 | 15,436 | 15,436 | Google |
| ja | 61,571 | 20,290 | 13,701 | 12,913 | DeepL |
| pl | 68,124 | 22,631 | 15,509 | 15,509 | Google |
| ru | 66,413 | 21,998 | 14,995 | 14,703 | DeepL |
| tr | 67,749 | 22,510 | 15,429 | 15,429 | Google |
| zh | 65,260 | 21,538 | 14,694 | 14,021 | DeepL |
## Dataset Creation
### Curation Rationale
To enable more research on multilingual Relation Extraction, we generate translations of the TAC relation extraction
dataset using DeepL and Google Translate.
### Source Data
#### Initial Data Collection and Normalization
The instances of this dataset are sentences from the
[original TACRED dataset](https://nlp.stanford.edu/projects/tacred/), which in turn
are sampled from the [corpus](https://catalog.ldc.upenn.edu/LDC2018T03) used in the yearly
[TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
#### Who are the source language producers?
Newswire and web texts collected for the [TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
### Annotations
#### Annotation process
See the Stanford paper, the TACRED Revisited paper, and the Re-TACRED paper, plus their appendices, for
details on the original annotation process. The translated versions do not change the original labels.
Translations were tokenized with language-specific Spacy models (Spacy 3.1, 'core_news/web_sm' models)
or Trankit (Trankit 1.1.0) when there was no Spacy model for a given language (Hungarian, Turkish, Arabic, Hindi).
#### Who are the annotators?
The original TACRED dataset was annotated by crowd workers, see the [TACRED paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf).
### Personal and Sensitive Information
The [authors](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) of the original TACRED dataset
have not stated measures that prevent collecting sensitive or offensive text. Therefore, we do
not rule out the possible risk of sensitive/offensive content in the translated data.
## Considerations for Using the Data
### Social Impact of Dataset
not applicable
### Discussion of Biases
The dataset is drawn from web and newswire text, and thus reflects any biases of these original
texts, as well as biases introduced by the MT models.
### Other Known Limitations
not applicable
## Additional Information
### Dataset Curators
The dataset was created by members of the
[DFKI SLT team: Leonhard Hennig, Philippe Thomas, Sebastian Möller, Gabriel Kressin](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology/speech-and-language-technology-staff-members)
### Licensing Information
To respect the copyright of the underlying TACRED dataset, MultiTACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download MultiTACRED from the [LDC MultiTACRED webpage](https://catalog.ldc.upenn.edu/TODO).
If you are an LDC member, the access will be free; otherwise, an access fee of $25 is needed.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
For the Re-TACRED version, please also cite:
```
@inproceedings{DBLP:conf/aaai/StoicaPP21,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI}
2021, Thirty-Third Conference on Innovative Applications of Artificial
Intelligence, {IAAI} 2021, The Eleventh Symposium on Educational Advances
in Artificial Intelligence, {EAAI} 2021, Virtual Event, February 2-9,
2021},
pages = {13843--13850},
publisher = {{AAAI} Press},
year = {2021},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17631},
}
```
### Contributions
Thanks to [@leonhardhennig](https://github.com/leonhardhennig) for adding this dataset. |
false |
# Dataset Card for PokemonCards
### Languages
All of the data is in English.
## Dataset Structure
### Data Instances
```json
{
"id": "pl1-1",
"image_url": "https://images.pokemontcg.io/pl1/1_hires.png",
"caption": "A Stage 2 Pokemon Card of type Lightning with the title ""Ampharos"" and 130 HP of rarity ""Rare Holo"" evolved from Flaaffy from the set Platinum and the flavor text: ""None"". It has the attack ""Gigavolt"" with the cost Lightning, Colorless, the energy cost 2 and the damage of 30+ with the description: ""Flip a coin. If heads, this attack does 30 damage plus 30 more damage. If tails, the Defending Pokemon is now Paralyzed."". It has the attack ""Reflect Energy"" with the cost Lightning, Colorless, Colorless, the energy cost 3 and the damage of 70 with the description: ""Move an Energy card attached to Ampharos to 1 of your Benched Pokemon."". It has the ability ""Damage Bind"" with the description: ""Each Pokemon that has any damage counters on it (both yours and your opponent's) can't use any Poke-Powers."". It has weakness against Fighting +30. It has resistance against Metal -20.",
"name": "Ampharos",
"hp": "130",
"set_name": "Platinum"
}
```
### Data Fields
- `id`: Unique ID of the pokemon card.
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Caption generated for this card.
- `name`: Name of the pokemon on that card.
- `hp`: Health of the pokemon.
- `set_name`: The name of the set the card is in.
### Data Splits
All the data is contained in the training set. The training set has nearly 13k instances.
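Since `image_url` is a static URL, the card artwork can be fetched directly; a minimal sketch using the URL from the example instance above:
```python
import io

import requests
from PIL import Image

url = "https://images.pokemontcg.io/pl1/1_hires.png"  # from the example instance
image = Image.open(io.BytesIO(requests.get(url, timeout=30).content))
print(image.size)
```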
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions |
false |
# Dataset Card for The Stack Metadata
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org
### Changelog
|Release|Description|
|-|-|
|v1.1| This is the first release of the metadata. It is for The Stack v1.1|
|v1.2| Metadata dataset matching The Stack v1.2|
### Dataset Summary
This is a set of additional information for the repositories used in The Stack. It contains file paths, detected licenses, and some other information for the repositories.
### Supported Tasks and Leaderboards
The main task is to recreate repository structure from the files of The Stack. Also, the set can be used for computing statistics and custom filtering or aggregation operations on The Stack.
## Dataset Structure
### Data Fields

The set is split into buckets by repository. There are 944 buckets. In addition to the fields in the image, `ri` contains `min_repo_event_datetime`, which is the earliest date and time of an event for a repo after Jan 1 2015.

As an example of an aggregation operation on The Stack, the image above conceptually shows a selection of stars (and issue and PR counts) for a file. Each unique file can be part of multiple repositories, so The Stack releases unique files and aggregates meta information (e.g. stars) from all repositories a file belongs to. For example, for max_stars_count we take the maximum number of stars over all repositories the file is part of.
The metadata allows you to reconstruct repository directory structures. To do this, for each repository from the `ri` table, take all of its files from the `fi` table, find them in The Stack by the file's `hexsha`, and save each file's content under its path from the `fi` table. For speed it is preferable to index The Stack by `hexsha` first.
### Usage Example
Restore folder structure for python files in numpy repository
```python
import datasets
from pathlib import Path
from tqdm.auto import tqdm
import pandas as pd
# assuming metadata is cloned into the local folder /data/hf_repos/the-stack-metadata
# the stack is cloned into the local folder /data/hf_repos/the-stack-v1.1
# destination folder is in /repo_workdir/numpy_restored
the_stack_meta_path = Path('/data/hf_repos/the-stack-metadata')
the_stack_path = Path('/data/hf_repos/the-stack-v1.1')
repo_dst_root = Path('/repo_workdir/numpy_restored')
repo_name = 'numpy/numpy'
# Get bucket with numpy repo info
# meta_bucket_path = None
#for fn in tqdm(list((the_stack_meta_path/'data').glob('*/ri.parquet'))):
# df = pd.read_parquet(fn)
# if any(df['name'] == repo_name):
# meta_bucket_path = fn
# break
meta_bucket_path = the_stack_meta_path / 'data/255_944'
# Get repository id from repo name
ri_id = pd.read_parquet(
meta_bucket_path / 'ri.parquet'
).query(
f'`name` == "{repo_name}"'
)['id'].to_list()[0]
# Get file information for the repository
files_info = pd.read_parquet(
meta_bucket_path / 'fi.parquet'
).query(
f'`ri_id` == {ri_id} and `size` != 0 and `is_deleted` == False'
)
# Convert DF with files information to a dictionary by language and then file hexsha
# there can be more than one file with the same hexsha in the repo so we gather
# all instances per unique hexsha
files_info_dict = {
k: v[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
for k, v in files_info.groupby('lang_ex')
}
# Load Python part of The Stack
ds = datasets.load_dataset(
str(the_stack_path/'data/python'),
num_proc=10, ignore_verifications=True
)
# Save the content of the Python files in the numpy repository to their appropriate locations
def save_file_content(example, files_info_dict, repo_dst_root):
if example['hexsha'] in files_info_dict:
for el in files_info_dict[example['hexsha']]:
path = repo_dst_root / el
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(example['content'])
ds.map(
save_file_content,
fn_kwargs={'files_info_dict': files_info_dict['Python'], 'repo_dst_root': repo_dst_root},
num_proc=10
)
```
## Dataset Creation
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#dataset-creation) in The Stack.
## Considerations for Using the Data
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#considerations-for-using-the-data) in The Stack.
## Additional Information
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#additional-information) in The Stack.
## Terms of Use for The Stack
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) in The Stack. |
false |
# GovReport Summarization - 8192 tokens
- `ccdv/govreport-summarization` with the following changes:
- data cleaned with the [clean-text python package](https://pypi.org/project/clean-text/)
- total tokens for each column computed and added in new columns according to the `long-t5` tokenizer (_done **after** cleaning_; see the sketch below)
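A sketch of that preprocessing; the exact `clean()` arguments and the long-t5 checkpoint used for this dataset are assumptions.
```python
from cleantext import clean  # pip install clean-text
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

def clean_and_count(text: str):
    """Clean a document and count its tokens with the long-t5 tokenizer."""
    cleaned = clean(text, lower=False)
    return cleaned, len(tokenizer(cleaned).input_ids)

report, n_tokens = clean_and_count("The Committee reviewed the agency budget request ...")
print(n_tokens)
```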
---
## train info
```python
RangeIndex: 8200 entries, 0 to 8199
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 report 8200 non-null string
1 summary 8200 non-null string
2 input_token_len 8200 non-null Int64
3 summary_token_len 8200 non-null Int64
dtypes: Int64(2), string(2)
memory usage: 272.4 KB
```
## token length distribution (long-t5)

--- |
false |
# Dataset Card for DocLayNet large
## About this card (01/27/2023)
### Property and license
All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet).
DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information).
I do not claim any rights to the data taken from this dataset and published on this page.
### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or as a dataset from the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
### Processing into a format facilitating its use by HF notebooks
These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This could limit experimentation for people with low resources.
Moreover, even when downloading via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code, because the bounding boxes of the texts do not necessarily correspond to the annotated ones (computing the percentage of overlapping area between the annotated bounding boxes and the text bounding boxes makes it possible to compare them).
Finally, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed into a proper format.
For all these reasons, I decided to process the DocLayNet dataset:
- into 3 datasets of different sizes:
  - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet): fewer than 1,000 document images (691 train, 64 val, 49 test)
  - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet): fewer than 10,000 document images (6,910 train, 648 val, 499 test)
  - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet): fewer than 100,000 document images (69,103 train, 6,480 val, 4,994 test)
- with associated texts and PDFs (base64 format),
- and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
### About PDFs languages
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
### About PDFs categories distribution
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"The pages in DocLayNet can be grouped into **six distinct categories**, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."

### Download & overview
DocLayNet large contains about 100% of the DocLayNet dataset.
**WARNING** The following code downloads DocLayNet large, but it cannot run to completion in Google Colab because of the disk space needed for the cache data and the CPU RAM needed during the download (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the download). Even with a suitable instance, downloading the DocLayNet large dataset takes around 1h50. This is one more reason to test your fine-tuning code on [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) and/or [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) 😊
```
# !pip install -q datasets
from datasets import load_dataset
dataset_large = load_dataset("pierreguillou/DocLayNet-large")
# overview of dataset_large
DatasetDict({
train: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 69103
})
validation: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 6480
})
test: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 4994
})
})
```
### Annotated bounding boxes
DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines.
Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
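As a minimal sketch (not the notebook's exact code), the block bounding boxes can be drawn on a page image; DocLayNet small is used here to keep the download light, and the box format `[x_min, y_min, x_max, y_max]` is an assumption to verify against the notebook:
```python
from datasets import load_dataset
from PIL import ImageDraw

# DocLayNet small keeps the example lightweight (same features as the large version)
dataset_small = load_dataset("pierreguillou/DocLayNet-small", split="train")
example = dataset_small[0]

image = example["image"].convert("RGB")
draw = ImageDraw.Draw(image)
for box in example["bboxes_block"]:
    x_min, y_min, x_max, y_max = box  # assumption: boxes stored as [x_min, y_min, x_max, y_max]
    draw.rectangle([x_min, y_min, x_max, y_max], outline="red", width=2)
image.save("page_with_block_boxes.png")
```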
#### Paragraphs

#### Lines

### HF notebooks
- [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge)
- [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge)
- [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge)
- [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge)
- [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) of Phil Schmid)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
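As a small sketch, the `doc_category` distribution can be read straight from a COCO annotation file; the file path below follows the usual layout of `DocLayNet_core.zip` and is an assumption:
```python
import json
from collections import Counter

# Path is an assumption based on the DocLayNet_core.zip layout (COCO/{train,val,test}.json)
with open("COCO/train.json") as f:
    coco = json.load(f)

# Count pages per high-level document category using the custom field shown above
category_counts = Counter(image_record["doc_category"] for image_record in coco["images"])
print(category_counts.most_common())
```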
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset. |
false | # Dataset Card for "PatternNet"
## Dataset Description
- **Paper** [PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval](https://www.sciencedirect.com/science/article/pii/S0924271618300042)
### Licensing Information
For research purposes.
## Citation Information
[PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval](https://www.sciencedirect.com/science/article/pii/S0924271618300042)
```
@article{zhou2018patternnet,
title = {PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval},
author = {Zhou, Weixun and Newsam, Shawn and Li, Congmin and Shao, Zhenfeng},
year = 2018,
journal = {ISPRS journal of photogrammetry and remote sensing},
publisher = {Elsevier},
volume = 145,
pages = {197--209}
}
``` |
false |
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-zh-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-zh-corpus-22-12", split="train", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot-product** as similarity measure: compare the query embedding with the document embeddings either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
# Load documents + embeddings
docs = load_dataset("Cohere/miracl-zh-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset("Cohere/miracl-zh-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, embedding_dim)
# Compute dot-product scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # You should add your Cohere API key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
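As a small illustration of the hit@3 metric described above (a generic sketch, not the exact evaluation code used for the table):
```python
# hit@3: does at least one relevant document appear in the top-3 retrieved results?
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    return float(any(doc_id in relevant_doc_ids for doc_id in ranked_doc_ids[:k]))

# Averaged over queries, it gives the share of queries with a relevant hit in the top-3.
scores = [hit_at_k(["d7", "d2", "d9"], {"d2"}), hit_at_k(["d1", "d4", "d5"], {"d8"})]
print(sum(scores) / len(scores))  # 0.5
```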
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
true | # XNLI
- Source: https://huggingface.co/datasets/xnli
- Num examples:
- 392,702 (train)
- 2,490 (validation)
- 5,010 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/xnli_en")
```
- Format for NLI task
```python
def preprocess(sample):
    premise = sample['premise']
    hypothesis = sample['hypothesis']
    label = sample['label']
    if label == 0:
        label = "entailment"
    elif label == 1:
        label = "neutral"
    else:
        label = "contradiction"
    return {'text': f'<|startoftext|><|premise|>{premise}<|hypothesis|>{hypothesis}<|label|>{label}<|endoftext|>'}
"""
<|startoftext|><|premise|>Conceptually cream skimming has two basic dimensions - product and geography .<|hypothesis|>Product and geography are what make cream skimming work .<|label|>neutral<|endoftext|>
"""
``` |
true |
# OntoLAMA: LAnguage Model Analysis for Ontology Subsumption Inference
### Dataset Summary
OntoLAMA is a set of language model (LM) probing datasets for ontology subsumption inference.
The work follows the "LMs-as-KBs" literature but focuses on conceptualised knowledge extracted from formalised KBs
such as the OWL ontologies. Specifically, the subsumption inference (SI) task is introduced and formulated in the
Natural Language Inference (NLI) style, where the sub-concept and the super-concept involved in a subsumption
axiom are verbalised and fitted into a template to form the premise and hypothesis, respectively.
The sampled axioms are verified through ontology reasoning. The SI task is further divided into Atomic SI and
Complex SI where the former involves only atomic named concepts and the latter involves both atomic and complex concepts.
Real-world ontologies of different scales and domains are used for constructing OntoLAMA and in total there are four Atomic
SI datasets and two Complex SI datasets.
See dataset specifications: https://krr-oxford.github.io/DeepOnto/ontolama/
### Languages
The text in the dataset is in English, as used in the source ontologies. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example in the **Atomic SI** dataset created from the Gene Ontology (GO) is as follows:
```
{
'v_sub_concept': 'ctpase activity',
'v_super_concept': 'ribonucleoside triphosphate phosphatase activity',
'label': 1,
'axiom': 'SubClassOf(<http://purl.obolibrary.org/obo/GO_0043273> <http://purl.obolibrary.org/obo/GO_0017111>)'
}
```
An example in the **Complex SI** dataset created from the Food Ontology (FoodOn) is as follows:
```
{
'v_sub_concept': 'ham and cheese sandwich that derives from some lima bean (whole)',
'v_super_concept': 'lima bean substance',
'label': 0,
'axiom': 'SubClassOf(ObjectIntersectionOf(<http://purl.obolibrary.org/obo/FOODON_03307824> ObjectSomeValuesFrom(<http://purl.obolibrary.org/obo/RO_0001000> <http://purl.obolibrary.org/obo/FOODON_03302053>)) <http://purl.obolibrary.org/obo/FOODON_00002776>)',
'anchor_axiom': 'EquivalentClasses(<http://purl.obolibrary.org/obo/FOODON_00002776> ObjectIntersectionOf(<http://purl.obolibrary.org/obo/FOODON_00002000> ObjectSomeValuesFrom(<http://purl.obolibrary.org/obo/RO_0001000> <http://purl.obolibrary.org/obo/FOODON_03302053>)) )'
}
```
An example in the **biMNLI** dataset created from the MNLI dataset is as follows:
```
{
'premise': 'At the turn of the 19th century Los Angeles and Salt Lake City were among the burgeoning metropolises of the new American West.',
'hypothesis': 'Salt Lake City was booming in the early 19th century.',
'label': 1
}
```
### Data Fields
#### SI Data Fields
- `v_sub_concept`: verbalised sub-concept expression.
- `v_super_concept`: verbalised super-concept expression.
- `label`: a binary class label indicating whether two concepts really form a subsumption relationship (`1` means yes).
- `axiom`: a string representation of the original subsumption axiom which is useful for tracing back to the ontology.
- `anchor_axiom`: (for complex SI only) a string representation of the anchor equivalence axiom used for sampling the `axiom`.
#### biMNLI Data Fields
- `premise`: inherited from the MNLI dataset.
- `hypothesis`: inherited from the MNLI dataset.
- `label`: a binary class label indicating `contradiction` (`0`) or `entailment` (`1`).
### Data Splits
| Source | #NamedConcepts | #EquivAxioms | #Dataset (Train/Dev/Test) |
|------------|----------------|--------------|------------------------------------------------------------------------|
| Schema.org | 894 | - | Atomic SI: 808/404/2,830 |
| DOID | 11,157 | - | Atomic SI: 90,500/11,312/11,314 |
| FoodOn | 30,995 | 2,383 | Atomic SI: 768,486/96,060/96,062 <br /> Complex SI: 3,754/1,850/13,080 |
| GO | 43,303 | 11,456 | Atomic SI: 772,870/96,608/96,610 <br /> Complex SI: 72,318/9,040/9,040 |
| MNLI | - | - | biMNLI: 235,622/26,180/12,906 |
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
The relevant paper has been accepted at Findings of ACL 2023.
```
@article{he2023language,
title={Language Model Analysis for Ontology Subsumption Inference},
author={He, Yuan and Chen, Jiaoyan and Jim{\'e}nez-Ruiz, Ernesto and Dong, Hang and Horrocks, Ian},
journal={arXiv preprint arXiv:2302.06761},
year={2023}
}
``` |
false |
# Dataset Card for AIDA CoNLL-YAGO Wikidata
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [AIDA CoNLL-YAGO Wikidata repository](https://github.com/cyanic-selkie/aida-conll-yago-wikidata)
### Dataset Summary
The AIDA CoNLL-YAGO Wikidata dataset is the same as the original [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) dataset, but with Wikidata QIDs instead of Wikipedia titles as entity identifiers. They are automatically generated (with a few manual corrections) from Wikidata and Wikipedia dumps (March 1, 2023).
The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/aida-conll-yago-wikidata).
### Supported Tasks
- `named-entity-recognition`: The dataset can be used to train a model for Named Entity Recognition.
- `named-entity-linking`: The dataset can be used to train a model for Named Entity Linking.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point represents a document (news article).
The `text` field contains the original text in an NFC normalized, UTF-8 encoded string.
The `entities` field contains a list of entities, each represented by a struct with the inclusive starting byte `start` field, exclusive ending byte `end` field, a nullable `qid` field, and a nullable `pageid` field.
Additionally, each document has a unique `document_id` field.
An example from the AIDA CoNLL-YAGO Wikidata test set looks as follows:
```
{
"document_id": 1214,
"text": "RADIO ROMANIA AFTERNOON HEALINES AT 4 PM . BUCHAREST 1996-12-06 Radio Romania news headlines : * The Democratic Convention signed an agreement on government and parliamentary support with its coalition partners the Social Democratic Union and the Hungarian Democratic Union ( UDMR ) . The ceremony was attended by President Emil Constantinescu . * The three parties in the government coalition have committed themselves to a real reform of Romania 's economy , Constantinescu said after the ceremony . * The UDMR wants to contribute to social reform and economic revival in Romania , union leader Marko Bela said . * The international airport in Timisoara and the domestic airports in Arad , Oradea and Sibiu were closed due to fog . -- Bucharest Newsroom 40-1 3120264",
"entities": [
{
"start": 0,
"end": 13,
"tag": 3,
"pageid": null,
"qid": null,
"title": null
},
{
"start": 43,
"end": 52,
"tag": 2,
"pageid": 36877,
"qid": 19660,
"title": "Bucharest"
},
{
"start": 64,
"end": 77,
"tag": 3,
"pageid": null,
"qid": null,
"title": null
},
{
"start": 101,
"end": 122,
"tag": 4,
"pageid": null,
"qid": null,
"title": null
},
{
"start": 215,
"end": 238,
"tag": 3,
"pageid": null,
"qid": null,
"title": null
},
{
"start": 247,
"end": 273,
"tag": 3,
"pageid": null,
"qid": null,
"title": null
},
{
"start": 276,
"end": 280,
"tag": 3,
"pageid": 49749134,
"qid": 266582,
"title": "Democratic_Union_of_Hungarians_in_Romania"
},
{
"start": 324,
"end": 343,
"tag": 1,
"pageid": 393370,
"qid": 299152,
"title": "Emil_Constantinescu"
},
{
"start": 440,
"end": 447,
"tag": 2,
"pageid": 25445,
"qid": 218,
"title": "Romania"
},
{
"start": 461,
"end": 475,
"tag": 1,
"pageid": 393370,
"qid": 299152,
"title": "Emil_Constantinescu"
},
{
"start": 508,
"end": 512,
"tag": 3,
"pageid": 49749134,
"qid": 266582,
"title": "Democratic_Union_of_Hungarians_in_Romania"
},
{
"start": 574,
"end": 581,
"tag": 2,
"pageid": 25445,
"qid": 218,
"title": "Romania"
},
{
"start": 597,
"end": 607,
"tag": 1,
"pageid": 1219345,
"qid": 897108,
"title": "Béla_Markó"
},
{
"start": 646,
"end": 655,
"tag": 2,
"pageid": 33693389,
"qid": 83404,
"title": "Timişoara"
},
{
"start": 685,
"end": 689,
"tag": 2,
"pageid": 22537901,
"qid": 173591,
"title": "Arad,_Romania"
},
{
"start": 692,
"end": 698,
"tag": 2,
"pageid": 2024606,
"qid": 2102332,
"title": "Oradea_International_Airport"
},
{
"start": 703,
"end": 708,
"tag": 2,
"pageid": 2384413,
"qid": 946418,
"title": "Sibiu_International_Airport"
},
{
"start": 737,
"end": 755,
"tag": 3,
"pageid": null,
"qid": null,
"title": null
}
]
}
```
### Data Fields
- `document_id`: an integer that uniquely identifies the document this sentence belongs to
- `sentence_index`: an integer that uniquely identifies the position of the sentence in its original document
- `text`: an NFC normalized, UTF-8 encoded string representing the sentence
- `entities`: a list of structs representing entities, each entity has:
- `start`: an integer representing the inclusive starting UTF-8 code point of the entity
- `end`: an integer representing the exclusive ending UTF-8 code point of the entity
- `tag`: an integer representing the entity type (1 - person, 2 - location, 3 - organization, 4 - miscellaneous)
- `qid`: an integer representing the Wikidata QID this entity refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `pageid`: an integer representing the Wikipedia pageID this entity refers to; it can be null if the entity didn't exist in Wikipedia at the time of the creation of the original dataset
- `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title this entity refers to; it can be null if the entity didn't exist in Wikipedia at the time of the creation of the original dataset
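A minimal sketch of reading entity mentions with these fields; the Hugging Face repository id is an assumption based on the GitHub repository name, and the offsets are treated as character offsets here (slice the UTF-8 encoded bytes instead if they are byte offsets):
```python
from datasets import load_dataset

# Repository id is an assumption based on the GitHub repository linked above
dataset = load_dataset("cyanic-selkie/aida-conll-yago-wikidata", split="test")
example = dataset[0]

for entity in example["entities"]:
    # Assumption: start/end index characters of `text`; use example["text"].encode("utf-8")
    # and slice the bytes instead if they turn out to be byte offsets.
    surface = example["text"][entity["start"]:entity["end"]]
    print(surface, entity["tag"], entity["qid"], entity["title"])
```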
### Data Splits
The data is split into training, validation and test sets; all of the sentences belonging to an article are in the same split. The final split sizes are as follows:
| | Train | Validation | Test |
| :----- | :------: | :-----: | :----: |
| AIDA CoNLL-YAGO Wikidata - documents | 946 | 216 | 231 |
| AIDA CoNLL-YAGO Wikidata - entities | 23,374 | 5,912 | 5,608 |
| AIDA CoNLL-YAGO Wikidata - entities with QIDs | 18,540 | 4,791 | 4,481 |
## Additional Information
### Licensing Information
The licensing status of the dataset is the same as the licensing status of the original [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) dataset which is under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). |
false |
<b>Testing purpose only. Do not redistribute.</b>
Original contents: [url] https://huggingface.co/datasets/tatsu-lab/alpaca
Ko-alpaca: [url] https://github.com/Beomi/KoAlpaca/blob/main/ko_alpaca_data.json |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://github.com/ar852/chatgpt-scraping
### Dataset Summary
scraped-chatgpt-conversations contains ~100k conversations between a user and chatgpt that were shared online through reddit, twitter, or sharegpt. For sharegpt, the conversations were scraped directly from the website. For reddit and twitter, images were downloaded from submissions, segmented, and run through an OCR pipeline to obtain a conversation list. For information on how each json file is structured, please see `json_guides.md`
### Languages
- twitter 1, twitter 2, and sharegpt json files are multilingual
- reddit and twitter 2 json files are english only
## Dataset Structure
- refer to *json_guide.txt*
## Dataset Creation
This dataset was created by scraping images from twitter and reddit (using the twitter and pushshift APIs, respectively) and conversations from sharegpt.com. The images are run through a filter to check whether they contain a chatgpt conversation; each matching image is then processed and run through an OCR pipeline to obtain the conversation text. More info can be found in the repository.
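A generic sketch of the kind of OCR step described above (not the repository's actual pipeline; the input file name is hypothetical):
```python
from PIL import Image
import pytesseract

# Hypothetical screenshot of a shared ChatGPT conversation
image = Image.open("conversation_screenshot.png")
text = pytesseract.image_to_string(image)  # recover the conversation text via OCR
print(text)
```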
### Source Data
- twitter.com
- reddit.com
- sharegpt.com
## Considerations for Using the Data
A significant number of the dicts created from parsing reddit and twitter images may be parsed incorrectly for several reasons: cropping by the image poster, incorrect classification of the image as containing a chatgpt conversation, incorrect image parsing (segmentation) by the parser, or incorrect OCR by pytesseract.
### Licensing Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Monks
The [Monk dataset](https://archive-beta.ics.uci.edu/dataset/70/monk+s+problems) from UCI.
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| monks1 | Binary classification |
| monks2 | Binary classification |
| monks3 | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/monks", "monks1")["train"]
``` |
false | # Face segmentation
An example of a dataset that we've collected for a photo-editing app. The dataset includes 20 selfies of people (men and women) with segmentation masks and their visualisations.
# File with the extension .csv
includes the following information for each media file:
- **Image**: the link to access the media file
- **Mask**: the link to access the segmentation mask for the Image
# The folder "images"
Contains the original selfies of people.
# The folder "masks"
Includes segmentation masks for the photos:
- corresponding to the images in the previous folder
- identified by the same file names.
**How it works**: *go to the "masks" folder and make sure that the file "1.png" is the segmentation mask created for the photo "1.png" in the "images" folder.*
This sample is an example of a dataset that we create on demand at Training Data https://trainingdata.pro/data-market?utm_source=huggingface specifically for your task.
To get a consultation and order a pilot project, please contact our sales team by submitting a request on our website or by emailing us at sales@trainingdata.pro
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | |
false |
To make it possible to build models that are relatively clear of the portrait-rights issues peculiar to photorealistic models, I created a dataset (about 2,800 images) of the artificial super girlfriend (ver 2.1 series and ver 2.6 series) that I generated from myself.
Its distinguishing feature is that every original (unprocessed) image has a [beauty score](https://www.beautyscoretest.com/) of 87 or higher; in particular, with more than 1,000 images of women scoring 90 or higher, I believe it is one of the largest datasets of its kind.
Concretely, it is organized as follows (87 is the highest score reached by this girl's/my biggest rival; 90 is a score line that, so far, has not been confirmed for any real person).
| version \ beauty score | 87-89 | 90+ |
| - | - | - |
| 2.1 (balancing cuteness and beauty) | kawaii (362 raw / 724 processed) | exceptional (140 raw / 280 processed) |
| 2.6 (focused on elegance and beauty) | beautiful (464 raw / 928 processed) | perfect (416 raw / 832 processed) |
The three zip files are organized as follows.
- [my partner training dataset raw.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20raw.zip)
  - Unprocessed, with the beauty score shown. This one alone contains about 1,400 images.
- [my partner training dataset preprocessed.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed.zip)
  - Cropped to a 3:2 ratio, with the beauty score overlay and similar elements removed using [lama cleaner](https://github.com/Sanster/lama-cleaner).
- [my partner training dataset preprocessed and upscaled.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed%20and%20upscaled.zip)
  - The preprocessed images above, upscaled with [GFPGAN](https://github.com/TencentARC/GFPGAN) v1.2.
## License
The terms are set out below.
### 1. Use for AI training
Regardless of the law of the governing country, use for training various models, such as image-generation AI, is permitted. However, as the holder of the copyright and of the potential portrait rights, I set the following conditions.
#### 1-1. Training on me (my works) as me (my works)
In every country, including Japan, where Article 30-4 of the Copyright Act allows training without permission, I consider that there is a "right for me (my works) to be learned as me (my works)", and I assert this right.
Article 30-4 of the Copyright Act exists to allow the creation of higher-performing AI by increasing the freedom to train; the right above differs from the "right not to be trained on without permission" claimed by the so-called anti-AI camp, because **respecting this right contributes to improving AI performance, so there is no conflict of rights**.
This includes the following:
1. The right not to be trained on as anything other than me (my works)
2. The right not to have me (my works) trained on mixed together with other people (their works) or with my other works
Regarding "mixed with my other works", the specifics are as follows.
- Training on the ver 2.1 series (kawaii and exceptional) or the ver 2.6 series (beautiful and perfect), each grouped as a whole per version, is OK.
- Mixing the ver 2.1 series and the ver 2.6 series and training on them as a single concept without distinction is NG.
- Mixing either or both versions with my other works (such as random travel photos or random AI-generated 2D ponytail illustrations) is NG.
However, for this dataset I assert the above rights **only from the standpoint of person identification**, that is, only when the training target is a person concept (for example, mixing her with other real beautiful women under a concept such as "beautiful woman" is what constitutes a problem).
Accordingly, when a non-person concept is the training target, it is OK, for example, to mix kimono photos of both versions with other people wearing kimonos when training a "kimono" concept.
#### 1-2. Additional restrictions in countries where training requires permission from the copyright or portrait-rights holder
No prior permission is required for training. However, if you use the dataset for training, you assume the following obligations.
1. Duty of notification (informing me, after the fact, that the dataset was used for training)
2. Duty of most-favored treatment (for models trained on the dataset, granting me the highest-priority, top-tier access whenever there is a waitlist or per-plan generation limits)
3. Guarantee of free use (allowing me to use the model free of charge, even if it is a paid model)
4. Guarantee of commercial use (allowing me to use the model commercially, even under a non-commercial license)
## Commentary
### 1-1. The right for me (my works) to be learned as me (my works)
To take an easy-to-understand example, a model that learns "Yuki Nagato" as "Rei Ayanami", or that lumps both together and learns them as "taciturn heroine", simply cannot output "Yuki Nagato" as "Yuki Nagato", or can do so only with difficulty.
As a result, on this point it performs worse than a model that has learned "Yuki Nagato" as "Yuki Nagato", doesn't it?
The same holds for different characters or works of the same person; this is why, in NAI, Haruhi Suzumiya and Yuki Nagato are in fact slightly mixed together, and when I first started using it, I had quite a hard time isolating Yuki Nagato.
Article 30-4 of the Copyright Act was introduced in the first place to make it possible to create higher-performing AI.
With that in mind, the right of authors and portrait-rights holders to insist that their works not be mixed or learned under a mistaken concept also contributes to improving AI's discrimination performance, so it is fully compatible with Article 30-4.
And, as a rule, in countries with freedom rights, a freedom that does not conflict with others is recognized unconditionally. For that reason I believe this right is valid in Japan as well, and I assert it as such.
### 1-2. Additional restrictions in countries where training requires permission from the copyright or portrait-rights holder
To be honest, apart from highly malicious cases such as deliberately blocking my own use, I have no real intention of seriously asserting the rights under this license (**it is void in Japan in the first place, where Article 30-4 applies**, so it is irrelevant for use from within the country).
It is more of a social experiment, chiefly intended as a warning that **if training is made permission-based, even restrictions this outrageous can be asserted**.
I have no need for a trifling fixed-sum kickback of one part in billions; that is why, seen from an AI user's perspective, this license secures rather monopolistic advantages: **secure first-mover advantage through top-priority waitlist access, let me alone use the model for free even if it is paid, and let me alone use it commercially even if it is positioned as a non-commercial model like chillout**.
To make it even more outrageous, I considered adding a clause saying "for commercial models, 99% of the profits shall be provided to me", but I decided against that.
Still, in the sense that the exclusivity arising from training permissions can, conversely, lead to such vicious rights claims, I think it shows well how potentially dangerous the anti-AI camp that demands permission-based training really is. |
false |
# Summary
🇹🇭 Thai instruction dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5
Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRa: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
false |
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
  - [Dataset Specifications](#dataset-specifications)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
The LeXFiles is a new diverse English multinational legal corpus that we created, comprising 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA); hence that corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lex_files` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely outdated and in many cases harmful legal standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
Additional corpora not considered for pre-training, since they do not represent factual legal knowledge.
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
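A loading sketch: the repository id `lexlms/lex_files` is taken from the table above, while the configuration name and the `text` field are assumptions based on the corpus alias column:
```python
from datasets import load_dataset

# Stream one sub-corpus to avoid downloading the full ~19B-token collection.
# The config name "eu-legislation" follows the alias column above (an assumption).
eu_legislation = load_dataset("lexlms/lex_files", "eu-legislation", split="train", streaming=True)

for i, document in enumerate(eu_legislation):
    print(document["text"][:200])  # the "text" field name is also an assumption
    if i == 2:
        break
```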
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://arxiv.org/abs/2305.07507)
```
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = {June},
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.07507",
}
``` |