id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
persiannlp/parsinlu_reading_comprehension | 2022-10-25T09:54:26.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|wikipedia|google",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:20... | persiannlp | A Persian reading comprehenion task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers. | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | 0 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for ParsiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Card for ParsiNLU (Reading Comprehension)](#dataset-card-for-persi_nlu_reading_comprehension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, containing the string and the index of the answer.
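The `answers` field uses the SQuAD-style offset convention: `answer_start` is a character offset into `passage`, so each answer string can be recovered by slicing. A minimal sketch with a made-up English instance (not an actual record from this dataset):

```python
# Illustrative instance following the card's schema; the passage and
# offsets are invented for this example.
example = {
    "question": "What year was the library founded?",
    "passage": "The library, founded in 1971, serves the whole region.",
    "answers": [{"answer_start": 24, "answer_text": "1971"}],
}

def extract_answer(instance: dict, i: int = 0) -> str:
    """Recover the i-th answer string by slicing the passage."""
    ans = instance["answers"][i]
    start = ans["answer_start"]
    return instance["passage"][start : start + len(ans["answer_text"])]

assert extract_answer(example) == "1971"
```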
### Data Splits
The train/test split contains 600/575 samples.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| 5,090 | [
[
-0.042449951171875,
-0.0596923828125,
0.018280029296875,
0.00893402099609375,
-0.0167236328125,
-0.007450103759765625,
-0.031280517578125,
-0.0177154541015625,
0.0283355712890625,
0.03009033203125,
-0.047149658203125,
-0.053955078125,
-0.037933349609375,
0.0... |
pierreant-p/jcvd-or-linkedin | 2021-07-14T18:26:09.000Z | [
"region:us"
] | pierreant-p | null | null | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
projecte-aina/vilasum | 2023-09-13T12:49:32.000Z | [
"task_categories:summarization",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-nc-4.0",
"arxiv:2202.06871",
"region:us"
] | projecte-aina | VilaSum is a summarization dataset for evaluation. It is extracted from a newswire corpus crawled from Vilaweb. The corpus consists of 13,843 instances that are composed by the headline and the body. | @misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- summarization
task_ids: []
pretty_name: VilaSum
---
# Dataset Card for VilaSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:**[Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
VilaSum is a summarization dataset for evaluation. It is extracted from a newswire corpus crawled from the Catalan news portal [VilaWeb](https://www.vilaweb.cat/). The corpus consists of 13,843 instances that are composed by the headline and the body.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high ROUGE score. The [bart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a ROUGE score of 35.04.
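ROUGE compares n-gram overlap between a generated summary and the reference. A simplified, pure-Python sketch of unigram ROUGE-1 F1 (real evaluations typically use a library with stemming and multiple ROUGE variants; this toy version only counts whitespace tokens):

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified unigram ROUGE-1 F1 on whitespace tokens (no stemming)."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("el govern aprova el decret", "el govern rebutja el decret")
# 4 of 5 tokens overlap in both directions -> F1 = 0.8
```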
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'summary': 'Un vídeo corrobora les agressions a dues animalistes en un correbou del Mas de Barberans',
'text': "Noves imatges, a les quals ha tingut accés l'ACN, certifiquen les agressions i la destrucció del material d'enregistrament que han denunciat dues activistes d'AnimaNaturalis en la celebració d'un acte de bous a la plaça al Mas de Barberans (Montsià). En el vídeo es veu com unes quantes persones s'abalancen sobre les noies que reben estirades i cops mentre els intenten prendre les càmeres. Membres de la comissió taurina intervenen per aturar els presumptes agressors però es pot escoltar com part del públic victoreja la situació. Els Mossos d'Esquadra presentaran aquest dilluns al migdia l'atestat dels fets al Jutjat d'Amposta. Dissabte ja es van detenir quatre persones que van quedar en llibertat a l'espera de ser cridats pel jutge. Es tracta de tres homes i una dona de Sant Carles de la Ràpita, tots ells membres de la mateixa família."
}
```
### Data Fields
- `summary` (str): Summary of the piece of news
- `text` (str): The text of the piece of news
### Data Splits
Due to the reduced size of the dataset, we use it only for evaluation as a test set.
- test: 13,843 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan.
### Source Data
#### Initial Data Collection and Normalization
We obtained each headline and its corresponding body of each news piece on [VilaWeb](https://www.vilaweb.cat/) and applied the following cleaning pipeline: deduplicating the documents, removing the documents with empty attributes, and deleting some boilerplate sentences.
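The cleaning pipeline above has three steps: dropping documents with empty attributes, deleting boilerplate sentences, and deduplicating. An illustrative sketch (the actual boilerplate list and field handling are assumptions, not the published pipeline):

```python
BOILERPLATE = ["Subscriu-te al butlletí."]  # hypothetical boilerplate sentence

def clean(docs):
    """Illustrative cleaning pass: drop docs with empty attributes,
    strip known boilerplate sentences, then deduplicate on the body."""
    seen, out = set(), []
    for doc in docs:
        if not doc.get("summary") or not doc.get("text"):
            continue  # remove documents with empty attributes
        text = doc["text"]
        for bp in BOILERPLATE:
            text = text.replace(bp, "")
        text = " ".join(text.split())  # normalise whitespace left behind
        if text in seen:
            continue  # deduplicate documents
        seen.add(text)
        out.append({"summary": doc["summary"], "text": text})
    return out

docs = [
    {"summary": "Titular A", "text": "Cos de la notícia. Subscriu-te al butlletí."},
    {"summary": "Titular B", "text": "Cos de la notícia."},  # duplicate once cleaned
    {"summary": "", "text": "Sense titular."},               # empty attribute
]
cleaned = clean(docs)
# only the first document survives: len(cleaned) == 1
```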
#### Who are the source language producers?
The news portal [VilaWeb](https://www.vilaweb.cat/).
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A]
| 5,877 | [
[
-0.030792236328125,
-0.03521728515625,
0.0029354095458984375,
0.041229248046875,
-0.037506103515625,
0.017425537109375,
-0.005397796630859375,
-0.0162506103515625,
0.064208984375,
0.0438232421875,
-0.0243072509765625,
-0.07183837890625,
-0.058349609375,
0.02... |
rajeshradhakrishnan/malayalam_wiki | 2022-07-04T12:21:06.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"languag... | rajeshradhakrishnan | Common Crawl - Malayalam. | @article{qburst,
title={Common Crawl - Malayalam},
author={n.d},
year={2020},
journal={n.d},
} | 1 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language:
- ml
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: rajeshradhakrishnan/malayalam_wiki
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for Malayalam Wiki (Common Crawl Malayalam)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/qburst/common-crawl-malayalam**
- **Paper: None**
- **Leaderboard:**
- **Point of Contact: [@RRaajjesshh](https://twitter.com/RRaajjesshh)**
### Dataset Summary
Created from the files extracted with [qburst/common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam), a set of tools for extracting Malayalam text from the Common Crawl dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Malayalam (`ml`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[qburst](https://github.com/qburst) ran extraction scripts on several months of the Common Crawl archives and made the results publicly available. This dataset is the cleaned-up corpus from [QBurst common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[qburst/common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam) contains the tools used to extract Malayalam text from the Common Crawl datasets.
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{qburst,
  title={Common Crawl - Malayalam},
  journal={arXiv preprint arXiv:2005.00085},
  year={2020}
}
```
### Contributions
Thanks to [rajeshradhakrishnanmvk](https://github.com/rajeshradhakrishnanmvk) for adding this dataset.
| 3,615 | [
[
-0.0247344970703125,
-0.037017822265625,
0.0133056640625,
0.0276641845703125,
-0.041748046875,
0.00019800662994384766,
-0.01294708251953125,
-0.006591796875,
0.06390380859375,
0.0248260498046875,
-0.04754638671875,
-0.0654296875,
-0.041900634765625,
0.035949... |
seamew/Weibo | 2021-10-09T13:58:21.000Z | [
"region:us"
] | seamew | null | null | 1 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
shivkumarganesh/CoLA | 2021-10-30T19:53:06.000Z | [
"region:us"
] | shivkumarganesh | null | null | 1 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
valurank/PoliticalBias | 2022-10-21T13:38:13.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | 3 | 5 | 2022-03-02T23:29:22 | ---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for PoliticalBias
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
The dataset contains roughly 8,200 articles written by the website's editors; each article covers one topic and includes 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center).
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of four columns, namely Left, Right, Center, and Main URL.
## Source Data
The dataset is scraped from http://allsides.com/
| 809 | [
[
-0.040802001953125,
-0.018096923828125,
0.0213165283203125,
0.01898193359375,
-0.053741455078125,
0.0125274658203125,
-0.00736236572265625,
-0.001834869384765625,
0.0421142578125,
0.05450439453125,
-0.045623779296875,
-0.0716552734375,
-0.041107177734375,
0.... |
versae/modernisa | 2021-11-30T23:27:52.000Z | [
"region:us"
] | versae | Modernisa | @InProceedings{--,
author = {---},
title = {---},
booktitle = {---},
year = 2021,
address = "---"
} | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wmt/wmt13 | 2021-02-19T10:47:07.000Z | [
"region:us"
] | wmt | null | null | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yuvalkirstain/summ_screen_fd_t5 | 2022-01-09T06:22:00.000Z | [
"region:us"
] | yuvalkirstain | null | null | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zapsdcn/sciie | 2021-12-08T20:19:27.000Z | [
"region:us"
] | zapsdcn | null | null | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Alvenir/alvenir_asr_da_eval | 2022-06-16T09:13:33.000Z | [
"license:cc-by-4.0",
"region:us"
] | Alvenir | Dataset of a little bit more than 5hours primarily intended as an evaluation dataset for Danish. | null | 5 | 5 | 2022-03-04T13:14:47 | ---
license: cc-by-4.0
---
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#prompts/sentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours, spread across 50 speakers aged 20-60. The data was collected by a third-party vendor through their software and personnel. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of the path to the audio file (`path`) and its transcription (`sentence`). Additional fields, such as age and gender, will eventually be added.
```
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
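The access-order advice above can be made concrete with a toy model of lazy decoding. This is only an illustration of why row-first access is cheaper, not the actual `datasets` library internals:

```python
class LazyAudioColumn:
    """Toy stand-in for an audio column whose items are decoded on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decoded = 0  # counts simulated decode+resample operations

    def __getitem__(self, i):
        self.decoded += 1  # stands in for the expensive decode + resample
        return {"path": self.paths[i], "sampling_rate": 16000}

col = LazyAudioColumn([f"clip_{i}.wav" for i in range(1000)])

first = col[0]  # row-first access (like dataset[0]["audio"]): one decode
assert col.decoded == 1

# column-first access (like dataset["audio"][0]) materialises every item
everything = [col[i] for i in range(len(col.paths))]
assert col.decoded == 1001
```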
### Data Splits
Since the idea behind the dataset is for it to be used as a test/evaluation ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of OpenSubtitles (OSS) (reference needed) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal number of unique sentences from each topic. All sentences were manually inspected.
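The per-topic sampling step described above can be sketched as follows. The topic model itself (30 topics) is out of scope here, and the data structure and helper names are invented for illustration:

```python
import random

def sample_per_topic(sentences_by_topic, n_per_topic, seed=0):
    """Draw an equal number of unique sentences from each topic,
    mirroring the WIKI prompt selection described above."""
    rng = random.Random(seed)
    prompts = []
    for topic in sorted(sentences_by_topic):
        unique = sorted(set(sentences_by_topic[topic]))
        prompts.extend(rng.sample(unique, n_per_topic))
    return prompts

# hypothetical output of a 30-topic model: topic id -> candidate sentences
by_topic = {t: [f"topic{t}_sentence{i}" for i in range(10)] for t in range(30)}
prompts = sample_per_topic(by_topic, n_per_topic=2)
assert len(prompts) == 60 and len(set(prompts)) == 60
```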
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
| 3,467 | [
[
-0.048248291015625,
-0.047576904296875,
-0.00007265806198120117,
0.00891876220703125,
-0.0207366943359375,
-0.01641845703125,
-0.0335693359375,
-0.01036834716796875,
0.0118408203125,
0.03741455078125,
-0.05316162109375,
-0.052093505859375,
-0.032440185546875,
... |
jquiros/suicide | 2022-03-08T11:23:20.000Z | [
"region:us"
] | jquiros | null | null | 4 | 5 | 2022-03-08T11:20:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lewtun/top_quark_tagging | 2022-04-03T14:26:05.000Z | [
"license:cc-by-4.0",
"region:us"
] | lewtun | Top Quark Tagging is a dataset of Monte Carlo simulated hadronic top and QCD dijet events for the evaluation of top quark tagging architectures. The dataset consists of 1.2M training events, 400k validation events and 400k test events. | @dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
} | 0 | 5 | 2022-03-13T16:55:31 | ---
license: cc-by-4.0
---
# Top Quark Tagging Reference Dataset
A set of MC simulated training/testing events for the evaluation of top quark tagging architectures.
In total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during the training and “test” for final testing and reporting results.
## Description
* 14 TeV, hadronic tops for signal, QCD dijets background, Delphes ATLAS detector card with Pythia8
* No MPI/pile-up included
* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV
* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8
* Jets are required to have |eta| < 2
* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200
* Constituents are sorted by pT, with the highest pT one first
* The truth top four-momentum is stored as truth_px etc.
* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new
* The variable "ttv" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files. | 1,295 | [
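The constituent layout described above (highest pT first, truncated and zero-padded to 200) can be sketched as below. The stored files use flat per-constituent columns, so treat this tuple-based version as an assumption for illustration:

```python
def prepare_constituents(four_momenta, n_max=200):
    """Sort (E, px, py, pz) constituents by pT descending, truncate to
    n_max, and zero-pad shorter jets — the layout described above."""
    def pt(p):
        _, px, py, _ = p
        return (px ** 2 + py ** 2) ** 0.5

    ordered = sorted(four_momenta, key=pt, reverse=True)[:n_max]
    padding = [(0.0, 0.0, 0.0, 0.0)] * (n_max - len(ordered))
    return ordered + padding

# toy jet with three constituents (invented numbers)
jet = [(50.0, 30.0, 40.0, 0.0), (120.0, 0.0, 100.0, 60.0), (10.0, 6.0, 8.0, 0.0)]
padded = prepare_constituents(jet, n_max=5)
# the pT = 100 constituent leads; two all-zero rows pad the jet to length 5
```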
[
-0.04925537109375,
0.0013942718505859375,
0.00791168212890625,
-0.0147247314453125,
-0.038543701171875,
0.022918701171875,
0.0208740234375,
0.0265045166015625,
0.00611114501953125,
0.0217437744140625,
-0.045562744140625,
-0.048736572265625,
-0.046478271484375,
... |
TomTBT/pmc_open_access_xml | 2023-09-17T08:43:36.000Z | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"l... | TomTBT | The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets
This version takes XML version as source, benefiting from the structured text
to split the articles in parts, naming the introduction, methods, results,
discussion and conclusion, and refers with keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias). | null | 0 | 5 | 2022-03-20T09:47:21 | ---
pretty_name: XML-parsed PMC
task_categories:
- text-classification
- summarization
- other
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
size_categories:
- 1M<n<10M
source_datasets:
- original
license:
- cc0-1.0
- cc-by-4.0
- cc-by-sa-4.0
- cc-by-nc-4.0
- cc-by-nd-4.0
- cc-by-nc-nd-4.0
- cc-by-nc-sa-4.0
- unknown
- other
multilinguality:
- monolingual
task_ids: []
tags:
- research papers
- biology
- medicine
---
# Dataset Card for PMC Open Access XML
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets.
This version takes the XML files as its source, benefiting from the structured text
to split the articles into parts, namely the introduction, methods, results,
discussion and conclusion, and to link keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of those
references (e.g. for a PMID, by joining the referred article's abstract from the PubMed dataset), but it aims more broadly to provide
a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionary of "tag":"text" pairs for each reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The license of the article
- "retracted": Whether the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
In the text, references take the form ##KEYWORD##IDX_REF##OLD_TEXT##, where the keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) refer respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
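These markers can be located with a regular expression. The sketch below is illustrative only; it assumes IDX_REF is numeric and OLD_TEXT never contains `##`:

```python
import re

# Matches ##KEYWORD##IDX_REF##OLD_TEXT## markers, using the keyword list above.
# Assumes IDX_REF is numeric and OLD_TEXT contains no "##".
REF_PATTERN = re.compile(
    r"##(REF|UREF|FIG|TAB|FORMU|BOX|CODE|QUOTE|CHEM|SUPPL|FOOTN|GRAPH|MEDIA)"
    r"##(\d+)##(.*?)##"
)

def list_references(text):
    """Return (keyword, index, old_text) tuples for every marker in `text`."""
    return REF_PATTERN.findall(text)

def restore_old_text(text):
    """Replace each marker by the OLD_TEXT it originally displayed."""
    return REF_PATTERN.sub(lambda m: m.group(3), text)
```

Depending on the task, the index can instead be used to join the marker against the corresponding list field (e.g. "figure" for FIG markers).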
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation
for the different kinds of possible usage.
Then, to split each article into introduction, methods, results, discussion and conclusion, specific keywords in the titles were used. Because this XML has no rules
for tagging those sections, finding the keywords seemed like the most reliable approach. A drawback is that many sections do not have those
keywords in their titles but could be assimilated to them. However, the huge diversity of titles makes it harder to label such sections. This could be the
work of further versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
Additional content for individual articles (graphics, media) can be obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The article XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as
well annotated as the others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
To illustrate, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a
future version.
### Other Known Limitations
[Needs More Information]
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, replacing them either by the "references_text" or by the reference content itself.
- Unescape HTML special characters: `import html; html.unescape(my_text)`
- Remove superfluous line breaks in the text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), or replace them with special tokens?
- Join the items of the contents' lists.
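A minimal sketch of these recommendations, to be adapted as needed; the tag-stripping and line-break handling below are naive assumptions, not a canonical pipeline:

```python
import html
import re

def clean_text(text):
    """Apply the preprocessing recommendations above to a single text field."""
    text = html.unescape(text)                      # unescape HTML entities
    text = re.sub(r"</?[a-zA-Z][^>]*>", " ", text)  # drop tags such as <italic>
    text = re.sub(r"\s*\n\s*", " ", text)           # remove superfluous line breaks
    return re.sub(r" {2,}", " ", text).strip()      # collapse repeated spaces

def join_content(items):
    """Join one content list (e.g. the `introduction` field), skipping empties."""
    return "\n".join(clean_text(item) for item in items if item and item.strip())
```

Note that unescaping before tag stripping means escaped angle brackets in the text would also be treated as tags; reverse the two steps if that matters for your use case.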
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[Needs More Information] | 8,880 | [
IIC/bioasq22_es | 2022-10-23T05:18:18.000Z | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:Helsinki-NLP/opus-mt-en-es",
"language:es",
"region:us"
] | IIC | null | null | 2 | 5 | 2022-03-21T20:55:50 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- es
multilinguality:
- monolingual
pretty_name: BIOASQ
size_categories:
- 100K<n<1M
source_datasets:
- Helsinki-NLP/opus-mt-en-es
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# BIOASQ 2022 Spanish
This is an automatically translated version of the BioASQ dataset, a dataset used for question answering in the biomedical domain.
The questions, answers and contexts were translated using the [MarianMT English-Spanish model](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). Because the translation process may return answers that are not 100% present in the context, we developed an algorithm based on sentence tokenization and on the intersection of the words present in the answer and in the portion of the context being evaluated, which then extracts the paragraph from the context that matches the answer.
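The matching step described above can be sketched roughly as follows; the naive sentence splitting and word-overlap score are illustrative assumptions, not the exact algorithm used:

```python
import re

def best_matching_span(context, answer):
    """Return the context sentence whose words overlap most with the answer."""
    answer_words = set(answer.lower().split())
    best_sentence, best_overlap = "", 0
    # naive sentence tokenization on ., ! and ? followed by whitespace
    for sentence in re.split(r"(?<=[.!?])\s+", context):
        overlap = len(answer_words & set(sentence.lower().split()))
        if overlap > best_overlap:
            best_sentence, best_overlap = sentence, overlap
    return best_sentence
```

In the released dataset the selected span is grown to the full matching paragraph rather than a single sentence.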
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. | 1,302 | [
emrecan/nli_tr_for_simcse | 2023-01-25T16:56:04.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"size_categories:100K<n<1M",
"source_datasets:nli_tr",
"language:tr",
"region:us"
] | emrecan | null | null | 0 | 5 | 2022-03-22T12:01:59 | ---
language:
- tr
size_categories:
- 100K<n<1M
source_datasets:
- nli_tr
task_categories:
- text-classification
task_ids:
- semantic-similarity-scoring
- text-scoring
---
# NLI-TR for Supervised SimCSE
This dataset is a modified version of [NLI-TR](https://huggingface.co/datasets/nli_tr) dataset. Its intended use is to train Supervised [SimCSE](https://github.com/princeton-nlp/SimCSE) models for sentence-embeddings. Steps followed to produce this dataset are listed below:
1. Merge the train splits of the snli_tr and multinli_tr subsets.
2. Find every premise that has both an entailment hypothesis **and** a contradiction hypothesis.
3. Write the found triplets in sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
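A rough sketch of steps 2 and 3; it assumes the common NLI label convention (0 = entailment, 2 = contradiction) and illustrative field names, so check the actual dataset schema before use:

```python
from collections import defaultdict

def build_triplets(examples):
    """Pair each premise's entailment hypotheses with its contradiction ones."""
    groups = defaultdict(lambda: {"ent": [], "con": []})
    for ex in examples:
        if ex["label"] == 0:        # entailment (assumed label id)
            groups[ex["premise"]]["ent"].append(ex["hypothesis"])
        elif ex["label"] == 2:      # contradiction (assumed label id)
            groups[ex["premise"]]["con"].append(ex["hypothesis"])
    # keep only premises that have both an entailment and a contradiction
    return [
        {"sent0": premise, "sent1": ent, "hard_neg": con}
        for premise, hyps in groups.items()
        for ent in hyps["ent"]
        for con in hyps["con"]
    ]
```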
See this [Colab Notebook](https://colab.research.google.com/drive/1Ysq1SpFOa7n1X79x2HxyWjfKzuR_gDQV?usp=sharing) for training and evaluation on Turkish sentences. | 912 | [
laion/laion5B-index | 2022-12-20T23:38:08.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | 14 | 5 | 2022-03-26T12:29:28 | ---
license: cc-by-4.0
---
See https://github.com/rom1504/clip-retrieval/blob/main/docs/laion5B_back.md for documentation on usage | 131 | [
huggan/apple2orange | 2022-04-12T13:55:40.000Z | [
"arxiv:1703.10593",
"region:us"
] | huggan | null | null | 0 | 5 | 2022-03-29T12:44:10 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 799 | [
vlsb/autotrain-data-security-texts-classification-distilroberta | 2022-03-30T20:48:56.000Z | [
"task_categories:text-classification",
"region:us"
] | vlsb | null | null | 3 | 5 | 2022-03-30T20:48:23 | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: security-texts-classification-distilroberta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in Rewards It might be the ea[...]",
"target": 0
},
{
"text": "Popular Malware Families Using 'Process Doppelg\u00e4nging' to Evade Detection The fileless code injectio[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['irrelevant', 'relevant'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 780 |
| valid | 196 |
| 1,196 | [
KevinZ/psycholinguistic_eval | 2022-10-25T10:03:37.000Z | [
"task_categories:multiple-choice",
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en-US",
... | KevinZ | Psycholinguistic dataset from 'What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models'
by Allyson Ettinger | @article{ettinger2020bert,
title={What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models},
author={Ettinger, Allyson},
journal={Transactions of the Association for Computational Linguistics},
volume={8},
pages={34--48},
year={2020},
publisher={MIT Press}
} | 1 | 5 | 2022-04-01T00:04:18 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en-US
license:
- mit
multilinguality:
- monolingual
pretty_name: psycholinguistic_eval
size_categories:
- n<1K
source_datasets: []
task_categories:
- multiple-choice
- fill-mask
- question-answering
- zero-shot-classification
task_ids: []
---
This is a suite of psycholinguistic datasets by Allyson Ettinger. See her [official Github repository](https://github.com/aetting/lm-diagnostics) for specific details. | 506 | [
metashift | 2023-01-25T15:03:59.000Z | [
"task_categories:image-classification",
"task_categories:other",
"task_ids:multi-label-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-b... | null | The MetaShift is a dataset of datasets for evaluating distribution shifts and training conflicts.
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes.
It was created for understanding the performance of a machine learning model across diverse data distributions. | @InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
} | 2 | 5 | 2022-04-01T15:16:57 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
- other
task_ids:
- multi-label-image-classification
paperswithcode_id: metashift
pretty_name: MetaShift
tags:
- domain-generalization
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
'2': bus
'3': truck
'4': elephant
'5': horse
'6': bowl
'7': cup
- name: context
dtype: string
config_name: metashift
splits:
- name: train
num_bytes: 16333509
num_examples: 86808
download_size: 21878013674
dataset_size: 16333509
---
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using their metadata, which provides context for each image,
for example: cats with cars or cats in bathrooms.
The main advantage is that the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift:
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters :
1. To generate the MetaShift Dataset:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes):
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
### Dataset Meta-Graphs
From the MetaShift GitHub repo:
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the meta-graphs for the default classes; they have been generated using the `generate_full_MetaShift.py` file.
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both:
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: Unique numeric ID of the image in Base Visual Genome dataset.
- `image`: A PIL.Image.Image object containing the image.
- `label`: an int classification label.
- `context`: represents the context in which the label is seen. A given label could have multiple contexts.
Image Metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html) and a sample above has been provided for reference.
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say "cat", we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying the distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base, therefore the annotations process is same as the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper:
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper:
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | 14,423 | [
albertvillanova/mtet | 2022-10-08T07:42:34.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|bible_para",
"source_datasets:extended|kde4",
"source_datasets:extended|opus_gnome",
"source_datasets:extended|open_subtit... | albertvillanova | MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains. | @article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
} | 1 | 5 | 2022-04-06T10:25:42 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
pretty_name: MTet
size_categories:
- 1M<n<10M
source_datasets:
- original
- extended|bible_para
- extended|kde4
- extended|opus_gnome
- extended|open_subtitles
- extended|tatoeba
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for MTet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://translate.vietai.org/
- **Repository:** https://github.com/vietai/mTet
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.',
'vi': 'Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.'
}
}
```
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
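As a quick sketch (not part of the original card), the parallel texts of a single example can be unpacked like this, using the sample instance shown above:

```python
# Sketch only: unpack the parallel texts of one MTet example.
# The dict below is the sample instance shown in this card.
example = {
    "translation": {
        "en": "He said that existing restrictions would henceforth be "
              "legally enforceable, and violators would be fined.",
        "vi": "Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên "
              "thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.",
    }
}

# Each example holds one parallel text per language code under "translation".
src = example["translation"]["en"]
tgt = example["translation"]["vi"]
print(f"{len(src.split())} EN words -> {len(tgt.split())} VI words")
```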
### Data Splits
The dataset is in a single "train" split.
| | train |
|--------------------|--------:|
| Number of examples | 4163853 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
}
```
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
| 4,217 | [
[… embedding vector truncated in source …] |
ukr-models/Ukr-Synth | 2023-08-31T09:35:43.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:uk",
"license:mit",
"region:us"
] | ukr-models | Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags. | null | 9 | 5 | 2022-04-06T17:13:34 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- uk
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- parsing
- part-of-speech
pretty_name: Ukrainian synthetic dataset in conllu format
---
# Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
Represents a subsample of [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts, split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold-standard Ukrainian datasets.
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|conll2003|1000000| 10000|
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | 2,411 | [
[… embedding vector truncated in source …] |
ghomasHudson/hotpotExtendedAno | 2022-04-13T11:01:17.000Z | [
"region:us"
] | ghomasHudson | null | null | 0 | 5 | 2022-04-13T10:55:51 | # hotpotQA-Extended (Annotated)
A version of [HotpotQA-Extended](https://huggingface.co/datasets/ghomasHudson/hotpotExtended) with extra annotations about what part of the input contains the answer. | 199 | [
[… embedding vector truncated in source …] |
kniemiec/testupdxdxddsgsgdfsgxdate-crack-segmentation | 2022-04-15T22:48:41.000Z | [
"region:us"
] | kniemiec | null | null | 0 | 5 | 2022-04-15T22:48:39 | Entry not found | 15 | [
[… embedding vector truncated in source …] |
mwong/climatetext-claim-related-evaluation | 2022-10-25T10:08:44.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | 1 | 5 | 2022-04-20T12:00:50 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the claim is related to the evidence. | 628 | [
[… embedding vector truncated in source …] |
Yaxin/SemEval2014Task4Raw | 2022-08-15T08:20:00.000Z | [
"region:us"
] | Yaxin | A collection of SemEval2014 specifically designed to aid research in Aspect Based Sentiment Analysis. | @article{2014SemEval,
title={SemEval-2014 Task 4: Aspect Based Sentiment Analysis},
author={ Pontiki, M. and D Galanis and Pavlopoulos, J. and Papageorgiou, H. and Manandhar, S. },
journal={Proceedings of International Workshop on Semantic Evaluation at},
year={2014},
} | 7 | 5 | 2022-04-21T13:32:59 | Entry not found | 15 | [
[… embedding vector truncated in source …] |
wza/TimeTravel | 2022-05-05T06:42:38.000Z | [
"region:us"
] | wza | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 5 | 2022-04-27T06:51:36 | Entry not found | 15 | [
[… embedding vector truncated in source …] |
strombergnlp/dkstance | 2022-10-25T21:45:42.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | strombergnlp | This dataset presents a series of stories on Reddit and the conversation around
them, annotated for stance. Stories are also annotated for veracity.
For more details see https://aclanthology.org/W19-6122/ | @inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
} | 1 | 5 | 2022-04-28T10:07:39 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: dast
pretty_name: DAST
extra_gated_prompt: 'Warning: the data in this repository contains harmful content
(misinformative claims).'
tags:
- stance-detection
---
# Dataset Card for "dkstance / DAST"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/jointrumourstanceandveracity/](https://stromberg.ai/publication/jointrumourstanceandveracity/)
- **Repository:** [https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137](https://figshare.com/articles/dataset/Danish_stance-annotated_Reddit_dataset/8217137)
- **Paper:** [https://aclanthology.org/W19-6122/](https://aclanthology.org/W19-6122/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is suitable for supervised stance classification and rumour veracity prediction.
### Supported Tasks and Leaderboards
* Stance prediction
### Languages
## Dataset Structure
### Data Instances
#### DAST / dkstance
- **Size of downloaded dataset files:** 4.72 MiB
- **Size of the generated dataset:** 3.69 MiB
- **Total amount of disk used:** 8.41 MiB
An example of 'train' looks as follows.
```
{
'id': '1',
'native_id': 'ebwjq5z',
'text': 'Med de udfordringer som daginstitutionerne har med normeringer, og økonomi i det hele taget, synes jeg det er en vanvittig beslutning at prioritere skattebetalt vegansk kost i daginstitutionerne. Brug dog pengene på noget mere personale, og lad folk selv betale for deres individuelle kostønsker.',
'parent_id': 'a6o3us',
'parent_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'parent_stance': 0,
'source_id': 'a6o3us',
'source_text': 'Mai Mercado om mad i daginstitutioner: Sund kost rimer ikke på veganer-mad',
'source_stance': 0
}
```
### Data Fields
- `id`: a `string` feature.
- `native_id`: a `string` feature representing the native ID of the entry.
- `text`: a `string` of the comment text in which stance is annotated.
- `parent_id`: the `native_id` of this comment's parent.
- `parent_text`: a `string` of the parent comment's text.
- `parent_stance`: the label of the stance in the comment towards its parent comment.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
- `source_id`: the `native_id` of this comment's source / post.
- `source_text`: a `string` of the source / post text.
- `source_stance`: the label of the stance in the comment towards the original source post.
```
0: "Supporting",
1: "Denying",
2: "Querying",
3: "Commenting",
```
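The two stance fields share the same label space; a minimal sketch (not part of the original card) of decoding the integer labels:

```python
# Illustrative only: decode the integer stance labels shared by
# `parent_stance` and `source_stance`.
ID2LABEL = {0: "Supporting", 1: "Denying", 2: "Querying", 3: "Commenting"}

# Values taken from the sample instance shown above.
example = {"parent_stance": 0, "source_stance": 0}
decoded = {field: ID2LABEL[value] for field, value in example.items()}
print(decoded)  # {'parent_stance': 'Supporting', 'source_stance': 'Supporting'}
```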
### Data Splits
| name |size|
|---------|----:|
|train|3122|
|validation|1066|
|test|1060|
These splits were specified after the original research was reported. The splits add an extra level of rigour, in that no source post's comment tree is spread over more than one partition.
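A toy sketch (invented IDs, not from the dataset) of the partitioning property described above — each source post's comment tree lives in exactly one split:

```python
# Toy check of the partitioning property: no `source_id` may appear
# in more than one split. IDs below are invented for illustration.
splits = {
    "train": [{"source_id": "a"}, {"source_id": "a"}, {"source_id": "b"}],
    "validation": [{"source_id": "c"}],
    "test": [{"source_id": "d"}],
}

assigned = {}
for split_name, rows in splits.items():
    for row in rows:
        # setdefault records the first split seen; a mismatch would
        # mean a comment tree leaked across partitions.
        assert assigned.setdefault(row["source_id"], split_name) == split_name
print(assigned)  # {'a': 'train', 'b': 'train', 'c': 'validation', 'd': 'test'}
```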
## Dataset Creation
### Curation Rationale
Comments around rumourous claims to enable rumour and stance analysis in Danish
### Source Data
#### Initial Data Collection and Normalization
The data is from Reddit posts that relate to one of a specific set of news stories; these stories are enumerated in the paper.
#### Who are the source language producers?
Danish-speaking Reddit users.
### Annotations
#### Annotation process
There was a multi-user annotation process, mediated through a purpose-built interface for annotating stance in Reddit threads.
#### Who are the annotators?
* Age: 20-30.
* Gender: male.
* Race/ethnicity: white northern European.
* Native language: Danish.
* Socioeconomic status: higher education student.
### Personal and Sensitive Information
The data was public at the time of collection. User names are not preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
The source of the text has a strong demographic bias, being mostly young white men who are vocal about their opinions. This constrains both the styles of language and discussion contained in the data, as well as the topics discussed and viewpoints held.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
An NLP data statement is included in the paper describing the work, [https://aclanthology.org/W19-6122.pdf](https://aclanthology.org/W19-6122.pdf)
### Citation Information
```
@inproceedings{lillie-etal-2019-joint,
title = "Joint Rumour Stance and Veracity Prediction",
author = "Lillie, Anders Edelbo and
Middelboe, Emil Refsgaard and
Derczynski, Leon",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6122",
pages = "208--221",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| 7,110 | [
[… embedding vector truncated in source …] |
johnowhitaker/imagewoof2-320 | 2022-05-08T09:26:37.000Z | [
"region:us"
] | johnowhitaker | null | null | 0 | 5 | 2022-05-08T09:23:24 | Entry not found | 15 | [
[… embedding vector truncated in source …] |
HuggingFaceM4/ActivitiyNet_Captions | 2022-10-23T05:50:46.000Z | [
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10k<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1705.00754",
"region:us"
] | HuggingFaceM4 | null | null | 2 | 5 | 2022-05-17T11:26:07 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: ActivityNet Captions
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- video-captioning
task_ids:
- closed-domain-qa
---
# Dataset Card for ActivityNet Captions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cs.stanford.edu/people/ranjaykrishna/densevid/
- **Paper:** https://arxiv.org/abs/1705.00754
### Dataset Summary
The ActivityNet Captions dataset connects videos to a series of temporally annotated sentence descriptions. Each sentence covers an unique segment of the video, describing multiple events that occur. These events may occur over very long or short periods of time and are not limited in any capacity, allowing them to co-occur. On average, each of the 20k videos contains 3.65 temporally localized sentences, resulting in a total of 100k sentences. We find that the number of sentences per video follows a relatively normal distribution. Furthermore, as the video duration increases, the number of sentences also increases. Each sentence has an average length of 13.48 words, which is also normally distributed. You can find more details of the dataset under the ActivityNet Captions Dataset section, and under supplementary materials in the paper.
### Languages
The captions in the dataset are in English.
## Dataset Structure
### Data Fields
- `video_id`: `str` Unique identifier for the video
- `video_path`: `str` Path to the video file
- `duration`: `float32` Duration of the video
- `captions_starts`: `List_float32` List of timestamps denoting the time at which each caption starts
- `captions_ends`: `List_float32` List of timestamps denoting the time at which each caption ends
- `en_captions`: `list_str` List of English captions describing parts of the video
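As an illustration (field names come from the card above; the values are invented), the caption timestamps can be zipped into aligned `(start, end, text)` segments:

```python
# Values are invented for illustration; only the field names come
# from the card above.
example = {
    "duration": 55.0,
    "captions_starts": [0.0, 12.3, 30.1],
    "captions_ends": [11.9, 29.8, 54.6],
    "en_captions": ["A man walks in.", "He sits down.", "He starts to read."],
}

segments = list(zip(example["captions_starts"],
                    example["captions_ends"],
                    example["en_captions"]))
for start, end, caption in segments:
    # Each caption covers its own [start, end] span inside the video.
    assert 0.0 <= start < end <= example["duration"]
print(len(segments), "caption segments")
```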
### Data Splits
| |train |validation| test | Overall |
|-------------|------:|---------:|------:|------:|
|# of videos|10,009 |4,917 |4,885 |19,811 |
### Annotations
Quoting [ActivityNet Captions' paper](https://arxiv.org/abs/1705.00754): \
"Each annotation task was divided into two steps: (1)
Writing a paragraph describing all major events happening
in the videos in a paragraph, with each sentence of the paragraph describing one event, and (2) Labeling the
start and end time in the video in which each sentence in the
paragraph event occurred."
### Who annotated the dataset?
Amazon Mechanical Turk annotators
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{krishna2017dense,
title={Dense-Captioning Events in Videos},
author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Fei-Fei, Li and Niebles, Juan Carlos},
booktitle={International Conference on Computer Vision (ICCV)},
year={2017}
}
```
### Contributions
Thanks to [@leot13](https://github.com/leot13) for adding this dataset. | 4,130 | [
[… embedding vector truncated in source …] |
bigscience-data/roots_en_wikinews | 2022-12-12T11:02:53.000Z | [
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | 0 | 5 | 2022-05-18T09:08:30 | ---
language: en
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
| 2,536 | [
[… embedding vector truncated in source …] |
bigscience-data/roots_id_wikipedia | 2022-12-12T11:06:00.000Z | [
"language:id",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | 2 | 5 | 2022-05-18T09:14:40 | ---
language: id
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
| 3,635 | [
[… embedding vector truncated in source …] |
Lais/Sentiment-Analysis-on-Movie-Reviews | 2022-05-28T02:45:54.000Z | [
"region:us"
] | Lais | null | null | 0 | 5 | 2022-05-28T02:45:54 | Entry not found | 15 | [
[… embedding vector truncated in source …] |
arize-ai/ecommerce_reviews_with_language_drift | 2022-07-01T17:26:03.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training and validation sets are
fully composed of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | 1 | 5 | 2022-05-31T23:24:11 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | 3,345 | [
[
-0.04541015625,
-0.03265380859375,
0.0182952880859375,
0.00946044921875,
-0.027435302734375,
0.01232147216796875,
-0.024932861328125,
-0.01444244384765625,
0.045501708984375,
0.045562744140625,
-0.074462890625,
-0.0718994140625,
-0.039398193359375,
0.0026817... |
BeIR/trec-covid-generated-queries | 2022-10-23T06:13:36.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 0 | 5 | 2022-06-17T12:59:43 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Hedged example (assuming the Hugging Face `datasets` library, `pip install datasets`):
# load this dataset from the Hub.
from datasets import load_dataset

dataset = load_dataset("BeIR/trec-covid-generated-queries")
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
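As a minimal, hedged sketch (the sample rows below are illustrative, not taken from a real qrels file), the qrels format described above can be parsed with the standard library into the nested-dict shape used later in this card:

```python
import csv
import io

# Parse BEIR-style qrels TSV content (query-id, corpus-id, score; first row is a header)
# into {query_id: {corpus_id: score}}. The sample rows are made up for illustration.
tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"
qrels = {}
reader = csv.reader(io.StringIO(tsv), delimiter="\t")
next(reader)  # skip the header row
for query_id, corpus_id, score in reader:
    qrels.setdefault(query_id, {})[corpus_id] = int(score)
print(qrels)  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```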
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | 13,988 | [
[
-0.0396728515625,
-0.03985595703125,
0.01094818115234375,
0.0036602020263671875,
0.00423431396484375,
0.00009590387344360352,
-0.0081939697265625,
-0.0188751220703125,
0.021697998046875,
0.00595855712890625,
-0.034332275390625,
-0.0545654296875,
-0.0263824462890... |
IDEA-CCNL/AFQMC | 2023-04-06T06:32:35.000Z | [
"license:apache-2.0",
"arxiv:2209.02970",
"region:us"
] | IDEA-CCNL | Download from https://www.cluebenchmarks.com/introduce.html | \ | 5 | 5 | 2022-06-28T06:25:33 | ---
license: apache-2.0
---
# AFQMC
Download from https://www.cluebenchmarks.com/introduce.html
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
| 1,194 | [
[
-0.03363037109375,
-0.042755126953125,
0.020751953125,
0.0240936279296875,
-0.0283660888671875,
-0.007068634033203125,
-0.01232147216796875,
-0.0211181640625,
0.01076507568359375,
0.00799560546875,
-0.03955078125,
-0.03167724609375,
-0.0294342041015625,
0.00... |
BDas/Turkish-Dataset | 2022-09-16T07:34:57.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
... | BDas | The dataset, prepared in Turkish, includes 53.000 tests, 53.000 validations and 160600 train data.
The data is composed of customer comments and created from e-commerce sites. | ----Turkish Data---- | 4 | 5 | 2022-07-04T19:47:10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- tr
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'Turkish NLP Dataset'
---
# Dataset Card for "Turkish-NLP-Dataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/turkish-nlp-dataset]
- **Repository:**[https://github.com/BihterDass/turkish-nlp-dataset]
- **Size of downloaded dataset files:** 125.5 MB
- **Size of the generated dataset:** 125.5 MB
### Dataset Summary
The dataset was compiled from user comments from e-commerce sites. It consists of 53,000 validation, 53,000 test, and 160,600 training examples. The data were classified into 3 classes (positive (pos), negative (neg), and natural (nor)). The data is available to you on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### turkish-dataset-v1
- **Size of downloaded dataset files:** 125.5 MB
- **Size of the generated dataset:** 125.5 MB
### Data Fields
The data fields are the same among all splits.
#### turkish-dataset-v-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 160600 | 53000| 53000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. | 3,638 | [
[
-0.043487548828125,
-0.042205810546875,
-0.01285552978515625,
0.0225372314453125,
-0.0291900634765625,
-0.009674072265625,
-0.034942626953125,
-0.0290374755859375,
0.0256195068359375,
0.036651611328125,
-0.04364013671875,
-0.06683349609375,
-0.055419921875,
... |
embedding-data/SPECTER | 2022-08-02T03:45:52.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"arxiv:2004.07180",
"region:us"
] | embedding-data | null | null | 0 | 5 | 2022-07-08T02:41:34 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/SPECTER
pretty_name: SPECTER
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "SPECTER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/allenai/specter](https://github.com/allenai/specter)
- **Repository:** [More Information Needed](https://github.com/allenai/specter/blob/master/README.md)
- **Paper:** [More Information Needed](https://arxiv.org/pdf/2004.07180.pdf)
- **Point of Contact:** [@armancohan](https://github.com/armancohan), [@sergeyf](https://github.com/sergeyf), [@haroldrubio](https://github.com/haroldrubio), [@jinamshah](https://github.com/jinamshah)
### Dataset Summary
Dataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers.
Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
## Dataset Structure
Each example is a dictionary with a single key, "set", whose value is a list of three sentences (anchor, positive, and negative):
```
{"set": [anchor, positive, negative]}
{"set": [anchor, positive, negative]}
...
{"set": [anchor, positive, negative]}
```
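As a minimal sketch (the sample line below is made up, not taken from the real dataset), each JSON-lines entry in this format can be unpacked with the standard library:

```python
import json

# Unpack SPECTER-style triplet lines of the form {"set": [anchor, positive, negative]}.
# The sample line is illustrative only.
lines = ['{"set": ["anchor paper title", "positive paper title", "negative paper title"]}']
triplets = []
for line in lines:
    anchor, positive, negative = json.loads(line)["set"]
    triplets.append((anchor, positive, negative))
print(triplets[0])  # ('anchor paper title', 'positive paper title', 'negative paper title')
```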
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/SPECTER")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 684100
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Curation Rationale
[More Information Needed](https://github.com/allenai/specter)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/allenai/specter)
#### Who are the source language producers?
[More Information Needed](https://github.com/allenai/specter)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/allenai/specter)
#### Who are the annotators?
[More Information Needed](https://github.com/allenai/specter)
### Personal and Sensitive Information
[More Information Needed](https://github.com/allenai/specter)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/allenai/specter)
### Discussion of Biases
[More Information Needed](https://github.com/allenai/specter)
### Other Known Limitations
[More Information Needed](https://github.com/allenai/specter)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/allenai/specter)
### Licensing Information
[More Information Needed](https://github.com/allenai/specter)
### Citation Information
### Contributions
| 4,280 | [
[
-0.031280517578125,
-0.027587890625,
0.015777587890625,
0.0037212371826171875,
-0.0131988525390625,
0.0212249755859375,
-0.0223236083984375,
-0.0311737060546875,
0.05230712890625,
0.0300750732421875,
-0.0379638671875,
-0.047393798828125,
-0.042022705078125,
... |
codeparrot/github-jupyter-text-code-pairs | 2022-10-25T09:30:34.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:unknown",
"language:code",
"license:other",
"region:us"
] | codeparrot | null | null | 3 | 5 | 2022-07-13T14:34:33 | ---
annotations_creators: []
language:
- code
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: github-jupyter-text-code-pairs
---
This is a parsed version of [github-jupyter-parsed](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed), with markdown and code pairs. We provide the preprocessing script in [preprocessing.py](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed-v2/blob/main/preprocessing.py). The data is deduplicated and consists of 451662 examples.
For similar datasets with text and Python code, there is [CoNaLa](https://huggingface.co/datasets/neulab/conala) benchmark from StackOverflow, with some samples curated by annotators. | 782 | [
[
-0.0123748779296875,
-0.052947998046875,
0.0116729736328125,
0.0166168212890625,
-0.004180908203125,
-0.0004837512969970703,
-0.019989013671875,
-0.0177001953125,
0.04168701171875,
0.032928466796875,
-0.0286712646484375,
-0.040191650390625,
-0.01267242431640625,... |
prasoonskrishnan/movie | 2022-07-14T06:03:14.000Z | [
"region:us"
] | prasoonskrishnan | null | null | 0 | 5 | 2022-07-14T06:03:14 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
codeparrot/github-jupyter-code-to-text | 2023-01-24T21:33:06.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"license:apache-2.0",
"code",
"region:us"
] | codeparrot | null | null | 14 | 5 | 2022-07-19T14:00:45 | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- code
size_categories:
- 10K<n<100K
---
# Dataset description
This dataset consists of sequences of Python code followed by a docstring explaining its function. It was constructed by concatenating code and text pairs
from this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) that were originally code and markdown cells in Jupyter Notebooks.
The content of each example is the following:
````
[CODE]
"""
Explanation: [TEXT]
End of explanation
"""
[CODE]
"""
Explanation: [TEXT]
End of explanation
"""
...
````
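The concatenation described above can be sketched as follows (the sample markdown/code pair is made up; the real dataset was built from Jupyter Notebook cells):

```python
# Assemble the [CODE]/Explanation layout shown above from (markdown text, code) pairs.
# The sample pair is illustrative only.
pairs = [("Load the data", "import pandas as pd\ndf = pd.read_csv('data.csv')")]
parts = []
for text, code in pairs:
    parts.append(f'{code}\n"""\nExplanation: {text}\nEnd of explanation\n"""')
content = "\n".join(parts)
print(content)
```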
# How to use it
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-jupyter-code-to-text", split="train")
````
````
Dataset({
features: ['repo_name', 'path', 'license', 'content'],
num_rows: 47452
})
```` | 857 | [
[
-0.0182342529296875,
-0.031341552734375,
-0.0011348724365234375,
0.0027751922607421875,
-0.0279998779296875,
0.001689910888671875,
-0.03216552734375,
0.005283355712890625,
0.0245819091796875,
0.0404052734375,
-0.0253753662109375,
-0.03668212890625,
-0.0232849121... |
nateraw/fsd50k | 2022-07-26T04:44:10.000Z | [
"region:us"
] | nateraw | null | null | 0 | 5 | 2022-07-26T04:24:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875589 | 2022-07-26T14:40:30.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 5 | 2022-07-26T14:38:50 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- deepset/germanquad
eval_info:
task: extractive_question_answering
model: deepset/gelectra-base-germanquad
metrics: []
dataset_name: deepset/germanquad
dataset_config: plain_text
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-base-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model. | 964 | [
[
-0.039093017578125,
-0.04388427734375,
0.0247039794921875,
0.00038361549377441406,
0.0011453628540039062,
0.004848480224609375,
0.0006098747253417969,
-0.0247802734375,
0.00374603271484375,
0.03228759765625,
-0.07379150390625,
-0.017120361328125,
-0.038360595703... |
rungalileo/trec6 | 2022-10-05T22:48:16.000Z | [
"region:us"
] | rungalileo | null | null | 0 | 5 | 2022-08-04T04:56:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pustozerov/crema_d_diarization | 2022-08-16T08:09:57.000Z | [
"region:us"
] | pustozerov | null | null | 0 | 5 | 2022-08-11T17:49:32 | # Dataset Card for Crema D Diarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Contributions](#contributions)
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Crema D Diarization
size_categories:
- 10M<n<100M
source_datasets: []
tags: []
task_categories:
- audio-classification
- automatic-speech-recognition
- voice-activity-detection
task_ids:
- audio-emotion-recognition
- speaker-identification
### Contributions
Thanks to [@EvgeniiPustozerov](https://github.com/EvgeniiPustozerov) for adding this dataset.
| 827 | [
[
-0.0361328125,
-0.01006317138671875,
0.002346038818359375,
0.039031982421875,
-0.019378662109375,
0.00185394287109375,
-0.016937255859375,
-0.018798828125,
0.03271484375,
0.036773681640625,
-0.05029296875,
-0.0760498046875,
-0.042449951171875,
0.007740020751... |
jamescalam/oscar-en-minilm-2m | 2022-08-15T18:19:16.000Z | [
"task_categories:sentence-similarity",
"annotations_creators:no-annotation",
"language_creators:other",
"size_categories:1M<n<10M",
"source_datasets:extended|oscar",
"language:en",
"license:afl-3.0",
"embeddings",
"vector search",
"semantic similarity",
"semantic search",
"sentence transformer... | jamescalam | null | null | 1 | 5 | 2022-08-15T13:08:44 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- afl-3.0
multilinguality: []
pretty_name: OSCAR MiniLM Embeddings 2M
size_categories:
- 1M<n<10M
source_datasets:
- extended|oscar
tags:
- embeddings
- vector search
- semantic similarity
- semantic search
- sentence transformers
- sentence similarity
task_categories:
- sentence-similarity
task_ids: []
---
# Oscar EN 2M Embeddings
This dataset contains 2M sentences extracted from the English subset of the OSCAR dataset, and encoded into sentence embeddings using the `sentence-transformers/all-MiniLM-L6-v2` model. | 614 | [
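A typical use of such precomputed embeddings is nearest-neighbour search. As a minimal, dependency-free sketch (the function name is invented here; real pipelines would use a vector index such as FAISS), cosine similarity over the stored vectors looks like:

```python
import math

def cosine_top_k(query_vec, corpus_vecs, k=3):
    """Return indices of the k corpus vectors most similar to the query.

    MiniLM embeddings from sentence-transformers are typically
    L2-normalised, in which case cosine similarity reduces to a dot
    product; this sketch divides by the norms so it also works on
    unnormalised vectors.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    order = sorted(range(len(corpus_vecs)),
                   key=lambda i: cos(query_vec, corpus_vecs[i]),
                   reverse=True)
    return order[:k]
```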
[
-0.005313873291015625,
-0.035247802734375,
0.02972412109375,
0.003124237060546875,
-0.0190277099609375,
-0.007198333740234375,
0.02313232421875,
-0.028778076171875,
0.02294921875,
0.056549072265625,
-0.03277587890625,
-0.0008120536804199219,
-0.05828857421875,
... |
IDEA-CCNL/PretrainCorpusDemo | 2023-04-06T06:32:47.000Z | [
"license:apache-2.0",
"arxiv:2209.02970",
"region:us"
] | IDEA-CCNL | \ | \ | 0 | 5 | 2022-08-19T08:32:25 | ---
license: apache-2.0
---
Only for demo use.
# PretrainCorpusDemo
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
[
-0.030731201171875,
-0.031219482421875,
0.0279541015625,
0.0333251953125,
-0.02374267578125,
-0.007904052734375,
-0.0218048095703125,
-0.011749267578125,
0.015899658203125,
0.00890350341796875,
-0.036285400390625,
-0.026947021484375,
-0.0224761962890625,
-0.... |
allenai/multixscience_sparse_max | 2022-11-24T16:36:31.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 0 | 5 | 2022-08-25T23:00:00 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
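For intuition, the scoring step of the pipeline above can be sketched in plain Python. This is not PyTerrier's implementation (which stems, removes stopwords, and uses an inverted index); the whitespace tokenisation and the function name here are assumptions for illustration only:

```python
import math
from collections import Counter

def bm25_top_k(query, corpus, k=3, k1=1.2, b=0.75):
    """Score every document against the query with BM25, keep the top k."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    ranked = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        ranked.append((score, i))
    ranked.sort(reverse=True)
    return [i for _, i in ranked[:k]]
```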
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.0547 | 0.4063 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.0553 | 0.4026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5480 | 0.2272 | 0.055 | 0.4039 | | 1,493 | [
[
-0.026214599609375,
0.0027942657470703125,
0.0153656005859375,
0.0085906982421875,
-0.01255035400390625,
0.0082244873046875,
-0.0091094970703125,
0.00806427001953125,
0.040069580078125,
0.02825927734375,
-0.04779052734375,
-0.0352783203125,
-0.05133056640625,
... |
allenai/ms2_sparse_max | 2022-11-24T16:27:49.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | 0 | 5 | 2022-08-26T21:40:42 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.1746 | 0.2636 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.378 | 0.1827 | 0.1559 | 0.2188 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1672 | 0.2208 | | 1,649 | [
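The metrics in these tables can be reproduced per example and then averaged. A minimal sketch (the helper name is an assumption; the exact averaging used for the reported numbers is not specified here):

```python
def retrieval_metrics(retrieved, relevant):
    """Recall@k, Precision@k and R-precision for one example.

    `retrieved` is the retriever's ranked list of document ids and
    `relevant` the example's gold source documents; k is taken to be
    len(retrieved), matching the fixed top-k strategy described above.
    """
    relevant = set(relevant)
    k = len(retrieved)
    hits = sum(1 for d in retrieved if d in relevant)
    r_hits = sum(1 for d in retrieved[:len(relevant)] if d in relevant)
    return {
        "precision@k": hits / k,
        "recall@k": hits / len(relevant),
        "rprec": r_hits / len(relevant),
    }
```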
[
-0.0182952880859375,
-0.014251708984375,
0.0138702392578125,
0.00982666015625,
-0.011993408203125,
-0.00847625732421875,
-0.01358795166015625,
0.0020122528076171875,
0.0184173583984375,
0.024871826171875,
-0.037384033203125,
-0.0330810546875,
-0.057037353515625,... |
evaluate/glue-ci | 2022-09-15T20:12:43.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monol... | evaluate | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | 0 | 5 | 2022-08-31T22:17:54 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-classification-other-coreference-nli
- text-classification-other-paraphrase-identification
- text-classification-other-qa-nli
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
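The pronoun-substitution construction described for WNLI can be illustrated with a small sketch (the function name and the word-boundary substitution are assumptions for illustration; the benchmark's actual pairs were constructed manually):

```python
import re

def winograd_to_nli(sentence, pronoun, referents):
    """Build (premise, hypothesis) pairs by substituting each candidate
    referent for the ambiguous pronoun, as the WNLI conversion describes.
    """
    # \b avoids matching the pronoun inside longer words (e.g. "it" in "fit").
    pat = re.compile(r"\b%s\b" % re.escape(pronoun))
    return [(sentence, pat.sub(ref, sentence, count=1)) for ref in referents]
```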
### Languages
The language data in GLUE is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx": 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?",
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
| 22,061 | [
[
-0.0303192138671875,
-0.057525634765625,
0.00934600830078125,
0.01531219482421875,
-0.006046295166015625,
-0.00428009033203125,
-0.0121917724609375,
-0.0306243896484375,
0.0269012451171875,
0.03167724609375,
-0.058563232421875,
-0.0540771484375,
-0.0360412597656... |
mrm8488/sst2-es-mt | 2022-09-03T16:41:42.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:sst2",
"language:es",
"license:unknown",
"region:us"
] | mrm8488 | null | null | 0 | 5 | 2022-09-02T20:28:50 | ---
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- sst2
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Stanford Sentiment Treebank v2
---
# SST-2 Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2)
#### For more information check the official [Dataset Card](https://huggingface.co/datasets/sst2)
[
0.004383087158203125,
-0.04754638671875,
0.0153350830078125,
0.043731689453125,
-0.0565185546875,
0.0048980712890625,
0.01467132568359375,
-0.029693603515625,
0.045806884765625,
0.03424072265625,
-0.06097412109375,
-0.035797119140625,
-0.04571533203125,
-0.0... |
victor/autotrain-data-satellite-image-classification | 2022-09-05T09:30:13.000Z | [
"task_categories:image-classification",
"region:us"
] | victor | null | null | 1 | 5 | 2022-09-05T08:58:49 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: satellite-image-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project satellite-image-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 CMYK PIL image>",
"target": 0
},
{
"image": "<256x256 CMYK PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
| 989 | [
[
-0.044525146484375,
0.0175323486328125,
0.00677490234375,
0.01751708984375,
-0.034423828125,
0.02001953125,
-0.0169677734375,
-0.022003173828125,
-0.00989532470703125,
0.030364990234375,
-0.04229736328125,
-0.05712890625,
-0.04083251953125,
0.004287719726562... |
onurkoc83/reviews-turkce-text-generation | 2022-09-11T10:02:25.000Z | [
"region:us"
] | onurkoc83 | null | null | 0 | 5 | 2022-09-11T10:02:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
codesue/kelly | 2022-12-18T22:06:55.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sv",
"license:cc-by-4.0",
"lexicon",
"swedish",
"CEFR",
"region:us"
] | codesue | The Swedish Kelly list is a freely available frequency-based vocabulary list that comprises general-purpose language of modern Swedish. The list was generated from a large web-acquired corpus (SweWaC) of 114 million words dating from the 2010s. It is adapted to the needs of language learners and contains the 8,425 most frequent lemmas that cover 80% of SweWaC. | @article{Kilgarriff2013,
doi = {10.1007/s10579-013-9251-2},
url = {https://doi.org/10.1007/s10579-013-9251-2},
year = {2013},
month = sep,
publisher = {Springer Science and Business Media {LLC}},
volume = {48},
number = {1},
pages = {121--163},
author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina},
title = {Corpus-based vocabulary lists for language learners for nine languages},
journal = {Language Resources and Evaluation}
} | 0 | 5 | 2022-09-16T02:18:16 | ---
annotations_creators:
- expert-generated
language:
- sv
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: kelly
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- lexicon
- swedish
- CEFR
task_categories:
- text-classification
task_ids:
- text-scoring
---
# Dataset Card for Kelly
Keywords for Language Learning for Young and adults alike
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://spraakbanken.gu.se/en/resources/kelly
- **Paper:** https://link.springer.com/article/10.1007/s10579-013-9251-2
### Dataset Summary
The Swedish Kelly list is a freely available frequency-based vocabulary list
that comprises general-purpose language of modern Swedish. The list was
generated from a large web-acquired corpus (SweWaC) of 114 million words
dating from the 2010s. It is adapted to the needs of language learners and
contains the 8,425 most frequent lemmas that cover 80% of SweWaC.
### Languages
Swedish (sv-SE)
## Dataset Structure
### Data Instances
Here is a sample of the data:
```python
{
'id': 190,
'raw_frequency': 117835.0,
'relative_frequency': 1033.61,
'cefr_level': 'A1',
'source': 'SweWaC',
'marker': 'en',
'lemma': 'dag',
'pos': 'noun-en',
'examples': 'e.g. god dag'
}
```
This can be understood as:
> The common noun "dag" ("day") has a rank of 190 in the list. It was used 117,835
times in SweWaC, meaning it occurred 1033.61 times per million words. This word
is among the most important vocabulary words for Swedish language learners and
should be learned at the A1 CEFR level. An example usage of this word is the
phrase "god dag" ("good day").
### Data Fields
- `id`: The row number for the data entry, starting at 1. Generally corresponds
to the rank of the word.
- `raw_frequency`: The raw frequency of the word.
- `relative_frequency`: The relative frequency of the word measured in
number of occurrences per million words.
- `cefr_level`: The CEFR level (A1, A2, B1, B2, C1, C2) of the word.
- `source`: Whether the word came from SweWaC, translation lists (T2), or
was manually added (manual).
- `marker`: The grammatical marker of the word, if any, such as an article or
infinitive marker.
- `lemma`: The lemma of the word, sometimes provided with its spelling or
stylistic variants.
- `pos`: The word's part-of-speech.
- `examples`: Usage examples and comments. Only available for some of the words.
Manual entries were prepended to the list, giving them a higher rank than they
might otherwise have had. For example, the manual entry "Göteborg" ("Gothenburg")
has a rank of 20, while the first non-manual entry "och" ("and") has a rank of
87. However, a conjunction and common stopword is far more likely to occur than
the name of a city.
### Data Splits
There is a single split, `train`.
## Dataset Creation
Please refer to the article [Corpus-based approaches for the creation of a frequency
based vocabulary list in the EU project KELLY – issues on reliability, validity and
coverage](https://gup.ub.gu.se/publication/148533?lang=en) for information about how
the original dataset was created and considerations for using the data.
**The following changes have been made to the original dataset**:
- Changed header names.
- Normalized the large web-acquired corpus name to "SweWac" in the `source` field.
- Set the relative frequency of manual entries to null rather than 1000000.
## Additional Information
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0)
### Citation Information
Please cite the authors if you use this dataset in your work:
```bibtex
@article{Kilgarriff2013,
doi = {10.1007/s10579-013-9251-2},
url = {https://doi.org/10.1007/s10579-013-9251-2},
year = {2013},
month = sep,
publisher = {Springer Science and Business Media {LLC}},
volume = {48},
number = {1},
pages = {121--163},
author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina},
title = {Corpus-based vocabulary lists for language learners for nine languages},
journal = {Language Resources and Evaluation}
}
```
### Contributions
Thanks to [@spraakbanken](https://github.com/spraakbanken) for creating this dataset
and to [@codesue](https://github.com/codesue) for adding it.
| 4,996 | [
[
-0.035614013671875,
-0.044647216796875,
-0.004241943359375,
0.01053619384765625,
-0.03192138671875,
-0.01421356201171875,
-0.03607177734375,
-0.03216552734375,
0.01715087890625,
0.0232086181640625,
-0.0281829833984375,
-0.0677490234375,
-0.044647216796875,
0... |
mediabiasgroup/BABE | 2023-08-23T05:24:17.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | mediabiasgroup | null | null | 0 | 5 | 2022-09-18T03:18:38 | ---
license: cc-by-nc-sa-4.0
---
# Please cite as
```
@InProceedings{Spinde2021f,
title = "Neural Media Bias Detection Using Distant Supervision With {BABE} - Bias Annotations By Experts",
author = "Spinde, Timo and
Plank, Manuel and
Krieger, Jan-David and
Ruas, Terry and
Gipp, Bela and
Aizawa, Akiko",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.101",
doi = "10.18653/v1/2021.findings-emnlp.101",
pages = "1166--1177",
}
``` | 725 | [
[
-0.04205322265625,
-0.05474853515625,
0.0164642333984375,
0.0307159423828125,
0.004352569580078125,
-0.00894927978515625,
-0.039581298828125,
-0.03106689453125,
0.052398681640625,
0.0171356201171875,
-0.06304931640625,
-0.036712646484375,
-0.04742431640625,
... |
kejian/codesearchnet-python-raw-457k | 2022-09-20T01:45:26.000Z | [
"region:us"
] | kejian | null | null | 2 | 5 | 2022-09-20T01:44:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Miron/Science_Articles | 2022-09-24T23:28:15.000Z | [
"region:us"
] | Miron | null | null | 1 | 5 | 2022-09-24T23:27:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
joelniklaus/eurlex_resources | 2023-05-10T08:04:28.000Z | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:e... | joelniklaus | 4 | 5 | 2022-09-29T07:35:34 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "EurlexResources: A Corpus Covering the Largest EURLEX Resources"
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/eurlex)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
config = "de_caselaw" # {lang}_{resource}
dataset = load_dataset("joelniklaus/eurlex_resources", config, split='train', streaming=True)
```
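The `{lang}_{resource}` naming scheme can be enumerated locally. This sketch builds every combination from the language and resource lists given on this card (plus the `all` aggregate that appears in the statistics table); whether every combination exists as an actual Hub config is an assumption, not something this card guarantees:

```python
# Languages and resource types as listed on this card, plus the "all" aggregate.
languages = ["all", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr",
             "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro",
             "sk", "sl", "sv"]
resources = ["all", "caselaw", "decision", "directive", "intagr",
             "proposal", "recommendation", "regulation"]
configs = [f"{lang}_{resource}" for lang in languages for resource in resources]
print(len(configs))  # → 200 (25 languages × 8 resource types)
```

Any of these strings (e.g. `"de_caselaw"`) can then be passed as the `config` argument in the snippet above.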
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The following resource types are supported: caselaw, decision, directive, intagr, proposal, recommendation, regulation
More information about the resource types can be found here:
- Caselaw: [EU](https://eur-lex.europa.eu/collection/eu-law/eu-case-law.html)
- Decision: [EU](https://eur-lex.europa.eu/EN/legal-content/summary/european-union-decisions.html), [Wikipedia](https://en.wikipedia.org/wiki/Decision_(European_Union))
- Directive: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Directive_(European_Union))
- Recommendation: [EU](https://eur-lex.europa.eu/EN/legal-content/glossary/recommendation.html), [Wikipedia](https://en.wikipedia.org/wiki/Recommendation_(European_Union))
- Regulation: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Regulation_(European_Union))
- Intagr: [EU](https://eur-lex.europa.eu/collection/eu-law/inter-agree.html), [Wikipedia](https://en.wikipedia.org/wiki/Treaties_of_the_European_Union)
- Proposal: No resource found
| Source | Size (MB) | Words | Documents | Words/Document |
|:-------------------|------------:|------------:|------------:|-----------------:|
| all_all | 180668 | 12106556233 | 8306749 | 1457 |
| all_caselaw | 34939 | 3413551598 | 2487794 | 1372 |
| all_decision | 28519 | 1698585620 | 1267402 | 1340 |
| all_directive | 4786 | 368577940 | 104187 | 3537 |
| all_intagr | 11421 | 743271516 | 274485 | 2707 |
| all_proposal | 26526 | 2087989530 | 702392 | 2972 |
| all_recommendation | 1886 | 164979037 | 80277 | 2055 |
| all_regulation | 72590 | 3629600992 | 3390212 | 1070 |
| bg_all | 7819 | 398067053 | 348691 | 1141 |
| bg_caselaw | 1588 | 109749174 | 104434 | 1050 |
| bg_decision | 1248 | 58817972 | 54075 | 1087 |
| bg_directive | 263 | 15731608 | 4388 | 3585 |
| bg_intagr | 603 | 31292848 | 11581 | 2702 |
| bg_proposal | 1083 | 60674956 | 29251 | 2074 |
| bg_recommendation | 89 | 5588991 | 3321 | 1682 |
| bg_regulation | 2943 | 116211504 | 141641 | 820 |
| cs_all | 8360 | 471961631 | 449793 | 1049 |
| cs_caselaw | 1163 | 110005022 | 104519 | 1052 |
| cs_decision | 1102 | 58921128 | 54075 | 1089 |
| cs_directive | 186 | 13951134 | 4388 | 3179 |
| cs_intagr | 449 | 28106332 | 11581 | 2426 |
| cs_proposal | 840 | 61838692 | 29252 | 2113 |
| cs_recommendation | 64 | 5416549 | 3323 | 1630 |
| cs_regulation | 4557 | 193722774 | 242655 | 798 |
| da_all | 8932 | 671484862 | 332500 | 2019 |
| da_caselaw | 1746 | 185589641 | 88234 | 2103 |
| da_decision | 1356 | 89498535 | 54085 | 1654 |
| da_directive | 207 | 17525792 | 4388 | 3994 |
| da_intagr | 506 | 35596169 | 11582 | 3073 |
| da_proposal | 1399 | 119759476 | 29257 | 4093 |
| da_recommendation | 100 | 9463897 | 3352 | 2823 |
| da_regulation | 3618 | 214051352 | 141602 | 1511 |
| de_all | 9607 | 695512401 | 348290 | 1996 |
| de_caselaw | 1930 | 193232441 | 104228 | 1853 |
| de_decision | 1449 | 93688222 | 53980 | 1735 |
| de_directive | 218 | 17337760 | 4385 | 3953 |
| de_intagr | 531 | 36791153 | 11580 | 3177 |
| de_proposal | 1556 | 126987454 | 29219 | 4346 |
| de_recommendation | 109 | 9608034 | 3318 | 2895 |
| de_regulation | 3813 | 217867337 | 141580 | 1538 |
| el_all | 12469 | 696216541 | 349667 | 1991 |
| el_caselaw | 2951 | 202027703 | 105138 | 1921 |
| el_decision | 1823 | 94919886 | 54150 | 1752 |
| el_directive | 321 | 19411959 | 4390 | 4421 |
| el_intagr | 701 | 38965777 | 11584 | 3363 |
| el_proposal | 2085 | 128005737 | 29290 | 4370 |
| el_recommendation | 145 | 9344866 | 3357 | 2783 |
| el_regulation | 4443 | 203540613 | 141758 | 1435 |
| en_all | 9217 | 769465561 | 348641 | 2207 |
| en_caselaw | 1846 | 222891827 | 104422 | 2134 |
| en_decision | 1504 | 114626013 | 54054 | 2120 |
| en_directive | 204 | 18860876 | 4388 | 4298 |
| en_intagr | 499 | 39029843 | 11581 | 3370 |
| en_proposal | 1538 | 140781768 | 29242 | 4814 |
| en_recommendation | 97 | 10091809 | 3320 | 3039 |
| en_regulation | 3530 | 223183425 | 141634 | 1575 |
| es_all | 8588 | 725125274 | 348443 | 2081 |
| es_caselaw | 1870 | 220621730 | 104312 | 2115 |
| es_decision | 1334 | 98163499 | 54001 | 1817 |
| es_directive | 221 | 21484479 | 4385 | 4899 |
| es_intagr | 516 | 41841805 | 11581 | 3612 |
| es_proposal | 1366 | 133674486 | 29224 | 4574 |
| es_recommendation | 82 | 8864018 | 3319 | 2670 |
| es_regulation | 3199 | 200475257 | 141621 | 1415 |
| et_all | 6090 | 328068754 | 349615 | 938 |
| et_caselaw | 1074 | 93096396 | 105111 | 885 |
| et_decision | 1069 | 50752324 | 54159 | 937 |
| et_directive | 177 | 11555930 | 4390 | 2632 |
| et_intagr | 436 | 24018147 | 11584 | 2073 |
| et_proposal | 810 | 51600852 | 29283 | 1762 |
| et_recommendation | 61 | 4451369 | 3355 | 1326 |
| et_regulation | 2464 | 92593736 | 141733 | 653 |
| fi_all | 7346 | 404265224 | 349633 | 1156 |
| fi_caselaw | 1596 | 126525296 | 105119 | 1203 |
| fi_decision | 1227 | 59659475 | 54163 | 1101 |
| fi_directive | 204 | 12766491 | 4389 | 2908 |
| fi_intagr | 463 | 25392311 | 11584 | 2192 |
| fi_proposal | 1075 | 69198401 | 29288 | 2362 |
| fi_recommendation | 73 | 5070392 | 3356 | 1510 |
| fi_regulation | 2707 | 105652858 | 141734 | 745 |
| fr_all | 9937 | 828959218 | 348295 | 2380 |
| fr_caselaw | 2158 | 246262666 | 104228 | 2362 |
| fr_decision | 1473 | 108648744 | 53981 | 2012 |
| fr_directive | 222 | 20308801 | 4385 | 4631 |
| fr_intagr | 536 | 41986012 | 11580 | 3625 |
| fr_proposal | 1592 | 149134298 | 29218 | 5104 |
| fr_recommendation | 112 | 11510415 | 3318 | 3469 |
| fr_regulation | 3845 | 251108282 | 141585 | 1773 |
| ga_all | 1028 | 65030095 | 349778 | 185 |
| ga_caselaw | 11 | 696305 | 105205 | 6 |
| ga_decision | 87 | 4415457 | 54189 | 81 |
| ga_directive | 18 | 1512027 | 4390 | 344 |
| ga_intagr | 19 | 1820723 | 11586 | 157 |
| ga_proposal | 289 | 26106889 | 29298 | 891 |
| ga_recommendation | 10 | 902390 | 3361 | 268 |
| ga_regulation | 594 | 29576304 | 141749 | 208 |
| hr_all | 4594 | 258816068 | 348691 | 742 |
| hr_caselaw | 617 | 62432734 | 104434 | 597 |
| hr_decision | 596 | 31911903 | 54075 | 590 |
| hr_directive | 156 | 10855913 | 4388 | 2474 |
| hr_intagr | 450 | 24962086 | 11581 | 2155 |
| hr_proposal | 552 | 33437815 | 29251 | 1143 |
| hr_recommendation | 40 | 3612247 | 3321 | 1087 |
| hr_regulation | 2183 | 91603370 | 141641 | 646 |
| hu_all | 6653 | 375253894 | 349605 | 1073 |
| hu_caselaw | 1278 | 110179375 | 105144 | 1047 |
| hu_decision | 1147 | 57108172 | 54156 | 1054 |
| hu_directive | 200 | 13568304 | 4389 | 3091 |
| hu_intagr | 470 | 27258501 | 11586 | 2352 |
| hu_proposal | 912 | 60882750 | 29291 | 2078 |
| hu_recommendation | 70 | 5312868 | 3357 | 1582 |
| hu_regulation | 2576 | 100943924 | 141682 | 712 |
| it_all | 9586 | 768605772 | 333631 | 2303 |
| it_caselaw | 1889 | 206117726 | 89560 | 2301 |
| it_decision | 1445 | 102848859 | 53983 | 1905 |
| it_directive | 217 | 19687773 | 4385 | 4489 |
| it_intagr | 528 | 40134330 | 11580 | 3465 |
| it_proposal | 1533 | 140713925 | 29218 | 4816 |
| it_recommendation | 109 | 10923431 | 3318 | 3292 |
| it_regulation | 3865 | 248179728 | 141587 | 1752 |
| lt_all | 6400 | 364361783 | 200565 | 1816 |
| lt_caselaw | 1137 | 101808706 | 105477 | 965 |
| lt_decision | 1096 | 55850308 | 21990 | 2539 |
| lt_directive | 185 | 13078983 | 3239 | 4037 |
| lt_intagr | 452 | 27009631 | 7481 | 3610 |
| lt_proposal | 850 | 58553579 | 29272 | 2000 |
| lt_recommendation | 64 | 5121089 | 3363 | 1522 |
| lt_regulation | 2617 | 102939487 | 29743 | 3460 |
| lv_all | 6349 | 363239195 | 349919 | 1038 |
| lv_caselaw | 1153 | 103456811 | 105242 | 983 |
| lv_decision | 1103 | 55512944 | 54224 | 1023 |
| lv_directive | 186 | 13023024 | 4392 | 2965 |
| lv_intagr | 452 | 26693107 | 11630 | 2295 |
| lv_proposal | 96 | 58176216 | 29298 | 1985 |
| lv_recommendation | 64 | 5074494 | 3361 | 1509 |
| lv_regulation | 2545 | 101302599 | 141772 | 714 |
| mt_all | 6540 | 367834815 | 350292 | 1050 |
| mt_caselaw | 1164 | 100423543 | 105479 | 952 |
| mt_decision | 1109 | 55239141 | 54280 | 1017 |
| mt_directive | 203 | 14355266 | 4392 | 3268 |
| mt_intagr | 470 | 27701991 | 11675 | 2372 |
| mt_proposal | 878 | 59749277 | 29274 | 2041 |
| mt_recommendation | 65 | 5039600 | 3363 | 1498 |
| mt_regulation | 2650 | 105325997 | 141829 | 742 |
| nl_all | 9586 | 770312808 | 349407 | 2204 |
| nl_caselaw | 1847 | 206271837 | 105005 | 1964 |
| nl_decision | 1456 | 104060901 | 54152 | 1921 |
| nl_directive | 217 | 19529361 | 4388 | 4450 |
| nl_intagr | 529 | 40247634 | 11584 | 3474 |
| nl_proposal | 1540 | 141258274 | 29279 | 4824 |
| nl_recommendation | 111 | 11002405 | 3355 | 3279 |
| nl_regulation | 3886 | 247942396 | 141644 | 1750 |
| pl_all | 6677 | 406648795 | 350349 | 1160 |
| pl_caselaw | 1231 | 115824759 | 105479 | 1098 |
| pl_decision | 1125 | 60407576 | 54287 | 1112 |
| pl_directive | 197 | 14672157 | 4392 | 3340 |
| pl_intagr | 466 | 28543668 | 11680 | 2443 |
| pl_proposal | 886 | 64728230 | 29317 | 2207 |
| pl_recommendation | 68 | 5769893 | 3363 | 1715 |
| pl_regulation | 2703 | 116702512 | 141831 | 822 |
| pt_all | 8450 | 675152149 | 348449 | 1937 |
| pt_caselaw | 1763 | 198084937 | 104312 | 1898 |
| pt_decision | 1327 | 93278293 | 54007 | 1727 |
| pt_directive | 217 | 19831549 | 4385 | 4522 |
| pt_intagr | 504 | 37999753 | 11581 | 3281 |
| pt_proposal | 1361 | 127461782 | 29224 | 4361 |
| pt_recommendation | 81 | 8396661 | 3319 | 2529 |
| pt_regulation | 3197 | 190099174 | 141621 | 1342 |
| ro_all | 6315 | 415038571 | 350300 | 1184 |
| ro_caselaw | 1110 | 114780999 | 105516 | 1087 |
| ro_decision | 1047 | 59479553 | 54281 | 1095 |
| ro_directive | 206 | 16101628 | 4392 | 3666 |
| ro_intagr | 481 | 31497000 | 11675 | 2697 |
| ro_proposal | 805 | 62130419 | 29274 | 2122 |
| ro_recommendation | 63 | 5977913 | 3363 | 1777 |
| ro_regulation | 2603 | 125071059 | 141799 | 882 |
| sk_all | 6484 | 392235510 | 350570 | 1118 |
| sk_caselaw | 1160 | 110125141 | 105608 | 1042 |
| sk_decision | 1111 | 59576875 | 54349 | 1096 |
| sk_directive | 188 | 14132755 | 4393 | 3217 |
| sk_intagr | 458 | 28298155 | 11676 | 2423 |
| sk_proposal | 859 | 63726047 | 29290 | 2175 |
| sk_recommendation | 66 | 5654790 | 3364 | 1680 |
| sk_regulation | 2642 | 110721747 | 141890 | 780 |
| sl_all | 6222 | 394814289 | 350574 | 1126 |
| sl_caselaw | 1071 | 111238184 | 105608 | 1053 |
| sl_decision | 1075 | 59454906 | 54349 | 1093 |
| sl_directive | 176 | 13908097 | 4393 | 3165 |
| sl_intagr | 441 | 28239078 | 11676 | 2418 |
| sl_proposal | 812 | 63391970 | 29290 | 2164 |
| sl_recommendation | 62 | 5628775 | 3364 | 1673 |
| sl_regulation | 2585 | 112953279 | 141894 | 796 |
| sv_all | 7419 | 500085970 | 351051 | 1424 |
| sv_caselaw | 1585 | 162108645 | 105980 | 1529 |
| sv_decision | 1213 | 71744934 | 54357 | 1319 |
| sv_directive | 195 | 15386273 | 4393 | 3502 |
| sv_intagr | 463 | 29845462 | 11676 | 2556 |
| sv_proposal | 1059 | 86016237 | 29292 | 2936 |
| sv_recommendation | 79 | 7152141 | 3366 | 2124 |
| sv_regulation | 2825 | 127832278 | 141987 | 900 |
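The Words/Document column in the table above is simply total words divided by the document count, rounded to the nearest integer; for instance, for the `all_all` row:

```python
# Reproduce the Words/Document value for the all_all row of the table.
total_words = 12_106_556_233  # all_all "Words" column
documents = 8_306_749         # all_all "Documents" column
words_per_document = round(total_words / documents)
print(words_per_document)  # → 1457, matching the table
```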
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data has been downloaded using the R package [eurlex](https://cran.r-project.org/web/packages/eurlex/vignettes/eurlexpkg.html) between June and August 2022.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[see also the legal notice](https://eur-lex.europa.eu/content/legal-notice/legal-notice.html)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| 21,998 | [
[
-0.0638427734375,
-0.0242462158203125,
0.028076171875,
0.01491546630859375,
-0.0100250244140625,
0.00939178466796875,
-0.01058197021484375,
-0.007328033447265625,
0.05181884765625,
0.050384521484375,
-0.0322265625,
-0.055450439453125,
-0.032135009765625,
0.0... | ||
Divyanshu/IE_SemParse | 2023-07-13T18:35:10.000Z | [
"task_categories:text2text-generation",
"task_ids:parsing",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"lang... | Divyanshu | IE-SemParse is an Inter-bilingual Seq2seq Semantic parsing dataset for 11 distinct Indian languages | @misc{aggarwal2023evaluating,
title={Evaluating Inter-Bilingual Semantic Parsing for Indian Languages},
author={Divyanshu Aggarwal and Vivek Gupta and Anoop Kunchukuttan},
year={2023},
eprint={2304.13005},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 5 | 2022-10-01T10:51:54 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: IE-SemParse
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- parsing
---
# Dataset Card for "IE-SemParse"
## Table of Contents
- [Dataset Card for "IE-SemParse"](#dataset-card-for-ie-semparse)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset usage](#dataset-usage)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Human Verification Process](#human-verification-process)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** <https://github.com/divyanshuaggarwal/IE-SemParse>
- **Paper:** [Evaluating Inter-Bilingual Semantic Parsing for Indian Languages](https://arxiv.org/abs/2304.13005)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:divyanshuggrwl@gmail.com)
### Dataset Summary
IE-SemParse is an Inter-Bilingual Semantic Parsing dataset for eleven major Indic
languages: Assamese (‘as’), Gujarati (‘gu’), Kannada (‘kn’),
Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’),
Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi
(‘hi’), and Bengali (‘bn’).
### Supported Tasks and Leaderboards
**Tasks:** Inter-Bilingual Semantic Parsing
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
...
<!-- Below is the dataset split given for `hi` dataset.
```python
DatasetDict({
train: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 36000
})
test: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 3000
})
validation: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 1500
})
})
``` -->
## Dataset usage
Code snippet for loading the dataset with the `datasets` library.
```python
from datasets import load_dataset
dataset = load_dataset("Divyanshu/IE_SemParse")
```
## Dataset Creation
Machine translation of three English multilingual semantic parsing datasets into the 11 listed Indic languages.
### Curation Rationale
[More information needed]
### Source Data
[mTOP dataset](https://aclanthology.org/2021.eacl-main.257/)
[multilingualTOP dataset](https://github.com/awslabs/multilingual-top)
[multi-ATIS++ dataset](https://paperswithcode.com/paper/end-to-end-slot-alignment-and-recognition-for)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
#### Human Verification Process
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
## Considerations for Using the Data
### Social Impact of Dataset
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Discussion of Biases
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Other Known Limitations
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Dataset Curators
Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@misc{aggarwal2023evaluating,
title={Evaluating Inter-Bilingual Semantic Parsing for Indian Languages},
author={Divyanshu Aggarwal and Vivek Gupta and Anoop Kunchukuttan},
year={2023},
eprint={2304.13005},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- ### Contributions -->
| 4,963 | [
[
-0.031219482421875,
-0.048095703125,
0.005184173583984375,
0.045562744140625,
-0.0142059326171875,
-0.00521087646484375,
-0.033477783203125,
-0.0214691162109375,
0.027099609375,
0.0197601318359375,
-0.043548583984375,
-0.057525634765625,
-0.053955078125,
0.0... |
ywchoi/pmc_0_cleaned | 2022-10-07T17:13:03.000Z | [
"region:us"
] | ywchoi | null | null | 0 | 5 | 2022-10-07T17:12:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
umair894/rvl_cdip_300_examples_per_class | 2022-10-11T07:10:15.000Z | [
"region:us"
] | umair894 | null | null | 0 | 5 | 2022-10-11T07:10:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
allenai/multixscience_dense_mean | 2022-11-18T19:58:51.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 0 | 5 | 2022-10-12T13:30:21 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==4`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.1551 | 0.2357 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.1603 | 0.2432 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.1612 | 0.2440 | | 1,598 | [
[
-0.0239410400390625,
-0.004543304443359375,
0.01739501953125,
0.0090179443359375,
-0.01030731201171875,
0.00345611572265625,
-0.0129852294921875,
0.005725860595703125,
0.042327880859375,
0.035064697265625,
-0.041412353515625,
-0.036529541015625,
-0.0495910644531... |
allenai/multixscience_dense_oracle | 2022-11-18T19:57:37.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | 1 | 5 | 2022-10-12T13:30:45 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.2081 | 0.2081 | | 1,568 | [
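Under the `"oracle"` strategy the number of retrieved documents equals the number of relevant ones, so Precision@k, Recall@k, and Rprec coincide by construction. A toy check of that identity:

```python
# Toy check: when k equals the number of relevant documents,
# Precision@k, Recall@k, and R-precision are the same quantity.
relevant = {"d1", "d2", "d3"}
retrieved = ["d1", "d4", "d2"]  # k == len(relevant) == 3

hits = len(relevant.intersection(retrieved))
precision_at_k = hits / len(retrieved)
recall_at_k = hits / len(relevant)
assert precision_at_k == recall_at_k  # both 2/3 here
```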
[
-0.021881103515625,
-0.00759124755859375,
0.021148681640625,
0.0077362060546875,
-0.01207733154296875,
0.00225067138671875,
-0.00606536865234375,
0.0031585693359375,
0.050201416015625,
0.039642333984375,
-0.046051025390625,
-0.03521728515625,
-0.04144287109375,
... |
allenai/cochrane_dense_max | 2022-11-18T19:41:49.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | 1 | 5 | 2022-10-12T13:42:35 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.1959 | 0.6268 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.1995 | 0.6433 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. | 1,638 | [
[
-0.006500244140625,
-0.012115478515625,
0.020111083984375,
0.0186309814453125,
-0.01342010498046875,
-0.01561737060546875,
-0.01044464111328125,
-0.00039649009704589844,
0.0276641845703125,
0.03704833984375,
-0.035675048828125,
-0.04656982421875,
-0.058898925781... |
allenai/wcep_dense_mean | 2022-11-18T20:00:21.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 5 | 2022-10-12T14:33:21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.6239 | 0.6271 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6301 | 0.6031 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6564 | 0.6338 | | 1,868 | [
[
-0.036407470703125,
-0.00806427001953125,
0.0196533203125,
0.0154571533203125,
-0.0164947509765625,
-0.00685882568359375,
-0.0193634033203125,
-0.003971099853515625,
0.0244293212890625,
0.036529541015625,
-0.03466796875,
-0.046783447265625,
-0.054473876953125,
... |
csebuetnlp/BanglaParaphrase | 2022-11-14T15:39:43.000Z | [
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:bn",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"paraphrase-generation",
"arxiv:2210.0... | csebuetnlp | We present a high quality bangla paraphrase dataset containing about 466k paraphrase pairs. The paraphrases ensures high quality by being semantically coherent and syntactically diverse. | to be added | 3 | 5 | 2022-10-13T16:06:21 | ---
annotations_creators:
- found
language_creators:
- found
language:
- bn
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100k<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: BanglaParaphrase
tags:
- conditional-text-generation
- paraphrase-generation
---
# Dataset Card for "BanglaParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglaparaphrase](https://github.com/csebuetnlp/banglaparaphrase)
- **Paper:** [BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset](https://arxiv.org/abs/2210.05109)
- **Point of Contact:** [Najrin Sultana](mailto:nazrinshukti@gmail.com)
### Dataset Summary
We present BanglaParaphrase, a high-quality synthetic Bangla paraphrase dataset containing about 466k paraphrase pairs.
The paraphrases ensure high quality by being semantically coherent and syntactically diverse.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Languages
- `bengali`
## Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("csebuetnlp/BanglaParaphrase")
```
## Dataset Structure
### Data Instances
One example from the `train` part of the dataset is given below in JSON format.
```
{
"source": "বেশিরভাগ সময় প্রকৃতির দয়ার ওপরেই বেঁচে থাকতেন উপজাতিরা।",
"target": "বেশিরভাগ সময়ই উপজাতিরা প্রকৃতির দয়ার উপর নির্ভরশীল ছিল।"
}
```
### Data Fields
- `source`: A string representing the source sentence.
- `target`: A string representing the target sentence.
### Data Splits
The train-validation-test example counts are given below:
Language | ISO 639-1 Code | Train | Validation | Test |
-------------- | ---------------- | ------- | ----- | ------ |
Bengali | bn | 419,967 | 23,331 | 23,332 |
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Source Data
[Roar Bangla](https://roar.media/bangla)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
```
@article{akil2022banglaparaphrase,
title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset},
author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat},
journal={arXiv preprint arXiv:2210.05109},
year={2022}
}
```
### Contributions
| 5,293 | [
[
-0.0103912353515625,
-0.0596923828125,
-0.00441741943359375,
0.046905517578125,
-0.0256500244140625,
0.003353118896484375,
-0.02398681640625,
-0.01361083984375,
0.0222015380859375,
0.032135009765625,
-0.023895263671875,
-0.052459716796875,
-0.038848876953125,
... |
amanneo/collected-mail-corpus-mini | 2022-10-20T13:08:59.000Z | [
"region:us"
] | amanneo | null | null | 0 | 5 | 2022-10-20T13:08:38 | ---
dataset_info:
features:
- name: id
dtype: float64
- name: email_type
dtype: string
- name: text
dtype: string
- name: mail_length
dtype: int64
splits:
- name: test
num_bytes: 4260.131707317073
num_examples: 21
- name: train
num_bytes: 37326.86829268293
num_examples: 184
download_size: 26719
dataset_size: 41587.0
---
# Dataset Card for "collected-mail-corpus-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 554 | [
[
-0.052764892578125,
-0.026641845703125,
0.012969970703125,
-0.0021915435791015625,
-0.01959228515625,
0.0004949569702148438,
0.006305694580078125,
-0.00621795654296875,
0.08013916015625,
0.025360107421875,
-0.06610107421875,
-0.04815673828125,
-0.055877685546875... |
lcw99/cc100-ko-only | 2022-10-21T07:23:11.000Z | [
"language:ko",
"region:us"
] | lcw99 | null | null | 1 | 5 | 2022-10-21T06:05:16 | ---
language:
- ko
---
# cc100 dataset Korean only | 52 | [
[
-0.007701873779296875,
0.006839752197265625,
0.01523590087890625,
0.045501708984375,
-0.022613525390625,
0.0169219970703125,
-0.0194854736328125,
0.0177001953125,
0.0309295654296875,
0.07598876953125,
-0.06036376953125,
-0.0701904296875,
-0.022430419921875,
... |
arbml/TSAC | 2022-10-24T16:30:35.000Z | [
"region:us"
] | arbml | null | null | 0 | 5 | 2022-10-24T16:30:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
arbml/Arabic_News | 2022-10-26T00:19:15.000Z | [
"region:us"
] | arbml | null | null | 0 | 5 | 2022-10-26T00:14:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Short-Answer-Feedback/saf_legal_domain_german | 2023-03-31T11:47:38.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"short answer feedback",
"legal domain",
"region:us"
] | Short-Answer-Feedback | null | null | 2 | 5 | 2022-11-09T10:35:55 | ---
pretty_name: SAF - Legal Domain - German
annotations_creators:
- expert-generated
language:
- de
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- legal domain
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: error_class
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2142112
num_examples: 1596
- name: validation
num_bytes: 550206
num_examples: 400
- name: test_unseen_answers
num_bytes: 301087
num_examples: 221
- name: test_unseen_questions
num_bytes: 360616
num_examples: 275
download_size: 484808
dataset_size: 3354021
license: cc-by-4.0
---
# Dataset Card for "saf_legal_domain_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of the German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english) for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt.",
"verification_feedback": "Correct",
"error_class": "Keine",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `error_class`: a `string` feature representing the type of error identified in the case of a not completely correct answer.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
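The relation between `score` and `verification_feedback` described above can be sketched as follows (a hypothetical helper, not part of the dataset or its tooling):

```python
def verification_feedback(score: float) -> str:
    """Derive the verification_feedback label from a score in [0, 1],
    per the mapping described in the data fields above."""
    if score == 1.0:
        return "Correct"
    if score == 0.0:
        return "Incorrect"
    return "Partially correct"  # all intermediate scores
```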
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came from).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1596| 400| 221| 275|
## Additional Information
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | 4,926 | [
[
-0.051055908203125,
-0.057342529296875,
0.01181793212890625,
0.0260467529296875,
-0.0073089599609375,
-0.00765228271484375,
-0.0225982666015625,
-0.01953125,
0.0265655517578125,
0.039581298828125,
-0.07745361328125,
-0.043426513671875,
-0.031341552734375,
0.... |
bigbio/ehr_rel | 2022-12-22T15:44:34.000Z | [
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | bigbio | EHR-Rel is a novel open-source1 biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more
than the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,
the dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.
A detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets. | @inproceedings{schulz-etal-2020-biomedical,
title = {Biomedical Concept Relatedness {--} A large {EHR}-based benchmark},
author = {Schulz, Claudia and
Levy-Kramer, Josh and
Van Assel, Camille and
Kepes, Miklos and
Hammerla, Nils},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
month = {dec},
year = {2020},
address = {Barcelona, Spain (Online)},
publisher = {International Committee on Computational Linguistics},
url = {https://aclanthology.org/2020.coling-main.577},
doi = {10.18653/v1/2020.coling-main.577},
pages = {6565--6575},
} | 1 | 5 | 2022-11-13T22:08:18 |
---
language:
- en
bigbio_language:
- English
license: apache-2.0
multilinguality: monolingual
bigbio_license_shortname: APACHE_2p0
pretty_name: EHR-Rel
homepage: https://github.com/babylonhealth/EHR-Rel
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for EHR-Rel
## Dataset Description
- **Homepage:** https://github.com/babylonhealth/EHR-Rel
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
EHR-Rel is a novel open-source biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more
than the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,
the dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.
A detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets.
## Citation Information
```
@inproceedings{schulz-etal-2020-biomedical,
title = {Biomedical Concept Relatedness {--} A large {EHR}-based benchmark},
author = {Schulz, Claudia and
Levy-Kramer, Josh and
Van Assel, Camille and
Kepes, Miklos and
Hammerla, Nils},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
month = {dec},
year = {2020},
address = {Barcelona, Spain (Online)},
publisher = {International Committee on Computational Linguistics},
url = {https://aclanthology.org/2020.coling-main.577},
doi = {10.18653/v1/2020.coling-main.577},
pages = {6565--6575},
}
```
| 1,595 | [
[
-0.01041412353515625,
-0.05938720703125,
0.0355224609375,
-0.0180511474609375,
-0.0355224609375,
0.00972747802734375,
-0.0081634521484375,
-0.03985595703125,
0.047515869140625,
0.024871826171875,
-0.037628173828125,
-0.068359375,
-0.0193328857421875,
0.01626... |
bigbio/mediqa_nli | 2022-12-22T15:45:31.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | Natural Language Inference (NLI) is the task of determining whether a given hypothesis can be
inferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has
enjoyed popularity among researchers for some time. However, almost all datasets for this task
focused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI
dataset was created for language inference in the medical domain. MedNLI is a derived dataset with
data sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on
Medical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical
natural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise
hypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task
are expected to use the MedNLI data for development of their models and this dataset was used as an
unseen dataset for scoring each participant submission. | @misc{https://doi.org/10.13026/gtv4-g455,
title = {MedNLI for Shared Task at ACL BioNLP 2019},
author = {Shivade, Chaitanya},
year = 2019,
publisher = {physionet.org},
doi = {10.13026/GTV4-G455},
url = {https://physionet.org/content/mednli-bionlp19/}
} | 0 | 5 | 2022-11-13T22:09:39 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: PHYSIONET_LICENSE_1p5
pretty_name: MEDIQA NLI
homepage: https://physionet.org/content/mednli-bionlp19/1.0.1/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXTUAL_ENTAILMENT
---
# Dataset Card for MEDIQA NLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli-bionlp19/1.0.1/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
Natural Language Inference (NLI) is the task of determining whether a given hypothesis can be
inferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has
enjoyed popularity among researchers for some time. However, almost all datasets for this task
focused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI
dataset was created for language inference in the medical domain. MedNLI is a derived dataset with
data sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on
Medical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical
natural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise
hypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task
are expected to use the MedNLI data for development of their models and this dataset was used as an
unseen dataset for scoring each participant submission.
## Citation Information
```
@misc{https://doi.org/10.13026/gtv4-g455,
title = {MedNLI for Shared Task at ACL BioNLP 2019},
author = {Shivade, Chaitanya},
year = 2019,
publisher = {physionet.org},
doi = {10.13026/GTV4-G455},
url = {https://physionet.org/content/mednli-bionlp19/}
}
```
| 1,887 | [
[
-0.004730224609375,
-0.04376220703125,
0.0352783203125,
0.021942138671875,
-0.00217437744140625,
-0.0194549560546875,
-0.0026397705078125,
-0.035858154296875,
0.033721923828125,
0.03375244140625,
-0.062286376953125,
-0.0440673828125,
-0.0202484130859375,
0.0... |
bigbio/mediqa_qa | 2022-12-22T15:45:32.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
In the QA task, participants are tasked to:
- filter/classify the provided answers (1: correct, 0: incorrect).
- re-rank the answers. | @inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
} | 0 | 5 | 2022-11-13T22:09:42 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: MEDIQA QA
homepage: https://sites.google.com/view/mediqa2019
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for MEDIQA QA
## Dataset Description
- **Homepage:** https://sites.google.com/view/mediqa2019
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
In the QA task, participants are tasked to:
- filter/classify the provided answers (1: correct, 0: incorrect).
- re-rank the answers.
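As a minimal sketch of these two sub-tasks, assuming a model that assigns each candidate answer a relevance score (the scoring function and threshold here are illustrative, not part of the task definition):

```python
def classify_and_rerank(answers, scores, threshold=0.5):
    """Label each answer (1: correct, 0: incorrect) and sort answers by score.

    `scores` are hypothetical model relevance scores, one per answer.
    """
    labels = [1 if s >= threshold else 0 for s in scores]
    # re-rank: highest-scoring answer first
    ranked = [a for _, a in sorted(zip(scores, answers), key=lambda t: -t[0])]
    return labels, ranked
```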
## Citation Information
```
@inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
}
```
| 1,215 | [
[embedding vector truncated] ] |
bigbio/n2c2_2014_deid | 2022-12-22T15:45:57.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.
The first of these was the de-identification track focused on identifying protected health
information (PHI) in longitudinal clinical narratives.
TRACK 1: NER PHI
HIPAA requires that patient medical records have all identifying information removed in order to
protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the
patient or of relatives, employers, or household members of the patient that must be removed in order
for a file to be considered de-identified.
In order to de-identify the records, each file has PHI marked up. All PHI has an
XML tag indicating its category and type, where applicable. For the purposes of this task,
the 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories | @article{stubbs2015automated,
title = {Automated systems for the de-identification of longitudinal
clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1},
journal = {Journal of Biomedical Informatics},
volume = {58},
pages = {S11-S19},
year = {2015},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2015.06.007},
url = {https://www.sciencedirect.com/science/article/pii/S1532046415001173},
author = {Amber Stubbs and Christopher Kotfila and Özlem Uzuner}
} | 1 | 5 | 2022-11-13T22:10:42 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2014 De-identification
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for n2c2 2014 De-identification
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.
The first of these was the de-identification track focused on identifying protected health
information (PHI) in longitudinal clinical narratives.
TRACK 1: NER PHI
HIPAA requires that patient medical records have all identifying information removed in order to
protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the
patient or of relatives, employers, or household members of the patient that must be removed in order
for a file to be considered de-identified.
In order to de-identify the records, each file has PHI marked up. All PHI has an
XML tag indicating its category and type, where applicable. For the purposes of this task,
the 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories
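As an illustration of what such category/type markup can look like, the snippet below parses a hypothetical inline-tagged record (the actual n2c2 annotation schema may differ):

```python
import xml.etree.ElementTree as ET

# hypothetical inline markup; the real n2c2 files may use a different schema
doc = ('<record>Seen by Dr. <PHI category="NAME" type="DOCTOR">Smith</PHI> '
       'on <PHI category="DATE" type="DATE">2089-01-12</PHI>.</record>')

root = ET.fromstring(doc)
# collect (category, type, surface text) for every marked PHI span
phi_spans = [(e.get("category"), e.get("type"), e.text) for e in root.iter("PHI")]
```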
## Citation Information
```
@article{stubbs2015automated,
title = {Automated systems for the de-identification of longitudinal
clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1},
journal = {Journal of Biomedical Informatics},
volume = {58},
pages = {S11-S19},
year = {2015},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2015.06.007},
url = {https://www.sciencedirect.com/science/article/pii/S1532046415001173},
author = {Amber Stubbs and Christopher Kotfila and Özlem Uzuner}
}
```
| 1,904 | [
[embedding vector truncated] ] |
ciempiess/ciempiess_test | 2023-08-11T19:19:33.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"ciempiess",
"spanish",
"mexican spanish",
"test set... | ciempiess | The CIEMPIESS TEST Corpus is a gender balanced corpus destined to test acoustic models for the speech recognition task. The corpus was manually transcribed and it contains audio recordings from 10 male and 10 female speakers. The CIEMPIESS TEST is one of the three corpora included at the LDC's \"CIEMPIESS Experimentation\" (LDC2019S07). | @misc{carlosmenaciempiesstest2022,
title={CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2019S07},
DOI={https://doi.org/10.35111/xdx5-n815},
author={Hernandez Mena, Carlos Daniel},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2019S07},
} | 1 | 5 | 2022-11-21T18:29:31 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- other
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- ciempiess
- spanish
- mexican spanish
- test set
- ciempiess project
- ciempiess-unam project
- ciempiess test
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ciempiess_test
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIEMPIESS-UNAM Project](https://ciempiess.org/)
- **Repository:** [CIEMPIESS-TEST is part of LDC2019S07](https://catalog.ldc.upenn.edu/LDC2019S07)
- **Paper:** [Creating Mexican Spanish Language Resources through the Social Service Program](https://aclanthology.org/2022.nidcp-1.4.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org)
### Dataset Summary
When developing automatic speech recognition engines, or any other machine learning system, it is good practice to keep the test data separate from the training data and never combine them. The CIEMPIESS TEST Corpus was created out of this need for a standard test set intended to measure the progress of the community of users of the CIEMPIESS datasets, and we strongly recommend not using the CIEMPIESS TEST for any other purpose.
The CIEMPIESS TEST Corpus is a gender balanced corpus designed to test acoustic models for the speech recognition task. It was created from recordings and human transcripts of 10 male and 10 female speakers.
The CIEMPIESS TEST Corpus is considered a CIEMPIESS dataset because it only contains audio from the same source of the first [CIEMPIESS Corpus](https://catalog.ldc.upenn.edu/LDC2015S07) and it has the word "TEST" in its name, obviously because it is recommended for test purposes only.
This corpus is part of the [CIEMPIESS Experimentation](https://catalog.ldc.upenn.edu/LDC2019S07), which is a set of three different datasets, specifically [CIEMPIESS COMPLEMENTARY](https://huggingface.co/datasets/ciempiess/ciempiess_complementary), [CIEMPIESS FEM](https://huggingface.co/datasets/ciempiess/ciempiess_fem) and [CIEMPIESS TEST](https://huggingface.co/datasets/ciempiess/ciempiess_test).
CIEMPIESS is the acronym for:
"Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social".
### Example Usage
The CIEMPIESS TEST contains only the test split:
```python
from datasets import load_dataset
ciempiess_test = load_dataset("ciempiess/ciempiess_test")
```
It is also valid to do:
```python
from datasets import load_dataset
ciempiess_test = load_dataset("ciempiess/ciempiess_test",split="test")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
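As a sketch of how WER is computed, it is the word-level edit distance between the reference transcription and the model's hypothesis, normalized by the number of reference words (a minimal pure-Python version; in practice a library such as `jiwer` is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```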
### Languages
The language of the corpus is Spanish with the accent of Central Mexico, except for speaker M_09, who comes from El Salvador.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'CMPT_M_07_0074',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/86a30fdc762ba3fad1e38fbe6900ea4940d6f0070af8d56aa483701faa050d51/test/male/M_07/CMPT_M_07_0074.flac',
'array': array([-0.00192261, -0.00234985, -0.00158691, ..., -0.00839233,
-0.00900269, -0.00698853], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'M_07',
'gender': 'male',
'duration': 7.510000228881836,
'normalized_text': 'pues está la libertá de las posiciones de a ver quién es pasivo quién es activo blablablá muchas cosas no pero'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
The corpus has only a test split, which contains a total of 3558 speech files from 10 male speakers and 10 female speakers, with a total duration of 8 hours and 8 minutes.
## Dataset Creation
### Curation Rationale
The CIEMPIESS TEST (CT) Corpus has the following characteristics:
* The CT has a total of 3558 audio files of 10 male speakers and 10 female speakers. It has a total duration of 8 hours and 8 minutes.
* The total number of audio files that come from male speakers is 1694 with a total duration of 4 hours and 3 minutes. The total number of audio files that come from female speakers is 1864 with a total duration of 4 hours and 4 minutes. So CT is perfectly balanced in gender.
* All of the speakers in the CT come from Mexico, except for speaker M_09, who comes from El Salvador.
* Every audio file in the CT has a duration between 5 and 10 seconds approximately.
* Data in CT is classified by gender and also by speaker, so one can easily select audios from a particular set of speakers to do experiments.
* Audio files in the CT and the first [CIEMPIESS](https://catalog.ldc.upenn.edu/LDC2015S07) are all of the same type. In both, speakers talk about legal and lawyer issues. They also talk about things related to the [UNAM University](https://www.unam.mx/) and the ["Facultad de Derecho de la UNAM"](https://www.derecho.unam.mx/).
* As in the first CIEMPIESS Corpus, transcriptions in the CT were made by humans.
* Speakers in the CT are not present in any other CIEMPIESS dataset.
* Audio files in the CT are distributed in 16 kHz, 16-bit mono format.
### Source Data
#### Initial Data Collection and Normalization
The CIEMPIESS TEST is a Radio Corpus designed to test acoustic models of automatic speech recognition and it is made out of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. Most of the speech in these conversations has the accent of Central Mexico.
All the recordings that constitute the CIEMPIESS TEST come from ["RADIO-IUS"](http://www.derecho.unam.mx/cultura-juridica/radio.php), a radio station belonging to UNAM. Recordings were donated by Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" with the condition that they have to be used for academic and research purposes only.
### Annotations
#### Annotation process
The annotation process is as follows:
* 1. A whole podcast is manually segmented, keeping just the portions containing good-quality speech.
* 2. A second pass of segmentation is performed, this time to separate speakers and put them in different folders.
* 3. The resulting speech files, between 5 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers but have no particular training as transcribers.
#### Who are the annotators?
The CIEMPIESS TEST Corpus was created by the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) of the ["Facultad de Ingeniería"](https://www.ingenieria.unam.mx/) (FI) in the ["Universidad Nacional Autónoma de México"](https://www.unam.mx/) (UNAM) between 2016 and 2018 by Carlos Daniel Hernández Mena, head of the program.
### Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available podcasts, so there was no real intent by the participants to be anonymized. In any case, you agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is challenging because it contains spontaneous speech, so it will be helpful to the ASR community for evaluating acoustic models in Spanish.
### Discussion of Biases
The dataset is intended to be gender balanced: it is comprised of 10 male speakers and 10 female speakers. On the other hand, the vocabulary is limited to legal issues.
### Other Known Limitations
The transcriptions in this dataset were revised by Mónica Alejandra Ruiz López during 2022 and they are slightly different from the transcriptions found at [LDC](https://catalog.ldc.upenn.edu/LDC2019S07) or at the [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) official website. We strongly recommend using these updated transcriptions; we will soon update the transcriptions in the rest of the repositories.
### Dataset Curators
The dataset was collected by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html), it was curated by Carlos Daniel Hernández Mena and its transcriptions were manually verified by Mónica Alejandra Ruiz López during 2022.
### Licensing Information
[CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{carlosmenaciempiesstest2019,
title={CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2019S07},
DOI={https://doi.org/10.35111/xdx5-n815},
author={Hernandez Mena, Carlos Daniel},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2019S07},
}
```
### Contributions
The authors want to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla." We also thank the social service students for all their hard work.
We also thank Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" for donating all the recordings that constitute the CIEMPIESS TEST Corpus.
Special thanks to Mónica Alejandra Ruiz López who performed a meticulous verification of the transcriptions of this dataset during 2022.
| 11,485 | [
[embedding vector truncated] ] |
kasnerz/logicnlg | 2023-03-14T15:10:07.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 5 | 2022-11-28T11:58:52 | Entry not found | 15 | [
[embedding vector truncated] ] |
kasnerz/charttotext-s | 2023-03-14T15:08:25.000Z | [
"region:us"
] | kasnerz | null | null | 1 | 5 | 2022-11-28T12:36:03 | Entry not found | 15 | [
[embedding vector truncated] ] |
mlxen/squad_validation_with_JJ_VB_synonyms | 2022-11-29T21:29:40.000Z | [
"region:us"
] | mlxen | null | null | 0 | 5 | 2022-11-29T06:06:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 10484818
num_examples: 10570
download_size: 1825207
dataset_size: 10484818
---
# Dataset Card for "squad_validation_with_JJ_VB_synonyms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 604 | [
[embedding vector truncated] ] |
deutsche-telekom/NLU-few-shot-benchmark-en-de | 2023-01-01T07:23:53.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|deutsche-telekom/NLU-Evaluation-Data-en-de",
"language:en",
"language:de",
"license:cc-by-4.0",
"region:us"
] | deutsche-telekom | null | null | 1 | 5 | 2022-12-02T16:26:59 | ---
license: cc-by-4.0
language:
- en
- de
multilinguality:
- multilingual
source_datasets:
- extended|deutsche-telekom/NLU-Evaluation-Data-en-de
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- intent-classification
---
# NLU Few-shot Benchmark - English and German
This is a few-shot training dataset from the domain of human-robot interaction.
It contains texts in German and English with 64 different utterances (classes).
Each utterance (class) has exactly 20 samples in the training set.
This leads to a total of 1280 different training samples.
The dataset is intended to benchmark the intent classifiers of chat bots in English and especially in German language.
We are building on our
[deutsche-telekom/NLU-Evaluation-Data-en-de](https://huggingface.co/datasets/deutsche-telekom/NLU-Evaluation-Data-en-de)
data set.
## Processing Steps
- drop `NaN` values
- drop duplicates in `answer_de` and `answer`
- delete all rows where `answer_de` has more than 70 characters
- add column `label`: `df["label"] = df["scenario"] + "_" + df["intent"]`
- remove classes (`label`) with less than 25 samples:
- `audio_volume_other`
- `cooking_query`
- `general_greet`
- `music_dislikeness`
- random selection for train set - exactly 20 samples for each class (`label`)
- rest for test set
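The steps above can be sketched in pandas roughly as follows (column names follow the source dataset; the duplicate-drop and sampling details are our reading of the list, so treat this as an approximation rather than the exact script):

```python
import pandas as pd

def build_few_shot_split(df: pd.DataFrame, n_train: int = 20,
                         max_len: int = 70, min_samples: int = 25,
                         seed: int = 42):
    # drop NaN values and duplicate answers
    df = df.dropna()
    df = df.drop_duplicates(subset=["answer_de", "answer"])
    # delete all rows where answer_de has more than max_len characters
    df = df[df["answer_de"].str.len() <= max_len].copy()
    # build the label from scenario and intent
    df["label"] = df["scenario"] + "_" + df["intent"]
    # remove classes with fewer than min_samples samples
    counts = df["label"].value_counts()
    df = df[df["label"].isin(counts[counts >= min_samples].index)]
    # exactly n_train samples per class for training, the rest for test
    train = df.groupby("label", group_keys=False).sample(n=n_train, random_state=seed)
    test = df.drop(train.index)
    return train, test
```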
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
| 1,698 | [
[embedding vector truncated] ] |
lmqg/qg_tweetqa | 2022-12-02T19:11:42.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa). | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 0 | 5 | 2022-12-02T18:53:49 | ---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'vine',
'paragraph_question': 'question: what site does the link take you to?, context:5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013',
'question': 'what site does the link take you to?',
'paragraph': '5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013'
}
```
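Judging from the instance above, the `paragraph_question` field appears to concatenate the question and the context with fixed prefixes; a sketch of that formatting (our reading of the example, not an official script):

```python
def make_paragraph_question(question: str, paragraph: str) -> str:
    # note the space after "question:" but none after "context:",
    # matching the instance shown above
    return f"question: {question}, context:{paragraph}"
```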
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `question_answer`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9489 | 1086| 1203|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 2,373 | [
[embedding vector truncated] ] |
Brosnan/WIFI_RSSI_Indoor_Positioning_Dataset | 2022-12-02T20:42:32.000Z | [
"task_categories:tabular-classification",
"task_ids:tabular-single-column-regression",
"language_creators:expert-generated",
"size_categories:100K<n<1M",
"license:cc-by-nc-sa-4.0",
"wifi",
"indoor-positioning",
"indoor-localisation",
"wifi-rssi",
"rssi",
"recurrent-neural-networks",
"region:us... | Brosnan | null | null | 2 | 5 | 2022-12-02T20:14:17 | ---
license: cc-by-nc-sa-4.0
language_creators:
- expert-generated
pretty_name: WiFi RSSI Indoor Localization
size_categories:
- 100K<n<1M
task_categories:
- tabular-classification
task_ids:
- tabular-single-column-regression
tags:
- wifi
- indoor-positioning
- indoor-localisation
- wifi-rssi
- rssi
- recurrent-neural-networks
---
# WIFI RSSI Indoor Positioning Dataset
A reliable and comprehensive public WiFi fingerprinting database for researchers to implement and compare indoor localization methods. The database contains RSSI information from 6 APs, collected on different days with the support of an autonomous robot.
We use an autonomous robot to collect the WiFi fingerprint data. Our 3-wheel robot has multiple sensors including a wheel odometer, an inertial measurement unit (IMU), a LIDAR, sonar sensors and a color and depth (RGB-D) camera. The robot can navigate to a target location to collect WiFi fingerprints automatically. The localization accuracy of the robot is 0.07 m ± 0.02 m. The dimension of the area is 21 m × 16 m. It has three long corridors. There are six APs; five of them provide two distinct MAC addresses, for the 2.4- and 5-GHz communication channels respectively, while one only operates on the 2.4-GHz frequency. One router can also provide CSI information.
# Data Format
X Position (m), Y Position (m), RSSI Feature 1 (dBm), RSSI Feature 2 (dBm), RSSI Feature 3 (dBm), RSSI Feature 4 (dBm), ...
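Assuming a plain comma-separated layout as described by the format line above, one row can be parsed into a position and an RSSI feature vector like this (the exact file layout beyond the listed fields is an assumption):

```python
def parse_fingerprint_row(line: str):
    """Split one CSV row into ((x, y) position in metres, RSSI features in dBm)."""
    values = [float(v) for v in line.strip().split(",")]
    position = (values[0], values[1])
    rssi_features = values[2:]
    return position, rssi_features
```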
| 1,453 | [
[embedding vector truncated] ] |
tarteel-ai/everyayah | 2022-12-09T19:33:08.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:mit",
"region:us"
] | tarteel-ai | null | null | 6 | 5 | 2022-12-07T21:53:59 | ---
pretty_name: Tarteel AI - EveryAyah Dataset
dataset_info:
features:
- name: audio
dtype: audio
- name: duration
dtype: float64
- name: text
dtype: string
- name: reciter
dtype: string
splits:
- name: train
num_bytes: 262627688145.3
num_examples: 187785
- name: test
num_bytes: 25156009734.72
num_examples: 23473
- name: validation
num_bytes: 23426886730.218
num_examples: 23474
download_size: 117190597305
dataset_size: 311210584610.23804
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ar
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: tarteel-everyayah
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: clean
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
eval_split: test
validation_split: validation
col_mapping:
audio: audio
text: text
reciter: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
---
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai)
### Dataset Summary
This dataset is a collection of Quranic verses and their transcriptions, with diacritization, by different reciters.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio`, and its transcription called `text`.
The `duration` is in seconds, and the author is `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
Numerati/numerai-datasets | 2022-12-11T13:11:50.000Z | [
"task_categories:time-series-forecasting",
"task_categories:tabular-classification",
"task_categories:other",
"task_ids:multivariate-time-series-forecasting",
"task_ids:tabular-single-column-regression",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:othe... | Numerati | null | null | 2 | 5 | 2022-12-10T19:58:16 | ---
annotations_creators:
- no-annotation
language: []
language_creators:
- machine-generated
license:
- unknown
multilinguality:
- other-my-multilinguality
pretty_name: Numerai Dataset
size_categories: []
source_datasets:
- original
tags:
- numerai
- stock market
- hedge fund
- obfuscated
task_categories:
- time-series-forecasting
- tabular-classification
- other
task_ids:
- multivariate-time-series-forecasting
- tabular-single-column-regression
---
# Numerai Datasets
This is a mirror of the official numerai dataset - NOT OFFICIALLY SUPPORTED OR MAINTAINED BY NUMERAI.
Official source: https://numer.ai/data
Use the official source to submit your predictions; no guarantees are made for correctness or completeness.
This is maintained by the Numerai community.
gigant/tib_transcripts | 2023-01-21T13:54:23.000Z | [
"region:us"
] | gigant | null | null | 0 | 5 | 2022-12-12T21:18:55 | ---
dataset_info:
features:
- name: doi
dtype: string
- name: transcript
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 251058543
num_examples: 8481
download_size: 130991914
dataset_size: 251058543
---
# Dataset Card for "tib_transcripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images | 2022-12-14T21:04:51.000Z | [
"region:us"
] | bhargavsdesai | null | null | 1 | 5 | 2022-12-14T19:24:25 | Entry not found
lmqg/qag_itquad | 2022-12-18T08:21:31.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_itquad",
"language:it",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question & answer generation dataset based on SQuAD. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 0 | 5 | 2022-12-18T08:05:18 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: it
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_itquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_itquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the ITQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
Italian (it)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""4 Minuti" è uscito come primo singolo dell' album e ha raggiunto il terzo posto sulla Billboard Hot 100. E' stato il 37° top-ten di Madonna che ha spinto Madonna oltre Elvis Presley come l' artista con i più top-ten hit. Nel Regno Unito ha mantenuto il suo record per il più numero uno single per una artista femminile;"4 Minuti" diventando il suo tredicesimo. Al 23° Japan Gold Disc Awards, Madonna ha ricevuto il suo quinto trofeo Artista dell' anno dalla Recording Industry Association of Japan, la più importante per qualsiasi artista. Per promuovere ulteriormente l' album, Madonna ha intrapreso il Sticky & Sweet Tour, la sua prima grande avventura con Live Nation. Con un lordo di 280 milioni di dollari, è diventato il tour più incassato di un artista solista, superando il precedente record di Madonna stabilito con il Confessions Tour; è stato poi superato da The Wall Live di Roger Waters. E' stato esteso al prossimo anno, aggiungendo nuove date europee, e dopo la fine, il totale lordo totale era di 408 milioni di dollari.",
"questions": [ "Qual è il nome del primo tour con Live Nation?", "4 minuti è diventato Madonna's che numero uno nel Regno Unito?", "Quanto ha incassato Stick e Sweet Tour?", "Madonna ha superato l' artista con i più alti dieci colpi?" ],
"answers": [ "Sticky & Sweet Tour", "tredicesimo", "280 milioni di dollari,", "Elvis Presley" ],
"questions_answers": "question: Qual è il nome del primo tour con Live Nation?, answer: Sticky & Sweet Tour | question: 4 minuti è diventato Madonna's che numero uno nel Regno Unito?, answer: tredicesimo | question: Quanto ha incassato Stick e Sweet Tour?, answer: 280 milioni di dollari, | question: Madonna ha superato l' artista con i più alti dieci colpi?, answer: Elvis Presley"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
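The flat `questions_answers` string can be split back into (question, answer) pairs. A small helper sketch, assuming the literal separators `" | "` and `", answer: "` never occur inside a question or an answer themselves:

```python
def parse_questions_answers(flat: str):
    """Recover (question, answer) pairs from a flat `questions_answers` string."""
    pairs = []
    for chunk in flat.split(" | "):
        # each chunk looks like "question: <q>, answer: <a>"
        question, answer = chunk.split(", answer: ", 1)
        pairs.append((question.removeprefix("question: "), answer))
    return pairs
```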
## Data Splits
| train | validation | test |
| ----: | ---------: | ---: |
| 16918 |       6280 | 1988 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
fewshot-goes-multilingual/cs_mall-product-reviews | 2022-12-20T21:11:15.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"region:us"
] | fewshot-goes-multilingual | null | null | 1 | 5 | 2022-12-20T20:35:40 | ---
annotations_creators:
- found
language:
- cs
language_creators:
- found
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: Mall.cz Product Reviews
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Mall.cz Product Reviews (Czech)
## Dataset Description
The dataset contains user reviews from the Czech e-shop [mall.cz](https://www.mall.cz).
Each review contains the review text, a sentiment label (positive/negative/neutral), and the language (mostly Czech, occasionally Slovak), detected automatically using [lingua-py](https://github.com/pemistahl/lingua-py).
The dataset has 30,000 reviews in total (train + validation + test), and the data is balanced:
the train set has 8,000 positive, 8,000 neutral and 8,000 negative reviews;
the validation and test sets each have 1,000 positive, 1,000 neutral and 1,000 negative reviews.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating - "pozitivní" / "neutrální" / "negativní"
- `rating_int`: integer representation of the rating (1=positive, 0=neutral, -1=negative)
- `comment_language`: language of the review (mostly "cs", occasionally "sk")
- `comment`: the string of the review
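The two rating representations map onto each other directly; a tiny sketch of the mapping as given in the field descriptions above:

```python
# Mapping between `rating_str` and `rating_int`, per the field list above.
RATING_STR_TO_INT = {"pozitivní": 1, "neutrální": 0, "negativní": -1}

def to_rating_int(rating_str: str) -> int:
    """Convert a `rating_str` value to the corresponding `rating_int`."""
    return RATING_STR_TO_INT[rating_str]
```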
## Dataset Source
The data is a processed adaptation of [Mall CZ corpus](https://liks.fav.zcu.cz/sentiment/).
The adaptation is label-balanced and adds the automatically detected language.
RuyuanWan/SChem_Disagreement | 2022-12-26T22:03:28.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
] | RuyuanWan | null | null | 0 | 5 | 2022-12-26T19:56:21 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: RuyuanWan/SChem_Disagreement
size_categories: []
source_datasets:
- extended
tags: []
task_categories:
- text-classification
task_ids: []
---
This dataset is a processed version of the Social Chemistry 101 (SChem) dataset, including the text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Social Chemistry 101(Forbes et al. 2020)](https://github.com/mbforbes/social-chemistry-101) <br>
RuyuanWan/Dilemmas_Disagreement | 2022-12-26T21:28:17.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
] | RuyuanWan | null | null | 0 | 5 | 2022-12-26T21:21:21 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: RuyuanWan/Dilemmas_Disagreement
size_categories: []
source_datasets:
- extended
tags: []
task_categories:
- text-classification
task_ids: []
---
This dataset is a processed version of the Dilemmas dataset, including the text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Scruples-dilemmas (Lourie, Bras, and Choi 2021)](https://github.com/allenai/scruples) <br>
DavidVivancos/MindBigData2022 | 2023-01-07T10:18:30.000Z | [
"arxiv:2212.14746",
"region:us"
] | DavidVivancos | null | null | 2 | 5 | 2022-12-27T16:01:18 | # MindBigData 2022 A Large Dataset of Brain Signals
> Supporting datasets for paper [arXiv:2212.14746](https://arxiv.org/abs/2212.14746)
> There are 3 Main datasets with subdatasets:
>
**1.- MindBigData MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/index.html
> All datasets are split 80% train / 20% test (also class-proportional across the 11 classes)
> EEGs resampled to match the original headsets' sampling rates
> Headers included.
> Simplified to contain only the label & EEG data, with columns named in the headers as ChannelName-SampleNum; e.g., for channel FP1 on MindWave they are FP1-0, FP1-1, ..., FP1-1023, since there are 1024 samples.
> There are 4 subdatasets:
>
> For MindWave with 1 EEG Channel and 1024 samples x Channel
>
> For EPOC1 with 14 EEG Channels and 256 samples x Channel
>
> For Muse1 with 4 EEG Channels and 440 samples x Channel
>
> For Insight1 with 5 EEG Channels and 256 samples x Channel
>
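The `ChannelName-SampleNum` header convention above can be generated with a small helper (an illustration, not part of the dataset tooling):

```python
def header_columns(channels, samples_per_channel):
    """Build the flattened 'ChannelName-SampleNum' column names used in the headers."""
    return [f"{ch}-{i}" for ch in channels for i in range(samples_per_channel)]

# MindWave: a single EEG channel (FP1) with 1024 samples per channel
mindwave_cols = header_columns(["FP1"], 1024)
```

For the other headsets the same helper applies with their channel lists and per-channel sample counts (e.g., 256 samples per channel for EPOC1).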
**1.1.- MindBigData MNIST of Brain digits MindWave1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MW
>
**1.2.- MindBigData MNIST of Brain digits EPOC1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_EP
**1.3.- MindBigData MNIST of Brain digits Muse1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MU
**1.4.- MindBigData MNIST of Brain digits Insight1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_IN
**2.- MindBigData Imagenet of the Brain**
> based on http://mindbigdata.com/opendb/imagenet.html
> All datasets are split 80% train / 20% test (also class-proportional across all classes)
> EEGs resampled to match the original headsets' sampling rates
> Headers included.
> Contains the label (the ILSVRC2013 category), a hot-encoded name list, the RGB pixel values of the image seen (resampled to 150 by 150 pixels) & the EEG data, with columns named in the headers as ChannelName-SampleNum.
> There are 2 subdatasets:
>
> One with the Insight 1 EEG signals at 384 samples per channel (5 channels)
>
> One with the Spectrogram image 64x64px instead of the EEG as described in the paper
>
**2.1.- MindBigData Imagenet of the Brain Insight1 EEG**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN
**2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN_Spct
**3.- MindBigData Visual MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/visualmnist.html
> All datasets are split 80% train / 20% test (also class-proportional across the 11 classes)
> Headers included.
> Simplified to contain only the label, the original MNIST pixels of the digit seen (28x28 pixels) & the EEG data, with columns named in the headers as ChannelName-SampleNum; e.g., for channel TP9 on Muse2 they are TP9-0, TP9-1, ..., TP9-511, since there are 512 samples.
> There are 3 subdatasets:
>
> For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs
>
**3.1.- MindBigData Visual MNIST of Brain digits Muse2**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_MU2
**3.2.- MindBigData Visual MNIST of Brain digits Cap64**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64
**3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64_Morlet
conorcl/portraits-512 | 2022-12-30T09:04:11.000Z | [
"region:us"
] | conorcl | null | null | 0 | 5 | 2022-12-30T09:03:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 83939067.61
num_examples: 2917
download_size: 83808019
dataset_size: 83939067.61
---
# Dataset Card for "portraits-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
irds/clueweb09 | 2023-01-05T02:54:31.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 5 | 2023-01-05T02:54:25 | ---
pretty_name: '`clueweb09`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `clueweb09`
The `clueweb09` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,040,859,705
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
irds/disks45_nocr | 2023-01-05T03:01:34.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 5 | 2023-01-05T03:01:29 | ---
pretty_name: '`disks45/nocr`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `disks45/nocr`
The `disks45/nocr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=528,155
This dataset is used by: [`disks45_nocr_trec-robust-2004`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004), [`disks45_nocr_trec-robust-2004_fold1`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold1), [`disks45_nocr_trec-robust-2004_fold2`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold2), [`disks45_nocr_trec-robust-2004_fold3`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold3), [`disks45_nocr_trec-robust-2004_fold4`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold4), [`disks45_nocr_trec-robust-2004_fold5`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold5), [`disks45_nocr_trec7`](https://huggingface.co/datasets/irds/disks45_nocr_trec7), [`disks45_nocr_trec8`](https://huggingface.co/datasets/irds/disks45_nocr_trec8)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/disks45_nocr', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'body': ..., 'marked_up_doc': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@misc{Voorhees1996Disks45,
title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set},
author = {Ellen M. Voorhees},
doi = {10.18434/t47g6m},
year = {1996},
publisher = {National Institute of Standards and Technology}
}
```
irds/msmarco-passage_train_triples-small | 2023-01-05T03:17:31.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/msmarco-passage",
"region:us"
] | irds | null | null | 0 | 5 | 2023-01-05T03:17:25 | ---
pretty_name: '`msmarco-passage/train/triples-small`'
viewer: false
source_datasets: ['irds/msmarco-passage']
task_categories:
- text-retrieval
---
# Dataset Card for `msmarco-passage/train/triples-small`
The `msmarco-passage/train/triples-small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-passage#msmarco-passage/train/triples-small).
# Data
This dataset provides:
- `docpairs`; count=39,780,811
- For `docs`, use [`irds/msmarco-passage`](https://huggingface.co/datasets/irds/msmarco-passage)
## Usage
```python
from datasets import load_dataset
docpairs = load_dataset('irds/msmarco-passage_train_triples-small', 'docpairs')
for record in docpairs:
record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
```
irds/trec-arabic_ar2002 | 2023-01-05T03:51:37.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/trec-arabic",
"region:us"
] | irds | null | null | 0 | 5 | 2023-01-05T03:51:31 | ---
pretty_name: '`trec-arabic/ar2002`'
viewer: false
source_datasets: ['irds/trec-arabic']
task_categories:
- text-retrieval
---
# Dataset Card for `trec-arabic/ar2002`
The `trec-arabic/ar2002` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2002).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=38,432
- For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-arabic_ar2002', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-arabic_ar2002', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Gey2002Arabic,
title={The TREC-2002 Arabic/English CLIR Track},
author={Fredric Gey and Douglas Oard},
booktitle={TREC},
year={2002}
}
@misc{Graff2001Arabic,
title={Arabic Newswire Part 1 LDC2001T55},
author={Graff, David, and Walker, Kevin},
year={2001},
url={https://catalog.ldc.upenn.edu/LDC2001T55},
publisher={Linguistic Data Consortium}
}
```
sil-ai/audio-kw-in-context | 2023-07-24T18:14:59.000Z | [
"region:us"
] | sil-ai | null | @InProceedings{huggingface:audio-kw-in-context,
title = {audio-kw-in-context},
author={Joshua Nemecek
},
year={2022}
} | 0 | 5 | 2023-01-09T16:38:11 | Entry not found
Lord-Goku/testing_1 | 2023-01-11T18:16:39.000Z | [
"license:afl-3.0",
"region:us"
] | Lord-Goku | null | null | 0 | 5 | 2023-01-09T18:28:35 | ---
license: afl-3.0
---
---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for Testing Stock Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a test dataset.
### Supported Tasks and Leaderboards
BERT
MARKET
STOCK
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.