id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Niche-Squad/mock-dots | 2023-09-13T14:52:49.000Z | [
"license:bsd-3-clause",
"region:us"
] | Niche-Squad | null | null | null | 1 | 234 | ---
license: bsd-3-clause
---
|
bigheiniuJ/JimmyLu | 2023-10-08T12:53:57.000Z | [
"region:us"
] | bigheiniuJ | null | null | null | 0 | 234 | ---
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: seed
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 767510
num_examples: 3150
- name: dev
num_bytes: 746828
num_examples: 3150
- name: test
num_bytes: 24605660
num_examples: 87430
download_size: 8480863
dataset_size: 26119998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "JimmyLu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
copenlu/fever_gold_evidence | 2022-11-17T11:42:54.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | copenlu | null | null | null | 4 | 233 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: ''
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020.
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109–113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` |
GATE-engine/COCOStuff164K | 2023-06-26T06:29:49.000Z | [
"region:us"
] | GATE-engine | null | null | null | 0 | 233 | ---
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: val
num_bytes: 2431424833.0
num_examples: 5000
- name: train
num_bytes: 57790292141.76
num_examples: 118287
download_size: 39862772718
dataset_size: 60221716974.76
---
# Dataset Card for "COCOStuff164K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T03:32:53.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 233 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 62261087
dataset_size: 472880000
---
# Dataset Card for "autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
longhoang06/text-recognition | 2023-09-30T15:08:12.000Z | [
"region:us"
] | longhoang06 | null | null | null | 0 | 233 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6858787617.0
num_examples: 100000
download_size: 6858941356
dataset_size: 6858787617.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "text-recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ds4sd/DocLayNet | 2023-01-25T17:01:19.000Z | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
] | ds4sd | DocLayNet is a human-annotated document layout segmentation dataset from a broad variety of document sources. | @article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
  doi = {10.1145/3534678.3539043},
url = {https://arxiv.org/abs/2206.01062},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022}
} | null | 24 | 232 | ---
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet
size_categories:
- 10K<n<100K
tags:
- layout-segmentation
- COCO
- document-understanding
- PDF
task_categories:
- object-detection
- image-segmentation
task_ids:
- instance-segmentation
---
# Dataset Card for DocLayNet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground truth using bounding boxes for 11 distinct class labels on 80,863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold standard in layout segmentation through human recognition and interpretation of each page layout.
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts, and Manuals.
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and an upper bound on the prediction accuracy achievable with ML models.
5. *Pre-defined train, test, and validation sets*: DocLayNet provides fixed splits to ensure proportional representation of the class labels and to avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
  "doc_category": "financial_reports", // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
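As a sketch of how these records could be consumed in Python (the standard COCO layout with an `images` list is assumed, and `pages_for_category` is an illustrative helper, not part of the dataset):

```python
# Minimal sketch: filter DocLayNet COCO image records by the custom
# doc_category field shown above. The "images" list layout follows the
# standard COCO format; this helper is illustrative, not an official API.
def pages_for_category(coco, doc_category):
    """Return image records belonging to one high-level document category."""
    return [img for img in coco["images"] if img.get("doc_category") == doc_category]

# A tiny in-memory stand-in for the real annotation file
coco = {
    "images": [
        {"id": 1, "doc_category": "financial_reports", "page_no": 9},
        {"id": 2, "doc_category": "patents", "page_no": 1},
    ]
}
financial = pages_for_category(coco, "financial_reports")
print([img["id"] for img in financial])  # -> [1]
```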
### Data Splits
The dataset provides three splits:
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
  title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
  doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages = {3743--3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
|
nasa-cisto-data-science-group/modis-lake-powell-toy-dataset | 2023-05-04T01:39:33.000Z | [
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] | nasa-cisto-data-science-group | null | null | null | 0 | 232 | ---
license: apache-2.0
size_categories:
- n<1K
---
# MODIS Water Lake Powell Toy Dataset
### Dataset Summary
Tabular dataset comprising MODIS surface reflectance bands along with calculated indices and a label (water/not-water).
## Dataset Structure
### Data Fields
- `water`: Label, water or not-water (binary)
- `sur_refl_b01_1`: MODIS surface reflectance band 1 (-100, 16000)
- `sur_refl_b02_1`: MODIS surface reflectance band 2 (-100, 16000)
- `sur_refl_b03_1`: MODIS surface reflectance band 3 (-100, 16000)
- `sur_refl_b04_1`: MODIS surface reflectance band 4 (-100, 16000)
- `sur_refl_b05_1`: MODIS surface reflectance band 5 (-100, 16000)
- `sur_refl_b06_1`: MODIS surface reflectance band 6 (-100, 16000)
- `sur_refl_b07_1`: MODIS surface reflectance band 7 (-100, 16000)
- `ndvi`: Normalized difference vegetation index (-20000, 20000)
- `ndwi1`: Normalized difference water index 1 (-20000, 20000)
- `ndwi2`: Normalized difference water index 2 (-20000, 20000)
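As an illustration of how such an index relates to the reflectance bands, a hedged sketch of an NDVI-style computation (MODIS band 1 is red and band 2 is near-infrared; the x10,000 integer scaling is our assumption to fit the stated value ranges, not a documented property of this dataset):

```python
def scaled_ndvi(sur_refl_b01_1, sur_refl_b02_1, scale=10000):
    """NDVI = (NIR - red) / (NIR + red), scaled to an integer range.

    Band 1 is red, band 2 is near-infrared; the scale factor is assumed.
    """
    red, nir = sur_refl_b01_1, sur_refl_b02_1
    return int(scale * (nir - red) / (nir + red))

print(scaled_ndvi(1000, 3000))  # -> 5000 (vegetation-like pixel)
print(scaled_ndvi(3000, 1000))  # -> -5000 (water-like pixel)
```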
### Data Splits
Train and test split. Test is 200 rows, train is 800.
## Dataset Creation
## Source Data
[MODIS MOD44W](https://lpdaac.usgs.gov/products/mod44wv006/)
[MODIS MOD09GA](https://lpdaac.usgs.gov/products/mod09gav006/)
[MODIS MOD09GQ](https://lpdaac.usgs.gov/products/mod09gqv006/)
## Annotation process
Labels were created by using the MOD44W C6 product to designate pixels in MODIS surface reflectance products as land or water. |
maximuslee07/raqna | 2023-09-25T21:31:40.000Z | [
"region:us"
] | maximuslee07 | null | null | null | 0 | 232 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 85774
num_examples: 100
download_size: 53483
dataset_size: 85774
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "raqna"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
webis/conclugen | 2022-05-03T06:18:33.000Z | [
"region:us"
] | webis | The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.
The corpus has three variants: aspects, topics, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions. | @inproceedings{syed:2021,
author = {Shahbaz Syed and
Khalid Al Khatib and
Milad Alshomary and
Henning Wachsmuth and
Martin Potthast},
editor = {Chengqing Zong and
Fei Xia and
Wenjie Li and
Roberto Navigli},
title = {Generating Informative Conclusions for Argumentative Texts},
booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP}
2021, Online Event, August 1-6, 2021},
pages = {3482--3493},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://doi.org/10.18653/v1/2021.findings-acl.306},
doi = {10.18653/v1/2021.findings-acl.306}
} | null | 1 | 231 | # Dataset Card for ConcluGen
## Table of Contents
- [Dataset Card for ConcluGen](#dataset-card-for-conclugen)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4818134
- **Repository:** https://github.com/webis-de/acl21-informative-conclusion-generation
- **Paper:** [Generating Informative Conclusions for Argumentative Texts](https://aclanthology.org/2021.findings-acl.306.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Shahbaz Syed](mailto:shahbaz.syed@uni-leipzig.de)
### Dataset Summary
The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.
The corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.
### Supported Tasks and Leaderboards
Argument Summarization, Conclusion Generation
### Languages
English ('en') as spoken by Reddit users on the [r/changemyview](https://old.reddit.com/r/changemyview/) subreddit.
## Dataset Structure
### Data Instances
An example consists of a unique 'id', an 'argument', and its 'conclusion'.
**base**
Contains only the argument and its conclusion.
```
{'id': 'ee11c116-23df-4795-856e-8b6c6626d5ed',
'argument': "In my opinion, the world would be a better place if alcohol was illegal. I've done a little bit of research to get some numbers, and I was quite shocked at what I found. Source On average, one in three people will be involved in a drunk driving crash in their lifetime. In 2011, 9,878 people died in drunk driving crashes Drunk driving costs each adult in this country almost 500 per year. Drunk driving costs the United States 132 billion a year. Every day in America, another 27 people die as a result of drunk driving crashes. Almost every 90 seconds, a person is injured in a drunk driving crash. These are just the driving related statistics. They would each get reduced by at least 75 if the sale of alcohol was illegal. I just don't see enough positives to outweigh all the deaths and injuries that result from irresponsible drinking. Alcohol is quite literally a drug, and is also extremely addicting. It would already be illegal if not for all these pointless ties with culture. Most people wouldn't even think to live in a world without alcohol, but in my opinion that world would be a better, safer, and more productive one. , or at least defend the fact that it's legal.",
'conclusion': 'I think alcohol should be illegal.'}
```
**topic**
Argument encoded with the discussion topic.
```
{"id":"b22272fd-00d2-4373-b46c-9c1d9d21e6c2","argument":"<|TOPIC|>Should Planned Parenthood Be Defunded?<|ARGUMENT|>Even the best contraceptive methods such as surgical sterilisation can fail, and even with perfect use the pill may not work.<|CONCLUSION|>","conclusion":"Even with the best intentions and preparation, contraceptives can and do fail."}
```
**aspects**
Argument encoded with the discussion topic and argument's aspects.
```
{"id":"adc92826-7892-42d4-9405-855e845bf027","argument":"<|TOPIC|>Gender Neutral Bathrooms: Should They be Standard?<|ARGUMENT|>Men's toilets and women's urine have different odours due to hormone differences in each biological sex. As a result, the urine of one sex may smell much worse to the other sex and vice versa, meaning that it is logical to keep their toilet facilities separate.<|ASPECTS|>hormone differences, urine, separate, facilities, different odours, smell much worse<|CONCLUSION|>","conclusion":"Men and women, because of their different biological characteristics, each need a different type of bathroom. Gender-segregated bathrooms reflect and honour these differences."}
```
**targets**
Argument encoded with the discussion topic and possible conclusion targets.
```
{"id":"c9a87a03-edda-42be-9c0d-1e7d2d311816","argument":"<|TOPIC|>Australian republic vs. monarchy<|ARGUMENT|>The monarchy is a direct reflection of Australia's past as a British colony and continues to symbolize Australia's subservience to the British crown. Such symbolism has a powerfully negative effect on Australians' sense of independence and identity. Ending the monarchy and establishing a republic would constitute a substantial stride in the direction of creating a greater sense of independence and national pride and identity.<|TARGETS|>Such symbolism, The monarchy, Ending the monarchy and establishing a republic<|CONCLUSION|>","conclusion":"Ending the monarchy would foster an independent identity in Australia"}
```
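The control-code inputs above can be assembled with simple string formatting; here is a minimal sketch following the token layout shown in the examples (`encode_argument` is our own illustrative helper, not part of the released corpus):

```python
def encode_argument(topic, argument, aspects=None, targets=None):
    """Build a ConcluGen-style model input from its parts.

    Follows the control codes shown in the examples above; illustrative only.
    """
    text = f"<|TOPIC|>{topic}<|ARGUMENT|>{argument}"
    if aspects:
        text += "<|ASPECTS|>" + ", ".join(aspects)
    if targets:
        text += "<|TARGETS|>" + ", ".join(targets)
    return text + "<|CONCLUSION|>"

encoded = encode_argument(
    "Australian republic vs. monarchy",
    "The monarchy is a direct reflection of Australia's past...",
    targets=["Such symbolism", "The monarchy"],
)
```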
### Data Fields
- `id`: a string identifier for each example.
- `argument`: the argumentative text.
- `conclusion`: the conclusion of the argumentative text.
### Data Splits
The data is split into train, validation, and test splits for each variation of the dataset (including base).
| | Train | Validation | Test |
|--------- |--------- |------------ |------ |
| Base    | 116,922 | 12,224 | 1,373 |
| Aspects | 120,142 | 12,174 | 1,357 |
| Targets | 109,376 | 11,053 | 1,237 |
| Topic   | 121,588 | 12,335 | 1,372 |
## Dataset Creation
### Curation Rationale
ConcluGen was built as a first step towards argument summarization technology. The [rules of the subreddit](https://old.reddit.com/r/changemyview/wiki/rules) ensure high quality data suitable for the task.
### Source Data
#### Initial Data Collection and Normalization
Reddit [ChangeMyView](https://old.reddit.com/r/changemyview/)
#### Who are the source language producers?
Users of the subreddit [r/changemyview](https://old.reddit.com/r/changemyview/). Further demographic information is unavailable from the data source.
### Annotations
The dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Only the argumentative text and its conclusion are provided. No personal information of the posters is included.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{syed:2021,
author = {Shahbaz Syed and
Khalid Al Khatib and
Milad Alshomary and
Henning Wachsmuth and
Martin Potthast},
editor = {Chengqing Zong and
Fei Xia and
Wenjie Li and
Roberto Navigli},
title = {Generating Informative Conclusions for Argumentative Texts},
booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP}
2021, Online Event, August 1-6, 2021},
pages = {3482--3493},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://doi.org/10.18653/v1/2021.findings-acl.306},
doi = {10.18653/v1/2021.findings-acl.306}
}
```
|
allenai/multi_lexsum | 2023-05-18T21:41:22.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:odc-by",
"arxiv:2206.10883",
"region:us"
] | allenai | Multi-LexSum is a multi-doc summarization dataset for civil rights litigation lawsuits with summaries of three granularities. | @article{Shen2022MultiLexSum,
author = {Zejiang Shen and
Kyle Lo and
Lauren Yu and
Nathan Dahlberg and
Margo Schlanger and
Doug Downey},
title = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
journal = {CoRR},
volume = {abs/2206.10883},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2206.10883},
doi = {10.48550/arXiv.2206.10883}
} | null | 9 | 231 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- odc-by
multilinguality:
- monolingual
pretty_name: Multi-LexSum
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- summarization
task_ids: []
---
# Dataset Card for Multi-LexSum
## Table of Contents
- [Dataset Card for Multi-LexSum](#dataset-card-for-multi-lexsum)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset](#dataset)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Sheet (Datasheet)](#dataset-sheet-datasheet)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Release History](#release-history)
## Dataset Description
- **Homepage:** https://multilexsum.github.io
- **Repository:** https://github.com/multilexsum/dataset
- **Paper:** https://arxiv.org/abs/2206.10883
<p>
<a href="https://multilexsum.github.io" style="display: inline-block;">
<img src="https://img.shields.io/badge/-homepage-informational.svg?logo=jekyll" title="Multi-LexSum Paper" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
<a href="https://github.com/multilexsum/dataset" style="display: inline-block;">
<img src="https://img.shields.io/badge/-multilexsum-lightgrey.svg?logo=github" title="Multi-LexSum Github Repo" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
<a href="https://arxiv.org/abs/2206.10883" style="display: inline-block;">
<img src="https://img.shields.io/badge/NeurIPS-2022-9cf" title="Multi-LexSum is accepted in NeurIPS 2022" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
</p>
### Talk @ NeurIPS 2022
[](https://youtu.be/C-fwW_ZhkE8)
### Dataset Summary
The Multi-LexSum dataset is a collection of 9,280 legal case summaries. Multi-LexSum is distinct from other datasets in its **multiple target summaries, each at a different granularity** (ranging from one-sentence “extreme” summaries to multi-paragraph narrations of over five hundred words). It presents a challenging multi-document summarization task given **the long length of the source documents**, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of **expert-authored summaries**: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.
### Languages
English
## Dataset
### Data Fields
The dataset contains a list of instances (cases); each instance contains the following data:
| Field | Description |
| ------------: | -------------------------------------------------------------------------------: |
| id | `(str)` The case ID |
| sources | `(List[str])` A list of strings for the text extracted from the source documents |
| summary/long | `(str)` The long (multi-paragraph) summary for this case |
| summary/short | `(Optional[str])` The short (one-paragraph) summary for this case |
| summary/tiny | `(Optional[str])` The tiny (one-sentence) summary for this case |
Please check the exemplar usage below for loading the data:
```python
from datasets import load_dataset
multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20230518")
# Download multi_lexsum locally and load it as a Dataset object
example = multi_lexsum["validation"][0] # The first instance of the dev set
example["sources"] # A list of source document text for the case
for sum_len in ["long", "short", "tiny"]:
print(example["summary/" + sum_len]) # Summaries of three lengths
print(example['case_metadata']) # The corresponding metadata for a case in a dict
```
### Data Splits
| | Instances | Source Documents (D) | Long Summaries (L) | Short Summaries (S) | Tiny Summaries (T) | Total Summaries |
| ----------: | --------: | -------------------: | -----------------: | ------------------: | -----------------: | --------------: |
| Train (70%) | 3,177 | 28,557 | 3,177 | 2,210 | 1,130 | 6,517 |
| Test (20%) | 908 | 7,428 | 908 | 616 | 312 | 1,836 |
| Dev (10%) | 454 | 4,134 | 454 | 312 | 161 | 927 |
## Dataset Sheet (Datasheet)
Please check our [dataset sheet](https://multilexsum.github.io/datasheet) for details regarding dataset creation, source data, annotation, and considerations for use.
## Additional Information
### Dataset Curators
The dataset was created through a collaboration between the Civil Rights Litigation Clearinghouse (CRLC, University of Michigan) and the Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.
### Licensing Information
The Multi-LexSum dataset is distributed under the [Open Data Commons Attribution License (ODC-By)](https://opendatacommons.org/licenses/by/1-0/).
The case summaries and metadata are licensed under the [Creative Commons Attribution-NonCommercial License (CC BY-NC)](https://creativecommons.org/licenses/by-nc/4.0/), and the source documents are already in the public domain.
Commercial users who desire a license for summaries and metadata can contact [info@clearinghouse.net](mailto:info@clearinghouse.net), which will allow free use but limit summary re-posting.
The corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.
### Citation Information
```
@article{Shen2022MultiLexSum,
author = {Zejiang Shen and
Kyle Lo and
Lauren Yu and
Nathan Dahlberg and
Margo Schlanger and
Doug Downey},
title = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
journal = {CoRR},
volume = {abs/2206.10883},
  year      = {2022},
url = {https://doi.org/10.48550/arXiv.2206.10883},
doi = {10.48550/arXiv.2206.10883}
}
```
## Release History
| Version | Description |
| ----------: | -----------------------------------------------------------: |
| `v20230518` | The v1.1 release including case and source document metadata |
| `v20220616` | The initial v1.0 release | |
result-kand2-sdxl-wuerst-karlo/b776d96a | 2023-10-01T00:53:01.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 231 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 173
num_examples: 10
download_size: 1318
dataset_size: 173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b776d96a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/39ceeb6b | 2023-10-01T00:53:02.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 231 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 173
num_examples: 10
download_size: 1318
dataset_size: 173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "39ceeb6b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pvduy/rm_oa_hh | 2023-06-13T16:39:03.000Z | [
"region:us"
] | pvduy | null | null | null | 1 | 230 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: test
num_bytes: 11065628
num_examples: 8524
- name: train
num_bytes: 220101381
num_examples: 166750
download_size: 135525253
dataset_size: 231167009
---
# Dataset Card for "rm_oa_hh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/c4_biomedical_2 | 2023-09-12T03:10:56.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 230 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3516783.122
num_examples: 989
download_size: 2179356
dataset_size: 3516783.122
---
# Dataset Card for "c4_biomedical_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
midas/inspec | 2022-03-05T03:08:37.000Z | [
"arxiv:1910.08840",
"region:us"
] | midas | Benchmark dataset for automatic identification of keyphrases from text published with the work - Improved automatic keyword extraction given more linguistic knowledge. Anette Hulth. In Proceedings of EMNLP 2003. p. 216-223. | @inproceedings{hulth2003improved,
title={Improved automatic keyword extraction given more linguistic knowledge},
author={Hulth, Anette},
booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
pages={216--223},
year={2003}
} | null | 7 | 229 | A dataset for benchmarking keyphrase extraction and generation techniques from abstracts of English scientific papers. For more details about the dataset please refer the original paper - [https://dl.acm.org/doi/pdf/10.3115/1119355.1119383](https://dl.acm.org/doi/pdf/10.3115/1119355.1119383).
Data source - [https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec](https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec)
## Dataset Summary
The Inspec dataset was originally proposed by *Hulth* in 2003 in the paper [Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028.pdf). The dataset consists of abstracts of 2,000 English scientific papers from the [Inspec database](https://clarivate.com/webofsciencegroup/solutions/webofscience-inspec/). The abstracts come from papers in the scientific domains of *Computers and Control* and *Information Technology* published between 1998 and 2002. Each abstract has two sets of keyphrases annotated by professional indexers - *controlled* and *uncontrolled*. The *controlled* keyphrases are drawn from the Inspec thesaurus and are therefore often absent from the abstract's text: only 18.1% of them actually appear in it. The *uncontrolled* keyphrases are those selected by the indexers after reading the full-length scientific articles, and 76.2% of them are present in the abstract's text. The original paper does not describe how these 2,000 papers were selected, so it is unknown whether they were sampled randomly from all papers published in these domains between 1998 and 2002 or whether they were the only papers in this domain indexed by Inspec. The train, dev and test splits of the data were chosen arbitrarily.
One of the key aspects that makes this dataset unique is that it provides keyphrases assigned by professional indexers, which is uncommon in the keyphrase literature; most datasets in this domain use author-assigned keyphrases as the ground truth. The dataset shared here does not explicitly present the *controlled* and *uncontrolled* keyphrases; instead, it categorizes the keyphrases into *extractive* and *abstractive*. **Extractive keyphrases** are those that can be found in the input text, and **abstractive keyphrases** are those that are not present in the input text. To get all the metadata about the documents and keyphrases, please refer to the [original source](https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec) from which this dataset was derived. The main motivation behind making the dataset available in this form is to make it easy for researchers to programmatically download it and evaluate their models on the tasks of keyphrase extraction and generation. As treating keyphrase extraction as a sequence tagging task with contextual language models has become popular - [Keyphrase extraction from scholarly articles as sequence labeling using contextualized embeddings](https://arxiv.org/pdf/1910.08840.pdf) - we have also made the token tags available in the BIO tagging format.
## Dataset Structure
## Dataset Statistics
Table 1: Statistics on the length of the abstractive keyphrases for Train, Test, and Validation splits of Inspec dataset.
| | Train | Test | Validation |
|:---------------:|:-----------------------:|:----------------------:|:------------------------:|
| Single word | 9.0% | 9.5% | 10.1% |
| Two words | 50.4% | 48.2% | 45.7% |
| Three words | 27.6% | 28.6% | 29.8% |
| Four words | 9.3% | 10.3% | 10.3% |
| Five words | 2.4% | 2.0% | 3.2% |
| Six words | 0.9% | 1.2% | 0.7% |
| Seven words | 0.3% | 0.2% | 0.2% |
| Eight words | 0.1% | 0% | 0.1% |
| Nine words | 0% | 0.1% | 0% |
Table 2: Statistics on the length of the extractive keyphrases for Train, Test, and Validation splits of Inspec dataset.
| | Train | Test | Validation |
|:------------:|:-----------------------:|:----------------------:|:------------------------:|
| Single word | 16.2% | 15.4% | 17.0% |
| Two words | 52.4% | 54.8% | 51.6% |
| Three words | 24.3% | 22.99% | 24.3% |
| Four words | 5.6% | 4.96% | 5.8% |
| Five words | 1.2% | 1.3% | 1.1% |
| Six words | 0.2% | 0.36% | 0.2% |
| Seven words | 0.1% | 0.06% | 0.1% |
| Eight words | 0% | 0% | 0.03% |
Table 3: General statistics of the Inspec dataset.
| Type of Analysis | Train | Test | Validation |
|:----------------------------------------------:|:------------------------------:|:------------------------------:|:------------------------------:|
| Annotator Type | Professional Indexers | Professional Indexers | Professional Indexers |
| Document Type | Abstracts from Inspec Database | Abstracts from Inspec Database | Abstracts from Inspec Database |
| No. of Documents | 1000 | 500 | 500 |
| Avg. Document length (words) | 141.5 | 134.6 | 132.6 |
| Max Document length (words) | 557 | 384 | 330 |
| Max no. of abstractive keyphrases in a document | 17 | 20 | 14 |
| Min no. of abstractive keyphrases in a document | 0 | 0 | 0 |
| Avg. no. of abstractive keyphrases per document | 3.39 | 3.26 | 3.12 |
| Max no. of extractive keyphrases in a document | 24 | 27 | 22 |
| Min no. of extractive keyphrases in a document | 0 | 0 | 0 |
| Avg. no. of extractive keyphrases per document | 6.39 | 6.56 | 5.95 |
- Percentage of keyphrases that are named entities: 55.25% (named entities detected using scispacy - en-core-sci-lg model)
- Percentage of keyphrases that are noun phrases: 73.59% (noun phrases detected using spacy after removing determiners)
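The length breakdowns in Tables 1 and 2 above can be approximated by counting whitespace-separated words per keyphrase. A small sketch of that computation over an arbitrary list of keyphrases (the sample list is illustrative, not a dataset statistic):

```python
from collections import Counter

def length_distribution(keyphrases):
    """Percentage of keyphrases with each word count, keyed by word count."""
    counts = Counter(len(kp.split()) for kp in keyphrases)
    total = sum(counts.values())
    return {n: round(100 * c / total, 1) for n, c in sorted(counts.items())}

print(length_distribution(["ibs", "content atomism", "philosophy of mind", "cognitive states"]))
# {1: 25.0, 2: 50.0, 3: 25.0}
```

Running this over the `extractive_keyphrases` and `abstractive_keyphrases` columns of each split should reproduce the tables up to rounding.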
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
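Because the extractive annotations come as token-level BIO tags, recovering the keyphrase strings from `document` and `doc_bio_tags` takes a small decoding step. A sketch (not part of the official loader) that groups `B`/`I` tokens back into phrases:

```python
def bio_to_phrases(tokens, tags):
    """Group B/I-tagged tokens into keyphrase strings (O tokens are skipped)."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # "O", or a stray "I" with no open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

print(bio_to_phrases(
    ["contemporary", "philosophy", "of", "mind", ":", "the"],
    ["O", "B", "I", "I", "O", "O"],
))  # ['philosophy of mind']
```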
### Data Splits
|Split| No. of datapoints |
|--|--|
| Train | 1,000 |
| Test | 500 |
| Validation | 500 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/inspec", "raw")
# sample from the train split
print("Sample from training dataset split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation dataset split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from training data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['A', 'conflict', 'between', 'language', 'and', 'atomistic', 'information', 'Fred', 'Dretske', 'and', 'Jerry', 'Fodor', 'are', 'responsible', 'for', 'popularizing', 'three', 'well-known', 'theses', 'in', 'contemporary', 'philosophy', 'of', 'mind', ':', 'the', 'thesis', 'of', 'Information-Based', 'Semantics', '-LRB-', 'IBS', '-RRB-', ',', 'the', 'thesis', 'of', 'Content', 'Atomism', '-LRB-', 'Atomism', '-RRB-', 'and', 'the', 'thesis', 'of', 'the', 'Language', 'of', 'Thought', '-LRB-', 'LOT', '-RRB-', '.', 'LOT', 'concerns', 'the', 'semantically', 'relevant', 'structure', 'of', 'representations', 'involved', 'in', 'cognitive', 'states', 'such', 'as', 'beliefs', 'and', 'desires', '.', 'It', 'maintains', 'that', 'all', 'such', 'representations', 'must', 'have', 'syntactic', 'structures', 'mirroring', 'the', 'structure', 'of', 'their', 'contents', '.', 'IBS', 'is', 'a', 'thesis', 'about', 'the', 'nature', 'of', 'the', 'relations', 'that', 'connect', 'cognitive', 'representations', 'and', 'their', 'parts', 'to', 'their', 'contents', '-LRB-', 'semantic', 'relations', '-RRB-', '.', 'It', 'holds', 'that', 'these', 'relations', 'supervene', 'solely', 'on', 'relations', 'of', 'the', 'kind', 'that', 'support', 'information', 'content', ',', 'perhaps', 'with', 'some', 'help', 'from', 'logical', 'principles', 'of', 'combination', '.', 'Atomism', 'is', 'a', 'thesis', 'about', 'the', 'nature', 'of', 'the', 'content', 'of', 'simple', 'symbols', '.', 'It', 'holds', 'that', 'each', 'substantive', 'simple', 'symbol', 'possesses', 'its', 'content', 'independently', 'of', 'all', 'other', 'symbols', 'in', 'the', 'representational', 'system', '.', 'I', 'argue', 'that', 'Dretske', "'s", 'and', 'Fodor', "'s", 'theories', 'are', 'false', 'and', 'that', 'their', 'falsehood', 'results', 'from', 'a', 'conflict', 'IBS', 'and', 'Atomism', ',', 'on', 'the', 'one', 'hand', ',', 'and', 'LOT', ',', 'on', 'the', 'other']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['philosophy of mind', 'content atomism', 'ibs', 'language of thought', 'lot', 'cognitive states', 'beliefs', 'desires']
Abstractive/absent Keyphrases: ['information-based semantics']
-----------
Sample from validation data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Impact', 'of', 'aviation', 'highway-in-the-sky', 'displays', 'on', 'pilot', 'situation', 'awareness', 'Thirty-six', 'pilots', '-LRB-', '31', 'men', ',', '5', 'women', '-RRB-', 'were', 'tested', 'in', 'a', 'flight', 'simulator', 'on', 'their', 'ability', 'to', 'intercept', 'a', 'pathway', 'depicted', 'on', 'a', 'highway-in-the-sky', '-LRB-', 'HITS', '-RRB-', 'display', '.', 'While', 'intercepting', 'and', 'flying', 'the', 'pathway', ',', 'pilots', 'were', 'required', 'to', 'watch', 'for', 'traffic', 'outside', 'the', 'cockpit', '.', 'Additionally', ',', 'pilots', 'were', 'tested', 'on', 'their', 'awareness', 'of', 'speed', ',', 'altitude', ',', 'and', 'heading', 'during', 'the', 'flight', '.', 'Results', 'indicated', 'that', 'the', 'presence', 'of', 'a', 'flight', 'guidance', 'cue', 'significantly', 'improved', 'flight', 'path', 'awareness', 'while', 'intercepting', 'the', 'pathway', ',', 'but', 'significant', 'practice', 'effects', 'suggest', 'that', 'a', 'guidance', 'cue', 'might', 'be', 'unnecessary', 'if', 'pilots', 'are', 'given', 'proper', 'training', '.', 'The', 'amount', 'of', 'time', 'spent', 'looking', 'outside', 'the', 'cockpit', 'while', 'using', 'the', 'HITS', 'display', 'was', 'significantly', 'less', 'than', 'when', 'using', 'conventional', 'aircraft', 'instruments', '.', 'Additionally', ',', 'awareness', 'of', 'flight', 'information', 'present', 'on', 'the', 'HITS', 'display', 'was', 'poor', '.', 'Actual', 'or', 'potential', 'applications', 'of', 'this', 'research', 'include', 'guidance', 'for', 'the', 'development', 'of', 'perspective', 'flight', 'display', 'standards', 'and', 'as', 'a', 'basis', 'for', 'flight', 'training', 'requirements']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['flight simulator', 'pilots', 'cockpit', 'flight guidance', 'situation awareness', 'flight path awareness']
Abstractive/absent Keyphrases: ['highway-in-the-sky display', 'human factors', 'aircraft display']
-----------
Sample from test data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['A', 'new', 'graphical', 'user', 'interface', 'for', 'fast', 'construction', 'of', 'computation', 'phantoms', 'and', 'MCNP', 'calculations', ':', 'application', 'to', 'calibration', 'of', 'in', 'vivo', 'measurement', 'systems', 'Reports', 'on', 'a', 'new', 'utility', 'for', 'development', 'of', 'computational', 'phantoms', 'for', 'Monte', 'Carlo', 'calculations', 'and', 'data', 'analysis', 'for', 'in', 'vivo', 'measurements', 'of', 'radionuclides', 'deposited', 'in', 'tissues', '.', 'The', 'individual', 'properties', 'of', 'each', 'worker', 'can', 'be', 'acquired', 'for', 'a', 'rather', 'precise', 'geometric', 'representation', 'of', 'his', '-LRB-', 'her', '-RRB-', 'anatomy', ',', 'which', 'is', 'particularly', 'important', 'for', 'low', 'energy', 'gamma', 'ray', 'emitting', 'sources', 'such', 'as', 'thorium', ',', 'uranium', ',', 'plutonium', 'and', 'other', 'actinides', '.', 'The', 'software', 'enables', 'automatic', 'creation', 'of', 'an', 'MCNP', 'input', 'data', 'file', 'based', 'on', 'scanning', 'data', '.', 'The', 'utility', 'includes', 'segmentation', 'of', 'images', 'obtained', 'with', 'either', 'computed', 'tomography', 'or', 'magnetic', 'resonance', 'imaging', 'by', 'distinguishing', 'tissues', 'according', 'to', 'their', 'signal', '-LRB-', 'brightness', '-RRB-', 'and', 'specification', 'of', 'the', 'source', 'and', 'detector', '.', 'In', 'addition', ',', 'a', 'coupling', 'of', 'individual', 'voxels', 'within', 'the', 'tissue', 'is', 'used', 'to', 'reduce', 'the', 'memory', 'demand', 'and', 'to', 'increase', 'the', 'calculational', 'speed', '.', 'The', 'utility', 'was', 'tested', 'for', 'low', 'energy', 'emitters', 'in', 'plastic', 'and', 'biological', 'tissues', 'as', 'well', 'as', 'for', 'computed', 'tomography', 'and', 'magnetic', 'resonance', 'imaging', 'scanning', 'information']
Document BIO Tags: ['O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'I', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'I', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'I', 'I', 'I']
Extractive/present Keyphrases: ['computational phantoms', 'monte carlo calculations', 'in vivo measurements', 'radionuclides', 'tissues', 'worker', 'precise geometric representation', 'mcnp input data file', 'scanning data', 'computed tomography', 'brightness', 'graphical user interface', 'computation phantoms', 'calibration', 'in vivo measurement systems', 'signal', 'detector', 'individual voxels', 'memory demand', 'calculational speed', 'plastic', 'magnetic resonance imaging scanning information', 'anatomy', 'low energy gamma ray emitting sources', 'actinides', 'software', 'automatic creation']
Abstractive/absent Keyphrases: ['th', 'u', 'pu', 'biological tissues']
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/inspec", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/inspec", "generation")
print("Samples for Keyphrase Generation")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
Please cite the works below if you use this dataset in your work.
```
@inproceedings{hulth2003improved,
title={Improved automatic keyword extraction given more linguistic knowledge},
author={Hulth, Anette},
booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
pages={216--223},
year={2003}
}
```
and
```
@InProceedings{10.1007/978-3-030-45442-5_41,
author="Sahrawat, Dhruva
and Mahata, Debanjan
and Zhang, Haimin
and Kulkarni, Mayank
and Sharma, Agniv
and Gosangi, Rakesh
and Stent, Amanda
and Kumar, Yaman
and Shah, Rajiv Ratn
and Zimmermann, Roger",
editor="Jose, Joemon M.
and Yilmaz, Emine
and Magalh{\~a}es, Jo{\~a}o
and Castells, Pablo
and Ferro, Nicola
and Silva, M{\'a}rio J.
and Martins, Fl{\'a}vio",
title="Keyphrase Extraction as Sequence Labeling Using Contextualized Embeddings",
booktitle="Advances in Information Retrieval",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="328--335",
abstract="In this paper, we formulate keyphrase extraction from scholarly articles as a sequence labeling task solved using a BiLSTM-CRF, where the words in the input text are represented using deep contextualized embeddings. We evaluate the proposed architecture using both contextualized and fixed word embedding models on three different benchmark datasets, and compare with existing popular unsupervised and supervised techniques. Our results quantify the benefits of: (a) using contextualized embeddings over fixed word embeddings; (b) using a BiLSTM-CRF architecture with contextualized word embeddings over fine-tuning the contextualized embedding model directly; and (c) using domain-specific contextualized embeddings (SciBERT). Through error analysis, we also provide some insights into why particular models work better than the others. Lastly, we present a case study where we analyze different self-attention layers of the two best models (BERT and SciBERT) to better understand their predictions.",
isbn="978-3-030-45442-5"
}
```
and
```
@article{kulkarni2021learning,
title={Learning Rich Representation of Keyphrases from Text},
author={Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi},
journal={arXiv preprint arXiv:2112.08547},
year={2021}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset
|
sbu_captions | 2023-06-02T20:56:01.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The SBU Captioned Photo Dataset is a collection of over 1 million images with associated text descriptions extracted from Flickr. | @inproceedings{NIPS2011_5dd9db5e,
author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
booktitle = {Advances in Neural Information Processing Systems},
editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
volume = {24},
year = {2011}
} | null | 9 | 229 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: sbu-captions-dataset
pretty_name: SBU Captioned Photo Dataset
dataset_info:
features:
- name: image_url
dtype: string
- name: user_id
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 143795586
num_examples: 1000000
download_size: 49787719
dataset_size: 143795586
---
# Dataset Card for SBU Captioned Photo Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.rice.edu/~vo9/sbucaptions/
- **Repository:**
- **Paper:** [Im2Text: Describing Images Using 1 Million Captioned Photographs](https://papers.nips.cc/paper/2011/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html)
- **Leaderboard:**
- **Point of Contact:** [Vicente Ordóñez Román](mailto:vicenteor@rice.edu)
### Dataset Summary
SBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Fetch a batch of images concurrently and store them in a new "image" column.
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("sbu_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-to-text`: This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:
```
{
'image_url': 'http://static.flickr.com/2723/4385058960_b0f291553e.jpg',
'user_id': '47889917@N08',
'caption': 'A wooden chair in the living room'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `user_id`: Author of caption.
### Data Splits
All the data is contained in the training split, which has 1M instances.
## Dataset Creation
### Curation Rationale
From the paper:
> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually
relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.
### Source Data
The source images come from Flickr.
#### Initial Data Collection and Normalization
From the paper:
> One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text. To enable effective captioning of novel images, this database must be good in two ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The captions associated with the data base photographs must be visually relevant so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy initial set of photographs with associated text.
#### Who are the source language producers?
The Flickr users.
### Annotations
#### Annotation process
Text descriptions associated with the images are inherited as annotations/captions.
#### Who are the annotators?
The Flickr users.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.
### Licensing Information
Not specified.
### Citation Information
```bibtex
@inproceedings{NIPS2011_5dd9db5e,
author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
booktitle = {Advances in Neural Information Processing Systems},
editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
volume = {24},
year = {2011}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset |
pauri32/fiqa-2018 | 2023-05-31T15:43:26.000Z | [
"region:us"
] | pauri32 | null | null | null | 3 | 229 | Entry not found |
Universal-NER/Pile-NER-type | 2023-08-07T17:07:30.000Z | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | Universal-NER | null | null | null | 5 | 229 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Intro
Pile-NER-type is a set of GPT-generated data for named entity recognition using the type-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our [project page](https://universal-ner.github.io/) for more information.
# License
Attribution-NonCommercial 4.0 International |
loubnabnl/code_reviews_500k | 2023-09-20T14:03:43.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 229 | ---
dataset_info:
features:
- name: bucket
dtype: string
- name: pull_request_info
struct:
- name: org.id
dtype: int64
- name: public
dtype: bool
- name: pull_request.additions
dtype: int64
- name: pull_request.base.user.type
dtype: string
- name: pull_request.body
dtype: string
- name: pull_request.changed_files
dtype: int64
- name: pull_request.closed_at
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.commits
dtype: int64
- name: pull_request.created_at
dtype: string
- name: pull_request.deletions
dtype: int64
- name: pull_request.guid
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: pull_request.id
dtype: int64
- name: pull_request.merged_at
dtype: string
- name: pull_request.merged_by.login
dtype: string
- name: pull_request.milestone.description
dtype: string
- name: pull_request.milestone.number
dtype: int64
- name: pull_request.milestone.title
dtype: string
- name: pull_request.number
dtype: float64
- name: pull_request.review_comments
dtype: int64
- name: pull_request.state
dtype: string
- name: pull_request.title
dtype: string
- name: pull_request.user.id
dtype: int64
- name: pull_request.user.login
dtype: string
- name: repo.id
dtype: int64
- name: repo.name
dtype: string
- name: head_repo_info
struct:
- name: pull_request.head.label
dtype: string
- name: pull_request.head.ref
dtype: string
- name: pull_request.head.repo.default_branch
dtype: string
- name: pull_request.head.repo.description
dtype: string
- name: pull_request.head.repo.homepage
dtype: string
- name: pull_request.head.repo.language
dtype: string
- name: pull_request.head.repo.license.name
dtype: string
- name: pull_request.head.repo.name
dtype: string
- name: pull_request.head.repo.owner.login
dtype: string
- name: pull_request.head.repo.owner.type
dtype: string
- name: pull_request.head.repo.private
dtype: bool
- name: pull_request.head.repo.stargazers_count
dtype: int64
- name: pull_request.head.sha
dtype: string
- name: pull_request.head.user.login
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: base_repo_info
struct:
- name: pull_request.base.label
dtype: string
- name: pull_request.base.ref
dtype: string
- name: pull_request.base.repo.default_branch
dtype: string
- name: pull_request.base.repo.description
dtype: string
- name: pull_request.base.repo.forks_count
dtype: int64
- name: pull_request.base.repo.homepage
dtype: string
- name: pull_request.base.repo.language
dtype: string
- name: pull_request.base.repo.license.name
dtype: string
- name: pull_request.base.repo.name
dtype: string
- name: pull_request.base.repo.open_issues_count
dtype: int64
- name: pull_request.base.repo.owner.login
dtype: string
- name: pull_request.base.repo.owner.type
dtype: string
- name: pull_request.base.repo.private
dtype: bool
- name: pull_request.base.repo.stargazers_count
dtype: int64
- name: pull_request.base.repo.watchers_count
dtype: int64
- name: pull_request.base.sha
dtype: string
- name: pull_request.base.user.login
dtype: string
- name: pull_request.base.user.type
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.label.name
dtype: 'null'
- name: pull_request.review_comments
dtype: int64
- name: events
list:
- name: action
dtype: string
- name: actor.id
dtype: int64
- name: actor.login
dtype: string
- name: comment.author_association
dtype: string
- name: comment.body
dtype: string
- name: comment.commit_id
dtype: string
- name: comment.created_at
dtype: string
- name: comment.diff_hunk
dtype: string
- name: comment.id
dtype: int64
- name: comment.in_reply_to_id
dtype: int64
- name: comment.line
dtype: int64
- name: comment.original_commit_id
dtype: string
- name: comment.original_line
dtype: int64
- name: comment.original_position
dtype: int64
- name: comment.original_start_line
dtype: int64
- name: comment.path
dtype: string
- name: comment.position
dtype: int64
- name: comment.side
dtype: string
- name: comment.start_line
dtype: int64
- name: comment.start_side
dtype: string
- name: comment.updated_at
dtype: string
- name: created_at
dtype: timestamp[us, tz=UTC]
- name: issue.author
dtype: string
- name: issue.comment
dtype: string
- name: issue.comment_id
dtype: float64
- name: pull_request.merged
dtype: bool
- name: pull_request.merged_by.login
dtype: string
- name: pull_request.merged_by.type
dtype: string
- name: pull_request.state
dtype: string
- name: review.author_association
dtype: string
- name: review.body
dtype: string
- name: review.commit_id
dtype: string
- name: review.id
dtype: int64
- name: review.state
dtype: string
- name: review.submitted_at
dtype: string
- name: type
dtype: string
- name: user.login
dtype: string
- name: user.type
dtype: string
splits:
- name: train
num_bytes: 2814801797
num_examples: 500000
download_size: 856134655
dataset_size: 2814801797
---
# Dataset Card for "code_reviews_500k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wili_2018 | 2023-01-25T15:02:28.000Z | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ace",
"language:af",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arz",
"language:as",
"language:ast",
"language:av",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bho",
"language:bjn",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:chr",
"language:ckb",
"language:co",
"language:crh",
"language:cs",
"language:csb",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:egl",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:frp",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:he",
"language:hi",
"language:hif",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ilo",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:kok",
"language:krc",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lez",
"language:lg",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mai",
"language:map",
"language:mdf",
"language:mg",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nan",
"language:nap",
"language:nb",
"language:nci",
"language:nds",
"language:ne",
"language:new",
"language:nl",
"language:nn",
"language:nrm",
"language:nso",
"language:nv",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pdc",
"language:pfl",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:ro",
"language:roa",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sme",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:tcy",
"language:te",
"language:tet",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tr",
"language:tt",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:zea",
"language:zh",
"license:odbl",
"language-identification",
"arxiv:1801.07779",
"region:us"
] | null | It is a benchmark dataset for language identification and contains 235,000 paragraphs in 235 languages | @dataset{thoma_martin_2018_841984,
author = {Thoma, Martin},
title = {{WiLI-2018 - Wikipedia Language Identification database}},
month = jan,
year = 2018,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.841984},
url = {https://doi.org/10.5281/zenodo.841984}
} | null | 3 | 228 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arz
- as
- ast
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bho
- bjn
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- chr
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dsb
- dty
- dv
- egl
- el
- en
- eo
- es
- et
- eu
- ext
- fa
- fi
- fo
- fr
- frp
- fur
- fy
- ga
- gag
- gd
- gl
- glk
- gn
- gu
- gv
- ha
- hak
- he
- hi
- hif
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ig
- ilo
- io
- is
- it
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kk
- km
- kn
- ko
- koi
- kok
- krc
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lez
- lg
- li
- lij
- lmo
- ln
- lo
- lrc
- lt
- ltg
- lv
- lzh
- mai
- map
- mdf
- mg
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nan
- nap
- nb
- nci
- nds
- ne
- new
- nl
- nn
- nrm
- nso
- nv
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pdc
- pfl
- pl
- pnb
- ps
- pt
- qu
- rm
- ro
- roa
- ru
- rue
- rup
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sgs
- sh
- si
- sk
- sl
- sme
- sn
- so
- sq
- sr
- srn
- stq
- su
- sv
- sw
- szl
- ta
- tcy
- te
- tet
- tg
- th
- tk
- tl
- tn
- to
- tr
- tt
- tyv
- udm
- ug
- uk
- ur
- uz
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xh
- xmf
- yi
- yo
- zea
- zh
license:
- odbl
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: wili-2018
pretty_name: Wili2018
language_bcp47:
- be-tarask
- map-bms
- nds-nl
- roa-tara
- zh-yue
tags:
- language-identification
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': cdo
'1': glk
'2': jam
'3': lug
'4': san
'5': rue
'6': wol
'7': new
'8': mwl
'9': bre
'10': ara
'11': hye
'12': xmf
'13': ext
'14': cor
'15': yor
'16': div
'17': asm
'18': lat
'19': cym
'20': hif
'21': ace
'22': kbd
'23': tgk
'24': rus
'25': nso
'26': mya
'27': msa
'28': ava
'29': cbk
'30': urd
'31': deu
'32': swa
'33': pus
'34': bxr
'35': udm
'36': csb
'37': yid
'38': vro
'39': por
'40': pdc
'41': eng
'42': tha
'43': hat
'44': lmo
'45': pag
'46': jav
'47': chv
'48': nan
'49': sco
'50': kat
'51': bho
'52': bos
'53': kok
'54': oss
'55': mri
'56': fry
'57': cat
'58': azb
'59': kin
'60': hin
'61': sna
'62': dan
'63': egl
'64': mkd
'65': ron
'66': bul
'67': hrv
'68': som
'69': pam
'70': nav
'71': ksh
'72': nci
'73': khm
'74': sgs
'75': srn
'76': bar
'77': cos
'78': ckb
'79': pfl
'80': arz
'81': roa-tara
'82': fra
'83': mai
'84': zh-yue
'85': guj
'86': fin
'87': kir
'88': vol
'89': hau
'90': afr
'91': uig
'92': lao
'93': swe
'94': slv
'95': kor
'96': szl
'97': srp
'98': dty
'99': nrm
'100': dsb
'101': ind
'102': wln
'103': pnb
'104': ukr
'105': bpy
'106': vie
'107': tur
'108': aym
'109': lit
'110': zea
'111': pol
'112': est
'113': scn
'114': vls
'115': stq
'116': gag
'117': grn
'118': kaz
'119': ben
'120': pcd
'121': bjn
'122': krc
'123': amh
'124': diq
'125': ltz
'126': ita
'127': kab
'128': bel
'129': ang
'130': mhr
'131': che
'132': koi
'133': glv
'134': ido
'135': fao
'136': bak
'137': isl
'138': bcl
'139': tet
'140': jpn
'141': kur
'142': map-bms
'143': tyv
'144': olo
'145': arg
'146': ori
'147': lim
'148': tel
'149': lin
'150': roh
'151': sqi
'152': xho
'153': mlg
'154': fas
'155': hbs
'156': tam
'157': aze
'158': lad
'159': nob
'160': sin
'161': gla
'162': nap
'163': snd
'164': ast
'165': mal
'166': mdf
'167': tsn
'168': nds
'169': tgl
'170': nno
'171': sun
'172': lzh
'173': jbo
'174': crh
'175': pap
'176': oci
'177': hak
'178': uzb
'179': zho
'180': hsb
'181': sme
'182': mlt
'183': vep
'184': lez
'185': nld
'186': nds-nl
'187': mrj
'188': spa
'189': ceb
'190': ina
'191': heb
'192': hun
'193': que
'194': kaa
'195': mar
'196': vec
'197': frp
'198': ell
'199': sah
'200': eus
'201': ces
'202': slk
'203': chr
'204': lij
'205': nep
'206': srd
'207': ilo
'208': be-tarask
'209': bod
'210': orm
'211': war
'212': glg
'213': mon
'214': gle
'215': min
'216': ibo
'217': ile
'218': epo
'219': lav
'220': lrc
'221': als
'222': mzn
'223': rup
'224': fur
'225': tat
'226': myv
'227': pan
'228': ton
'229': kom
'230': wuu
'231': tcy
'232': tuk
'233': kan
'234': ltg
config_name: WiLI-2018 dataset
splits:
- name: train
num_bytes: 65408201
num_examples: 117500
- name: test
num_bytes: 66491260
num_examples: 117500
download_size: 130516351
dataset_size: 131899461
---
# Dataset Card for wili_2018
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/841984
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/pdf/1801.07779
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Thoma, Martin (Email: info@martin-thoma.de)
### Dataset Summary
WiLI-2018, the Wikipedia language identification benchmark dataset, contains 235,000 paragraphs in 235 languages. The dataset is balanced and a train-test split is provided.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The dataset covers 235 different languages.
## Dataset Structure
### Data Instances
```
{
'label': 207,
'sentence': 'Ti Turkia ket maysa a demokrata, sekular, unitario, batay-linteg a republika nga addaan ti taga-ugma a tinawtawid a kultura. Ti Turkia ket umadadu a naipatipon iti Laud babaen ti panagkameng kadagiti organisasion a kas ti Konsilo iti Europa, NATO, OECD, OSCE ken ti G-20 a dagiti kangrunaan nga ekonomia. Ti Turkia ket nangrugi a nakitulag ti napno a panagkameng iti Kappon ti Europa idi 2005, nga isu ket maysa idin a kumaduaan a kameng iti Europeano a Komunidad ti Ekonomia manipud idi 1963 ken nakadanon ti maysa a tulagan ti kappon ti aduana idi 1995. Ti Turkia ket nagtaraken iti asideg a kultural, politikal, ekonomiko ken industria a panakibiang iti Tengnga a Daya, dagiti Turko nga estado iti Tengnga nga Asia ken dagiti pagilian ti Aprika babaen ti panagkameng kadagiti organisasion a kas ti Turko a Konsilo, Nagsaupan nga Administrasion iti Turko nga Arte ken Kultura, Organisasion iti Islamiko a Panagtitinnulong ken ti Organisasion ti Ekonomiko a Panagtitinnulong.'
}
```
### Data Fields
[Needs More Information]
### Data Splits
117,500 paragraphs each for the train and test splits (500 paragraphs per language per split).
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Martin Thoma.
### Licensing Information
ODC Open Database License v1.0
### Citation Information
```
@dataset{thoma_martin_2018_841984,
author = {Thoma, Martin},
title = {{WiLI-2018 - Wikipedia Language Identification database}},
month = jan,
year = 2018,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.841984},
url = {https://doi.org/10.5281/zenodo.841984}
}
```
### Contributions
Thanks to [@Shubhambindal2017](https://github.com/Shubhambindal2017) for adding this dataset. |
nielsr/funsd-iob-original | 2022-11-19T13:38:09.000Z | [
"region:us"
] | nielsr | https://guillaumejaume.github.io/FUNSD/ | @article{Jaume2019FUNSDAD,
title={FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents},
author={Guillaume Jaume and H. K. Ekenel and J. Thiran},
journal={2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)},
year={2019},
volume={2},
pages={1-6}
} | null | 0 | 228 | Entry not found |
sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen | 2023-07-17T20:33:04.000Z | [
"language:en",
"region:us"
] | sradc | null | null | null | 1 | 228 | ---
language: en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26076989556
num_examples: 33536113
download_size: 17380043798
dataset_size: 26076989556
---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"
```
num_examples: 33.5 million
download_size: 17.4 GB
dataset_size: 26.1 GB
```
This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks of ~820 chars
(such that each item will be at least ~128 tokens for the average tokenizer).
The order of the items in this dataset has been shuffled,
meaning you don't have to use `dataset.shuffle`,
which is slower to iterate over.
The logic only splits on spaces, so the chunks are likely to be slightly larger than 820 chars.
The dataset has been normalized into lower case, with accents and non-English characters removed.
Items with less than 200 chars or more than 1000 chars have been removed.
This dataset is processed for convenience, at the expense of losing some percentage of the tokens due to truncation
(assuming the training minibatches are truncated to 128 tokens).
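The chunking described above can be sketched as a greedy split on spaces (an illustrative sketch only — the function name and exact logic are assumptions, not the code used to build this dataset):

```python
def chunk_text(text: str, target_chars: int = 820) -> list[str]:
    """Greedily accumulate space-separated words until a chunk reaches target_chars."""
    chunks, current = [], ""
    for word in text.split(" "):
        current = word if not current else current + " " + word
        if len(current) >= target_chars:
            chunks.append(current)
            current = ""
    if current:  # keep the trailing remainder as its own (shorter) chunk
        chunks.append(current)
    return chunks
```

Because splitting only happens at word boundaries, every chunk except the last ends up slightly above the 820-char target, matching the note above; a 200–1000 char filter would then drop out-of-range chunks.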
|
hlgd | 2023-01-25T14:32:19.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"headline-grouping",
"region:us"
] | null | HLGD is a binary classification dataset consisting of 20,056 labeled news headline pairs indicating
whether the two headlines describe the same underlying world event or not. | @inproceedings{Laban2021NewsHG,
title={News Headline Grouping as a Challenging NLU Task},
author={Philippe Laban and Lucas Bandarkar},
booktitle={NAACL 2021},
publisher = {Association for Computational Linguistics},
year={2021}
} | null | 2 | 227 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: Headline Grouping (HLGD)
tags:
- headline-grouping
dataset_info:
features:
- name: timeline_id
dtype:
class_label:
names:
'0': 0
'1': 1
'2': 2
'3': 3
'4': 4
'5': 5
'6': 6
'7': 7
'8': 8
'9': 9
- name: headline_a
dtype: string
- name: headline_b
dtype: string
- name: date_a
dtype: string
- name: date_b
dtype: string
- name: url_a
dtype: string
- name: url_b
dtype: string
- name: label
dtype:
class_label:
names:
'0': same_event
'1': different_event
splits:
- name: train
num_bytes: 6447212
num_examples: 15492
- name: test
num_bytes: 941145
num_examples: 2495
- name: validation
num_bytes: 798302
num_examples: 2069
download_size: 1858948
dataset_size: 8186659
---
# Dataset Card for Headline Grouping (HLGD)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/tingofurro/headline_grouping](https://github.com/tingofurro/headline_grouping)
- **Repository:** [https://github.com/tingofurro/headline_grouping](https://github.com/tingofurro/headline_grouping)
- **Paper:** [https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf](https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** phillab (at) berkeley (dot) edu
### Dataset Summary
HLGD is a binary classification dataset consisting of 20,056 labeled news headline pairs indicating whether the two headlines describe the same underlying world event or not. The dataset comes with an existing split between `train`, `validation` and `test` (60-20-20).
### Supported Tasks and Leaderboards
The paper (NAACL2021) introducing HLGD proposes three challenges making use of various amounts of data:
- Challenge 1: Headline-only. Models must make predictions using only the text of both headlines.
- Challenge 2: Headline + Time. Models must make predictions using the headline and publication date of the two headlines.
- Challenge 3: Headline + Time + Other. Models can make predictions using the headline, publication date as well as any other relevant meta-data that can be obtained through the URL attached to the headline (full article content, authors, news source, etc.)
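For Challenge 2, a minimal illustrative time feature is the gap in days between the two publication dates (a sketch based on the `YYYY-MM-DD` format used by the dataset; `day_gap` is a hypothetical helper, not part of any official benchmark code):

```python
from datetime import date

def day_gap(date_a: str, date_b: str) -> int:
    """Absolute number of days between two YYYY-MM-DD publication dates."""
    return abs((date.fromisoformat(date_a) - date.fromisoformat(date_b)).days)

print(day_gap("2019-01-21", "2019-01-24"))  # prints 3
```

Headline pairs describing the same event tend to be published close in time, which is the intuition behind making publication dates available in this challenge.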
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
A typical instance consists of a timeline_id, two headlines (A/B), each associated with a URL and a date, and a label indicating whether the two headlines describe the same underlying event (1) or not (0). Below is an example from the training set:
```
{'timeline_id': 4,
'headline_a': 'France fines Google nearly $57 million for first major violation of new European privacy regime',
'headline_b': "France hits Google with record EUR50mn fine over 'forced consent' data collection",
'date_a': '2019-01-21',
'date_b': '2019-01-21',
'url_a': 'https://www.chicagotribune.com/business/ct-biz-france-fines-google-privacy-20190121-story.html',
'url_b': 'https://www.rt.com/news/449369-france-hits-google-with-record-fine/',
'label': 1}
```
### Data Fields
- `timeline_id`: Represents the id of the timeline that the headline pair belongs to (values 0 to 9). The dev set is composed of timelines 0 and 5, and the test set of timelines 7 and 8.
- `headline_a`, `headline_b`: Raw text for the headline pair being compared
- `date_a`, `date_b`: Publication date of the respective headlines, in the `YYYY-MM-DD` format
- `url_a`, `url_b`: Original URL of the respective headlines. Can be used to retrieve additional meta-data on the headline.
- `label`: 1 if the two headlines are part of the same headline group and describe the same underlying event, 0 otherwise.
### Data Splits
| | Train | Dev | Test |
| --------------------------- | ------- | ------ | ----- |
| Number of examples | 15,492 | 2,069 | 2,495 |
## Dataset Creation
### Curation Rationale
The task of grouping headlines from diverse news sources discussing the same underlying event is important to enable interfaces that can present the diversity of coverage of unfolding news events. Many news aggregators (such as Google or Yahoo News) present several sources for a given event, with the objective of highlighting coverage diversity.
Automatic grouping of news headlines and articles remains challenging as headlines are short, heavily-stylized texts.
The HeadLine Grouping Dataset introduces the first benchmark to evaluate NLU models' ability to group headlines according to the underlying event they describe.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by collecting 10 news timelines from the NewsLens project, selecting timelines diversified in topic, each containing between 80 and 300 news articles.
#### Who are the source language producers?
The source language producers are journalists or members of the newsroom of 34 news organizations listed in the paper.
### Annotations
#### Annotation process
Each timeline was annotated for group IDs by 5 independent annotators. The 5 annotations were merged into a single annotation named the global groups.
The global group IDs are then used to generate all pairs of headlines within timelines with binary labels: 1 if two headlines are part of the same global group, and 0 otherwise. A heuristic is used to remove negative examples to obtain a final dataset that has class imbalance of 1 positive example to 5 negative examples.
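The pair-generation step described above can be sketched as follows (illustrative only; `make_pairs` is a hypothetical helper, and the negative-sampling heuristic is not reproduced):

```python
from itertools import combinations

def make_pairs(headlines):
    """Generate all headline pairs within one timeline.

    `headlines` is a list of (headline, global_group_id) tuples; the label is
    1 when both headlines share a global group, and 0 otherwise.
    """
    return [
        (h_a, h_b, int(g_a == g_b))
        for (h_a, g_a), (h_b, g_b) in combinations(headlines, 2)
    ]
```

On real timelines this produces far more negative pairs than positive ones, which is why a heuristic is then applied to bring the class imbalance down to roughly 1 positive to 5 negatives.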
#### Who are the annotators?
Annotators were authors of the papers and 8 crowd-workers on the Upwork platform. The crowd-workers were native English speakers with experience either in proof-reading or data-entry.
### Personal and Sensitive Information
Annotators identity has been anonymized. Due to the public nature of news headline, it is not expected that the headlines will contain personal sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to facilitate applications that present diverse news coverage.
By simplifying the process of developing models that can group headlines that describe a common event, we hope the community can build applications that show news readers diverse sources covering similar events.
We note, however, that the annotations were performed mostly by crowd-workers, and even though inter-annotator agreement was high, it was not perfect. Annotator bias therefore remains in the dataset.
### Discussion of Biases
There are several sources of bias in the dataset:
- Annotator bias: 10 annotators participated in the creation of the dataset. Their opinions and perspectives influenced the creation of the dataset.
- Subject matter bias: HLGD consists of headlines from 10 news timelines from diverse topics (space, tech, politics, etc.). This choice has an impact on the types of positive and negative examples that appear in the dataset.
- Source selection bias: 33 English-language news sources are represented in the dataset. This selection of news sources affects the content of the timelines and the overall dataset.
- Time-range of the timelines: the timelines selected range from 2010 to 2020, which has an influence on the language and style of news headlines.
### Other Known Limitations
For the task of Headline Grouping, inter-annotator agreement is high (0.814) but not perfect. Some decisions for headline grouping are subjective and depend on the reader's interpretation.
## Additional Information
### Dataset Curators
The dataset was initially created by Philippe Laban, Lucas Bandarkar and Marti Hearst at UC Berkeley.
### Licensing Information
The licensing status of the dataset depends on the legal status of news headlines. It is commonly held that News Headlines fall under "fair-use" ([American Bar blog post](https://www.americanbar.org/groups/gpsolo/publications/gp_solo/2011/september/fair_use_news_reviews/))
The dataset only distributes headlines, a URL and a publication date. Users of the dataset can then retrieve additional information (such as the body content, author, etc.) directly by querying the URL.
### Citation Information
```
@inproceedings{Laban2021NewsHG,
title={News Headline Grouping as a Challenging NLU Task},
author={Laban, Philippe and Bandarkar, Lucas and Hearst, Marti A},
booktitle={NAACL 2021},
publisher = {Association for Computational Linguistics},
year={2021}
}
```
### Contributions
Thanks to [@tingofurro](https://github.com/tingofurro) for adding this dataset. |
GEM/xlsum | 2022-10-24T15:31:33.000Z | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:und",
"license:cc-by-nc-sa-4.0",
"arxiv:1607.01759",
"region:us"
] | GEM | We present XLSum, a comprehensive and diverse dataset comprising 1.35 million professionally
annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics.
The dataset covers 45 languages ranging from low to high-resource, for many of which no
public dataset is currently available. XL-Sum is highly abstractive, concise,
and of high quality, as indicated by human and intrinsic evaluation. | @inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
} | null | 3 | 227 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- und
license:
- cc-by-nc-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: xlsum
---
# Dataset Card for GEM/xlsum
## Dataset Description
- **Homepage:** https://github.com/csebuetnlp/xl-sum
- **Repository:** https://huggingface.co/datasets/csebuetnlp/xlsum/tree/main/data
- **Paper:** https://aclanthology.org/2021.findings-acl.413/
- **Leaderboard:** http://explainaboard.nlpedia.ai/leaderboard/task_xlsum/
- **Point of Contact:** Tahmid Hasan
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xlsum).
### Dataset Summary
XLSum is a highly multilingual summarization dataset supporting 44 languages. The data stems from BBC news articles.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xlsum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xlsum).
#### website
[Github](https://github.com/csebuetnlp/xl-sum)
#### paper
[ACL Anthology](https://aclanthology.org/2021.findings-acl.413/)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/csebuetnlp/xl-sum)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Huggingface](https://huggingface.co/datasets/csebuetnlp/xlsum/tree/main/data)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.findings-acl.413/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Tahmid Hasan
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
tahmidhasan@cse.buet.ac.bd
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Explainaboard](http://explainaboard.nlpedia.ai/leaderboard/task_xlsum/)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The leaderboard ranks models based on ROUGE scores (R1/R2/RL) of the generated summaries.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Amharic`, `Arabic`, `Azerbaijani`, `Bengali, Bangla`, `Burmese`, `Chinese (family)`, `English`, `French`, `Gujarati`, `Hausa`, `Hindi`, `Igbo`, `Indonesian`, `Japanese`, `Rundi`, `Korean`, `Kirghiz, Kyrgyz`, `Marathi`, `Nepali (individual language)`, `Oromo`, `Pushto, Pashto`, `Persian`, `Ghanaian Pidgin English`, `Portuguese`, `Panjabi, Punjabi`, `Russian`, `Scottish Gaelic, Gaelic`, `Serbian`, `Romano-Serbian`, `Sinhala, Sinhalese`, `Somali`, `Spanish, Castilian`, `Swahili (individual language), Kiswahili`, `Tamil`, `Telugu`, `Thai`, `Tigrinya`, `Turkish`, `Ukrainian`, `Urdu`, `Uzbek`, `Vietnamese`, `Welsh`, `Yoruba`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Abstractive summarization has centered around the English language, as most large abstractive summarization datasets are available in English only. Though there have been some recent efforts for curating multilingual abstractive summarization datasets, they are limited in terms of the number of languages covered, the number of training samples, or both. To this end, **XL-Sum** presents a large-scale abstractive summarization dataset of 1.35 million news articles from 45 languages crawled from the British Broadcasting Corporation website. It is intended to be used for both multilingual and per-language summarization tasks.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarize news-like text in one of 45 languages.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Bangladesh University of Engineering and Technology
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Tahmid Hasan (Bangladesh University of Engineering and Technology), Abhik Bhattacharjee (Bangladesh University of Engineering and Technology)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: A string representing the article ID.
- `url`: A string representing the article URL.
- `title`: A string containing the article title.
- `summary`: A string containing the article summary.
- `text` : A string containing the article text.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"gem_id": "GEM-xlsum_english-train-1589",
  "url": "https://www.bbc.com/news/technology-17657859",
"title": "Yahoo files e-book advert system patent applications",
"summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.",
"text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. 
It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\""
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The splits in the dataset are specified by the language names, which are as follows:
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
We used an 80%-10%-10% split for all languages, with a few exceptions. `English` was split 93%-3.5%-3.5% so that its evaluation set size would resemble those of `CNN/DM` and `XSum`; since `Scottish Gaelic`, `Kyrgyz` and `Sinhala` had relatively few samples, their evaluation sets were increased to 500 samples each for more reliable evaluation. The same articles were used for evaluation in the two variants of Chinese and Serbian to prevent data leakage in multilingual training. Individual dataset download links with train-dev-test example counts are given below:
Language | ISO 639-1 Code | BBC subdomain(s) | Train | Dev | Test | Total |
--------------|----------------|------------------|-------|-----|------|-------|
Amharic | am | [BBC amharic](https://www.bbc.com/amharic) | 5761 | 719 | 719 | 7199 |
Arabic | ar | [BBC arabic](https://www.bbc.com/arabic) | 37519 | 4689 | 4689 | 46897 |
Azerbaijani | az | [BBC azeri](https://www.bbc.com/azeri) | 6478 | 809 | 809 | 8096 |
Bengali | bn | [BBC bengali](https://www.bbc.com/bengali) | 8102 | 1012 | 1012 | 10126 |
Burmese | my | [BBC burmese](https://www.bbc.com/burmese) | 4569 | 570 | 570 | 5709 |
Chinese (Simplified) | zh-CN | [BBC ukchina](https://www.bbc.com/ukchina)/simp, [BBC zhongwen](https://www.bbc.com/zhongwen)/simp | 37362 | 4670 | 4670 | 46702 |
Chinese (Traditional) | zh-TW | [BBC ukchina](https://www.bbc.com/ukchina)/trad, [BBC zhongwen](https://www.bbc.com/zhongwen)/trad | 37373 | 4670 | 4670 | 46713 |
English | en | [BBC english](https://www.bbc.com/english), [BBC sinhala](https://www.bbc.com/sinhala) `*` | 306522 | 11535 | 11535 | 329592 |
French | fr | [BBC afrique](https://www.bbc.com/afrique) | 8697 | 1086 | 1086 | 10869 |
Gujarati | gu | [BBC gujarati](https://www.bbc.com/gujarati) | 9119 | 1139 | 1139 | 11397 |
Hausa | ha | [BBC hausa](https://www.bbc.com/hausa) | 6418 | 802 | 802 | 8022 |
Hindi | hi | [BBC hindi](https://www.bbc.com/hindi) | 70778 | 8847 | 8847 | 88472 |
Igbo | ig | [BBC igbo](https://www.bbc.com/igbo) | 4183 | 522 | 522 | 5227 |
Indonesian | id | [BBC indonesia](https://www.bbc.com/indonesia) | 38242 | 4780 | 4780 | 47802 |
Japanese | ja | [BBC japanese](https://www.bbc.com/japanese) | 7113 | 889 | 889 | 8891 |
Kirundi | rn | [BBC gahuza](https://www.bbc.com/gahuza) | 5746 | 718 | 718 | 7182 |
Korean | ko | [BBC korean](https://www.bbc.com/korean) | 4407 | 550 | 550 | 5507 |
Kyrgyz | ky | [BBC kyrgyz](https://www.bbc.com/kyrgyz) | 2266 | 500 | 500 | 3266 |
Marathi | mr | [BBC marathi](https://www.bbc.com/marathi) | 10903 | 1362 | 1362 | 13627 |
Nepali | np | [BBC nepali](https://www.bbc.com/nepali) | 5808 | 725 | 725 | 7258 |
Oromo | om | [BBC afaanoromoo](https://www.bbc.com/afaanoromoo) | 6063 | 757 | 757 | 7577 |
Pashto | ps | [BBC pashto](https://www.bbc.com/pashto) | 14353 | 1794 | 1794 | 17941 |
Persian | fa | [BBC persian](https://www.bbc.com/persian) | 47251 | 5906 | 5906 | 59063 |
Pidgin`**` | pcm | [BBC pidgin](https://www.bbc.com/pidgin) | 9208 | 1151 | 1151 | 11510 |
Portuguese | pt | [BBC portuguese](https://www.bbc.com/portuguese) | 57402 | 7175 | 7175 | 71752 |
Punjabi | pa | [BBC punjabi](https://www.bbc.com/punjabi) | 8215 | 1026 | 1026 | 10267 |
Russian | ru | [BBC russian](https://www.bbc.com/russian), [BBC ukrainian](https://www.bbc.com/ukrainian) `*` | 62243 | 7780 | 7780 | 77803 |
Scottish Gaelic | gd | [BBC naidheachdan](https://www.bbc.com/naidheachdan) | 1313 | 500 | 500 | 2313 |
Serbian (Cyrillic) | sr | [BBC serbian](https://www.bbc.com/serbian)/cyr | 7275 | 909 | 909 | 9093 |
Serbian (Latin) | sr | [BBC serbian](https://www.bbc.com/serbian)/lat | 7276 | 909 | 909 | 9094 |
Sinhala | si | [BBC sinhala](https://www.bbc.com/sinhala) | 3249 | 500 | 500 | 4249 |
Somali | so | [BBC somali](https://www.bbc.com/somali) | 5962 | 745 | 745 | 7452 |
Spanish | es | [BBC mundo](https://www.bbc.com/mundo) | 38110 | 4763 | 4763 | 47636 |
Swahili | sw | [BBC swahili](https://www.bbc.com/swahili) | 7898 | 987 | 987 | 9872 |
Tamil | ta | [BBC tamil](https://www.bbc.com/tamil) | 16222 | 2027 | 2027 | 20276 |
Telugu | te | [BBC telugu](https://www.bbc.com/telugu) | 10421 | 1302 | 1302 | 13025 |
Thai | th | [BBC thai](https://www.bbc.com/thai) | 6616 | 826 | 826 | 8268 |
Tigrinya | ti | [BBC tigrinya](https://www.bbc.com/tigrinya) | 5451 | 681 | 681 | 6813 |
Turkish | tr | [BBC turkce](https://www.bbc.com/turkce) | 27176 | 3397 | 3397 | 33970 |
Ukrainian | uk | [BBC ukrainian](https://www.bbc.com/ukrainian) | 43201 | 5399 | 5399 | 53999 |
Urdu | ur | [BBC urdu](https://www.bbc.com/urdu) | 67665 | 8458 | 8458 | 84581 |
Uzbek | uz | [BBC uzbek](https://www.bbc.com/uzbek) | 4728 | 590 | 590 | 5908 |
Vietnamese | vi | [BBC vietnamese](https://www.bbc.com/vietnamese) | 32111 | 4013 | 4013 | 40137 |
Welsh | cy | [BBC cymrufyw](https://www.bbc.com/cymrufyw) | 9732 | 1216 | 1216 | 12164 |
Yoruba | yo | [BBC yoruba](https://www.bbc.com/yoruba) | 6350 | 793 | 793 | 7936 |
`*` A lot of articles in BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using [Fasttext](https://arxiv.org/abs/1607.01759) and moved accordingly.
`**` West African Pidgin English
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
Traditional abstractive text summarization has been centered around English and other high-resource languages. **XL-Sum** provides a large collection of high-quality article-summary pairs for 45 languages where the languages range from high-resource to extremely low-resource. This enables the research community to explore the summarization capabilities of different models for multiple languages and languages in isolation. We believe the addition of **XL-Sum** to GEM makes the domain of abstractive text summarization more diversified and inclusive to the research community. We hope our efforts in this work will encourage the community to push the boundaries of abstractive text summarization beyond the English language, especially for low and mid-resource languages, bringing technological advances to communities of these languages that have been traditionally under-served.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The summaries are highly concise and abstractive.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Conciseness, abstractiveness, and overall summarization capability.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Conciseness, abstractiveness, and overall summarization capability.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is the de facto evaluation metric used for text summarization. However, it was designed specifically for evaluating English texts, and its scores are heavily dependent on tokenization, stemming, character removal, etc. Some modifications to the original ROUGE evaluation were therefore made, such as punctuation-only removal and language-specific tokenization/stemming, to enable reliable comparison of source and target summaries across different scripts.
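As an illustrative sketch only (not the authors' actual multilingual ROUGE implementation), a ROUGE-1 F1 with punctuation removal and simple whitespace tokenization could look like this:

```python
import re
from collections import Counter

def rouge1_f1(reference, candidate):
    """Simplified ROUGE-1 F1: strip punctuation, whitespace-tokenize,
    then compute unigram-overlap precision and recall."""
    def tokens(text):
        # Replace punctuation with spaces, lowercase, split on whitespace.
        return re.sub(r"[^\w\s]", " ", text.lower()).split()
    ref, cand = Counter(tokens(reference)), Counter(tokens(candidate))
    overlap = sum((ref & cand).values())
    if not ref or not cand or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat sat."))  # -> 0.666...
```

Languages without whitespace word boundaries (e.g. Chinese, Japanese, Thai) need language-specific segmentation before such a computation is meaningful, which is one of the modifications mentioned above.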
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
State-of-the-art text summarization models are heavily data-driven, i.e., a large number of article-summary pairs are required to train them effectively. As a result, abstractive summarization has centered around the English language, as most large abstractive summarization datasets are available in English only. Though there have been some recent efforts for curating multilingual abstractive summarization datasets, they are limited in terms of the number of languages covered, the number of training samples, or both. To this end, we curate **XL-Sum**, a large-scale abstractive summarization dataset of 1.35 million news articles from 45 languages crawled from the British Broadcasting Corporation website.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Introduce new languages into the English-centric domain of abstractive text summarization and enable both multilingual and per-language summarization.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
British Broadcasting Corporation (BBC) news websites.
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language content was written by professional news editors hired by BBC.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
News
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
We used 'NFKC' normalization on all text instances.
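In Python, NFKC normalization is available in the standard library and folds compatibility characters (full-width forms, ligatures) into their canonical equivalents, for example:

```python
import unicodedata

# NFKC folds compatibility characters into canonical forms:
# full-width Latin letters become ASCII, ligatures are expanded.
print(unicodedata.normalize("NFKC", "BBC"))   # -> "BBC"
print(unicodedata.normalize("NFKC", "ofﬁce"))    # -> "office"
```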
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
We designed a crawler to recursively crawl pages starting from the homepage by visiting different article links present in each page visited. We were able to take advantage of the fact that all BBC sites have somewhat similar structures, and were able to scrape articles from all sites. We discarded pages with no textual contents (mostly pages consisting of multimedia contents) before further processing. We designed a number of heuristics to make the extraction effective by carefully examining the HTML structures of the crawled pages:
1. The desired summary must be present within the beginning two paragraphs of an article.
2. The summary paragraph must have some portion of text in bold format.
3. The summary paragraph may contain some hyperlinks that may not be bold. The proportion of bold text and hyperlinked text to the total length of the paragraph in consideration must be at least 95%.
4. All texts except the summary and the headline must be included in the input text (including image captions).
5. The input text must be at least twice as large as the summary.
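A rough sketch of checks 3 and 5 above (the HTML parsing that measures bold/hyperlinked spans is omitted, and the function signature is an assumption for illustration):

```python
def passes_filters(summary, bold_and_link_len, text):
    """Approximate two of the extraction heuristics:
    - bold/hyperlinked text must cover at least 95% of the summary paragraph;
    - the input text must be at least twice as long as the summary."""
    bold_ratio_ok = bold_and_link_len / max(len(summary), 1) >= 0.95
    length_ok = len(text) >= 2 * len(summary)
    return bold_ratio_ok and length_ok

print(passes_filters("A short bold summary.", 21, "x" * 100))  # -> True
```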
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
BBC's policy specifies that the text content within its websites can be used for non-commercial research only.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
This dataset introduces a summarization corpus for many languages for which no such datasets had been curated before.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
Yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`research use only`, `non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`research use only`, `non-commercial use only`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
Human evaluation showed that most languages had a high percentage of good summaries, in the upper nineties; almost none of the summaries contained any conflicting information, while about one-third on average contained information that was not directly inferrable from the source article. Since multiple articles are generally written about an important event, there could be overlap between the training and evaluation data in terms of content.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The dataset is limited to the news domain. Hence, it would be inadvisable to use a model trained on this dataset to summarize texts from a different domain, e.g. literature or scientific text. Another pitfall could be hallucinations in model-generated summaries.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India if the word "India" in the generated summary gets replaced by "Pakistan" due to model hallucination, the overall score wouldn't be reduced significantly, but the entire meaning could get changed.
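As a toy illustration of this failure mode (not part of the original card), the sketch below uses a simple unigram-overlap F1 as a rough stand-in for ROUGE-1; the example sentences are invented. One swapped entity flips the meaning of the summary but costs only a single overlapping token, so the score barely moves:

```python
from collections import Counter

def unigram_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1, a rough stand-in for ROUGE-1."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "heavy floods displaced thousands across India this week"
faithful = "floods displaced thousands across India this week"
hallucinated = "floods displaced thousands across Pakistan this week"

# The hallucinated summary changes the meaning entirely, yet loses
# only one unigram of overlap, so the two scores stay close.
print(round(unigram_f1(reference, faithful), 3))      # → 0.933
print(round(unigram_f1(reference, hallucinated), 3))  # → 0.8
```

Full ROUGE also counts higher-order n-grams, but the same effect holds: a single substituted entity removes only the few n-grams that contain it.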
|
oscar-corpus/OSCAR-2109 | 2022-11-08T09:04:43.000Z | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:als",
"language:gsw",
"language:am",
"language:an",
"language:ar",
"language:arz",
"language:as",
"language:ast",
"language:av",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:be",
"language:bg",
"language:bh",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bxr",
"language:ca",
"language:cbk",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:diq",
"language:dsb",
"language:dv",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:frr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:gom",
"language:gu",
"language:gv",
"language:he",
"language:hi",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ilo",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jbo",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:krc",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lb",
"language:lez",
"language:li",
"language:lmo",
"language:lo",
"language:lrc",
"language:lt",
"language:lv",
"language:mai",
"language:mg",
"language:mhr",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nah",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pam",
"language:pl",
"language:pms",
"language:pnb",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:ro",
"language:ru",
"language:rue",
"language:sa",
"language:sah",
"language:scn",
"language:sco",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:tyv",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vi",
"language:vls",
"language:vo",
"language:wa",
"language:war",
"language:wuu",
"language:xal",
"language:xmf",
"language:yi",
"language:yo",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"arxiv:2103.12028",
"region:us"
] | oscar-corpus | The Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | @inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | null | 30 | 226 | ---
pretty_name: OSCAR
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- als
- gsw
- am
- an
- ar
- arz
- as
- ast
- av
- az
- azb
- ba
- bar
- be
- bg
- bh
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- diq
- dsb
- dv
- el
- eml
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- frr
- fy
- ga
- gd
- gl
- gn
- gom
- gu
- gv
- he
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- krc
- ku
- kv
- kw
- ky
- la
- lb
- lez
- li
- lmo
- lo
- lrc
- lt
- lv
- mai
- mg
- mhr
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nah
- nap
- nds
- ne
- new
- nl
- nn
- 'no'
- oc
- or
- os
- pa
- pam
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- rue
- sa
- sah
- scn
- sco
- sd
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- tyv
- ug
- uk
- ur
- uz
- vec
- vi
- vls
- vo
- wa
- war
- wuu
- xal
- xmf
- yi
- yo
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
unshuffled_deduplicated_af:
- 100K<n<1M
unshuffled_deduplicated_als:
- 1K<n<10K
unshuffled_deduplicated_am:
- 10K<n<100K
unshuffled_deduplicated_an:
- 1K<n<10K
unshuffled_deduplicated_ar:
- 1M<n<10M
unshuffled_deduplicated_arz:
- 10K<n<100K
unshuffled_deduplicated_as:
- 1K<n<10K
unshuffled_deduplicated_ast:
- 1K<n<10K
unshuffled_deduplicated_av:
- n<1K
unshuffled_deduplicated_az:
- 100K<n<1M
unshuffled_deduplicated_azb:
- 1K<n<10K
unshuffled_deduplicated_ba:
- 10K<n<100K
unshuffled_deduplicated_bar:
- n<1K
unshuffled_deduplicated_bcl:
- n<1K
unshuffled_deduplicated_be:
- 100K<n<1M
unshuffled_deduplicated_bg:
- 1M<n<10M
unshuffled_deduplicated_bh:
- n<1K
unshuffled_deduplicated_bn:
- 1M<n<10M
unshuffled_deduplicated_bo:
- 10K<n<100K
unshuffled_deduplicated_bpy:
- 1K<n<10K
unshuffled_deduplicated_br:
- 10K<n<100K
unshuffled_deduplicated_bs:
- n<1K
unshuffled_deduplicated_bxr:
- n<1K
unshuffled_deduplicated_ca:
- 1M<n<10M
unshuffled_deduplicated_cbk:
- n<1K
unshuffled_deduplicated_ce:
- 1K<n<10K
unshuffled_deduplicated_ceb:
- 10K<n<100K
unshuffled_deduplicated_ckb:
- 10K<n<100K
unshuffled_deduplicated_cs:
- 10M<n<100M
unshuffled_deduplicated_cv:
- 10K<n<100K
unshuffled_deduplicated_cy:
- 10K<n<100K
unshuffled_deduplicated_da:
- 1M<n<10M
unshuffled_deduplicated_de:
- 10M<n<100M
unshuffled_deduplicated_diq:
- n<1K
unshuffled_deduplicated_dsb:
- n<1K
unshuffled_deduplicated_dv:
- 10K<n<100K
unshuffled_deduplicated_el:
- 1M<n<10M
unshuffled_deduplicated_eml:
- n<1K
unshuffled_deduplicated_en:
- 100M<n<1B
unshuffled_deduplicated_eo:
- 10K<n<100K
unshuffled_deduplicated_es:
- 10M<n<100M
unshuffled_deduplicated_et:
- 1M<n<10M
unshuffled_deduplicated_eu:
- 100K<n<1M
unshuffled_deduplicated_fa:
- 1M<n<10M
unshuffled_deduplicated_fi:
- 1M<n<10M
unshuffled_deduplicated_fr:
- 10M<n<100M
unshuffled_deduplicated_frr:
- n<1K
unshuffled_deduplicated_fy:
- 10K<n<100K
unshuffled_deduplicated_ga:
- 10K<n<100K
unshuffled_deduplicated_gd:
- 1K<n<10K
unshuffled_deduplicated_gl:
- 100K<n<1M
unshuffled_deduplicated_gn:
- n<1K
unshuffled_deduplicated_gom:
- n<1K
unshuffled_deduplicated_gu:
- 100K<n<1M
unshuffled_deduplicated_he:
- 1M<n<10M
unshuffled_deduplicated_hi:
- 1M<n<10M
unshuffled_deduplicated_hr:
- 100K<n<1M
unshuffled_deduplicated_hsb:
- 1K<n<10K
unshuffled_deduplicated_ht:
- n<1K
unshuffled_deduplicated_hu:
- 1M<n<10M
unshuffled_deduplicated_hy:
- 100K<n<1M
unshuffled_deduplicated_ia:
- n<1K
unshuffled_deduplicated_id:
- 1M<n<10M
unshuffled_deduplicated_ie:
- n<1K
unshuffled_deduplicated_ilo:
- 1K<n<10K
unshuffled_deduplicated_io:
- n<1K
unshuffled_deduplicated_is:
- 100K<n<1M
unshuffled_deduplicated_it:
- 10M<n<100M
unshuffled_deduplicated_ja:
- 10M<n<100M
unshuffled_deduplicated_jbo:
- n<1K
unshuffled_deduplicated_jv:
- 1K<n<10K
unshuffled_deduplicated_ka:
- 100K<n<1M
unshuffled_deduplicated_kk:
- 100K<n<1M
unshuffled_deduplicated_km:
- 100K<n<1M
unshuffled_deduplicated_kn:
- 100K<n<1M
unshuffled_deduplicated_ko:
- 1M<n<10M
unshuffled_deduplicated_krc:
- 1K<n<10K
unshuffled_deduplicated_ku:
- 10K<n<100K
unshuffled_deduplicated_kv:
- n<1K
unshuffled_deduplicated_kw:
- n<1K
unshuffled_deduplicated_ky:
- 10K<n<100K
unshuffled_deduplicated_la:
- 10K<n<100K
unshuffled_deduplicated_lb:
- 10K<n<100K
unshuffled_deduplicated_lez:
- 1K<n<10K
unshuffled_deduplicated_li:
- n<1K
unshuffled_deduplicated_lmo:
- 1K<n<10K
unshuffled_deduplicated_lo:
- 10K<n<100K
unshuffled_deduplicated_lrc:
- n<1K
unshuffled_deduplicated_lt:
- 1M<n<10M
unshuffled_deduplicated_lv:
- 100K<n<1M
unshuffled_deduplicated_mai:
- n<1K
unshuffled_deduplicated_mg:
- 10K<n<100K
unshuffled_deduplicated_mhr:
- 1K<n<10K
unshuffled_deduplicated_min:
- n<1K
unshuffled_deduplicated_mk:
- 100K<n<1M
unshuffled_deduplicated_ml:
- 100K<n<1M
unshuffled_deduplicated_mn:
- 100K<n<1M
unshuffled_deduplicated_mr:
- 100K<n<1M
unshuffled_deduplicated_mrj:
- n<1K
unshuffled_deduplicated_ms:
- 100K<n<1M
unshuffled_deduplicated_mt:
- 10K<n<100K
unshuffled_deduplicated_mwl:
- n<1K
unshuffled_deduplicated_my:
- 100K<n<1M
unshuffled_deduplicated_myv:
- n<1K
unshuffled_deduplicated_mzn:
- n<1K
unshuffled_deduplicated_nah:
- n<1K
unshuffled_deduplicated_nap:
- n<1K
unshuffled_deduplicated_nds:
- 1K<n<10K
unshuffled_deduplicated_ne:
- 100K<n<1M
unshuffled_deduplicated_new:
- 1K<n<10K
unshuffled_deduplicated_nl:
- 10M<n<100M
unshuffled_deduplicated_nn:
- 100K<n<1M
unshuffled_deduplicated_no:
- 1M<n<10M
unshuffled_deduplicated_oc:
- 1K<n<10K
unshuffled_deduplicated_or:
- 10K<n<100K
unshuffled_deduplicated_os:
- 1K<n<10K
unshuffled_deduplicated_pa:
- 10K<n<100K
unshuffled_deduplicated_pam:
- n<1K
unshuffled_deduplicated_pl:
- 10M<n<100M
unshuffled_deduplicated_pms:
- 1K<n<10K
unshuffled_deduplicated_pnb:
- 1K<n<10K
unshuffled_deduplicated_ps:
- 10K<n<100K
unshuffled_deduplicated_pt:
- 10M<n<100M
unshuffled_deduplicated_qu:
- n<1K
unshuffled_deduplicated_rm:
- n<1K
unshuffled_deduplicated_ro:
- 1M<n<10M
unshuffled_deduplicated_ru:
- 100M<n<1B
unshuffled_deduplicated_sa:
- 1K<n<10K
unshuffled_deduplicated_sah:
- 1K<n<10K
unshuffled_deduplicated_scn:
- n<1K
unshuffled_deduplicated_sd:
- 10K<n<100K
unshuffled_deduplicated_sh:
- 10K<n<100K
unshuffled_deduplicated_si:
- 100K<n<1M
unshuffled_deduplicated_sk:
- 1M<n<10M
unshuffled_deduplicated_sl:
- 100K<n<1M
unshuffled_deduplicated_so:
- n<1K
unshuffled_deduplicated_sq:
- 100K<n<1M
unshuffled_deduplicated_sr:
- 100K<n<1M
unshuffled_deduplicated_su:
- n<1K
unshuffled_deduplicated_sv:
- 10M<n<100M
unshuffled_deduplicated_sw:
- 10K<n<100K
unshuffled_deduplicated_ta:
- 100K<n<1M
unshuffled_deduplicated_te:
- 100K<n<1M
unshuffled_deduplicated_tg:
- 10K<n<100K
unshuffled_deduplicated_th:
- 1M<n<10M
unshuffled_deduplicated_tk:
- 1K<n<10K
unshuffled_deduplicated_tl:
- 100K<n<1M
unshuffled_deduplicated_tr:
- 10M<n<100M
unshuffled_deduplicated_tt:
- 10K<n<100K
unshuffled_deduplicated_tyv:
- n<1K
unshuffled_deduplicated_ug:
- 10K<n<100K
unshuffled_deduplicated_uk:
- 1M<n<10M
unshuffled_deduplicated_ur:
- 100K<n<1M
unshuffled_deduplicated_uz:
- 10K<n<100K
unshuffled_deduplicated_vec:
- n<1K
unshuffled_deduplicated_vi:
- 1M<n<10M
unshuffled_deduplicated_vo:
- 1K<n<10K
unshuffled_deduplicated_wa:
- n<1K
unshuffled_deduplicated_war:
- 1K<n<10K
unshuffled_deduplicated_wuu:
- n<1K
unshuffled_deduplicated_xal:
- n<1K
unshuffled_deduplicated_xmf:
- 1K<n<10K
unshuffled_deduplicated_yi:
- 10K<n<100K
unshuffled_deduplicated_yo:
- n<1K
unshuffled_deduplicated_yue:
- n<1K
unshuffled_deduplicated_zh:
- 10M<n<100M
unshuffled_original_af:
- 100K<n<1M
unshuffled_original_als:
- 1K<n<10K
unshuffled_original_am:
- 10K<n<100K
unshuffled_original_an:
- 1K<n<10K
unshuffled_original_ar:
- 10M<n<100M
unshuffled_original_arz:
- 100K<n<1M
unshuffled_original_as:
- 10K<n<100K
unshuffled_original_ast:
- 1K<n<10K
unshuffled_original_av:
- n<1K
unshuffled_original_az:
- 100K<n<1M
unshuffled_original_azb:
- 10K<n<100K
unshuffled_original_ba:
- 10K<n<100K
unshuffled_original_bar:
- n<1K
unshuffled_original_bcl:
- n<1K
unshuffled_original_be:
- 100K<n<1M
unshuffled_original_bg:
- 1M<n<10M
unshuffled_original_bh:
- n<1K
unshuffled_original_bn:
- 1M<n<10M
unshuffled_original_bo:
- 10K<n<100K
unshuffled_original_bpy:
- 1K<n<10K
unshuffled_original_br:
- 10K<n<100K
unshuffled_original_bs:
- 1K<n<10K
unshuffled_original_bxr:
- n<1K
unshuffled_original_ca:
- 1M<n<10M
unshuffled_original_cbk:
- n<1K
unshuffled_original_ce:
- 1K<n<10K
unshuffled_original_ceb:
- 10K<n<100K
unshuffled_original_ckb:
- 100K<n<1M
unshuffled_original_cs:
- 10M<n<100M
unshuffled_original_cv:
- 10K<n<100K
unshuffled_original_cy:
- 100K<n<1M
unshuffled_original_da:
- 1M<n<10M
unshuffled_original_de:
- 100M<n<1B
unshuffled_original_diq:
- n<1K
unshuffled_original_dsb:
- n<1K
unshuffled_original_dv:
- 10K<n<100K
unshuffled_original_el:
- 10M<n<100M
unshuffled_original_eml:
- n<1K
unshuffled_original_en:
- 100M<n<1B
unshuffled_original_eo:
- 100K<n<1M
unshuffled_original_es:
- 10M<n<100M
unshuffled_original_et:
- 1M<n<10M
unshuffled_original_eu:
- 100K<n<1M
unshuffled_original_fa:
- 10M<n<100M
unshuffled_original_fi:
- 1M<n<10M
unshuffled_original_fr:
- 10M<n<100M
unshuffled_original_frr:
- n<1K
unshuffled_original_fy:
- 10K<n<100K
unshuffled_original_ga:
- 10K<n<100K
unshuffled_original_gd:
- 1K<n<10K
unshuffled_original_gl:
- 100K<n<1M
unshuffled_original_gn:
- n<1K
unshuffled_original_gom:
- n<1K
unshuffled_original_gu:
- 100K<n<1M
unshuffled_original_he:
- 1M<n<10M
unshuffled_original_hi:
- 1M<n<10M
unshuffled_original_hr:
- 100K<n<1M
unshuffled_original_hsb:
- 1K<n<10K
unshuffled_original_ht:
- n<1K
unshuffled_original_hu:
- 10M<n<100M
unshuffled_original_hy:
- 100K<n<1M
unshuffled_original_ia:
- 1K<n<10K
unshuffled_original_id:
- 10M<n<100M
unshuffled_original_ie:
- n<1K
unshuffled_original_ilo:
- 1K<n<10K
unshuffled_original_io:
- n<1K
unshuffled_original_is:
- 100K<n<1M
unshuffled_original_it:
- 10M<n<100M
unshuffled_original_ja:
- 10M<n<100M
unshuffled_original_jbo:
- n<1K
unshuffled_original_jv:
- 1K<n<10K
unshuffled_original_ka:
- 100K<n<1M
unshuffled_original_kk:
- 100K<n<1M
unshuffled_original_km:
- 100K<n<1M
unshuffled_original_kn:
- 100K<n<1M
unshuffled_original_ko:
- 1M<n<10M
unshuffled_original_krc:
- 1K<n<10K
unshuffled_original_ku:
- 10K<n<100K
unshuffled_original_kv:
- 1K<n<10K
unshuffled_original_kw:
- n<1K
unshuffled_original_ky:
- 100K<n<1M
unshuffled_original_la:
- 10K<n<100K
unshuffled_original_lb:
- 10K<n<100K
unshuffled_original_lez:
- 1K<n<10K
unshuffled_original_li:
- n<1K
unshuffled_original_lmo:
- 1K<n<10K
unshuffled_original_lo:
- 10K<n<100K
unshuffled_original_lrc:
- n<1K
unshuffled_original_lt:
- 1M<n<10M
unshuffled_original_lv:
- 1M<n<10M
unshuffled_original_mai:
- n<1K
unshuffled_original_mg:
- 10K<n<100K
unshuffled_original_mhr:
- 1K<n<10K
unshuffled_original_min:
- n<1K
unshuffled_original_mk:
- 100K<n<1M
unshuffled_original_ml:
- 100K<n<1M
unshuffled_original_mn:
- 100K<n<1M
unshuffled_original_mr:
- 100K<n<1M
unshuffled_original_mrj:
- n<1K
unshuffled_original_ms:
- 100K<n<1M
unshuffled_original_mt:
- 10K<n<100K
unshuffled_original_mwl:
- n<1K
unshuffled_original_my:
- 100K<n<1M
unshuffled_original_myv:
- n<1K
unshuffled_original_mzn:
- 1K<n<10K
unshuffled_original_nah:
- n<1K
unshuffled_original_nap:
- n<1K
unshuffled_original_nds:
- 10K<n<100K
unshuffled_original_ne:
- 100K<n<1M
unshuffled_original_new:
- 1K<n<10K
unshuffled_original_nl:
- 10M<n<100M
unshuffled_original_nn:
- 100K<n<1M
unshuffled_original_no:
- 1M<n<10M
unshuffled_original_oc:
- 10K<n<100K
unshuffled_original_or:
- 10K<n<100K
unshuffled_original_os:
- 1K<n<10K
unshuffled_original_pa:
- 100K<n<1M
unshuffled_original_pam:
- n<1K
unshuffled_original_pl:
- 10M<n<100M
unshuffled_original_pms:
- 1K<n<10K
unshuffled_original_pnb:
- 1K<n<10K
unshuffled_original_ps:
- 10K<n<100K
unshuffled_original_pt:
- 10M<n<100M
unshuffled_original_qu:
- n<1K
unshuffled_original_rm:
- n<1K
unshuffled_original_ro:
- 1M<n<10M
unshuffled_original_ru:
- 100M<n<1B
unshuffled_original_sa:
- 10K<n<100K
unshuffled_original_sah:
- 10K<n<100K
unshuffled_original_scn:
- n<1K
unshuffled_original_sd:
- 10K<n<100K
unshuffled_original_sh:
- 10K<n<100K
unshuffled_original_si:
- 100K<n<1M
unshuffled_original_sk:
- 1M<n<10M
unshuffled_original_sl:
- 1M<n<10M
unshuffled_original_so:
- n<1K
unshuffled_original_sq:
- 100K<n<1M
unshuffled_original_sr:
- 1M<n<10M
unshuffled_original_su:
- n<1K
unshuffled_original_sv:
- 10M<n<100M
unshuffled_original_sw:
- 10K<n<100K
unshuffled_original_ta:
- 1M<n<10M
unshuffled_original_te:
- 100K<n<1M
unshuffled_original_tg:
- 10K<n<100K
unshuffled_original_th:
- 1M<n<10M
unshuffled_original_tk:
- 1K<n<10K
unshuffled_original_tl:
- 100K<n<1M
unshuffled_original_tr:
- 10M<n<100M
unshuffled_original_tt:
- 100K<n<1M
unshuffled_original_tyv:
- n<1K
unshuffled_original_ug:
- 10K<n<100K
unshuffled_original_uk:
- 10M<n<100M
unshuffled_original_ur:
- 100K<n<1M
unshuffled_original_uz:
- 10K<n<100K
unshuffled_original_vec:
- n<1K
unshuffled_original_vi:
- 10M<n<100M
unshuffled_original_vo:
- 1K<n<10K
unshuffled_original_wa:
- 1K<n<10K
unshuffled_original_war:
- 1K<n<10K
unshuffled_original_wuu:
- n<1K
unshuffled_original_xal:
- n<1K
unshuffled_original_xmf:
- 1K<n<10K
unshuffled_original_yi:
- 10K<n<100K
unshuffled_original_yo:
- n<1K
unshuffled_original_yue:
- n<1K
unshuffled_original_zh:
- 10M<n<100M
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
paperswithcode_id: oscar
---
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [github.com/oscar-corpus/corpus](https://github.com/oscar-corpus/corpus)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions are available. 168 different languages are covered. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines, and sizes for both the original and the deduplicated versions of OSCAR.
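A minimal sketch of addressing the per-language configurations programmatically. The `deduplicated_<lang>` naming follows the examples in the Data Instances section below; the `original` variant is an assumption based on the summary's statement that data is distributed "in both original and deduplicated form". The commented `load_dataset` call is illustrative only, since loading the real corpus requires network access (and authentication for gated versions):

```python
# Build configuration names for the per-language subcorpora, following
# the "deduplicated_<lang>" pattern used in the Data Instances section.
def config_name(lang: str, deduplicated: bool = True) -> str:
    prefix = "deduplicated" if deduplicated else "original"
    return f"{prefix}_{lang}"

langs = ["af", "als", "am"]
configs = [config_name(code) for code in langs]
print(configs)  # → ['deduplicated_af', 'deduplicated_als', 'deduplicated_am']

# Illustrative only -- requires network access and possibly authentication:
# from datasets import load_dataset
# ds = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_af", split="train")
```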
### Issues
OSCAR 21.09 has known issues regarding specific languages.
Note that other issues may be present in other languages.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
|Language code|Language|Issues|
|-------------|--------|------|
|`tg`|Tajik|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Atg+label%3Aver%3A21.09)|
|`tr`|Turkish|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Atr+label%3Aver%3A21.09)|
|`vls`|West Flemish|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aopen+label%3Alang%3Avls+label%3Aver%3A21.09)|
|`wuu`|Wu Chinese|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Awuu+label%3Aver%3A21.09)|
|`nap`|Neapolitan|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Anap+label%3Aver%3A21.09)|
|`so`|Somali|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Aso+label%3Aver%3A21.09)|
|`frr`|Northern Frisian|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Afrr+label%3Aver%3A21.09)|
|`cbk`|Chavacano|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Acbk+label%3Aver%3A21.09)|
|`sco`|Scots|[](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Asco+label%3Aver%3A21.09)|
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
<details>
<summary>Click to expand the Data/size information for each language (deduplicated)</summary>
#### deduplicated_af
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3287,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:BUOBNDDY3VZKNNUOY33PAWBXEVNDCDJK',
'warc-date': '2021-03-09T04:21:33Z',
'warc-identified-content-language': 'afr,eng',
'warc-record-id': '<urn:uuid:dece1e30-a099-411a-87fd-483791342d48>',
'warc-refers-to': '<urn:uuid:5a35e8b2-0fcb-4600-9d15-f5c6469ddf01>',
'warc-target-uri': 'http://www.northwestnewspapers.co.za/gemsbok/2015-06-18-10-02-17/hoe-om-n-ad-te-plaas/1907-man-betrap-met-jagluiperd-en-leeu-bene',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': 'Stap 2: Tik jou ad in die teks boksie, jy sal sien dat die prys aan '
'die regterkant van die boksie verander volgens di...'}
```
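The nested `meta.headers` block carries WARC provenance for every record. Below is a small sketch of pulling the source URL and the identified content languages out of a record shaped like the example above; the record literal is abbreviated to the fields actually used, and the language codes appear to be comma-separated ISO 639-3 tags:

```python
# Abbreviated record mirroring the structure shown in the example above.
record = {
    "id": 0,
    "meta": {
        "headers": {
            "warc-target-uri": "http://www.northwestnewspapers.co.za/...",
            "warc-identified-content-language": "afr,eng",
            "warc-date": "2021-03-09T04:21:33Z",
        },
        "nb_sentences": 3,
        "offset": 0,
    },
    "text": "Stap 2: Tik jou ad in die teks boksie...",
}

headers = record["meta"]["headers"]
source_url = headers["warc-target-uri"]
# The language field is a comma-separated list of identified languages.
languages = headers["warc-identified-content-language"].split(",")
print(source_url)
print(languages)  # → ['afr', 'eng']
```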
#### deduplicated_als
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4607,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:URQ53Z4I4KGPHICZYLW2ZOX7OWWCGZUA',
'warc-date': '2021-03-03T16:09:20Z',
'warc-identified-content-language': 'deu,eng',
'warc-record-id': '<urn:uuid:134499db-d54a-4c29-9517-350cacc3d29d>',
'warc-refers-to': '<urn:uuid:073aeb77-b4ed-47eb-b955-27031963acf4>',
'warc-target-uri': 'https://als.m.wikipedia.org/wiki/Neukaledonien',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'D Wirtschaft bestoot vor allem us Handwärk, Bärgbau, Industrii und '
'Turismus. 40 Kilometer vo dr Hauptstadt Nouméa äwä...'}
```
#### deduplicated_am
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9679,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:YADJOQVUOQHUKJ7BXCKKU4LRFKE3JPOA',
'warc-date': '2021-03-09T04:16:32Z',
'warc-identified-content-language': 'amh,eng',
'warc-record-id': '<urn:uuid:fa02fe22-c72e-42e8-9cb3-89da85a80941>',
'warc-refers-to': '<urn:uuid:ff89f862-5e6a-41aa-bc40-ef1d2f91d258>',
'warc-target-uri': 'http://ethioforum.ethiopiaforums.com/viewtopic.php?f=6&t=3874&p=6511',
'warc-type': 'conversion'},
'nb_sentences': 10,
'offset': 0},
'text': '(ፍኖተ ነፃነት) በኢትዮጵያ የአዉሮፓ ሕብረት ልኡካን ቡድን መሪ አምባሳደር ቻንታል ሔበሬሽ፣ በአዉሮፓ '
'ሕብረት የአፍሪካ ቀንድ እና የሕንድ ዉቂያኖስ አካባቢ ዴስክ ኦፌሴር ቪክቶሪያ ጋርሲ...'}
```
#### deduplicated_an
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 134014,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OG2T3MJFSLSH33PVI7D3WPXVE6ZFLZ4Z',
'warc-date': '2021-03-08T00:58:33Z',
'warc-identified-content-language': 'ara,fra',
'warc-record-id': '<urn:uuid:0ef1d002-86e7-49c1-ac8a-8ba933d190ee>',
'warc-refers-to': '<urn:uuid:5071f1f7-3350-406d-ad97-f292fe7a2ff0>',
'warc-target-uri': 'http://dorous.ek.la/1-5-a6032874?reply_comm=68653652',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو...'}
```
#### deduplicated_ar
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12677,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:NFDDUGANGSGSFXIQAXEGIVHGRLFCUW55',
'warc-date': '2021-03-04T02:22:39Z',
'warc-identified-content-language': 'ara,eng',
'warc-record-id': '<urn:uuid:3ea1e651-68f3-4dde-bfea-7a12e5331084>',
'warc-refers-to': '<urn:uuid:dcecf9ad-1797-44d0-b06a-010c424ba396>',
'warc-target-uri': 'https://elmgals.net/?p=62804',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'مطحنة الكرة في ماسبات - orioloingeu. مطاحن الفرينة في مطحنة الكرة '
'مراكز بيع الة طحن التوابل بيع ألات لرحي اسعار بيع ا...'}
```
#### deduplicated_arz
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9603,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:6O2LEGAWXAWYSRH2TQNYOWX47ZFWTKRC',
'warc-date': '2021-03-09T03:51:17Z',
'warc-identified-content-language': 'ara',
'warc-record-id': '<urn:uuid:0578411b-367f-4d52-b85c-56b4bb64c0be>',
'warc-refers-to': '<urn:uuid:8777119c-434c-49a1-80a8-f2b23fa0e21c>',
'warc-target-uri': 'https://www.hko-ommen.nl/Nov_01/605.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'مستعملة 4265 كسارات للبيع - كسارة الحجر. كسارات مستعمله للبيع فى '
'مصر. للبيع كسارات فى مصرمطلوب كسارات حجر مستعملة للب...'}
```
#### deduplicated_as
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9280,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DORQKORQ4TURDN35T75TW72IZ7IZIEFG',
'warc-date': '2021-03-03T15:06:57Z',
'warc-identified-content-language': 'asm,eng',
'warc-record-id': '<urn:uuid:fd6c3650-f91f-4f03-ae7a-bea654e043bb>',
'warc-refers-to': '<urn:uuid:48f057d6-f642-42d2-8de1-fec8e4fca4d4>',
'warc-target-uri': 'https://assam.nenow.in/%E0%A6%95%E0%A6%BE%E0%A6%87%E0%A6%B2%E0%A7%88%E0%A7%B0-%E0%A6%AA%E0%A7%B0%E0%A6%BE-%E0%A6%AF%E0%A7%8B%E0%A7%B0%E0%A6%B9%E0%A6%BE%E0%A6%9F%E0%A6%A4-%E0%A6%86%E0%A7%B0%E0%A6%AE%E0%A7%8D%E0%A6%AD/',
'warc-type': 'conversion'},
'nb_sentences': 8,
'offset': 0},
'text': 'যোৰহাট জিলাৰ এন আৰ চি উন্নিতকৰণৰ প্ৰথম পৰ্য্যায়ৰ বংশবৃক্ষ পৰীক্ষণৰ '
'কাম কাইলৈৰ পৰা পৰীক্ষামূলকভাৱে আৰু ১৯ ফেব্ৰুৱাৰিৰ ...'}
```
#### deduplicated_ast
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3752,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:BU44BHPYU2BOWH4TUAY7ZOEBFVQ6KD44',
'warc-date': '2021-03-01T15:56:44Z',
'warc-identified-content-language': 'spa',
'warc-record-id': '<urn:uuid:2b3ca12f-6614-4662-a4e9-16e1ce13a8b0>',
'warc-refers-to': '<urn:uuid:0e132db0-e0f4-44c5-ab63-48b7594a35a6>',
'warc-target-uri': 'https://elsummum.es/tag/dial-traxel-pais/',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Esta ye la galería d’imáxenes de los participantes nel concursu, el '
'xuráu y dellos miembros de la organización de la ...'}
```
#### deduplicated_av
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2012,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:EULKS66PQCWWVXHNRPSISI72G3GFJD7L',
'warc-date': '2021-03-01T10:13:53Z',
'warc-identified-content-language': 'rus,eng',
'warc-record-id': '<urn:uuid:c2986179-7947-4184-9df5-dca05c987055>',
'warc-refers-to': '<urn:uuid:8b3e82e1-0964-4677-8b39-9bd3c67be25b>',
'warc-target-uri': 'http://gazetalevashi.ru/articles/media/2019/10/25/diktant-tiobitiana/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Дагъистаналъул жамгIият рахьдал мацIал цIуниялде ва '
'церетIезариялде, тарих, гIадатал, маданият ва дагъистаналъул '
'халк...'}
```
#### deduplicated_az
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 59868,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:LDASIZ5NDJU6NRCJW7XCCI4QRLFIZZQX',
'warc-date': '2021-02-26T04:13:32Z',
'warc-identified-content-language': 'aze',
'warc-record-id': '<urn:uuid:a35cc521-926e-442d-b285-299ea4a3b72a>',
'warc-refers-to': '<urn:uuid:b60fd7ea-7056-4ebb-8ae5-eb02617ca8cd>',
'warc-target-uri': 'https://azrefs.org/iqtisadi-tesebbuslere-yardim-ictimai-birliyi-yerli-seviyyede-i.html',
'warc-type': 'conversion'},
'nb_sentences': 70,
'offset': 0},
'text': 'İQTİsadi TƏŞƏBBÜSLƏRƏ yardim iCTİMAİ BİRLİYİ Yerli səviyyədə içməli '
'su təchizatı sisteminin idarə olunması\n'
'Az1009, Az...'}
```
#### deduplicated_azb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5245,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XWTKHZGKVJI6ZAIKSTOA4AOP5PCWI2SH',
'warc-date': '2021-03-05T13:35:27Z',
'warc-identified-content-language': 'fas,uzb,eng',
'warc-record-id': '<urn:uuid:41816fd7-985e-4e35-b79b-bf471e68dd80>',
'warc-refers-to': '<urn:uuid:5717a90d-021c-428b-a69d-45d6cb2fc692>',
'warc-target-uri': 'https://azb.wikipedia.org/wiki/%D8%A2%D9%85%D8%B3%D8%AA%D8%B1%D8%AF%D8%A7%D9%85_%D8%A8%DB%8C%D9%84%DB%8C%D9%85%E2%80%8C%DB%8C%D9%88%D8%B1%D8%AF%D9%88',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'یازی Creative Commons Attribution-ShareAlike '
'License;آلتیندا\u200cدیر آرتیق شرطلر آرتیریلا بیلر. آرتیق ایطلاعات '
'اوچون ایشل...'}
```
#### deduplicated_ba
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9444,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:NRTIKDSYAPTPQ64CKKLNR6TFVUYG7CLR',
'warc-date': '2021-03-09T04:46:56Z',
'warc-identified-content-language': 'uig,eng',
'warc-record-id': '<urn:uuid:b69f43f4-0e19-4cad-b083-fce91a40f64b>',
'warc-refers-to': '<urn:uuid:3176da53-14ff-4f65-91e4-4d209e9c7190>',
'warc-target-uri': 'https://uyghurix.net/archives/date/2016/05?uls=us',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': 'линакис системисиниң көрүнмә йүзи барғансери ишлитишкә қулайлиқ '
'болуп, кәң ишлитиливатқан болсиму, әмили хизмәттә йән...'}
```
#### deduplicated_bar
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 105623,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:L7EXHEWTVKPV7BWPZJFKHM2TZ3ZNKPWC',
'warc-date': '2021-03-07T18:33:16Z',
'warc-identified-content-language': 'fra',
'warc-record-id': '<urn:uuid:578af8ce-2149-42e3-978c-5191caaaca8c>',
'warc-refers-to': '<urn:uuid:a7afc792-983c-43b7-9b5b-75b2dc5fcd77>',
'warc-target-uri': 'https://fr.readkong.com/page/automne-hiver-printemps-2017-8342349',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': ' '
'vo\n'
' ...'}
```
#### deduplicated_be
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3159,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TEJML7M4S55254DZU43DXXORKPZMKGUL',
'warc-date': '2021-03-09T05:47:09Z',
'warc-identified-content-language': 'bel,eng',
'warc-record-id': '<urn:uuid:e22883c9-5622-4a0e-b259-b5265e6e345a>',
'warc-refers-to': '<urn:uuid:7ec2102d-2645-4fd9-89b8-557762996439>',
'warc-target-uri': 'https://be-tarask.wikipedia.org/wiki/%D0%9A%D0%B0%D1%82%D1%8D%D0%B3%D0%BE%D1%80%D1%8B%D1%8F:%D0%9F%D1%80%D1%8D%D1%81%D0%BD%D0%B0%D1%8F_%D0%B2%D0%B0%D0%B4%D0%B0',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Гэты тэкст даступны на ўмовах ліцэнзіі Creative Commons '
'Attribution/Share-Alike 3.0; у асобных выпадках могуць ужывац...'}
```
#### deduplicated_bg
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 23651,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:QDAV5ZVRR2IGND4ANWTVOBPNO2POZUEQ',
'warc-date': '2021-03-08T21:47:20Z',
'warc-identified-content-language': 'bul',
'warc-record-id': '<urn:uuid:0e422a1d-ac8c-4f21-bb71-e5c65282f30c>',
'warc-refers-to': '<urn:uuid:0109dba6-8f1a-4047-bdd5-cbcc38de63a8>',
'warc-target-uri': 'http://europe.bg/bg/bulgariya-poluchava-resor-inovacii-i-mladezh',
'warc-type': 'conversion'},
'nb_sentences': 37,
'offset': 0},
'text': 'От хилядите кубинци и другите граждани на страните от СИВ, '
'командировани на строежа на АЕЦ-а, в Белене е останал само...'}
```
#### deduplicated_bh
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9021,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:IN7PHDOP7MZD6RHN6KIJ7SXTY7VC76SK',
'warc-date': '2021-03-08T22:57:31Z',
'warc-identified-content-language': 'hin,eng',
'warc-record-id': '<urn:uuid:62e18c96-cd2c-461b-93d9-900d95eec89e>',
'warc-refers-to': '<urn:uuid:73ee6388-6f0a-460d-ac2e-bbc1a2b63bb4>',
'warc-target-uri': 'https://bh.wikipedia.org/wiki/%E0%A4%B6%E0%A5%8D%E0%A4%B0%E0%A5%87%E0%A4%A3%E0%A5%80:%E0%A4%B5%E0%A4%BF%E0%A4%95%E0%A4%BF%E0%A4%AA%E0%A5%80%E0%A4%A1%E0%A4%BF%E0%A4%AF%E0%A4%BE_%E0%A4%97%E0%A5%88%E0%A4%B0-%E0%A4%AE%E0%A5%81%E0%A4%95%E0%A5%8D%E0%A4%A4_%E0%A4%AB%E0%A4%BE%E0%A4%87%E0%A4%B2_%E0%A4%B5%E0%A5%88%E0%A4%A7_%E0%A4%AC%E0%A5%88%E0%A4%95%E0%A4%B2%E0%A4%BF%E0%A4%82%E0%A4%95_%E0%A4%95%E0%A5%87_%E0%A4%B8%E0%A4%BE%E0%A4%A5?from=Ea',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ई एगो छुपावल गइल श्रेणी बाटे। ई पन्ना सभ पर तबले ना लउकी जबले कि '
'प्रयोगकर्ता के सेटिंग, छुपावल गइल श्रेणी देखावे खाति...'}
```
#### deduplicated_bn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 36198,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:7QRYGJ3YDG7SBTFUVMMALFA6UWNDVLVY',
'warc-date': '2021-03-05T07:10:58Z',
'warc-identified-content-language': 'ben',
'warc-record-id': '<urn:uuid:050c0cdb-562c-49e5-bcb6-7e5350531ea6>',
'warc-refers-to': '<urn:uuid:a3749b59-4285-4e90-ba64-aa9d745c1f46>',
'warc-target-uri': 'https://www.kalerkantho.com/online/business/2020/12/06/982949',
'warc-type': 'conversion'},
'nb_sentences': 8,
'offset': 0},
'text': 'নিজস্ব সংবাদদাতা: গাড়ি নয় যেন মানুষের খাঁচা। নেই কোন ভালো বসার '
'আসন, যা আছে সেগুলো ভাঙ্গাচুরা, ময়লা ও ধুলাবালিতে ভর...'}
```
#### deduplicated_bo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5059,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XHKOQL5IQBLCVBANFVH66ZZXJZHEEMYW',
'warc-date': '2021-03-03T15:06:26Z',
'warc-identified-content-language': 'zho,bod',
'warc-record-id': '<urn:uuid:3a406f8f-58cd-4990-ae6f-f63dff7e06e3>',
'warc-refers-to': '<urn:uuid:806c4a11-f8cd-49e8-bc22-cae5e0cf6ef2>',
'warc-target-uri': 'http://tcansee.com/goods.php?id=392',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '所有分类 藏学名家名著 国内名家名著 国外名家名著政治 社会 法律 政治 法律 社会 经济文学 艺术 旅游 艺术 文学 旅游宗教 历史 '
'文化 宗教 历史 文化教育 童书 工具书 教辅 童书 工具书语言文字 语言研究 语言 文字期刊 社...'}
```
#### deduplicated_bpy
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8270,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:POHCGWDC32KW74IE26NTJ2UMNX7QRBDB',
'warc-date': '2021-03-05T14:00:16Z',
'warc-identified-content-language': 'ben',
'warc-record-id': '<urn:uuid:d53007ee-ddbe-44e9-8253-235567d2960c>',
'warc-refers-to': '<urn:uuid:0409ce75-26bc-4a60-b08d-4e2b6174127e>',
'warc-target-uri': 'http://pobnapurup.gaibandha.gov.bd/site/page/5dc0a075-18fd-11e7-9461-286ed488c766/%E0%A6%95%E0%A6%BE%E0%A6%B0%E0%A7%8D%E0%A6%AF%E0%A6%BE%E0%A6%AC%E0%A6%B2%E0%A7%80',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'পবনাপুর ইউনিয়ন---কিশোরগাড়ী ইউনিয়নহোসেনপুর ইউনিয়নপলাশবাড়ী '
'ইউনিয়নবরিশাল ইউনিয়নমহদীপুর ইউনিয়নবেতকাপা ইউনিয়নপবনাপুর ইউনিয়...'}
```
#### deduplicated_br
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3134,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:U353JBWLMC22GRYEIDN4WOSBUOIUMYQT',
'warc-date': '2021-02-24T21:00:25Z',
'warc-identified-content-language': 'bre',
'warc-record-id': '<urn:uuid:49d1650d-aaf5-43b9-b340-326746e88b31>',
'warc-refers-to': '<urn:uuid:04877e5f-6b86-497e-b39c-30a72683261f>',
'warc-target-uri': 'https://br.m.wiktionary.org/wiki/dont',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Sellet e vez ouzh ar bajenn pe ar gevrenn-mañ evel un divraz da '
'glokaat e brezhoneg. Mar gouezit tra pe dra diwar-ben...'}
```
#### deduplicated_bs
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8483,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:HS77KGP5HJKJASHMW6WSYV326BPGVM35',
'warc-date': '2021-02-24T18:13:58Z',
'warc-identified-content-language': 'bos,hrv',
'warc-record-id': '<urn:uuid:c12f1b14-4194-405e-a059-9af2f7146940>',
'warc-refers-to': '<urn:uuid:31bedcb4-265f-4aa3-8d2c-cfdc64c42325>',
'warc-target-uri': 'http://mojusk.ba/zastrasujuce-slike-tamnice-u-kojoj-je-skolski-domar-silovao-12-godisnjakinju/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Predsjednica Evropske centralne banke Christine Lagarde izjavila je '
'da njen najveći strah nije da će Evropska...'}
```
#### deduplicated_bxr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6751,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:RELUZWSMYT63FAPLHP55SMNNCSXIQEDX',
'warc-date': '2021-02-26T07:18:33Z',
'warc-identified-content-language': 'mon,rus',
'warc-record-id': '<urn:uuid:efe8d9fa-4329-4479-aa56-43938e8e5370>',
'warc-refers-to': '<urn:uuid:bba3bfb2-b7c7-4605-9f49-34598eac9a5b>',
'warc-target-uri': 'http://soyol.ru/bur/yoho-zanshal/hoityn/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Хүнэй бэе мүнхэ бэшэ. Һүнэһэнэй бэеымнай орхижо, түрэлөө '
'урилхадань, тэрэнэй хальһан боложо ябаһан бэемнай үхэнэ, газ...'}
```
#### deduplicated_ca
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 30591,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DJYNCXSBI5JH4V3LKGE7YNQBL34E3W5G',
'warc-date': '2021-03-02T21:39:28Z',
'warc-identified-content-language': 'cat,eng',
'warc-record-id': '<urn:uuid:ec350f95-900b-4164-aab3-8a6451228d5b>',
'warc-refers-to': '<urn:uuid:4c8e31b8-3011-4a21-9591-39be0942e121>',
'warc-target-uri': 'https://ca.m.wikipedia.org/wiki/Regne_d%27Ayutthaya',
'warc-type': 'conversion'},
'nb_sentences': 33,
'offset': 0},
'text': "El regne d'Ayutthaya va ser un estat a Tailàndia que va existir de "
'1351 a 1767 governat per un rei. El rei Rāmadhipat...'}
```
#### deduplicated_cbk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 151273,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:JCULI5BTSXOFUJYKZPPLMU5BZEZJZEVJ',
'warc-date': '2021-03-04T21:00:26Z',
'warc-identified-content-language': 'ita',
'warc-record-id': '<urn:uuid:ca25bd6b-9a5f-41b5-8b0f-ad437a545cee>',
'warc-refers-to': '<urn:uuid:ac67c26c-c62a-4c3d-9bd9-dd66a78a474f>',
'warc-target-uri': 'https://it.readkong.com/page/note-di-un-anno-di-lavoro-plural-3281543',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': ' '
'na '
'...'}
```
#### deduplicated_ce
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5944,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:AXGWUWKZ5HO42LSEO32HWLT77MATHGXB',
'warc-date': '2021-03-03T14:41:28Z',
'warc-identified-content-language': 'eng',
'warc-record-id': '<urn:uuid:1333c910-7921-4bdd-9bb9-1a8322dfa74b>',
'warc-refers-to': '<urn:uuid:9e976ac2-74e4-4e30-8c49-12f2dc1c257c>',
'warc-target-uri': 'https://www.radiomarsho.com/a/27368811.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Апти Бисултанов вина 1959 шарахь. Апти -- гоьваьлла нохчийн '
'кхузаманахьлера байтанча ву. 1983 шарахь цо чекхъяккхира ...'}
```
#### deduplicated_ceb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8799,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:GSVQUFRLD3BYXEG2ASAEVHR2IH4D7A2S',
'warc-date': '2021-03-09T04:28:21Z',
'warc-identified-content-language': 'ceb,eng',
'warc-record-id': '<urn:uuid:e53f5344-29f5-4e59-8dac-8fdc92d1758f>',
'warc-refers-to': '<urn:uuid:03c0e7e5-b84c-4205-80cc-c3fb3dc82406>',
'warc-target-uri': 'https://www.safesworld.com/ceb/safewell-17ef-small-combination-lock-digital-safe-box-with-electronic-combination.html',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': '17EF SERYE Talagsaong design ug madanihon nga kolor naghimo 17EF '
'popular nga sa taliwala sa mga anak ug mga babaye, k...'}
```
#### deduplicated_ckb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8668,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XZOIJPSX5QTL5QQPQMXEVADFHZTXMP5I',
'warc-date': '2021-03-09T03:25:59Z',
'warc-identified-content-language': 'kur,eng',
'warc-record-id': '<urn:uuid:9fe2f7e9-c158-4b84-a4a3-24e51acbd69e>',
'warc-refers-to': '<urn:uuid:14902cc0-948b-4dcf-bde6-e687ba41212f>',
'warc-target-uri': 'https://www.dastihawkary.org/blog/portfolio/social-harms-of-drugs/?lang=en',
'warc-type': 'conversion'},
'nb_sentences': 9,
'offset': 0},
'text': 'وەبیرم دێ\u200c لە كۆتایی هەشتاكانی سەدەی ڕابردوو دیاردەیەك هەبوو '
'لەنێو گەنجە لادەرەكانی شاری هەولێر و سەرشەقام هەڵدەستان ...'}
```
#### deduplicated_cs
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17263,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:EJZ477E7PWMVVVM777MHB5DMDHVYEWK6',
'warc-date': '2021-03-05T11:28:42Z',
'warc-identified-content-language': 'ces',
'warc-record-id': '<urn:uuid:6fc03e7f-9768-4f26-89ce-84fa4732e3c0>',
'warc-refers-to': '<urn:uuid:d78128e5-f667-4461-9f0c-2263d75b74a1>',
'warc-target-uri': 'https://www.lidovky.cz/relax/dobra-chut/mak-a-svestky-vyzkousejte-makovec-podle-romana-pauluse.A150427_125913_dobra-chut_ape?recommendationId=00000000-0000-5000-8000-000000000000',
'warc-type': 'conversion'},
'nb_sentences': 12,
'offset': 0},
'text': 'Porno motor vyhledávání o nové sedlo masáž se svou. pro měkký sex '
'voda učitelka kočička videa stránky Starý pár sex n...'}
```
#### deduplicated_cv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4133,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FKR5EKWIFACLGBIK6IKLHTHDNTEZNF3T',
'warc-date': '2021-03-03T14:25:27Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:8140dbf0-2fb0-48d8-a834-c1b052bcc72d>',
'warc-refers-to': '<urn:uuid:cca433fe-6646-4ab7-b5da-f8e17821b43d>',
'warc-target-uri': 'http://chuv-krarm.3dn.ru/blog/vladimir_leontev_savna_masharam_emer_perle_purnar_i/2013-02-08-47',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Сайт авторĕ тата модераторĕ- Михайлов Алексей, Чăваш Республикин '
'Президенчĕн 2010,2012 çулсенчи стипендиачĕ, Сайт адм...'}
```
#### deduplicated_cy
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1967,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:RNFNJNY7RHGXN5NPEVF2PYNNIWOTDAMJ',
'warc-date': '2021-03-09T03:48:16Z',
'warc-identified-content-language': 'cym,eng',
'warc-record-id': '<urn:uuid:66f063ba-6a33-4f53-9cfb-7dc64a292e89>',
'warc-refers-to': '<urn:uuid:281f9c10-2d7d-4781-82f6-a504f27852a1>',
'warc-target-uri': 'https://cy.wikipedia.org/wiki/John_T._Koch',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Graddiodd o Brifysgol Harvard, gan gymeryd doethuriaeth mewn '
'Ieithoedd a Llenyddiaethau Celtaidd yn 1985. Bu hefyd yn...'}
```
#### deduplicated_da
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 22154,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:AF2FFBNZQ3TOEEZ3MFDU77CXZ6PVU3ZB',
'warc-date': '2021-03-01T12:49:13Z',
'warc-identified-content-language': 'dan',
'warc-record-id': '<urn:uuid:92fffabd-5d36-4539-b8eb-18a0f2554ddb>',
'warc-refers-to': '<urn:uuid:1970d6bb-474f-448b-a3e1-8a77c9a32cb6>',
'warc-target-uri': 'http://rosamundis.dk/thai-horsens-gode-parfumer-til-m%C3%A6nd/',
'warc-type': 'conversion'},
'nb_sentences': 16,
'offset': 0},
'text': 'Mange praler af den sindsro, de har fundet i huler i det '
'norske/forfaldne franske ferielejligheder etc., hvor de har ...'}
```
#### deduplicated_de
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 11180,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:LLCPCA3RGKMXLYUEA3OZ2KFEEBNEOPE2',
'warc-date': '2021-03-09T01:22:52Z',
'warc-identified-content-language': 'eng,deu',
'warc-record-id': '<urn:uuid:0128ab60-86c8-4dc2-b1cf-57950654ae38>',
'warc-refers-to': '<urn:uuid:ff27032b-b843-4ba3-b1e2-377793173071>',
'warc-target-uri': 'http://bioconcepts.de/views/search.php?term=231&listed=y',
'warc-type': 'conversion'},
'nb_sentences': 16,
'offset': 0},
'text': 'Kreismeisterschaften bringen zahlreiche Sunderner Medaillengewinner '
'und Titelträger - Tischtennis im Sauerland\n'
'Am ver...'}
```
#### deduplicated_diq
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4196,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DTA56M722SM5BZLNADOCPXQGGT32J46O',
'warc-date': '2021-03-06T15:51:03Z',
'warc-identified-content-language': 'tur,srp,nno',
'warc-record-id': '<urn:uuid:b7dcd4a4-b130-4009-88d0-631ca51a7bcc>',
'warc-refers-to': '<urn:uuid:fe4e4ad7-3089-40d2-aa29-f675e3cea0dd>',
'warc-target-uri': 'https://diq.wikipedia.org/wiki/Z%C4%B1wan%C3%AA_Slawki',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê '
'Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki be...'}
```
#### deduplicated_dsb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 20663,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:WWZOAFJJLJ4OHG2PTVLCMP664OR26XCR',
'warc-date': '2021-02-27T22:03:14Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:239b7155-8f37-4889-bad8-5bdb0aaa83c2>',
'warc-refers-to': '<urn:uuid:2714b744-a080-4807-a29a-d8f99c80e49c>',
'warc-target-uri': 'https://dsb.m.wikipedia.org/wiki/P%C5%9Bed%C5%82oga:LocMap',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Mjaz tamnjejšej pśedłogu a </noinclude>-kodom mógu pśidatne '
'kategorije a cuzorěcne wótkaze stojaś. Ewentualne pśikład...'}
```
#### deduplicated_dv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7923,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:ECFUNRNYICXFAZXP5TLM45DPGJX5AHOI',
'warc-date': '2021-02-24T19:53:40Z',
'warc-identified-content-language': 'div,eng',
'warc-record-id': '<urn:uuid:23e2557a-dacc-428c-99fc-e41d4ce2ed95>',
'warc-refers-to': '<urn:uuid:067b6719-0209-49df-8198-27b1954b61b4>',
'warc-target-uri': 'https://dhiislam.com/114288',
'warc-type': 'conversion'},
'nb_sentences': 7,
'offset': 0},
'text': 'މީސްތަކުންގެ ފިކުރާއި ކުޅެލުމަށްޓަކައި މިޒަމާނުގެ ވަސީލަތްތަކުގެ '
'ބޭނުން އެންމެ ރަނގަޅު ގޮތުގައި ހިފަމުންދޭ: ޝެއިޚް ފި...'}
```
#### deduplicated_el
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12604,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2LXNVVGR3C4G72RLJUJBKUWLZZJ53TPX',
'warc-date': '2021-03-03T11:34:34Z',
'warc-identified-content-language': 'ell,eng',
'warc-record-id': '<urn:uuid:d95ddbe8-2e54-4d61-a6af-227212090684>',
'warc-refers-to': '<urn:uuid:a0e15450-8455-4b2f-ad8f-3858873a538d>',
'warc-target-uri': 'https://www.androsportal.gr/category/topika/nea-syllogwn/',
'warc-type': 'conversion'},
'nb_sentences': 18,
'offset': 0},
'text': 'Η ραδιοφωνική διαφήμιση χαρακτηρίζεται από αμεσότητα και οικειότητα '
'λόγω της στενής σχέσης του μέσου με τους ακροατές...'}
```
#### deduplicated_eml
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 11710,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OM2W34UTSIJJHAEXEX42BYMZWBB7U3FS',
'warc-date': '2021-03-05T23:48:29Z',
'warc-identified-content-language': 'ita',
'warc-record-id': '<urn:uuid:26a267af-a6de-4e84-b945-411b78b4815a>',
'warc-refers-to': '<urn:uuid:656aaba2-ff1d-4d7c-915a-9a555533aa42>',
'warc-target-uri': 'https://eml.wikipedia.org/wiki/2_(n%C3%B9mer)',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': "Al 2 'l è al prim nùmer prim ed tùta la séri ch'a s cata in di "
"nùmer naturèl e anc 'l ùnic ch'al sìa pèra:\n"
"Insèm a 'l..."}
```
#### deduplicated_en
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 15201,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:EIQTEGOE4V5SDID2OLTO4PWWCTW3AD5H',
'warc-date': '2021-03-03T18:20:30Z',
'warc-identified-content-language': 'eng',
'warc-record-id': '<urn:uuid:7cec445b-76fe-4ce2-ab43-8a85de680c6f>',
'warc-refers-to': '<urn:uuid:1cf845b2-3015-4f01-abaf-262af4adeba5>',
'warc-target-uri': 'https://www.aqueencitysound.com/2016/05',
'warc-type': 'conversion'},
'nb_sentences': 28,
'offset': 0},
'text': 'But the term “extension” also means lengthening. EkhartYoga members '
'can get to k… Renforcement du dos (muscles para-v...'}
```
#### deduplicated_eo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 27953,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:YO4NP6746IFQDF5KISEPLNFA2QD3PTEO',
'warc-date': '2021-03-09T05:29:46Z',
'warc-identified-content-language': 'epo,eng',
'warc-record-id': '<urn:uuid:5e3bc7b3-723f-4de9-8202-790351a2253f>',
'warc-refers-to': '<urn:uuid:dd5e537a-f340-4418-bc07-487232ea197c>',
'warc-target-uri': 'http://kantaro.ikso.net/cxu?image=kis_kut.png&ns=&tab_details=view&do=media',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Iloj Montri paĝonMalnovaj reviziojRetroligoj Freŝaj '
'ŝanĝojMedio-administriloIndekso RegistriĝiEnsaluti'}
```
#### deduplicated_es
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8322,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DXIQKIWES4PP64BTGK5BYTJ3TX4RVQSI',
'warc-date': '2021-03-03T23:27:45Z',
'warc-identified-content-language': 'spa,eng',
'warc-record-id': '<urn:uuid:4275a14a-f997-4e58-8cf6-046006d76dab>',
'warc-refers-to': '<urn:uuid:d54d1a7b-1316-4bd1-8147-7a44ec5b3803>',
'warc-target-uri': 'https://www.rcrperu.com/defensoria-del-pueblo-oficina-en-lima-sur-registro-mas-de-3000-casos-durante-el-2020/',
'warc-type': 'conversion'},
'nb_sentences': 7,
'offset': 0},
'text': 'Se prevé que a finales de mes haya llegado al 92,5 por ciento de '
'los centros, aquellos en los que no hay confirmados ...'}
```
#### deduplicated_et
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 57234,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:JU7SWP3ZS36M3ABAEPNTFH37MVI2SLAF',
'warc-date': '2021-02-24T20:43:43Z',
'warc-identified-content-language': 'est',
'warc-record-id': '<urn:uuid:2bbcaa39-7336-4ade-accf-1b582785f731>',
'warc-refers-to': '<urn:uuid:849563c9-8549-4bdc-a09c-d179c8399ae0>',
'warc-target-uri': 'https://cardiaccareclinic.com/chto-luchshe-panangin-ili-kardiomagnil.html',
'warc-type': 'conversion'},
'nb_sentences': 129,
'offset': 0},
'text': 'Kas hirmu ei pruugi tekitada hoopis segadus? Näiteks võtame Ukraina '
'kogemuse. Järsku ilmusid välja lindikestega mehed...'}
```
#### deduplicated_eu
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4248,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:STDEJOH35DPN5UB52OUZJJC4YCN7EH3N',
'warc-date': '2021-03-09T05:11:48Z',
'warc-identified-content-language': 'spa,eus',
'warc-record-id': '<urn:uuid:fb6752f7-5e91-4d0c-b022-71bd5d3ce910>',
'warc-refers-to': '<urn:uuid:faca7a42-20c2-4c4c-bd8a-6d4be5a1adb6>',
'warc-target-uri': 'http://intermedia.eus/la-comunicacion-imprescindible-lo-que-no-debemos-olvidar-de-2015-resumido-en-447/',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Nesken artean bokazio zientifikoak eta teknologikoak sustatzeko '
'INSPIRA STEAM proiektua ia 120 ikastetxetako 5.000 ik...'}
```
#### deduplicated_fa
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 10411,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VM7Q7TXNMU2SRNHFJSZMBCKU2YVRKI56',
'warc-date': '2021-03-02T11:23:27Z',
'warc-identified-content-language': 'fas',
'warc-record-id': '<urn:uuid:9f666d03-9592-4f59-9111-981a558b3a32>',
'warc-refers-to': '<urn:uuid:8daf3dc1-92dd-4dbf-a339-992c99f09112>',
'warc-target-uri': 'https://zhycan.com/concough/blog/%D9%86%D8%AD%D9%88%D9%87-%D8%AB%D8%A8%D8%AA-%D9%86%D8%A7%D9%85-%DA%A9%D9%86%DA%A9%D9%88%D8%B1-%D8%AF%DA%A9%D8%AA%D8%B1%DB%8C-97-%D8%A7%D8%B9%D9%84%D8%A7%D9%85-%D8%B4%D8%AF-%D8%A7%D9%85/',
'warc-type': 'conversion'},
'nb_sentences': 16,
'offset': 0},
'text': 'انجمن دانشجویان پیام نور تبليغات تماس با ما تبلیغات دسته بندی باز / '
'بسته کردن دسته بندی ها . شرایط اختصاصی برای شغل د...'}
```
#### deduplicated_fi
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 19216,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:5OUEZDSL7KB2VHT2R67YZDER6UO5FHON',
'warc-date': '2021-03-05T00:14:23Z',
'warc-identified-content-language': 'fin,eng',
'warc-record-id': '<urn:uuid:61e0fc42-ceee-4026-ba76-3c8a8addd596>',
'warc-refers-to': '<urn:uuid:c4ba3c9f-5a6c-4de5-8f77-f5beb547315c>',
'warc-target-uri': 'https://kreditassms.eu/arvostelut-treffisivusto-py%C3%B6re%C3%A4-tanssi/',
'warc-type': 'conversion'},
'nb_sentences': 46,
'offset': 0},
'text': 'Facebook ulkomaiset morsiamet fantasia lähellä lohja mistä pillua '
'porno leffat sex treffit karvaiset tussut Thai mass...'}
```
#### deduplicated_fr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5274,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XUVXOZU2BIT4TIDEVHLLBLUIHRS4L7WV',
'warc-date': '2021-03-03T14:00:24Z',
'warc-identified-content-language': 'fra,eng',
'warc-record-id': '<urn:uuid:76252d00-9672-479c-9580-722614e078f9>',
'warc-refers-to': '<urn:uuid:4a6bde1e-9596-4388-9334-cc473a7c93ee>',
'warc-target-uri': 'https://www.cahier-des-charges.net/produit/modele-cahier-des-charges-de-logiciel-de-gestion-de-processus-metier/',
'warc-type': 'conversion'},
'nb_sentences': 9,
'offset': 0},
'text': 'Créée en 1765 par le duc de Villars, alors gouverneur de Provence, '
'l’École supérieure d’art d’Aix en Provence est un ...'}
```
#### deduplicated_frr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 27381,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DJE2KO4YWWRERKS5JYSK5JCJWYZ6DJHM',
'warc-date': '2021-03-01T03:40:10Z',
'warc-identified-content-language': 'ell',
'warc-record-id': '<urn:uuid:3a2a34ae-1c42-4d2e-bb08-8dabc916ea30>',
'warc-refers-to': '<urn:uuid:caeb39b2-da76-463d-b80c-4917d3dca230>',
'warc-target-uri': 'https://www.sedik.gr/neo/el/%CE%B1%CF%81%CF%87%CE%B5%CE%AF%CE%BF-%CE%B5%CE%BB%CE%B1%CE%B9%CE%BF%CE%BD%CE%AD%CF%89%CE%BD/%CE%B1%CF%81%CF%87%CE%B5%CE%AF%CE%BF-%CE%B5%CE%BB%CE%B1%CE%B9%CE%BF%CE%BD%CE%AD%CF%89%CE%BD-2009/178-178-title',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ '
'’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’...'}
```
#### deduplicated_fy
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1807,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:JABSHFJ2L6SQOXPPTBYGZGR24GCEDTTM',
'warc-date': '2021-03-09T04:24:30Z',
'warc-identified-content-language': 'fry',
'warc-record-id': '<urn:uuid:fd1b28cb-20ce-4082-b1ca-40045ed6af73>',
'warc-refers-to': '<urn:uuid:bc50e1f0-6384-4054-8916-2a489e9a0ffd>',
'warc-target-uri': 'https://www.omropfryslan.nl/nijs/201805-gruttere-lisboksstal-tastien',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Melkfeehâlders yn Súdwest-Fryslân kinne tenei makliker '
"lisboksstâlen fergrutsje no't de gemeente de lanlike wet op st..."}
```
#### deduplicated_ga
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3296,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:WF6SCFDXN3NOT7FPKTEFOAMMPKXSEZ2W',
'warc-date': '2021-03-09T04:37:11Z',
'warc-identified-content-language': 'gle',
'warc-record-id': '<urn:uuid:bff39289-dbf7-444c-8df1-382fd46c993d>',
'warc-refers-to': '<urn:uuid:e27ba1c5-5707-4e9f-8ba8-f42c67bd9fc9>',
'warc-target-uri': 'http://nos.ie/cultur/iarratais-a-lorg-don-slam-filiochta-agus-duaischiste-700-ann-i-mbliana/',
'warc-type': 'conversion'},
'nb_sentences': 6,
'offset': 0},
'text': 'Tá duaischiste £700 ar fáil do Slam Filíochta Liú Lúnasa a bheidh '
'ar siúl ar líne ag deireadh na míosa seo chugainn. ...'}
```
#### deduplicated_gd
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7659,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OO363HOO6EDDYSBTTYB6H4WYAJBBMJ6D',
'warc-date': '2021-03-03T15:22:11Z',
'warc-identified-content-language': 'gla',
'warc-record-id': '<urn:uuid:e24cc86f-ae2c-49f6-b668-cda4f514a34d>',
'warc-refers-to': '<urn:uuid:1739d2d8-974d-4c29-b8d0-3a3ef9082537>',
'warc-target-uri': 'http://gd.cnswmc.com/ty320-3-bulldozer-product/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Tha inneal-brathaidh TY320-3 crochte leth-chruaidh, gluasad '
'uisgeachaidh, inneal tarbh fo smachd seòrsa hydraulic. Ta...'}
```
#### deduplicated_gl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4202,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TIH7ARF4FNLH7VRGHXKOWVHNXNXC2HZX',
'warc-date': '2021-03-09T04:47:46Z',
'warc-identified-content-language': 'glg',
'warc-record-id': '<urn:uuid:983dd790-0846-4232-a7b4-3956af0982a8>',
'warc-refers-to': '<urn:uuid:b77207af-29d0-459f-9a55-0b25501d3e8b>',
'warc-target-uri': 'http://concellomuxia.com/item/outras-capelas/',
'warc-type': 'conversion'},
'nb_sentences': 8,
'offset': 0},
'text': 'O templo actual é producto de diversas reconstrucións que se '
'realizaron a finais do século XVII e principios do XVIII...'}
```
#### deduplicated_gn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3873,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FWN62CTWNJKPWUARS4BMBUFU6OVHL6XP',
'warc-date': '2021-02-27T22:49:49Z',
'warc-identified-content-language': 'grn,eng,bih',
'warc-record-id': '<urn:uuid:b4954ced-abe0-487e-b5b0-a26beb751a02>',
'warc-refers-to': '<urn:uuid:be5468f1-47f0-4bd8-a177-3529a14dead7>',
'warc-target-uri': 'https://gn.wikipedia.org/wiki/Apere%27arusu',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Ko ñe\'ẽ "apere\'arusu" ou avañe\'ẽ ñe\'ẽngue "apere\'a" he\'ise '
'India Tapiti, ha avañe\'ẽ ñe\'ẽngue "rusu" he\'iséva iguasúva.'}
```
#### deduplicated_gom
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8747,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CKNSFAH2KISLLR7222FSQSPENYHQTAX3',
'warc-date': '2021-03-01T11:10:29Z',
'warc-identified-content-language': 'mar',
'warc-record-id': '<urn:uuid:d4622a3e-1b0e-4775-b25d-273ee14ae176>',
'warc-refers-to': '<urn:uuid:9d00e57b-9031-4f86-a9c8-cc3c0c2213a7>',
'warc-target-uri': 'https://gom.m.wikipedia.org/wiki/%E0%A4%B5%E0%A5%80%E0%A4%9C',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'कांय वस्तू रगडल्यो तर तांचेकडेन हलक्यो वस्तू आकर्शित जाता हेंजेन्ना '
'पळयलें तेन्ना वीज हे ऊर्जेची कल्पना मनशाक आयली.हे...'}
```
#### deduplicated_gu
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 15036,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
'warc-date': '2021-03-09T04:48:08Z',
'warc-identified-content-language': 'eng,khm,lao',
'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ધ્યાન આપો: જો તમે ગુજરા તી બોલતા હો, તો તમને ભા ષા કીય સહાય તા સેવા '
'ઓ વિ ના મૂલ્યે ઉપલબ્ધ છે. તમા રા આઈડી કાર ્ડ પર આ...'}
```
#### deduplicated_gv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 29707,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TIDW47D4MAHOLY6PQZ5SHLDYQIJ66REQ',
'warc-date': '2021-03-06T18:16:22Z',
'warc-identified-content-language': 'glv,eng',
'warc-record-id': '<urn:uuid:c7a5e531-487b-4e52-96ca-33b658691652>',
'warc-refers-to': '<urn:uuid:fa7285d4-126c-458f-9a72-d0d8615ce494>',
'warc-target-uri': 'https://gv.wikipedia.org/wiki/%C3%87hengoaylleeaght',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Ta çhengoaylleeaght feamagh eiyrt er sheiltynyssyn çhengoaylleeagh '
'ayns ayrnyn myr ynsaghey çhengaghyn joaree, glare-...'}
```
#### deduplicated_he
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12254,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:BL56ZUXYO5GLIO6YTBUWKPVYJN2BKCIM',
'warc-date': '2021-03-09T10:29:09Z',
'warc-identified-content-language': 'heb,eng',
'warc-record-id': '<urn:uuid:1ae77825-a836-424e-a8b1-1f9c985a41b9>',
'warc-refers-to': '<urn:uuid:fce3d3dc-979e-4603-82e3-027b75346e52>',
'warc-target-uri': 'https://shop.makeup.land/collections/frontpage',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'הולדת פג היא אירוע מטלטל לכל משפחה, אך הולדת פג בצל מגפת הקורונה '
'מאתגרת אף יותר? מהם האתגרים עמם מתמודדים ההורים והצו...'}
```
#### deduplicated_hi
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7897,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VZCN5HXN57VQHZJT5G3NWV7RCIT4GP7T',
'warc-date': '2021-02-26T10:18:11Z',
'warc-identified-content-language': 'hin,eng',
'warc-record-id': '<urn:uuid:6cccccb7-be0e-4c16-83be-7b4150b107ac>',
'warc-refers-to': '<urn:uuid:41eda5d1-e2cf-44f4-9f5b-c074a2de89da>',
'warc-target-uri': 'https://36.gurturgoth.com/2019/11/blog-post_8.html',
'warc-type': 'conversion'},
'nb_sentences': 5,
'offset': 0},
'text': 'Bill Gates Biography in Hindi, विश्व के सबसे अमीर इंसान और '
'माइक्रोसॉफ्ट कंपनी के संस्थापक Bill Gates जिसने अपनी बुद्ध...'}
```
#### deduplicated_hr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 41545,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:6NTZEPK7ETF4AOLM3YDZRLRGZAKH7XM3',
'warc-date': '2021-03-09T04:58:04Z',
'warc-identified-content-language': 'hrv,bos,eng',
'warc-record-id': '<urn:uuid:32361cc9-e12a-4861-978a-b94b84efe78c>',
'warc-refers-to': '<urn:uuid:f0476e5f-e04c-4741-94a6-ddbcfb25c17e>',
'warc-target-uri': 'http://mjesec.ffzg.hr/webpac/?rm=results&show_full=1&f=PersonalName&v=Sanader%20Mirjana',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': 'Impresum: Pula : Sveučilište u Zagrebu, Međunarodno središte '
'hrvatskih sveučilišta u Istri, Međunarodni istraživački ...'}
```
#### deduplicated_hsb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3352,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:E5ZCT5OIZBDV2EFBNX3MSLFJKKMZWQWI',
'warc-date': '2021-03-08T22:15:50Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:374a31b4-d38f-4d94-b3df-59013b15e644>',
'warc-refers-to': '<urn:uuid:fa9b7b26-2b4c-4acc-a652-47047617b0c0>',
'warc-target-uri': 'https://www.serbske-nowiny.de/index.php/hsb/z-luzicy/lokalka/item/50643-jednotna-proty-ka-tr-bna',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Žonjace akciske tydźenje zahajene\tDźensniši Mjezynarodny dźeń '
'žonow je zazběh hač do 22. apryla trajacych ...\t\n'
'Wotstr...'}
```
#### deduplicated_ht
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17823,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:LXQEYMTPIKHPAYKEKIZF6FCMC6WH66PW',
'warc-date': '2021-02-25T02:48:22Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:a5599306-82ad-4740-9c00-5bba34c96d54>',
'warc-refers-to': '<urn:uuid:2378d2f7-69a4-4f8a-ad03-4d556d031ebb>',
'warc-target-uri': 'http://mywebstores.ru/index.php?id_product=1841&controller=product',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'начать us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ '
'nan us $ nan us $ nan us $ nan us $ nan us $ nan us...'}
```
#### deduplicated_hu
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 39801,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:B3XHZ4C4AJYQLVV3ESGOVZU6FZ5N5637',
'warc-date': '2021-02-26T07:03:18Z',
'warc-identified-content-language': 'hun',
'warc-record-id': '<urn:uuid:926ed467-3adb-44f5-b33c-63112879ba5a>',
'warc-refers-to': '<urn:uuid:9d9175b4-6b0a-45e8-961b-61e9d50eb684>',
'warc-target-uri': 'https://luminanz.eu/anya-hatartalan-ingyen-videok-pina-nagy-video-video-sex-szekx-hd-videa-nyelvu-%C3%B6reg/',
'warc-type': 'conversion'},
'nb_sentences': 104,
'offset': 0},
'text': 'A WordPress egy ingyenesen letölthető rendszer. Letöltés után csak '
'telepíteni kell a webszerverre és máris használhat...'}
```
#### deduplicated_hy
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6269,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:42PWBXN2Q7PFCRFWIDLTW42KUUGAKQOE',
'warc-date': '2021-02-24T23:49:31Z',
'warc-identified-content-language': 'hye,eng',
'warc-record-id': '<urn:uuid:932d1903-aea7-4be9-abb4-6b3114592c9c>',
'warc-refers-to': '<urn:uuid:cecf676f-884a-4311-a0b5-45ade0f517b7>',
'warc-target-uri': 'https://www.usanogh.am/lur/tramp-amn-coronavirus/',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': 'ՀՀ ԳԱԱ Զեկույցներ =Reports NAS RA կիրառում է «Ստեղծագործական '
'համայնքներ» հեղինակային իրավունքի արտոնագիրը համաձայն որ...'}
```
#### deduplicated_ia
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9479,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:4JBN4SUDHHRPZI3TAVTZ4JUYSSOGGRFX',
'warc-date': '2021-03-01T17:14:58Z',
'warc-identified-content-language': 'ron,eng',
'warc-record-id': '<urn:uuid:5abe05ff-7309-4c3f-8ccd-175a12a655a2>',
'warc-refers-to': '<urn:uuid:8dec50fd-2be1-4bcf-8bb2-8cb9826c2465>',
'warc-target-uri': 'https://www.monitorulsv.ro/Ultima-ora-local/2008-02-18/Campania-electorala-interzisa-in-Primaria-Suceava',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha '
'ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ...'}
```
#### deduplicated_id
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3080,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XU6GIUNYT5ELGH5XSZ4FUARC3YTJAD5P',
'warc-date': '2021-03-05T03:32:56Z',
'warc-identified-content-language': 'ind',
'warc-record-id': '<urn:uuid:2328da88-ee5f-4b4c-af3e-25dc4a574041>',
'warc-refers-to': '<urn:uuid:0781f7e2-f020-402b-b204-71fdf299f956>',
'warc-target-uri': 'https://sulsel.kemenag.go.id/berita/berita-kontributor/stqh-26-tingkat-kabupaten-jeneponto-siap-di-gelar',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': '* Masa berlaku normal poin 1 (satu) tahun dan masa berlaku bonus '
'poin sampai dengan 31 Desember 2020.\n'
'Diskon dari Ban...'}
```
#### deduplicated_ie
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 16919,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:W7UDGWMCEYQFEIPJMFZKX72Z6MH4XCUP',
'warc-date': '2021-03-08T16:16:42Z',
'warc-identified-content-language': 'ron,eng',
'warc-record-id': '<urn:uuid:f5ba5473-8eb2-41f4-9e43-3d36f14243a1>',
'warc-refers-to': '<urn:uuid:d2784efa-8250-4370-a348-28c640195663>',
'warc-target-uri': 'https://rolabel.info/door/yX-WpseZpNycfXY/luis-gabriel-haziran-te-am-cautat-si-te-am-gasit-official-video.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Va iubesc mult mult mult mult mult mult mult mult mult mult mult '
'mult mult mult mult mult mult mult mult mult mult mu...'}
```
#### deduplicated_ilo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3511,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:NLHH2LVPZTUZE37ET2FJIRZNOLPLKK4O',
'warc-date': '2021-03-03T15:52:32Z',
'warc-identified-content-language': 'tgl',
'warc-record-id': '<urn:uuid:2fb6a437-41c8-4c2c-9f5d-2e8c34df9f2b>',
'warc-refers-to': '<urn:uuid:bdc072a0-db63-4256-a96b-7515a2c4fdfd>',
'warc-target-uri': 'https://ilo.m.wikipedia.org/wiki/Amphibia',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Daytoy nga artikulo dagiti nangruna nga artikulo ket pungol. '
'Makatulongka iti Wikipedia babaen ti panagnayon iti daytoy.'}
```
#### deduplicated_io
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3586,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VUQPETM2PUWBL5AGADEVN2FPE7KURXG4',
'warc-date': '2021-03-03T15:22:41Z',
'warc-identified-content-language': 'ara',
'warc-record-id': '<urn:uuid:fd8a899b-d54a-424d-9955-a90b81e16439>',
'warc-refers-to': '<urn:uuid:c40226a6-6851-4009-a834-77a1a3e0c0f3>',
'warc-target-uri': 'https://io.wikipedia.org/wiki/New_Vienna,_Iowa',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': "Segun l'Usana Kontado Ministerio, l'urbo havas entote 1.2 km², "
'equivalanta a 0.4 mi², di qui 1.2 km² (0.4 mi²) esas l...'}
```
#### deduplicated_is
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1829,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DXUGRT4OK7WRCOPGB7AAKLHPUDTBDRO2',
'warc-date': '2021-03-09T04:40:07Z',
'warc-identified-content-language': 'isl',
'warc-record-id': '<urn:uuid:6568bf31-b402-45b8-9ddb-6ce0f3d0a323>',
'warc-refers-to': '<urn:uuid:5daa12c0-604a-4233-9ed8-d4e245af4048>',
'warc-target-uri': 'http://hugvis.hi.is/',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Vegna hertra aðgerða í bará ttunni við Covid19 munum við takmarka '
'gestafjölda í laugum okkar við 80 manns. Thank you ...'}
```
#### deduplicated_it
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 14112,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:MLJ4TW2HJZAPE2ORVARPJES6GRGO6ZLK',
'warc-date': '2021-03-05T13:56:32Z',
'warc-identified-content-language': 'ita',
'warc-record-id': '<urn:uuid:31d7ebb5-c1f7-468b-92f8-b79b7c28af9f>',
'warc-refers-to': '<urn:uuid:f92f33a2-6940-49fd-a21e-228ee5d2efb1>',
'warc-target-uri': 'https://mauriziomezzetti.com/patologie-trattate/',
'warc-type': 'conversion'},
'nb_sentences': 47,
'offset': 0},
'text': 'Il Presidente del Caffè Letterario Quasimodo di Modica, Domenico '
'Pisana, sarà ospite a Taranto, il prossimo 4 maggio,...'}
```
#### deduplicated_ja
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 16411,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XOFBBBX7LINQS3EZN5VH6OQ7PPFNRICJ',
'warc-date': '2021-03-09T01:09:27Z',
'warc-identified-content-language': 'jpn,eng,lat',
'warc-record-id': '<urn:uuid:5c0685f4-736d-4155-9153-56cf79462df4>',
'warc-refers-to': '<urn:uuid:88586e1b-926d-4291-910f-53680e3d6482>',
'warc-target-uri': 'http://flpj.karapyzi.ru/30',
'warc-type': 'conversion'},
'nb_sentences': 14,
'offset': 0},
'text': '番組『日本を元気に!スマイルサプライズ!』が、28日に放送(後7:00)。コロナ禍や自然災害など、日本が長いトンネルに入ってしまったような状態だが、「でも、きっとこの先に明るい出口がある!」と明るい未...\n'
'プリゲーム『ポケモンスマイ...'}
```
#### deduplicated_jbo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6970,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2EVVU2OCTSB5EYCHSV6Z7I3PMQSNNOED',
'warc-date': '2021-03-03T23:28:54Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:0d4387a2-391d-4e3e-8772-808face0ab78>',
'warc-refers-to': '<urn:uuid:4e45af2a-aea7-4f1a-af89-6ee5f69b7bfd>',
'warc-target-uri': 'https://jbo.m.wikipedia.org/wiki/mumyma%27i_7moi',
'warc-type': 'conversion'},
'nb_sentences': 26,
'offset': 0},
'text': "ni'o 7 la mumast. cu 7moi djedi fi'o masti la mumast. noi ke'a cu "
'mumoi masti .i 6 la mumast. cu purlamdei .ije 8 la ...'}
```
#### deduplicated_jv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8822,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:NPQGATEVIAYLOSLDB22EB7IYDVBZ7N6Q',
'warc-date': '2021-03-09T11:14:25Z',
'warc-identified-content-language': 'jav',
'warc-record-id': '<urn:uuid:db7d8bd7-a3a3-4a30-8786-7efb2352285d>',
'warc-refers-to': '<urn:uuid:2cb85a37-545e-471a-b7e7-cb334112f0e3>',
'warc-target-uri': 'https://jv.wikipedia.org/wiki/Bon%C3%A9kah',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Yèn sadurungé golèkan digawé kanggo awaké dhéwé, wiwit jaman iki '
'dikomersialakaké. Fungsiné owah saka ritual lan mode...'}
```
#### deduplicated_ka
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 42480,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:HHSMTLZXKA4SQDPDBWAOUFELXBUJZJKO',
'warc-date': '2021-03-06T15:33:35Z',
'warc-identified-content-language': 'kat,eng',
'warc-record-id': '<urn:uuid:7d931f2a-a6ef-4070-9277-2033e7e96b9b>',
'warc-refers-to': '<urn:uuid:89429497-9722-45e6-95a6-699ef7280e6c>',
'warc-target-uri': 'https://ka.m.wikipedia.org/wiki/%E1%83%93%E1%83%90%E1%83%A1%E1%83%A2%E1%83%98%E1%83%9C_%E1%83%B0%E1%83%9D%E1%83%A4%E1%83%9B%E1%83%90%E1%83%9C%E1%83%98',
'warc-type': 'conversion'},
'nb_sentences': 36,
'offset': 0},
'text': 'დასტინ ჰოფმანი[1] (ინგლ. Dustin Lee Hoffman დ. 8 აგვისტო, 1937) — '
'ორგზის კინოაკადემიის ოსკარისა და ექვსგზის ოქროს გლო...'}
```
#### deduplicated_kk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9197,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:BJW4PLV2UOAJLJO6E55YH7DAEWQTFQUZ',
'warc-date': '2021-03-09T04:35:14Z',
'warc-identified-content-language': 'rus,kaz',
'warc-record-id': '<urn:uuid:ddd1d3e1-3bf3-4c4a-b722-8e293ab16f75>',
'warc-refers-to': '<urn:uuid:097c4f10-4bdc-400d-ab39-c04e4f98f51f>',
'warc-target-uri': 'http://blogs.kazakh.ru/blogs/index.php?page=group&gid=6&id=3&PAGEN_1=3%3Fid%3D2?id=6',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Бұрынғы жоғары лауазымды шенеунік Анатолий Шкарупа (сол жақта) '
'өзіне қарсы қозғалған қылмыстық іс бойынша өтіп жатқан...'}
```
#### deduplicated_km
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 15036,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
'warc-date': '2021-03-09T04:48:08Z',
'warc-identified-content-language': 'eng,khm,lao',
'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ការជូនដំណឹង៖ ប្រសិនប. ើអ្នកនិយាយភាសា ខ្មែរ សេ វាជំនួយភាសាឥតគិតថ្លៃ '
'គឺអាចរកបានសម្ រាប ់អ្នក។ សូមទូរស័ព្ទទ ៅផ ្នែ កសេ វ...'}
```
#### deduplicated_kn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8425,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TMWGSQVJMRPZCPMDM5D3AK2YKGMWBZZI',
'warc-date': '2021-03-09T04:21:39Z',
'warc-identified-content-language': 'kan,eng',
'warc-record-id': '<urn:uuid:ca35da96-ee3a-43ad-8082-a10b055200ca>',
'warc-refers-to': '<urn:uuid:a57cc8f6-c5ed-47a2-9322-2259687cdbde>',
'warc-target-uri': 'https://kannada.b4blaze.com/tag/rachitha-ram/',
'warc-type': 'conversion'},
'nb_sentences': 16,
'offset': 0},
'text': 'ಅಡಿಗರು ಮತ್ತು ರಾಯರು ಚಾಪೆ ಹಾಸಿ ಸ್ವಲ್ಪ ಹೊತ್ತು ಮಲಗಿ ಕಾಫಿ ಕುಡಿದು '
'ಹೊರಟುಹೋದಿದ್ದರು. ಜಾತ್ರೆ ದಿನ ಜಗನ್ನಾಥನ ಮನೆಗೆ ಬರಬಹುದಾದ ನೂರಾರು...'}
```
#### deduplicated_ko
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2831,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DLTUACNWU3R5KYI7HMMZF4CYR4WGRMWU',
'warc-date': '2021-02-26T10:13:10Z',
'warc-identified-content-language': 'kor,eng',
'warc-record-id': '<urn:uuid:7f7727bf-bf3d-45c3-8e3c-b595f67f9d90>',
'warc-refers-to': '<urn:uuid:17735508-d2ce-4e0a-a3ba-86acb749b9a2>',
'warc-target-uri': 'http://excel2017.zz.am/entry/mousqul',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': '인류는 최근 수백년 동안 물질적 풍요를 행복의 최대 조건으로 믿고, 이를 추구해 왔다. 그러나 이 과정에서 사람들은 '
'상대방에게 사랑을 베풀기보다는 상처를 입히는 일이 많아졌고, 물질적 풍요는 내면의 충족을 동반...'}
```
#### deduplicated_krc
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4806,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CWWWGTU7JCHS7SR5A7D7QMDTF4JBMCA6',
'warc-date': '2021-02-26T04:08:10Z',
'warc-identified-content-language': 'nno,bih',
'warc-record-id': '<urn:uuid:ef2175c0-4887-4006-9b21-374282abf2d2>',
'warc-refers-to': '<urn:uuid:d5aaef09-6f3c-427a-8c2f-664e639c2a0f>',
'warc-target-uri': 'https://krc.wikipedia.org/wiki/1606_%D0%B4%D0%B6%D1%8B%D0%BB',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Бу, тамамланмагъан статьяды. Сиз болушургъа боллукъсуз проектге, '
'тюзетиб эм информация къошуб бу статьягъа.'}
```
#### deduplicated_ku
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12767,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:BQQEDD5HKU6LXDRIDLMWPIESOMEGIUX6',
'warc-date': '2021-03-09T04:11:10Z',
'warc-identified-content-language': 'eng',
'warc-record-id': '<urn:uuid:5a67e5e4-f688-4aa1-a9a0-2e4f6217ef21>',
'warc-refers-to': '<urn:uuid:40fa61be-18d1-4bd5-9267-252720cd5b05>',
'warc-target-uri': 'http://www.peyamakurd.org/kurmanci/Kurdistan/gruben-smo-ye-bi-hawane-li-til-rifete-xistin-3-miri-u-6-birindar',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'PeyamaKurd – Grûbên bi ser Tirkiyê de li Binxetê li bajarokê Til '
'Rifetê bi hawanê lê dan û di encamê de 3 kes mirin û...'}
```
#### deduplicated_kv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 14161,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:JH3R64H4VMXQ3NRHTX3LO3B4VFN6IZ62',
'warc-date': '2021-03-03T15:09:36Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:a94b390c-8e72-475d-bf76-c523c20908ce>',
'warc-refers-to': '<urn:uuid:e11eee46-e68f-4e1b-b4a3-0b9eeb74a877>',
'warc-target-uri': 'https://kv.wikipedia.org/wiki/%D0%9C%D0%B8%D0%BA%D1%83%D1%88%D0%B5%D0%B2_%D0%90%D0%BD%D0%B0%D1%82%D0%BE%D0%BB%D0%B8%D0%B9_%D0%9A%D0%BE%D0%BD%D1%81%D1%82%D0%B0%D0%BD%D1%82%D0%B8%D0%BD%D0%BE%D0%B2%D0%B8%D1%87',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '1947, моз тӧлысь–1950, кӧч тӧлысь – уджалiс велöдысьöн да '
'директорöн Сыктывдiн районса Ыб шöр школаын.'}
```
#### deduplicated_kw
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3496,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:S5H4MWHD4QTG74ZNJZ5X63W2XSLUJU7C',
'warc-date': '2021-02-26T18:49:31Z',
'warc-identified-content-language': 'cym',
'warc-record-id': '<urn:uuid:44d32e62-4240-413a-9f8a-562fe27223c6>',
'warc-refers-to': '<urn:uuid:7d95741c-6974-427f-80f7-d08559f799aa>',
'warc-target-uri': 'https://kw.m.wikipedia.org/wiki/Kembra',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Kembra yw konna-tir menydhek yn Howlsedhes Breten Veur. Glow hag '
'owr o poesek yn erbysieth Pow Kembra seulajydh, mes ...'}
```
#### deduplicated_ky
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 28946,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TVCYX44AC2J2TBVAYMQW62P4XYHWPSAH',
'warc-date': '2021-02-24T20:28:28Z',
'warc-identified-content-language': 'kir,eng',
'warc-record-id': '<urn:uuid:b0b897b8-5d55-4109-967f-9e368be6b7aa>',
'warc-refers-to': '<urn:uuid:b7ac5729-15cb-44c8-a0a2-096cb46cb1de>',
'warc-target-uri': 'http://mezgilnews.kg/tag/klip/',
'warc-type': 'conversion'},
'nb_sentences': 6,
'offset': 0},
'text': 'Мезгил. Ырчы Зерени соцтармактар аркылуу коркуткан белгисиз '
'адамдарды милиция издеп баштады. Чүй облустук ИИБинин маа...'}
```
#### deduplicated_la
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2647,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:QXPYMWAXXOOHWKBNAYCNUODKWSB56XU4',
'warc-date': '2021-03-09T04:51:12Z',
'warc-identified-content-language': 'lat,eng',
'warc-record-id': '<urn:uuid:684bcdce-19ec-4a44-b814-949eb5ceff66>',
'warc-refers-to': '<urn:uuid:2cd40ddd-0087-41ba-8442-8b2b6b1bbcd2>',
'warc-target-uri': 'http://grhpay.es/index.php/about-us/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Nam libero tempore, cum soluta nobis est eligendi optio cumque '
'nihil impedit quo minus id quod maxime placeat facere ...'}
```
#### deduplicated_lb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2060,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:5YXISU3T3UP7WKUDJ2W45OAKEFJ7ZD2T',
'warc-date': '2021-03-09T04:51:26Z',
'warc-identified-content-language': 'ltz',
'warc-record-id': '<urn:uuid:534e6ce8-782c-4813-9dfb-902736ffc141>',
'warc-refers-to': '<urn:uuid:5829843c-0428-4098-9213-52bb2fb319b2>',
'warc-target-uri': 'https://online-archive-extractor.com/lb/open-7z-file',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': 'Eis Online Archiv Extraiteren erlaabt Iech den Inhalt vu '
'kompriméierten Archiven direkt aus Ärem Browser ze extrahier...'}
```
#### deduplicated_lez
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6238,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:4MMTYN2QRKUOUZESCUL3AOZJTMDM5YSY',
'warc-date': '2021-03-02T18:06:44Z',
'warc-identified-content-language': 'nno,eng',
'warc-record-id': '<urn:uuid:78581b3a-c21f-46a2-b168-bff6f147c337>',
'warc-refers-to': '<urn:uuid:02f1447d-0b61-4ad5-ac56-0f42c2438e6b>',
'warc-target-uri': 'https://lez.wikipedia.org/wiki/1877_%D0%B9%D0%B8%D1%81',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '1877 йис (са агъзурни муьжуьдвишни пудкъанницIеирид лагьай йис) — '
'григорийдин чIаваргандал гьалтайла ислендиз эгечӀза...'}
```
#### deduplicated_li
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2199,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:IIZSY6KLHN5WSCCGU4NZ6K6WYLIMJP4I',
'warc-date': '2021-03-04T07:19:27Z',
'warc-identified-content-language': 'nld',
'warc-record-id': '<urn:uuid:c7eb18bb-ea03-43c2-a1e9-e8eb5b15e25b>',
'warc-refers-to': '<urn:uuid:486a5d06-6dd8-46d2-a93f-d798b8a5bd07>',
'warc-target-uri': 'https://li.m.wikipedia.org/wiki/Waterop',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': "Hoes Karsveld aan de Gulp sjtamp oet de 18e ièw. 't Kesjtièlechtig "
"hoes ies van mergel mèt 'ne trapgevel. 't Ies gebo..."}
```
#### deduplicated_lmo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6553,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DAJPSPBN7BVZNRWANXQAW2KP6LQEWNUW',
'warc-date': '2021-03-04T10:49:45Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:d9452b27-9a95-47e9-8274-518138812f56>',
'warc-refers-to': '<urn:uuid:4ff4e796-c685-4c81-adc9-fecbd50e79cb>',
'warc-target-uri': 'https://lmo.wikipedia.org/wiki/Antrenas',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': "El sò teretóre el g'ha 'na superfìce de 17,55 km² e 'l và de 'na "
"altèsa mìnima de 720 méter a 'na altèsa màsima de 11..."}
```
#### deduplicated_lo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 15036,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
'warc-date': '2021-03-09T04:48:08Z',
'warc-identified-content-language': 'eng,khm,lao',
'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ຂໍ້ຄວນໃສ່ໃຈ: ຖ້າເຈົ້າເວົ້າພາສາລາວໄດ້, '
'ມີການບໍລິການຊ່ວຍເຫຼືອດ້ານພາສາໃຫ້ທ່ານໂດຍບໍ່ເສຍຄ່າ. ໂທ ຫາ '
'ຝ່າຍບໍລິການສະ ມາ ຊິກທີ່...'}
```
#### deduplicated_lrc
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7958,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:GTR6WCXERTVUI5RIKHE7MC7LTACF7R2W',
'warc-date': '2021-03-01T04:48:39Z',
'warc-identified-content-language': 'fas,eng',
'warc-record-id': '<urn:uuid:7ba618e0-f09e-48c2-a0be-a1b77ba5678a>',
'warc-refers-to': '<urn:uuid:2e4504e7-46c9-4aaa-818f-3077c73f1d97>',
'warc-target-uri': 'http://www.shaya.me/2013/01/blog-post_3.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار '
'یار یار یار یار یار یار یار یار یار'}
```
#### deduplicated_lt
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 221005,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:KSLULK6RGSIW43IBMSAEU4643LSRMW3V',
'warc-date': '2021-03-05T07:21:10Z',
'warc-identified-content-language': 'lit',
'warc-record-id': '<urn:uuid:fa6592a5-bc87-4683-88d6-37ce74af5058>',
'warc-refers-to': '<urn:uuid:d78122b4-90d8-4cdf-a205-579bcff9ec88>',
'warc-target-uri': 'https://apcis.ktu.edu/lt/site/katalogas?cat_id=132&type=2',
'warc-type': 'conversion'},
'nb_sentences': 219,
'offset': 0},
'text': 'Telšių apskritis – viena iš Lietuvos sričių, kuri turi ką parodyti '
'pasauliui, ir iš to galima pasiekti didelės naudos...'}
```
#### deduplicated_lv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4036,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:NUB75CFJHUBI7HOED4HVCNHGQUIVCBO3',
'warc-date': '2021-03-09T03:46:31Z',
'warc-identified-content-language': 'lav,eng',
'warc-record-id': '<urn:uuid:9ad87feb-993f-45b9-bf1e-53a8185b3dc6>',
'warc-refers-to': '<urn:uuid:64eb85d8-c204-4cf8-a6c3-29760fe1f362>',
'warc-target-uri': 'http://igatesbaznica.lv/augupvrsta-stratijas-binr-opcijas.php',
'warc-type': 'conversion'},
'nb_sentences': 10,
'offset': 0},
'text': 'Latvijā šobrīd nav normatīvu aktu mājas un istabas dzīvnieku '
'vairotāju regulēšanai, jo vairākums audzētāju savu nodar...'}
```
#### deduplicated_mai
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3632,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OQRKDLTDWJCD37HVHGXYU7E3BXBR5NB3',
'warc-date': '2021-03-01T16:25:27Z',
'warc-identified-content-language': 'bih,hin,fra',
'warc-record-id': '<urn:uuid:da0cf739-4c6c-46d4-9c32-8e34a673fa26>',
'warc-refers-to': '<urn:uuid:0c39ca75-b871-431b-8c89-63d58ea0893f>',
'warc-target-uri': 'https://mai.m.wikipedia.org/wiki/%E0%A4%B0%E0%A4%BE%E0%A4%9C%E0%A4%A7%E0%A4%BE%E0%A4%A8%E0%A5%80',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'शब्द राजधानी संस्कृत सँ आएल अछि । राजधानी आम तौर पर सङ्घटक क्षेत्रक '
'सब सँ पैग सहर होएत अछि मुदा ई जरुरी नै अछि ।[१]'}
```
#### deduplicated_mg
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2714,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OGAHJNKN3OSLXYKJKK2LQAFKAEM67DFQ',
'warc-date': '2021-03-03T15:32:59Z',
'warc-identified-content-language': 'mlg,nno',
'warc-record-id': '<urn:uuid:f5a6492f-29c4-4de9-baaa-12edb86d89cd>',
'warc-refers-to': '<urn:uuid:970362fe-4102-481e-8f4b-db5f3e8ce4db>',
'warc-target-uri': 'https://mg.wikipedia.org/wiki/Barro_Alto_(Bahia)',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': "I Barro Alto (Bahia) dia kaominina ao Brazila, ao amin'i Bahia, ao "
"amin'i Centro-Norte Baiano, Irecê.\n"
'Ny velarantanin...'}
```
#### deduplicated_mhr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 27685,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:YJYVG5XEYRKALEYIO5PCK34QFNUO3JRD',
'warc-date': '2021-03-06T17:12:45Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:3405f528-672f-449c-a2a3-cfa73f5d17b0>',
'warc-refers-to': '<urn:uuid:dfe46be9-656c-4b02-9384-fd1e75987a15>',
'warc-target-uri': 'http://marisong.ru/mar/kalendar',
'warc-type': 'conversion'},
'nb_sentences': 31,
'offset': 0},
'text': '1982 — 1985 ийлаште — Палантай лӱмеш музыкальный училищыште баян '
'дене отделенийыште шинчымашым налын.\n'
'Тыгак шуко жап ...'}
```
#### deduplicated_min
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4309,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XV23LOBECSVNRXJ2NJTCZVJXOCVQ3BBR',
'warc-date': '2021-03-08T22:10:36Z',
'warc-identified-content-language': 'eng,spa',
'warc-record-id': '<urn:uuid:fdaddf50-1986-44b3-b84b-d9a5d0fa27f1>',
'warc-refers-to': '<urn:uuid:257f7969-3a19-42d6-ae1a-ddb5c0486bb8>',
'warc-target-uri': 'https://cookingwithmydoctor.com/?LOSS=danger-of-keto-diet%2F',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
'\u200e\u200f\u200f\u200e \u200e\u200f\u200f...'}
```
#### deduplicated_mk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 22483,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:SGEJ6O6XOEVCQXKXT2XRSRBOSH3ZDSVJ',
'warc-date': '2021-03-02T05:16:16Z',
'warc-identified-content-language': 'mkd,srp,eng',
'warc-record-id': '<urn:uuid:168d1661-a73f-4687-a614-e8cecf7a70a0>',
'warc-refers-to': '<urn:uuid:a61ec44e-a4c1-4b8e-837c-7adc80e853e2>',
'warc-target-uri': 'http://zenica.mk/2018/02/10/tri-dena-kultura-vo-karev-festival/',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': '„Три дена културa“ е настан кој ќе се одржи од 21-23 февруари '
'(среда, четврток и петок, 20:00ч.) во гимназијата „Нико...'}
```
#### deduplicated_ml
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 20202,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:ZOEIO7AIEAGDR2S6TOZYZOAQDOV6QJUE',
'warc-date': '2021-03-08T00:10:05Z',
'warc-identified-content-language': 'mal,eng',
'warc-record-id': '<urn:uuid:f19a2925-0064-47e2-9ec9-48b2786657bd>',
'warc-refers-to': '<urn:uuid:20c7b8fd-1909-480f-b36c-89cd1d0ee3c4>',
'warc-target-uri': 'https://boolokam.com/what-to-do-for-police-clearance-conduct-certificate-in-uae/227247',
'warc-type': 'conversion'},
'nb_sentences': 12,
'offset': 0},
'text': 'രണ്ടുപേര്\u200d തമ്മിലുള്ള സ്നേഹ ബന്ധം അവര്\u200dക്കിടയില്\u200d '
'പൊതുവായി കാണപ്പെടുന്ന മൂല്യങ്ങളുടെ അടിസ്ഥാനത്തില്\u200d '
'ആയിരിക്കും.\n'
'ഒരുവ...'}
```
#### deduplicated_mn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5616,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:ILMC56UA63RNTABOJTVMUJQJHMKKC6QR',
'warc-date': '2021-03-09T04:20:37Z',
'warc-identified-content-language': 'mon,ell',
'warc-record-id': '<urn:uuid:07697b69-9e58-4e84-bc0e-a536bcc1ae11>',
'warc-refers-to': '<urn:uuid:704af2f1-3094-45dc-a1c5-63bd08d53069>',
'warc-target-uri': 'http://mn.uncyclopedia.info/index.php?title=%D0%A5%D1%8D%D1%80%D1%8D%D0%B3%D0%BB%D1%8D%D0%B3%D1%87:Mongol_Emperor&action=edit',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': 'Анциклопедиа-д оруулсан бүх хувь нэмэр Creative Commons '
'Attribution-NonCommercial-ShareAlike-н хувьд (дэлгэрэнгүй мэд...'}
```
#### deduplicated_mr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 11373,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:V3PQES342QGJGRFZ6QMXNB6RIX2ST3V5',
'warc-date': '2021-03-09T05:01:31Z',
'warc-identified-content-language': 'mar,eng',
'warc-record-id': '<urn:uuid:b96cf6ee-7cda-4a7a-9364-08b51284a05e>',
'warc-refers-to': '<urn:uuid:92e533ed-c2c7-4ac7-9b17-af780a503ce6>',
'warc-target-uri': 'https://marathi.thewire.in/devangana-kalita-uapa-bail-rejected-natasha-narwal',
'warc-type': 'conversion'},
'nb_sentences': 9,
'offset': 0},
'text': 'पुण्यातील कार्यक्रमांना स्थगिती:पुण्यातील अनेक सांस्कृतिक नियोजित '
'कार्यक्रमांना स्थगिती, कोरोनाच्या वाढत्या रुग्णांमु...'}
```
#### deduplicated_mrj
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3492,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:7B242FKI45QVEGJQTF46YCRFYMYW6YFG',
'warc-date': '2021-03-03T05:03:02Z',
'warc-identified-content-language': 'eng',
'warc-record-id': '<urn:uuid:bd7d5682-be60-4a00-9781-29b03a87b30e>',
'warc-refers-to': '<urn:uuid:49641a15-2834-4a72-a011-fdc9cd7273c7>',
'warc-target-uri': 'https://mrj.wikipedia.org/wiki/%D0%91%D0%B0%D1%80%D0%BA%D0%B5%D1%80%D0%B8',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Баркери (латинлӓ Barkeria) – Орхидейвлӓ (Orchidaceae) йыхыш пырышы '
'пеледшӹ кушкыш. Америкышты вӓшлиӓлтеш. Цилӓжӹ 15 й...'}
```
#### deduplicated_ms
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7939,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:7BWXR4LQ6O2IBJLKLKWJKHTF3JBXB26T',
'warc-date': '2021-03-09T05:38:44Z',
'warc-identified-content-language': 'msa,eng',
'warc-record-id': '<urn:uuid:35a9d91c-3a64-4748-b135-3c467bfa403f>',
'warc-refers-to': '<urn:uuid:9cf4de91-0523-4327-9fcb-5c8f99956da0>',
'warc-target-uri': 'https://kheru2006.livejournal.com/1665383.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Bagaimanapun beliau memiliki satu lagi pandangan iaitu perkara '
'paling bodoh seseorang boleh lakukan ialah menjangka d...'}
```
#### deduplicated_mt
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 98714,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:HC75UY5ZHRC3AY4C2VHFR4JADUM2AZBH',
'warc-date': '2021-03-09T04:29:23Z',
'warc-identified-content-language': 'eng,mlt',
'warc-record-id': '<urn:uuid:45dec17d-a638-454e-a136-c45579517b53>',
'warc-refers-to': '<urn:uuid:c82d8d7c-05b6-43d8-be17-5072323aab01>',
'warc-target-uri': 'https://carmelcacopardo.wordpress.com/2015/07/28/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Kemmuna hi protetta bħala sit Natura 2000. Imma ma nistgħux '
'neskludu logħob tas-soltu biex iduru ma din il-protezzjon...'}
```
#### deduplicated_mwl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 11598,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2A22BTIRZ4E5FI2FCG7AUCWJQTY2J4ST',
'warc-date': '2021-02-26T13:58:26Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:73a60756-1664-410f-bf62-ab44c88c074f>',
'warc-refers-to': '<urn:uuid:800d3642-449d-4be0-817c-edc7fb64c1b4>',
'warc-target-uri': 'https://mwl.wikipedia.org/wiki/R%C3%A1dio_(quemunica%C3%A7on)',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'La radioquemunicaçon ye un meio de quemunicaçon por trascepçon de '
'anformaçon, podendo ser rializada por Radiaçon eile...'}
```
#### deduplicated_my
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 237288,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:U2QEC6RSZR5UW5LXTNN6QRD47FHVYVJY',
'warc-date': '2021-02-27T06:07:58Z',
'warc-identified-content-language': 'mya,eng',
'warc-record-id': '<urn:uuid:817de4f8-0b7a-446e-bae2-8436019dd34f>',
'warc-refers-to': '<urn:uuid:b364cc33-c1bf-4adb-8317-1aad1cfd4aa0>',
'warc-target-uri': 'http://www.pnsjapan.org/2010/05/',
'warc-type': 'conversion'},
'nb_sentences': 248,
'offset': 0},
'text': 'စတိုင္လည္းက် စမတ္လည္းက်တဲ့ ေန႔စဥ္ လႈပ္ရွားမႈဘဝေလးေတြကို '
'ပိုင္ဆိုင္ႏိုင္ဖို႔အတြက္ Samsung ကေန မၾကာေသးခင္က ထုတ္လုပ္လိုက...'}
```
#### deduplicated_myv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 11091,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:IFCGUVXSCYHEFYLUVOQ5QMGJWYL2CTVJ',
'warc-date': '2021-03-02T21:05:00Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:ea77b8a6-e394-48c1-b865-3cea87e7b906>',
'warc-refers-to': '<urn:uuid:a4927904-4e3c-4f22-858a-adad9bbb1e63>',
'warc-target-uri': 'https://ru.m.wikinews.org/wiki/%D0%9E%D0%BC%D0%B1%D0%BE%D0%BC%D0%B0%D1%81%D1%82%D0%BE%D1%80%D1%81%D0%BE_%C2%AB%D0%90%D0%B7%D0%BE%D1%80%C2%BB_%D1%8D%D1%80%D0%B7%D1%8F%D0%BD%D1%8C_%D1%8D%D1%80%D1%8F%D0%BC%D0%B0%D1%80%D1%82%D0%BE%D0%BD%D1%82%D1%8C_%D0%B2%D0%B0%D1%81%D0%B5%D0%BD%D1%86%D0%B5_%D0%BD%D0%B5%D0%B2%D1%82%D0%B5%D0%BC%D0%B0%D1%81%D1%8C_%D1%8E%D1%82%D1%8B_%D0%A1%D1%83%D0%BE%D0%BC%D0%B8%D1%81%D1%81%D1%8D',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '«Азор» — васенце эрзянь кельсэ артонь эриванмо-фильманть теемстэ. '
'Орданьбуень Баеньбуе веле, Мордовиясо.'}
```
#### deduplicated_mzn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6193,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:QVLHP3APVA34EQ4YFDRJWF2ODTQZ3QG6',
'warc-date': '2021-03-08T00:11:58Z',
'warc-identified-content-language': 'fas',
'warc-record-id': '<urn:uuid:c86dfe2b-795d-4e5d-aaa0-75c1e98690a6>',
'warc-refers-to': '<urn:uuid:b6258701-626d-4a7c-b79e-1c526f9892a6>',
'warc-target-uri': 'https://mzn.wikipedia.org/wiki/%D8%A7%D9%88%D8%B3%D9%88%DA%A9%DB%8C%D8%8C_%D8%A7%D9%88%D8%A6%DB%8C%D8%AA%D8%A7',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'اوسوکی اتا شهر نوم هسته که جاپون ِاوئیتا استان دله دره. ونه جمعیت '
'ره سال ۲۰۰۸ گادِر ۴۲٬۴۶۴ نفر اعلام هاکاردنه. این شه...'}
```
#### deduplicated_nah
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2517,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DSXC3C7F2LUL47USAV5ZRT4HMVQ4XGUI',
'warc-date': '2021-03-03T14:32:16Z',
'warc-identified-content-language': 'spa,ell',
'warc-record-id': '<urn:uuid:a305013e-01ba-49a3-89b9-027dc622576f>',
'warc-refers-to': '<urn:uuid:073b9e5a-a0d3-41c3-89bd-8f972b6a4154>',
'warc-target-uri': 'https://nah.wikipedia.org/wiki/%CF%98',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Ϙ ītōcā inic cē huēhuehtlahtōl īpan '
'greciamachiyōtlahtōltecpantiliztli. Ītlahtōl nō ic 90 tlapōhualli.'}
```
#### deduplicated_nap
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2331,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:EXGUINJCGD2K4E2IVQNJJAQLS4UDJ2TG',
'warc-date': '2021-03-07T13:12:47Z',
'warc-identified-content-language': 'cos,srp,lav',
'warc-record-id': '<urn:uuid:7362689d-31bc-492d-8e60-851c963b5313>',
'warc-refers-to': '<urn:uuid:ecd1bb5f-d247-4739-b9e9-4f93d46081d6>',
'warc-target-uri': 'https://nap.wikipedia.org/wiki/Priatorio',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': "'Int'ô cattolicesimo, priatorio è 'o pruciesso 'e purefecazzione 'e "
"ll'aneme ca moreno 'into ll'amicizzia 'e Dio ma n..."}
```
#### deduplicated_nds
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 5066,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:G2O2EJZLTIU5IDSXMYHPP3TMXVXMAZ3P',
'warc-date': '2021-03-08T22:13:48Z',
'warc-identified-content-language': 'nno,srp',
'warc-record-id': '<urn:uuid:d7f0c9a0-9c12-4d9a-ae5a-184bf7b59c5d>',
'warc-refers-to': '<urn:uuid:31f4d793-f3a4-4403-9c1f-a52f878b63c8>',
'warc-target-uri': 'https://nds.wikipedia.org/wiki/1763',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '7. Oktober: In London geiht en königliche Proklamatschoon rut, dat '
'vun nu af an in de Kolonien vun Amerika de Kamm vu...'}
```
#### deduplicated_ne
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17723,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:AZ2CUDZ672TVV2R3O643TJAX7JGXASP2',
'warc-date': '2021-03-08T22:24:08Z',
'warc-identified-content-language': 'nep',
'warc-record-id': '<urn:uuid:fa642413-904a-4def-86fc-a4889e5e9e71>',
'warc-refers-to': '<urn:uuid:f7caed4f-c5ae-4f55-944a-1f06ed71e438>',
'warc-target-uri': 'https://postpati.com/2017/26/07/1353',
'warc-type': 'conversion'},
'nb_sentences': 9,
'offset': 0},
'text': 'युएइको दूतावास बिरुद्द युएइमा रहेका संघ संस्थाहरु द्वारा निरन्तर '
'दवाव आउने क्रमजारि रहेको छ। नेकपा माओबादी सम्बद्ध रह...'}
```
#### deduplicated_new
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2388,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:E6YZSKQK57PDBRG7VPE64CGOL3N4D63I',
'warc-date': '2021-03-09T04:24:48Z',
'warc-identified-content-language': 'nep,eng,bih',
'warc-record-id': '<urn:uuid:20692995-9d67-4b05-ba9b-9dbac80b4441>',
'warc-refers-to': '<urn:uuid:a8445a70-117a-42c1-89ca-aa5df0cc5616>',
'warc-target-uri': 'https://new.wikipedia.org/wiki/%E0%A4%A7%E0%A4%BE%E0%A4%AA%E0%A4%BE',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'धापा (अंग्रेजी भाय:Dhapa), नेपायागु कर्णाली अञ्चलयागु जुम्ला '
'जिल्लायागु गाँ विकास समिति खः। थ्व थासे231खा छेँ दु।'}
```
#### deduplicated_nl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 766978,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:77YAXN3F4IGI2CYBM3IESJRTCIB4WY2F',
'warc-date': '2021-02-25T16:49:18Z',
'warc-identified-content-language': 'nld',
'warc-record-id': '<urn:uuid:0b08e51a-1b82-4fb9-a420-8556f2fb47a3>',
'warc-refers-to': '<urn:uuid:dae7ca23-9b7e-45d1-9a1c-604942af8cb9>',
'warc-target-uri': 'https://www.delpher.nl/nl/tijdschriften/view?identifier=MMUBA13:001691001:00689&coll=dts',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '1 Deze Duitse hond is nauw verwant aan de Duitse Brak, de '
'Westfaalse Dasbrak werd gefokt om op dieren te jagen, zoals...'}
```
#### deduplicated_nn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2770,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FLRYPK225URFXO3IG4LP6D5TI2WW7MNU',
'warc-date': '2021-03-09T03:50:05Z',
'warc-identified-content-language': 'nno',
'warc-record-id': '<urn:uuid:de821d19-abed-4a35-9284-91176a5428b9>',
'warc-refers-to': '<urn:uuid:7ed9913e-e7dd-496f-b0ef-e82098dd53ca>',
'warc-target-uri': 'https://www.avisa-hordaland.no/trafikk/tunell-pa-e16-stengd-2/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Bilføraren som vart stogga på E16 i helga hadde 2,28 i promille: – '
'Han var ikkje i stand til å ta vare på seg sjølv'}
```
#### deduplicated_no
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1329,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:G7JC2T5AD4YK4WWFGTYHHGP5VHB6M7KU',
'warc-date': '2021-03-08T13:17:52Z',
'warc-identified-content-language': 'nor',
'warc-record-id': '<urn:uuid:9e215de3-f988-4754-9ef5-6370121b9b5e>',
'warc-refers-to': '<urn:uuid:1facfcb5-da68-4122-9257-102271944050>',
'warc-target-uri': 'https://www.miljoindex.no/781825/nexans-norway-hovedkontor/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Utvikling, produksjon og markedsføring av kabler og '
'kablingssystemer, samt annen tilknyttet virksomhet, herunder del...'}
```
#### deduplicated_oc
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 20117,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2XDHRCL2CSS7YFAM2IAGQL6CSJJEQDXI',
'warc-date': '2021-03-03T15:40:21Z',
'warc-identified-content-language': 'oci',
'warc-record-id': '<urn:uuid:c9ebdec5-af68-4756-88c8-1df831621c5b>',
'warc-refers-to': '<urn:uuid:199db451-0e6f-4f75-ad81-2e7612295452>',
'warc-target-uri': 'https://oc.wikipedia.org/wiki/2',
'warc-type': 'conversion'},
'nb_sentences': 18,
'offset': 0},
'text': "8 : dins l'Empèri Part, assassinat dau rèi Orodes III, probablament "
'en causa de son autoritarisme, que foguèt remplaç...'}
```
#### deduplicated_or
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12859,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:KQDIT6NHKBV43F56DTHTM5ZS3GHJT5SY',
'warc-date': '2021-03-09T05:25:21Z',
'warc-identified-content-language': 'ori,eng',
'warc-record-id': '<urn:uuid:e25e33da-92c5-42d6-aef8-c3465855312a>',
'warc-refers-to': '<urn:uuid:7457ac60-4aae-44ad-aaec-314795ea0708>',
'warc-target-uri': 'https://or.wikipedia.org/wiki/%E0%AC%A6%E0%AD%8D%E0%AD%B1%E0%AC%BF%E0%AC%A4%E0%AD%80%E0%AD%9F_%E0%AC%AC%E0%AC%BF%E0%AC%B6%E0%AD%8D%E0%AD%B1%E0%AC%AF%E0%AD%81%E0%AC%A6%E0%AD%8D%E0%AC%A7',
'warc-type': 'conversion'},
'nb_sentences': 3,
'offset': 0},
'text': 'ଇଉରୋପ, ପ୍ରଶାନ୍ତ ମହାସାଗର, ଆଟଲାଣ୍ଟିକ ମହାସାଗର, ଦକ୍ଷିଣ-ପୂର୍ବ ଏସିଆ, ଚୀନ, '
'ମଧ୍ୟପ୍ରାଚ୍ୟ, ଭୂମଧ୍ୟସାଗର, ଉତ୍ତର ଆଫ୍ରିକା, ପୂର୍ବ ଆଫ୍...'}
```
#### deduplicated_os
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7079,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:N7CKDF6E3SJBINW4SR6LIUNKLIJP2ROL',
'warc-date': '2021-03-08T22:01:32Z',
'warc-identified-content-language': 'nno',
'warc-record-id': '<urn:uuid:4cd86a68-815b-4539-84a8-bab850034e60>',
'warc-refers-to': '<urn:uuid:8774fb5e-b7fb-4feb-85e7-8c7b33f5980b>',
'warc-target-uri': 'https://os.wikipedia.org/wiki/%D0%9F%D1%83%D1%88%D0%BA%D0%B8%D0%BD,_%D0%A1%D0%B5%D1%80%D0%B3%D0%B5%D0%B9%D1%8B_%D1%84%D1%8B%D1%80%D1%82_%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': 'Пушкин Александр Сергейы фырт (уырыс. Александр Сергеевич Пушкин; '
'райгуырдис 1799 азы 6 июны Мæскуыйы — амардис 1837 ...'}
```
#### deduplicated_pa
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3990,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:HBYN5XY3CD2KI4XIWMBJYPSV2ZPNBWUN',
'warc-date': '2021-03-09T05:05:20Z',
'warc-identified-content-language': 'pan,eng',
'warc-record-id': '<urn:uuid:1ac5c8d1-e750-492e-b35e-b9780bfd16fd>',
'warc-refers-to': '<urn:uuid:b4d8f997-8c9a-43cf-b16c-e8a77c209062>',
'warc-target-uri': 'https://pa.nhp.gov.in/Detail/getdirection?url=radha-krishna-nurshing-andmat-home-rae_bareli-uttar_pradesh',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ਇਹ ਪੋਰਟਲ ਰਾਸ਼ਟਰੀ ਸਿਹਤ ਪੋਰਟਲ ਦੇ ਸਿਹਤ ਸੂਚਨਾ ਕੇਂਦਰ (CHI) ਦੁਆਰਾ ਵਿਕਸਿਤ '
'ਤੇ ਤਿਆਰ ਕੀਤਾ ਗਿਆ ਹੈ ਅਤੇ ਸਿਹਤ ਤੇ ਪਰਿਵਾਰ ਭਲਾਈ ਮੰਤਰਾਲੇ...'}
```
#### deduplicated_pam
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4615,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:WOAFTI75LXN3LAF6WFDRDHITPU33CZRK',
'warc-date': '2021-03-07T22:02:39Z',
'warc-identified-content-language': 'eng',
'warc-record-id': '<urn:uuid:9d7a202a-0fec-4aac-9921-2ebf5aa7f9a2>',
'warc-refers-to': '<urn:uuid:70b6a707-77b1-4a0f-84e6-d75ed8d729ad>',
'warc-target-uri': 'https://toddlers.me/kpai-sarankan-gading-beri-penguatan-psikologi-untuk-gempi/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '“Káláu Gádìng tìdák mámpu melákukán ìtu, yá bìsá mìntá tolong '
'kepádá oráng yáng berkompeten, mìsálnyá psìkolog átáu s...'}
```
#### deduplicated_pl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 51849,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:25YENUTK4YA3ZYGCWQH5Z6YDINCMI6SI',
'warc-date': '2021-03-05T22:43:01Z',
'warc-identified-content-language': 'pol',
'warc-record-id': '<urn:uuid:753116b6-f680-448d-ae8a-8fc88ce061b1>',
'warc-refers-to': '<urn:uuid:926693c4-5b59-4f50-98b9-787576fc71d7>',
'warc-target-uri': 'https://igraszki-jezykowe.pl/category/tips-and-tricks-metodyka/',
'warc-type': 'conversion'},
'nb_sentences': 60,
'offset': 0},
'text': 'W niedzielę, 12 czerwca w Orlando na Florydzie islamski terrorysta, '
'powiązany z ISIS zastrzelił 50 osób i drugie tyle...'}
```
#### deduplicated_pms
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2620,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2T5H5XDLC3KPDB33XXVCTGNNYYDJXQWQ',
'warc-date': '2021-03-03T16:04:55Z',
'warc-identified-content-language': 'srp',
'warc-record-id': '<urn:uuid:952c2dda-041e-40ff-bf28-8a39075f53d9>',
'warc-refers-to': '<urn:uuid:6d526022-b736-4a51-9b9c-c5bdd5a546f9>',
'warc-target-uri': 'https://pms.wikipedia.org/wiki/Auer',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': "Auer (Ora për j'italian) a l'é un comun ëd 3.025 abitant dla "
'provincia ëd Bolsan (Region Autònoma Trentin-Sud Tiròl)....'}
```
#### deduplicated_pnb
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2896,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:GWWDSJAQDB7JDQWV65CI6WT7E6C33DL4',
'warc-date': '2021-03-08T23:01:08Z',
'warc-identified-content-language': 'urd',
'warc-record-id': '<urn:uuid:8c385ca8-7561-4f47-b5a3-0862488eb948>',
'warc-refers-to': '<urn:uuid:837d621d-3540-44fd-a4d0-6cb3c6f2327f>',
'warc-target-uri': 'https://pnb.wikipedia.org/wiki/453%DA%BE',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'لکھت کریئیٹیو کامنز انتساب/ اکوجہے-شراکت لائسنس دے ہیٹھ دستیاب اے، '
'ہور شرطاں وی لاگو ہوسکدیاں نیں۔ ویروے لئی ورتن شرط...'}
```
#### deduplicated_ps
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2424,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CAUU5Y7TOTASV7WYKCYRCVXTZ7GGN2VO',
'warc-date': '2021-03-09T05:08:35Z',
'warc-identified-content-language': 'pus',
'warc-record-id': '<urn:uuid:d784cf7a-91e1-4c54-96a2-e41c67318548>',
'warc-refers-to': '<urn:uuid:98aed7d2-c3e3-4039-af83-f2c73a5c19f5>',
'warc-target-uri': 'https://www.mashaalradio.com/a/29821043.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'د افغانستان په فاریاب ولایت کې په یوه پارک کې ښځو په برقعو کې ورزش '
'کړی دی. د سیمې چارواکي وايي، د ښځو د ورزش لپاره ځا...'}
```
#### deduplicated_pt
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 79931,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:JYDP4XMEGW2XPPV6NAAF772KDH4X2CCF',
'warc-date': '2021-02-25T13:48:41Z',
'warc-identified-content-language': 'por',
'warc-record-id': '<urn:uuid:3b50f546-e03b-461f-98c8-5a38920d7c0a>',
'warc-refers-to': '<urn:uuid:564bfb21-0705-4997-bbb9-472f0cbcad3e>',
'warc-target-uri': 'http://www.artefazparte.com/',
'warc-type': 'conversion'},
'nb_sentences': 117,
'offset': 0},
'text': 'A reflexão sobre identidade de género anda a cansar muitos de nós. '
'Sobretudo os que não têm dúvidas e nela se sentem ...'}
```
#### deduplicated_qu
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2630,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:34TX2UNXR2JLRLAFTE3ILOBMEBRMWIRH',
'warc-date': '2021-03-09T05:23:48Z',
'warc-identified-content-language': 'que',
'warc-record-id': '<urn:uuid:237398f6-a300-449b-9e09-7a1ed8cf1e97>',
'warc-refers-to': '<urn:uuid:84b20aab-d538-4efc-bc97-33d546d84802>',
'warc-target-uri': 'https://qu.wikipedia.org/wiki/Sapaq:HukchasqaTinkimuq/Chinchay_Chungcheong_pruwinsya',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': "Kay sapaq p'anqaqa t'inkisqa p'anqakunapi ñaqha hukchasqakunatam "
"rikuchin. Watiqasqayki p'anqakunaqa yanasapa qillqas..."}
```
#### deduplicated_rm
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 100558,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:Z7R6QV2K5FDIHR4QJH7F2NTXND6NDEFY',
'warc-date': '2021-02-27T13:53:32Z',
'warc-identified-content-language': 'deu',
'warc-record-id': '<urn:uuid:da3aec28-6c61-470c-a5d2-66710bc1fb35>',
'warc-refers-to': '<urn:uuid:9d04f371-89a7-4ac2-9b1e-883aa93e4ace>',
'warc-target-uri': 'http://lexbrowser.provinz.bz.it/doc/la/lp-2009-5/lege_provinzialadi_28_de_set_mber_dl_2009_n_5.aspx?view=1',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '(2) La prestaziun dla garanzia é sotmetüda al’aprovaziun di decunć '
'finanziars da pert dl’aministraziun dl consorz.'}
```
#### deduplicated_ro
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1677,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DXKBGKXVETQLCHTHRMLLSWUPXTDNJDVV',
'warc-date': '2021-02-26T12:19:49Z',
'warc-identified-content-language': 'ron',
'warc-record-id': '<urn:uuid:2c20c06f-ca98-4118-9222-7b3b74bc760b>',
'warc-refers-to': '<urn:uuid:e77c028a-5857-4ec2-90db-58a9bb57c510>',
'warc-target-uri': 'https://ro.visafoto.com/es-visa-photo',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Căluşarii sau Boristenii, melodie culeasă din Braşov, în 1832, de '
'Canzler cav. de Ferio şi publicată târziu de Otto H...'}
```
#### deduplicated_ru
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 14025,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2HSXIFOHEJZOTJV2EVDSZDVF26ATVATE',
'warc-date': '2021-03-07T02:45:16Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:aa9b3fc9-fb66-45fa-a064-62ae5fd67970>',
'warc-refers-to': '<urn:uuid:e9145f1e-4ce5-44db-a7d7-234842b31973>',
'warc-target-uri': 'http://budzdorov-kaluga.ru/statyi_i_materialy/o-grippe',
'warc-type': 'conversion'},
'nb_sentences': 15,
'offset': 0},
'text': '«Геро́й» (кит. 英雄) — исторический фильм режиссёра Чжана Имоу, '
'снятый в 2002 году. Продолжительность — 93 минуты (суще...'}
```
#### deduplicated_rue
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17472,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:YBMO2PR3WF7WQ7UEU5YLRBI7BZ6IP6KB',
'warc-date': '2021-03-06T15:24:27Z',
'warc-identified-content-language': 'ukr,rus',
'warc-record-id': '<urn:uuid:ca71a8fe-adb9-4346-a5b4-7d283f1410f8>',
'warc-refers-to': '<urn:uuid:a609d9f9-5040-4ca5-80a8-aa2c4c7a3525>',
'warc-target-uri': 'https://rue.wikipedia.org/wiki/%D0%9F%D0%BE%D0%BC%D1%96%D1%87:%D0%9A%D0%B0%D1%82%D0%B5%D2%91%D0%BE%D1%80%D1%96%D1%97',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Наприклад можете едітовати Катеґорія:Фізіци і додати одказ '
'[[Катеґорія:Фізіка]]. Катеґорія Фізіци буде пікатеґоріёв к...'}
```
#### deduplicated_sa
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 4166,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:ACZ66HH67HYSPS6I7YYQX64HRD4O5GIH',
'warc-date': '2021-02-24T20:35:30Z',
'warc-identified-content-language': 'san,eng',
'warc-record-id': '<urn:uuid:12bc2393-cb9b-492d-9398-f6b1090bd999>',
'warc-refers-to': '<urn:uuid:6e883bd6-350e-4280-94dc-ee84f44d2458>',
'warc-target-uri': 'https://sa.wikipedia.org/wiki/%E0%A4%B5%E0%A4%BF%E0%A4%B6%E0%A5%87%E0%A4%B7%E0%A4%83:%E0%A4%95%E0%A4%BF%E0%A4%AE%E0%A4%A4%E0%A5%8D%E0%A4%B0_%E0%A4%B8%E0%A4%81%E0%A4%B2%E0%A5%8D%E0%A4%B2%E0%A4%97%E0%A5%8D%E0%A4%A8%E0%A4%AE%E0%A5%8D/%E0%A4%B5%E0%A4%B0%E0%A5%8D%E0%A4%97%E0%A4%83:%E0%A5%A9%E0%A5%AC%E0%A5%A7',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'केभ्यः पृष्ठेभ्यः सम्बद्धम् पृष्ठम्: नामाकाशः : सर्वाणि (मुख्यम्) '
'सम्भाषणम् सदस्यः सदस्यसम्भाषणम् विकिपीडिया विकिपीडि...'}
```
#### deduplicated_sah
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1724,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:5PKOMLENZCNOU6PT27NCNKTQFPRC37RQ',
'warc-date': '2021-03-03T15:19:03Z',
'warc-identified-content-language': 'ukr,rus',
'warc-record-id': '<urn:uuid:59b7bbeb-e375-4d8c-8b7c-fbe09e5ce21e>',
'warc-refers-to': '<urn:uuid:512d4df0-bd91-47aa-8f23-eb2a8d4b426e>',
'warc-target-uri': 'https://sah.m.wikipedia.org/wiki/%D0%A7%D0%B5%D1%80%D0%BD%D0%B8%D0%B3%D0%BE%D0%B2_%D1%83%D0%BE%D0%B1%D0%B0%D0%BB%D0%B0%D2%BB%D0%B0',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Тиэкис Creative Commons Attribution-ShareAlike лиссиэнсийэ '
'усулуобуйатынан тарҕанар, сорох түбэлтэҕэ эбии көрдөбүллэр...'}
```
#### deduplicated_scn
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3622,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VGCXGU3B2WY722G2LRJ56RSYT4HSLUGI',
'warc-date': '2021-03-03T02:35:42Z',
'warc-identified-content-language': 'cos,ita',
'warc-record-id': '<urn:uuid:caeb7ba3-1bc2-4ef7-95cb-eb0d4d0792d6>',
'warc-refers-to': '<urn:uuid:19e33395-5981-4f6d-857b-12cf7d761b58>',
'warc-target-uri': 'https://scn.wikipedia.org/wiki/Canali_d%C3%A2_M%C3%A0nica',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Lu ripartu francisi dâ Mànica, chi cumprenni la pinìsula dû '
'Cotentin, chi si nesci ntô canali, pigghia lu sò nomu dû ...'}
```
#### deduplicated_sco
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 140370,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TRXAEE4XHP7FT4FCJF3DSEKD7YBPCFOR',
'warc-date': '2021-03-02T07:33:12Z',
'warc-identified-content-language': 'eng,vol',
'warc-record-id': '<urn:uuid:d406a6c9-dba6-4955-8ede-f8082f7da58f>',
'warc-refers-to': '<urn:uuid:155919e0-a689-415c-b2aa-eccd06021476>',
'warc-target-uri': 'https://baggato.com/fo',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'fowjo fowjp fowjq fowjr fowka fowkb fowkc fowkd fowke fowkf fowkg '
'fowkh fowki fowkj fowkk fowkl fowkm fowkn fowko fow...'}
```
#### deduplicated_sd
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17619,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DLWVP7WGNP64RB6ZLHDNQEJ7D24BYXOR',
'warc-date': '2021-02-24T20:04:37Z',
'warc-identified-content-language': 'snd,eng',
'warc-record-id': '<urn:uuid:8997e1c6-4d72-47f1-bffe-d18a00ae6b94>',
'warc-refers-to': '<urn:uuid:946e892e-46c3-4a68-8532-1eac8b65b76a>',
'warc-target-uri': 'https://sd.info-4all.ru/%D8%B1%D8%AA%D9%88%D9%BD%D9%88-%D8%A2%D8%A6%D9%8A%D8%B1%D8%B1%D8%A7/%DA%AA%D9%84%D8%A7%DA%AA/',
'warc-type': 'conversion'},
'nb_sentences': 21,
'offset': 0},
'text': 'بيلففيل ڪيئن ٿيو؟ پهرين توهان کي پنهنجو ضمير وڃائڻ جي ضرورت آهي. '
'اهي تعليم کان سواءِ صرف سست ماڻهو نه وٺندا آهن ، پر ...'}
```
#### deduplicated_sh
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 12517,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:IH6O64JAV4PLXURRD5LKU6C46DGGXS27',
'warc-date': '2021-03-09T06:06:53Z',
'warc-identified-content-language': 'fra,hrv,eng',
'warc-record-id': '<urn:uuid:ddc0f982-aea2-4206-a431-02e6c89ab090>',
'warc-refers-to': '<urn:uuid:904a206d-515a-4f11-ad25-9035adbf0cfa>',
'warc-target-uri': 'https://sh.wikipedia.org/wiki/Cliponville',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Po podacima iz 1999. godine u opštini je živelo 245 stanovnika, a '
'gustina naseljenosti je iznosila 33 stanovnika/km²....'}
```
#### deduplicated_si
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 18426,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CZO426HASJ2VV5IMXEAHY2T53ZTDOZEP',
'warc-date': '2021-02-24T20:38:23Z',
'warc-identified-content-language': 'sin,eng',
'warc-record-id': '<urn:uuid:bec8b1fe-0659-4f47-b244-018b5dac9e30>',
'warc-refers-to': '<urn:uuid:1c918e04-8c2d-4bc0-bcfb-bf978ab0c0ea>',
'warc-target-uri': 'https://androidwedakarayo.com/before-you-look-for-a-job-please-fix-your-facebook-account/',
'warc-type': 'conversion'},
'nb_sentences': 19,
'offset': 0},
'text': 'ඉස්සර තමයි අපි සෝෂල්මීඩියා පාවිච්චි කරන්නේ අපි ආස නළු නිළියන්ගේ '
'ෆොටෝ, හදපු කෑම, ඩ්\u200dරින්ක් එකක් දාන්න සෙට් වෙච්චි වෙලා...'}
```
#### deduplicated_sk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 37910,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:ODXVMZXR34B45NQTMJIKKK2VGBGRXKEA',
'warc-date': '2021-03-01T16:29:19Z',
'warc-identified-content-language': 'slk',
'warc-record-id': '<urn:uuid:6a22612f-9bbf-4f74-8cca-0457f069baa4>',
'warc-refers-to': '<urn:uuid:3981cb48-fadf-463f-9fc9-a6d717b9dc71>',
'warc-target-uri': 'http://www.tomsta.sk/',
'warc-type': 'conversion'},
'nb_sentences': 56,
'offset': 0},
'text': 'Keďže všade naokolo sú iba kopce, mohol byť jedine horský. Dnes je '
'z toho najlepší horský triatlon na Slovensku, ktor...'}
```
#### deduplicated_sl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8130,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:UFZ4P4LVU4TXYJIHZULTCIVJ4GA3JT54',
'warc-date': '2021-03-07T14:50:23Z',
'warc-identified-content-language': 'slv,eng',
'warc-record-id': '<urn:uuid:e50a528d-ebd3-46dc-92d7-af394aaa896a>',
'warc-refers-to': '<urn:uuid:dbfe8ac4-b415-45a8-a16c-c168ed5ce37b>',
'warc-target-uri': 'https://www.edi-nm.com/si/varicosen-mnenja-cena-lekarna/',
'warc-type': 'conversion'},
'nb_sentences': 6,
'offset': 0},
'text': 'Po najnovejših raziskavah v Sloveniji vsaka 4. oseba med 36. in 95. '
'letom trpi zaradi kronične venske insuficience – ...'}
```
#### deduplicated_so
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 17837,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:WIS4GECYGJYMTZMVFOUVUMRWTAPFZUSK',
'warc-date': '2021-03-03T20:11:46Z',
'warc-identified-content-language': 'bul,eng,srp',
'warc-record-id': '<urn:uuid:976de977-97b9-4517-8a42-2fc82fdda461>',
'warc-refers-to': '<urn:uuid:a0f1fbd0-b2cb-495f-93f3-53e77acae3f5>',
'warc-target-uri': 'https://studioqueens.bgnick.info/l4fOorCpgdutsnY/igra-na.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'ххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххх...'}
```
#### deduplicated_sq
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6129,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:D3PWGEKLJKJEGOTQLYVQNUV4URWEFH2P',
'warc-date': '2021-03-09T03:17:23Z',
'warc-identified-content-language': 'sqi',
'warc-record-id': '<urn:uuid:3299bc56-c7fb-4655-bebd-393510d89aaa>',
'warc-refers-to': '<urn:uuid:1416a2ad-d319-4c60-b663-29239ff79154>',
'warc-target-uri': 'http://ata.gov.al/2019/11/03/video-u-prek-nga-termeti-ndertohet-nga-e-para-banesa-e-familjes-stafa-ne-petrele/',
'warc-type': 'conversion'},
'nb_sentences': 11,
'offset': 0},
'text': 'TIRANË, 3 nëntor/ATSH/- Në Petrelë të Tiranës ka nisur puna për '
'ndërtimin nga e para të shtëpisë së familjes Stafa, e...'}
```
#### deduplicated_sr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7735,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:7LKRS7R2L2K53YTV5CYR2IAJRNIQKGBJ',
'warc-date': '2021-03-03T11:23:25Z',
'warc-identified-content-language': 'srp,eng',
'warc-record-id': '<urn:uuid:8ade8406-bedb-41a7-b854-8429b6b21214>',
'warc-refers-to': '<urn:uuid:cca5c75c-7221-4247-a51e-f7be99661793>',
'warc-target-uri': 'https://vojvodjanske.rs/40-jubilarni-somborski-polumaraton-u-nedelju-19-maja/',
'warc-type': 'conversion'},
'nb_sentences': 4,
'offset': 0},
'text': '„У недељу 19. маја, у Сомбору се одржава јубиларна 40. најстарија '
'улична трка у Републици Србији, Сомборски полумарат...'}
```
#### deduplicated_su
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 14013,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:IMFFV646FPXSYLMOATX7O6CDMKUU4BFL',
'warc-date': '2021-03-09T10:29:19Z',
'warc-identified-content-language': 'sun,ind',
'warc-record-id': '<urn:uuid:02eb1f6f-7040-4b8f-b995-7c547196da4b>',
'warc-refers-to': '<urn:uuid:4a9807f7-0c98-493f-ab84-8fafc61a1e50>',
'warc-target-uri': 'https://www.masdinko.com/2019/04/soal-utspts-bahasa-sunda-sd-kelas-4.html',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Pikeun urang lembur, daun seureuh téh geus teu anéh deui. Seureuh '
'mah mangrupa tangkal nu ngarémbét kana tangkal séjéna.'}
```
#### deduplicated_sv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 87099,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:TKLP6CG56M45ABZQGDD7EDTCQMKTSAVS',
'warc-date': '2021-03-05T20:01:45Z',
'warc-identified-content-language': 'swe',
'warc-record-id': '<urn:uuid:97860695-1688-46ef-93db-5e15742820af>',
'warc-refers-to': '<urn:uuid:7c924b0e-39e1-4921-a561-52dc5453b886>',
'warc-target-uri': 'https://fortretligheter.blogspot.com/2011/01/',
'warc-type': 'conversion'},
'nb_sentences': 255,
'offset': 0},
'text': 'Svenska trupper hade en kväll för flera hundra år sedan när Sverige '
'och Danmark låg i Krig med varandra kommit med sk...'}
```
#### deduplicated_sw
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 2098,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FPGJP34F47FJQSZF62PELBLYNJ4RTCSE',
'warc-date': '2021-03-03T15:24:39Z',
'warc-identified-content-language': 'swa',
'warc-record-id': '<urn:uuid:d42018de-64be-41f9-b4b6-700dd0051ce3>',
'warc-refers-to': '<urn:uuid:a40c8328-ab33-4113-9ea1-8c35967b0bde>',
'warc-target-uri': 'http://mwanza.go.tz/videos/78',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Mkuu wa Mkoa wa Mwanza Mhe.John Mongella akifungua Baraza la '
'biashara katika kikao kilichofanyika kwenye ukumbi wa mk...'}
```
#### deduplicated_ta
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 49341,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FQEPDKJ7AYCAEVL5SRUQ5QOULOOSHECD',
'warc-date': '2021-03-09T04:15:52Z',
'warc-identified-content-language': 'tam',
'warc-record-id': '<urn:uuid:2fa70e6a-a31a-4359-b4ff-54ce7f5d6200>',
'warc-refers-to': '<urn:uuid:92eb01ff-4f82-438b-8d1f-1722fe23285a>',
'warc-target-uri': 'https://thiru2050.blogspot.com/2019_05_26_archive.html',
'warc-type': 'conversion'},
'nb_sentences': 15,
'offset': 0},
'text': '... 2017 adimmix psychic leah அறிவுரை கும்பம் மேஷம் ஜோதிடம் '
'புற்றுநோய் மகர படிக குழந்தைகள் மனநோய் புத்தகங்கள் முன்அ...'}
```
#### deduplicated_te
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 31516,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:MG3MFYW5T6XSW3XYZ4ZIKGJW5XAY2RCG',
'warc-date': '2021-03-06T18:07:45Z',
'warc-identified-content-language': 'tel',
'warc-record-id': '<urn:uuid:238b108b-d16e-41d2-b06e-464267352b0e>',
'warc-refers-to': '<urn:uuid:3663318c-d256-4c97-b71b-e4eeb2e6b58a>',
'warc-target-uri': 'https://telugu.greatandhra.com/articles/mbs/ammo-ativa-01-114908.html',
'warc-type': 'conversion'},
'nb_sentences': 15,
'offset': 0},
'text': 'అది 1868. ఇంగ్లండ్\u200cలోని బ్రైటన్\u200cలో క్రిస్టియానా ఎడ్మండ్స్ '
'అనే 40 ఏళ్ల మహిళ వుండేది. పెళ్లి కాలేదు. తల్లితో కలిసి ఒక ఎ...'}
```
#### deduplicated_tg
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 16112,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:LDBVTK3U6MY7J475ZR4LRLFK2CC2QWG5',
'warc-date': '2021-03-09T03:53:03Z',
'warc-identified-content-language': 'tgk,tat,rus',
'warc-record-id': '<urn:uuid:b2519476-6812-4a38-8522-f5292b95e73a>',
'warc-refers-to': '<urn:uuid:f11fa878-d4c6-4e56-bc50-a76554b7d811>',
'warc-target-uri': 'http://hamsafon.tj/2784-imr1263z-1203avoi-1207um1203ur1251-sofu-be1171ubor-meshavad.html',
'warc-type': 'conversion'},
'nb_sentences': 15,
'offset': 0},
'text': 'ДУШАНБЕ, 10.01.2017/АМИТ «Ховар»/. 10 январ дар пойтахти кишвар '
'ҳавои тағйирёбандаи бебориш дар назар дошта шудааст. ...'}
```
#### deduplicated_th
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 50841,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:MESEMAONUQXZZEA6IKBT3VCUZ43ZP4B7',
'warc-date': '2021-02-28T15:41:47Z',
'warc-identified-content-language': 'tha,eng',
'warc-record-id': '<urn:uuid:46495e6b-f22f-4dc6-86ab-3bbed66ce7e4>',
'warc-refers-to': '<urn:uuid:10946c1b-9dc5-4afb-bc74-d6baf9793a03>',
'warc-target-uri': 'https://www.thaicsr.com/2009/02/blog-post_08.html',
'warc-type': 'conversion'},
'nb_sentences': 34,
'offset': 0},
'text': 'ปี พ.ศ. 2521 '
'พระบาทสมเด็จพระเจ้าอยู่หัวเสด็จเยี่ยมราษฎรบ้านพระบาทห้วยต้ม '
'ทรงทอดพระเนตรเห็นสภาพพื้นที่และชีวิตความเป็น...'}
```
#### deduplicated_tk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 22486,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VNR5UQCQIGPEZQBZL4VAOQDASFOVNRDL',
'warc-date': '2021-03-03T15:07:09Z',
'warc-identified-content-language': 'eng,rus',
'warc-record-id': '<urn:uuid:b514b9c5-1ccd-4cf0-bea7-ea38a5aef686>',
'warc-refers-to': '<urn:uuid:edf1f6cb-9f46-4790-8256-eb984db0f0d5>',
'warc-target-uri': 'http://www.newscentralasia.net/2020/12/02/move-forward-with-universal-right-and-responsibility/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Türkmenistanyň Daşary işler ministriniň Owganystanyň Milli Yslam '
'Hereketi partiýasynyň ýolbaşçysy bilen duşuşygy'}
```
#### deduplicated_tl
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 15036,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
'warc-date': '2021-03-09T04:48:08Z',
'warc-identified-content-language': 'eng,khm,lao',
'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'PAUNAWA: Kung nagsasalita ka ng wikang Tagalog, mayroon kang '
'magagamit na mga libreng serbisyo para sa tulong sa wika...'}
```
#### deduplicated_tr
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 14815,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:GVNKVEGK7TMZGXIIMLV2O2YWYJRAKBO2',
'warc-date': '2021-03-04T00:44:44Z',
'warc-identified-content-language': 'tur,eng',
'warc-record-id': '<urn:uuid:7acbe6a8-83c4-4ebd-8d29-62cb0b150b2f>',
'warc-refers-to': '<urn:uuid:038ffe28-2fd1-49b9-a5c6-3dddd1af6318>',
'warc-target-uri': 'https://www.kadikoygitarkursum.com/search/label/g%C3%B6ztepe%20gitar%20dersi',
'warc-type': 'conversion'},
'nb_sentences': 5,
'offset': 0},
'text': 'İlk olarak, bir tek siyah kirpik takımı için fiyat belirleyin, '
"örneğin, 4000 ruble'ye eşittir. Artık bir müşteriyle ç..."}
```
#### deduplicated_tt
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 26112,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FAPA2JNYP6OL53T6OIL3SR3EGMX2R4XY',
'warc-date': '2021-03-09T04:42:07Z',
'warc-identified-content-language': 'tat,rus',
'warc-record-id': '<urn:uuid:5cac6257-fa6c-4e67-9ba1-8e7d7424ef54>',
'warc-refers-to': '<urn:uuid:52642c8d-da35-462f-9776-ccfa88353466>',
'warc-target-uri': 'http://saby-rt.ru/news/konkurslar/fotokonkurs',
'warc-type': 'conversion'},
'nb_sentences': 12,
'offset': 0},
'text': 'Хөрмәтле хатын-кызларбыз! Сезне чын күңелдән 8 Март бәйрәме белән '
'тәбрик итәбез! Яраткан әниләребез, әбиләребез, гоме...'}
```
#### deduplicated_tyv
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7766,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:L5GRAANBGMGNYXDFF3ECSWJ5Q6D4QFHS',
'warc-date': '2021-02-28T07:20:44Z',
'warc-identified-content-language': 'rus',
'warc-record-id': '<urn:uuid:238082a9-0adf-4c8c-b749-1a523c91e229>',
'warc-refers-to': '<urn:uuid:4bfd0ca2-52bb-4ece-9ccf-cdcee0b30ee9>',
'warc-target-uri': 'https://tyv.wikipedia.org/wiki/%D0%A1%D0%B0%D1%80%D0%BB%D1%8B%D0%BA',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Сарлык бызаазы – ниити ады, назыны бир хар чедир, сарлыктың эр '
'бызаазы аза сарлыктың кыс бызаазы деп чугаалаар.'}
```
#### deduplicated_ug
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 19089,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:DHYFNWWKECLR6BHWF763HC62JRCASMGH',
'warc-date': '2021-03-09T04:33:38Z',
'warc-identified-content-language': 'uig',
'warc-record-id': '<urn:uuid:d1185989-9cd6-40f2-ad63-003e405c9141>',
'warc-refers-to': '<urn:uuid:923ac168-6484-49ea-807d-be3ced85a885>',
'warc-target-uri': 'https://www.akademiye.org/ug/?p=10959',
'warc-type': 'conversion'},
'nb_sentences': 30,
'offset': 0},
'text': 'شەرقىي تۈركىستانئاكادېمىيە ھەققىدەئەزالىقتەۋپىق '
'مۇكاپاتىئىئانەئالاقەTürkçeEnglishئۇيغۇرچەУйғурчәUyghurche\n'
'مىللىي مەۋج...'}
```
#### deduplicated_uk
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 16706,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:46XDNKJUJSG22BA4B6DDET2R5GMBU3LV',
'warc-date': '2021-02-26T22:04:41Z',
'warc-identified-content-language': 'ukr,eng',
'warc-record-id': '<urn:uuid:a3c68b5a-f9e8-41b6-b2bb-3d43e4d7a117>',
'warc-refers-to': '<urn:uuid:6a35e918-42ce-4349-9a6c-edcd22f07254>',
'warc-target-uri': 'https://www.interesniy.kiev.ua/vasil-boroday-korifey-mistetstva-pla/',
'warc-type': 'conversion'},
'nb_sentences': 14,
'offset': 0},
'text': 'На Женевському міжнародному автосалоні 2017 бренд Fiat буде '
'показувати дві свої душі, які співіснують у великій повні...'}
```
#### deduplicated_ur
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 9450,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:3SZ3UYOSHTRE3W3PDZXRO7DDSLRKENV2',
'warc-date': '2021-03-09T03:21:23Z',
'warc-identified-content-language': 'eng,urd,bos',
'warc-record-id': '<urn:uuid:0ded0cb4-2f73-41a7-a093-5dcfed204738>',
'warc-refers-to': '<urn:uuid:6b380ef1-fec4-4f48-bcdc-86700c508dfc>',
'warc-target-uri': 'http://www.khanaghar.org/?p=50',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'اتراکھنڈ کے سلماتا گاؤں کی لڑائیتی دیوی ایک پُر اعتماد اور عقلمند '
'مجاہد ہیں، جن کی طرف دیگر خواتین بھی دیکھ رہی ہیں۔ ...'}
```
#### deduplicated_uz
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3808,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:FYYLFGJTK74HXE2LRJOAR5E6BPGCQ5NU',
'warc-date': '2021-03-09T04:38:24Z',
'warc-identified-content-language': 'uzb,ben,ltz',
'warc-record-id': '<urn:uuid:2a56bf64-042e-47fa-9abb-819b13bf7920>',
'warc-refers-to': '<urn:uuid:155b1e81-dc6e-46dc-9544-5a6a97c05118>',
'warc-target-uri': 'https://uz.wikipedia.org/wiki/1408',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Matn Creative Commons Attribution-ShareAlike litsenziyasi boʻyicha '
'ommalashtirilmoqda, alohida holatlarda qoʻshimcha ...'}
```
#### deduplicated_vec
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7088,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CX2L4ZL4I4OLXG7YJTXLRKNFHE7RIHRX',
'warc-date': '2021-02-24T19:06:44Z',
'warc-identified-content-language': None,
'warc-record-id': '<urn:uuid:abc5a544-7009-407a-a5a3-5c2145195bd5>',
'warc-refers-to': '<urn:uuid:4a956690-536a-437b-afe2-50dc7ac54b39>',
'warc-target-uri': 'https://vec.wikipedia.org/wiki/Utensa:Aelwyn',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Łe parołe che vien dal łatin -TAS, TATIS łe termina par -DÁ. Łe '
'parołe che łe vien da -ICUS łe tèrmina par -ÉGO. Łe p...'}
```
#### deduplicated_vi
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7845,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CCXAI5SV5PFLNPSMP4UF4SQGGSYN37AP',
'warc-date': '2021-03-03T02:43:13Z',
'warc-identified-content-language': 'vie',
'warc-record-id': '<urn:uuid:7ce27f30-a1eb-4978-83d0-5110421393b0>',
'warc-refers-to': '<urn:uuid:5dad988d-2426-402c-ac0c-1fa811ed96dc>',
'warc-target-uri': 'http://httlvinhphuoc.org/vi/duong-linh/Hoc-Kinh-Thanh-hang-ngay/Lam-Dieu-Thien-Bang-Tinh-Yeu-Thuong-6521/',
'warc-type': 'conversion'},
'nb_sentences': 8,
'offset': 0},
'text': 'Bitcoin và tiền kỹ thuật số nói chung đang dần xâm nhập vào các '
'thị trường tài chính khi ngày càng có nhiều nhà đ...'}
```
#### deduplicated_vls
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 78684,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VQNDJYOQXZLCLMDXIFCT4BHSW6LVTJQE',
'warc-date': '2021-02-28T16:16:27Z',
'warc-identified-content-language': 'fra,eng',
'warc-record-id': '<urn:uuid:266acc08-1c69-449f-95ad-0dcc82565788>',
'warc-refers-to': '<urn:uuid:c45dcd64-1b20-4ffc-bdd7-7dbff4f0a726>',
'warc-target-uri': 'https://fr.readkong.com/page/livret-des-licences-faculte-des-sciences-et-des-techniques-7906239',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': ' '
'...'}
```
#### deduplicated_vo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 1937,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:VPG56ZACAOAZTXHSSXFJOBBH44NWUSJW',
'warc-date': '2021-03-09T06:02:56Z',
'warc-identified-content-language': 'vol,eng,srp',
'warc-record-id': '<urn:uuid:2cb96947-ee22-42a8-be36-31a03203efcc>',
'warc-refers-to': '<urn:uuid:da82b7d8-535b-4e39-8d9b-ea8c3d4a4460>',
'warc-target-uri': 'https://vo.wikipedia.org/wiki/Arnesano',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'Arnesano binon zif in topäd: Puglia, in Litaliyän. Arnesano topon '
'videtü 40° 20’ N e lunetü 18° 6’ L.'}
```
#### deduplicated_wa
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 6518,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:6NC6V46TRVMWTOHCPMTDVRTP7GGL3G3S',
'warc-date': '2021-02-26T09:47:28Z',
'warc-identified-content-language': 'wol',
'warc-record-id': '<urn:uuid:4d800a25-ccf5-4d55-9795-3f7974b988b1>',
'warc-refers-to': '<urn:uuid:87119673-154b-4246-8c39-35737821a7ff>',
'warc-target-uri': 'https://wa.wikipedia.org/wiki/Senegal',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est "
"djusse sibåtcheye, eyet co trop tene; et s' divreut..."}
```
#### deduplicated_war
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7356,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:SVXPIA63QN77O2IJXL4Q75LNVLDEBHYW',
'warc-date': '2021-03-09T05:49:57Z',
'warc-identified-content-language': 'war,tha,eng',
'warc-record-id': '<urn:uuid:a143ebc6-a7b4-4fa7-96b3-59ba2c1dd03c>',
'warc-refers-to': '<urn:uuid:571d090a-cb65-41e7-ae7c-d95588d41c28>',
'warc-target-uri': 'https://war.wikipedia.org/wiki/Chakri_nga_Dinastiya',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'An Chakri nga Dinastiya (Thai: ราชวงศ์จักรี: Rajawongse Chakri) '
'namuno ngan naghadi han Thailand tikang han hi hadi T...'}
```
#### deduplicated_wuu
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 26503,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:XAH2SJIYORGGSMLN4DNJZCNVG2FVWF3C',
'warc-date': '2021-03-09T04:09:05Z',
'warc-identified-content-language': 'jpn',
'warc-record-id': '<urn:uuid:8df3f922-fbbf-4733-a3a8-9f34b7505cbf>',
'warc-refers-to': '<urn:uuid:a55eb04e-3679-4817-b94b-e0317142ab2b>',
'warc-target-uri': 'https://wpedia.goo.ne.jp/wiki/%E4%BC%8A%E5%8D%81%E4%BA%94%E5%9E%8B%E6%BD%9C%E6%B0%B4%E8%89%A6',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': '伊15 [I] | 伊17 | 伊19 | 伊21 | 伊23 | 伊25 | 伊26 | 伊27 | 伊28 | 伊29 | 伊30 '
'| 伊31 | 伊32 | 伊33 | 伊34 | 伊35 | 伊36 | 伊37 | 伊38 |...'}
```
#### deduplicated_xal
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 8598,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:KGZNUXNSFUSFYC45UQJRZPEHXNGK6C3H',
'warc-date': '2021-03-02T01:27:37Z',
'warc-identified-content-language': 'rus,spa',
'warc-record-id': '<urn:uuid:676f6ca8-706b-4f77-926f-bda90e3cd772>',
'warc-refers-to': '<urn:uuid:452efc2f-85ce-4e90-b268-2f46893172f8>',
'warc-target-uri': 'http://born.altnzam.com/2014/01/',
'warc-type': 'conversion'},
'nb_sentences': 2,
'offset': 0},
'text': 'Ааһ: Хоосн ааһ би, хагсхларн һанцардсн болҗ медгдҗәнә. Нанд усн йир '
'кергтә болҗана. Ус өгит, — эзнәсн сурна.\n'
'Ааһ ууль...'}
```
#### deduplicated_xmf
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 7053,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:OQKCWDGQCIJHXMM3SCUO2KPBMFCQACUJ',
'warc-date': '2021-03-03T14:27:35Z',
'warc-identified-content-language': 'kat',
'warc-record-id': '<urn:uuid:e701a584-a14f-49ac-80b3-a7604f98fc92>',
'warc-refers-to': '<urn:uuid:8fc0f735-6e2b-45b2-bee1-bf169e08433b>',
'warc-target-uri': 'https://xmf.wikipedia.org/wiki/%E1%83%99%E1%83%90%E1%83%A2%E1%83%94%E1%83%92%E1%83%9D%E1%83%A0%E1%83%98%E1%83%90:%E1%83%90%E1%83%94%E1%83%A0%E1%83%9D%E1%83%9E%E1%83%9D%E1%83%A0%E1%83%A2%E1%83%94%E1%83%A4%E1%83%98_%E1%83%90%E1%83%9C%E1%83%91%E1%83%90%E1%83%9C%E1%83%98%E1%83%A8_%E1%83%9B%E1%83%94%E1%83%AF%E1%83%98%E1%83%9C%E1%83%90%E1%83%97',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'მოჩამილი ტექსტი წჷმორინელი რე Creative Commons '
'Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ '
'არსებუა. კ...'}
```
#### deduplicated_yi
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 10420,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:CZAVPSCGNW77WY2V2IJNK7R2CCUEMZFB',
'warc-date': '2021-02-24T21:10:52Z',
'warc-identified-content-language': 'yid,eng',
'warc-record-id': '<urn:uuid:7aa9e375-726d-42bd-832a-deee6dce5e4a>',
'warc-refers-to': '<urn:uuid:53354991-7bca-4134-95ce-ce7edebf841b>',
'warc-target-uri': 'http://www.kaveshtiebel.com/viewtopic.php?p=237817',
'warc-type': 'conversion'},
'nb_sentences': 10,
'offset': 0},
'text': 'עמעזאן איז יעצט ארויסגעקומען מיט א נייע סמארט ספיקער סיסטעם. '
"ס'הייסט Echo. אין Echo דרייט זיך א ראבאטישקע זי הייסט אל..."}
```
#### deduplicated_yo
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 3627,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:UISXP36HUEMW2LBTMAR4CTISUYAVZZAD',
'warc-date': '2021-03-07T12:45:52Z',
'warc-identified-content-language': 'yor,eng',
'warc-record-id': '<urn:uuid:e67645e9-ee6c-4c88-9b27-a158dc7f83e9>',
'warc-refers-to': '<urn:uuid:07c8d83b-7840-4238-a3b4-edc3f98ecdd5>',
'warc-target-uri': 'https://edeyorubarewa.com/itelorun/',
'warc-type': 'conversion'},
'nb_sentences': 1,
'offset': 0},
'text': 'A dá sílè fún àwọn ènìyàn tí wọn fẹ́ràn láti mò nípa èdè Yorùbá, '
'àṣà àti ìṣe ilẹ̀ kóòtù ojire. Kíkó àwọn ọmọ wa ni Èd...'}
```
#### deduplicated_zh
* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None
An example of 'train' looks as follows:
```
{ 'id': 0,
'meta': { 'headers': { 'content-length': 108400,
'content-type': 'text/plain',
'warc-block-digest': 'sha1:PP6MQUJB3F4G63HKKGKO2QJG7SMRMTFJ',
'warc-date': '2021-02-28T09:41:11Z',
'warc-identified-content-language': 'zho',
'warc-record-id': '<urn:uuid:132aab53-daff-4bae-83d0-a0cdb4039d00>',
'warc-refers-to': '<urn:uuid:2f26c020-f1fc-4216-a616-4683e0b25b1e>',
'warc-target-uri': 'http://www.yummtumm.com/offer',
'warc-type': 'conversion'},
'nb_sentences': 7,
'offset': 0},
'text': '久久精品视频在线看15_久久人人97超碰_久久爱 '
'人人澡超碰碰中文字幕,人人天天夜夜日日狠狠,久久人人97超碰,人人婷婷开心情五月,日日摸天天摸人人看,碰人人么免费视频,色综合天天综合网 '
'久久爱免费视频在线观看_久久爱视频_久久爱在线...'}
```
</details>
### Data Fields
* `id`: a `int64` feature.
* `meta`: Metadata
* `meta.headers`: WARC Headers
* `meta.headers.content-length`: `int64` Content length (in bytes) **before** cleaning
* `meta.headers.content-type`: `string` MIME type
* `meta.headers.warc-block-digest`:`string` Algorithm name and calculated value of a digest applied to the full block of the record
* `meta.headers.warc-date`: `string` Crawl date (YYYY-MM-DDThh:mm:ssZ)
* `meta.headers.warc-identified-content-language`: `string` Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* `meta.headers.warc-record-id`: `string` Record ID
* `meta.headers.warc-refers-to`: `string` Record-ID of a single record for which the present record holds additional content
* `meta.headers.warc-target-uri`: `string` URI from where the content has been fetched
* `meta.headers.warc-type`: `string` Type of the WARC Record
* `meta.nb_sentences`: `int64` Number of sentences in the text
* `meta.offset`: `int64` line offset where the related text begins. Should be used with `meta.nb_sentences` when reading the source files rather than using iterators to get related data.
* `text`: `string` content
See the [WARC Format standard](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#warc-type-mandatory) for more details.
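As a sketch of how `meta.offset` and `meta.nb_sentences` can be combined when reading the source text files directly (assuming one sentence per line, which matches the line-oriented OSCAR source files; the helper name is hypothetical):

```python
def read_record(lines, offset, nb_sentences):
    """Slice the lines belonging to one record out of a source file.

    Assumes one sentence per line, as in the line-oriented OSCAR
    source files; `offset` is the line where the record begins.
    """
    return lines[offset : offset + nb_sentences]

# Toy source file with four one-sentence lines:
lines = ["First sentence.", "Second sentence.", "Third sentence.", "Fourth sentence."]

# A record with offset=1 and nb_sentences=2 covers lines 1 and 2:
print(read_record(lines, offset=1, nb_sentences=2))
# → ['Second sentence.', 'Third sentence.']
```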
### Data Splits
<details>
<summary>Click to expand the size (original and deduplicated) of each configuration</summary>
| Language code | Language | Size (original) | Words (original) | Size (deduplicated) | Words (deduplicated) |
|:----|:----------------------------|:-------|:----------------|:---------------|:----------------|
| af | Afrikaans | 258MB | 44,628,392 | 157MB | 27,057,785 |
| als | Alemannic | 7MB | 1,212,699 | 5MB | 871,664 |
| am | Amharic | 405MB | 30,991,914 | 241MB | 18,326,043 |
| an | Aragonese | 1MB | 115,938 | 608KB | 89,043 |
| ar | Arabic | 69GB | 6,494,332,191 | 35GB | 3,365,025,866 |
| arz | Egyptian Arabic | 48MB | 4,998,963 | 21MB | 2,341,904 |
| ast | Asturian | 7MB | 1,085,670 | 4MB | 776,069 |
| as | Assamese | 135MB | 7,917,923 | 95MB | 5,605,207 |
| av | Avaric | 421KB | 25,104 | 325KB | 19,133 |
| azb | South Azerbaijani | 47MB | 3,595,569 | 29MB | 2,243,562 |
| az | Azerbaijani | 3GB | 344,187,319 | 1GB | 169,655,478 |
| bar | Bavarian | 2KB | 247 | 1KB | 245 |
| ba | Bashkir | 110MB | 8,121,603 | 77MB | 5,625,158 |
| be | Belarusian | 2GB | 168,911,341 | 1GB | 98,212,442 |
| bg | Bulgarian | 34GB | 2,994,775,106 | 15GB | 1,315,091,995 |
| bh | Bihari languages | 579KB | 46,436 | 120KB | 9,181 |
| bn | Bangla | 14GB | 814,550,777 | 7GB | 466,289,242 |
| bo | Tibetan | 439MB | 3,751,935 | 358MB | 2,797,085 |
| bpy | Bishnupriya | 11MB | 558,819 | 4MB | 280,825 |
| br | Breton | 49MB | 8,067,480 | 23MB | 4,032,467 |
| bs | Bosnian | 310KB | 50,266 | 175KB | 25,157 |
| bxr | Russia Buriat | 22KB | 1,625 | 18KB | 1,335 |
| ca | Catalan | 13GB | 2,110,833,307 | 6GB | 1,012,770,904 |
| cbk | Chavacano | 168B | 2 | 168B | 2 |
| ceb | Cebuano | 81MB | 12,921,589 | 58MB | 9,201,870 |
| ce | Chechen | 29MB | 2,283,093 | 20MB | 1,638,963 |
| ckb | Central Kurdish | 784MB | 63,417,572 | 367MB | 29,355,017 |
| cs | Czech | 72GB | 9,996,052,434 | 33GB | 4,739,928,730 |
| cv | Chuvash | 60MB | 4,592,449 | 41MB | 3,141,872 |
| cy | Welsh | 307MB | 50,606,998 | 180MB | 30,198,860 |
| da | Danish | 18GB | 2,892,004,180 | 10GB | 1,704,605,898 |
| de | German | 433GB | 58,716,727,164 | 184GB | 25,446,071,671 |
| diq | Dimli (individual language) | 294B | 38 | 147B | 19 |
| dsb | Lower Sorbian | 31KB | 4,115 | 14KB | 1,873 |
| dv | Divehi | 143MB | 8,293,093 | 111MB | 6,481,260 |
| el | Greek | 72GB | 6,024,414,850 | 30GB | 2,539,719,195 |
| eml | Unknown language [eml] | 22KB | 4,360 | 20KB | 3,876 |
| en | English | 2936GB | 488,723,815,522 | 1342GB | 223,669,114,922 |
| eo | Esperanto | 560MB | 84,432,772 | 390MB | 59,411,208 |
| es | Spanish | 342GB | 54,715,337,438 | 160GB | 25,877,724,186 |
| et | Estonian | 7GB | 954,732,803 | 3GB | 455,553,053 |
| eu | Basque | 900MB | 110,676,692 | 503MB | 62,812,888 |
| fa | Persian | 79GB | 8,566,653,720 | 35GB | 3,902,206,854 |
| fi | Finnish | 35GB | 4,074,911,658 | 20GB | 2,357,264,196 |
| frr | Northern Frisian | 7KB | 1,702 | 5KB | 1,267 |
| fr | French | 340GB | 52,839,365,242 | 161GB | 25,245,127,073 |
| fy | Western Frisian | 82MB | 13,094,538 | 57MB | 9,329,828 |
| ga | Irish | 131MB | 20,142,627 | 69MB | 10,835,410 |
| gd | Scottish Gaelic | 2MB | 332,946 | 1MB | 173,588 |
| gl | Galician | 989MB | 155,030,216 | 549MB | 87,015,417 |
| gn | Guarani | 32KB | 3,828 | 25KB | 3,056 |
| gom | Goan Konkani | 3MB | 177,357 | 2MB | 148,801 |
| gu | Gujarati | 1GB | 124,652,589 | 950MB | 63,150,641 |
| gv | Manx | 1KB | 264 | 907B | 141 |
| he | Hebrew | 29GB | 2,829,132,925 | 11GB | 1,156,588,919 |
| hi | Hindi | 26GB | 2,009,754,819 | 13GB | 1,038,914,735 |
| hr | Croatian | 361MB | 51,654,735 | 169MB | 24,583,270 |
| hsb | Upper Sorbian | 2MB | 305,176 | 1MB | 207,715 |
| ht | Haitian Creole | 2KB | 592 | 1KB | 351 |
| hu | Hungarian | 60GB | 7,415,936,687 | 29GB | 3,765,883,306 |
| hy | Armenian | 4GB | 322,429,587 | 1GB | 124,515,953 |
| ia | Interlingua | 291KB | 74,696 | 172KB | 41,625 |
| id | Indonesian | 40GB | 5,767,715,387 | 22GB | 3,126,926,138 |
| ie | Interlingue | 7KB | 1,432 | 2KB | 424 |
| ilo | Iloko | 1MB | 275,029 | 857KB | 140,579 |
| io | Ido | 276KB | 46,463 | 221KB | 36,976 |
| is | Icelandic | 2GB | 290,997,158 | 1GB | 176,018,529 |
| it | Italian | 192GB | 29,252,541,808 | 94GB | 14,426,829,908 |
| ja | Japanese | 208GB | 5,357,000,179 | 96GB | 1,319,938,248 |
| jbo | Lojban | 929KB | 179,684 | 731KB | 140,749 |
| jv | Javanese | 858KB | 121,271 | 728KB | 101,386 |
| ka | Georgian | 6GB | 304,329,117 | 2GB | 116,422,468 |
| kk | Kazakh | 3GB | 236,767,203 | 1GB | 126,886,720 |
| km | Khmer | 1GB | 28,188,612 | 860MB | 13,408,408 |
| kn | Kannada | 2GB | 111,460,546 | 1GB | 56,801,321 |
| ko | Korean | 35GB | 3,367,279,749 | 15GB | 1,475,474,588 |
| krc | Karachay-Balkar | 2MB | 193,207 | 2MB | 153,755 |
| ku | Kurdish | 152MB | 23,845,402 | 108MB | 17,264,310 |
| kv | Komi | 1MB | 89,105 | 588KB | 46,219 |
| kw | Cornish | 119KB | 20,775 | 72KB | 12,687 |
| ky | Kyrgyz | 485MB | 33,401,287 | 334MB | 23,102,129 |
| la | Latin | 103MB | 15,869,314 | 9MB | 1,488,545 |
| lb | Luxembourgish | 54MB | 7,953,887 | 37MB | 5,454,220 |
| lez | Lezghian | 2MB | 214,890 | 2MB | 198,433 |
| li | Limburgish | 76KB | 12,105 | 54KB | 8,472 |
| lmo | Lombard | 1MB | 203,002 | 1MB | 182,533 |
| lo | Lao | 287MB | 6,928,229 | 163MB | 3,620,360 |
| lrc | Northern Luri | 183B | 26 | 183B | 26 |
| lt | Lithuanian | 12GB | 1,573,926,673 | 5GB | 701,326,575 |
| lv | Latvian | 6GB | 799,923,431 | 2GB | 352,753,044 |
| mai | Maithili | 685KB | 144,859 | 24KB | 1,916 |
| mg | Malagasy | 59MB | 8,103,631 | 38MB | 5,220,655 |
| mhr | Eastern Mari | 15MB | 1,170,650 | 10MB | 784,071 |
| min | Minangkabau | 8MB | 451,591 | 1MB | 74,882 |
| mk | Macedonian | 3GB | 261,571,966 | 1GB | 134,544,934 |
| ml | Malayalam | 4GB | 182,898,691 | 2GB | 87,615,430 |
| mn | Mongolian | 1GB | 143,244,180 | 912MB | 71,138,550 |
| mrj | Western Mari | 645KB | 51,812 | 521KB | 41,950 |
| mr | Marathi | 3GB | 173,001,078 | 1GB | 99,858,901 |
| ms | Malay | 146MB | 20,433,250 | 60MB | 8,301,250 |
| mt | Maltese | 51MB | 6,162,888 | 26MB | 3,179,815 |
| mwl | Mirandese | 3KB | 419 | 2KB | 302 |
| my | Burmese | 2GB | 54,624,239 | 1GB | 35,969,724 |
| myv | Erzya | 29KB | 2,844 | 2KB | 236 |
| mzn | Mazanderani | 1MB | 134,128 | 1MB | 106,533 |
| nah | Nahuatl languages | 34KB | 3,664 | 21KB | 2,363 |
| nap | Neapolitan | 1KB | 550 | 1KB | 235 |
| nds | Low German | 25MB | 3,998,912 | 17MB | 2,868,608 |
| ne | Nepali | 3GB | 207,891,824 | 2GB | 142,087,100 |
| new | Newari | 6MB | 433,880 | 4MB | 254,711 |
| nl | Dutch | 97GB | 15,248,924,083 | 47GB | 7,584,055,321 |
| nn | Norwegian Nynorsk | 123MB | 20,629,675 | 66MB | 11,095,804 |
| no | Norwegian Bokmål | 9GB | 1,492,984,384 | 4GB | 776,354,517 |
| oc | Occitan | 12MB | 1,822,595 | 5MB | 834,187 |
| or | Odia | 538MB | 30,838,706 | 357MB | 20,357,839 |
| os | Ossetic | 11MB | 911,794 | 6MB | 536,525 |
| pam | Pampanga | 3KB | 405 | 3KB | 405 |
| pa | Punjabi | 769MB | 59,031,334 | 430MB | 33,413,527 |
| pl | Polish | 122GB | 16,120,806,481 | 48GB | 6,496,098,108 |
| pms | Piedmontese | 4MB | 804,600 | 3MB | 644,017 |
| pnb | Western Panjabi | 68MB | 7,757,785 | 45MB | 5,221,168 |
| ps | Pashto | 404MB | 49,643,597 | 286MB | 35,345,424 |
| pt | Portuguese | 159GB | 24,770,395,312 | 71GB | 11,190,148,216 |
| qu | Quechua | 322KB | 40,691 | 230KB | 29,108 |
| rm | Romansh | 3KB | 512 | 3KB | 429 |
| ro | Romanian | 37GB | 5,629,438,576 | 15GB | 2,387,230,734 |
| rue | Rusyn | 247B | 14 | 247B | 14 |
| ru | Russian | 1201GB | 89,568,364,811 | 542GB | 41,194,052,384 |
| sah | Sakha | 57MB | 2,600,989 | 39MB | 1,944,651 |
| sa | Sanskrit | 72MB | 3,288,786 | 43MB | 1,998,089 |
| scn | Sicilian | 4KB | 712 | 3KB | 516 |
| sco | Scots | 1KB | 523 | 1KB | 282 |
| sd | Sindhi | 75MB | 8,937,427 | 50MB | 6,064,102 |
| sh | Serbian (Latin) | 13MB | 2,164,175 | 9MB | 1,461,045 |
| si | Sinhala | 1GB | 91,456,436 | 791MB | 47,770,919 |
| sk | Slovak | 14GB | 2,002,088,524 | 6GB | 865,456,498 |
| sl | Slovenian | 4GB | 610,843,131 | 1GB | 288,222,997 |
| so | Somali | 15KB | 849 | 13KB | 449 |
| sq | Albanian | 3GB | 493,861,192 | 1GB | 257,278,518 |
| sr | Serbian | 6GB | 574,460,746 | 3GB | 289,211,579 |
| su | Sundanese | 397KB | 54,420 | 274KB | 37,082 |
| sv | Swedish | 43GB | 6,542,433,732 | 19GB | 2,964,887,952 |
| sw | Swahili | 11MB | 1,853,022 | 7MB | 1,279,350 |
| ta | Tamil | 10GB | 438,489,984 | 5GB | 215,856,584 |
| te | Telugu | 3GB | 182,268,133 | 1GB | 73,193,605 |
| tg | Tajik | 985MB | 79,016,232 | 321MB | 26,069,632 |
| th | Thai | 62GB | 1,694,658,532 | 26GB | 635,230,676 |
| tk | Turkmen | 25MB | 2,693,720 | 20MB | 2,221,760 |
| tl | Filipino | 699MB | 115,471,760 | 383MB | 62,473,283 |
| tr | Turkish | 73GB | 8,763,467,387 | 33GB | 3,950,989,357 |
| tt | Tatar | 947MB | 68,793,924 | 424MB | 31,485,000 |
| tyv | Tuvinian | 9KB | 638 | 7KB | 542 |
| ug | Uyghur | 187MB | 12,786,741 | 123MB | 8,410,269 |
| uk | Ukrainian | 53GB | 4,014,675,914 | 28GB | 2,131,491,321 |
| ur | Urdu | 2GB | 354,937,986 | 1GB | 234,111,239 |
| uz | Uzbek | 56MB | 6,237,371 | 28MB | 3,327,595 |
| vec | Venetian | 37KB | 6,694 | 28KB | 5,139 |
| vi | Vietnamese | 87GB | 14,523,772,784 | 42GB | 7,011,404,625 |
| vls | West Flemish | 134B | 2 | 134B | 2 |
| vo | Volapük | 2MB | 426,052 | 2MB | 410,688 |
| war | Waray | 4MB | 750,162 | 4MB | 702,336 |
| wa | Walloon | 511KB | 93,163 | 329KB | 59,906 |
| wuu | Wu Chinese | 145KB | 9,130 | 69KB | 3,031 |
| xal | Kalmyk | 62KB | 5,495 | 62KB | 5,495 |
| xmf | Mingrelian | 16MB | 807,158 | 10MB | 510,700 |
| yi | Yiddish | 199MB | 18,699,112 | 93MB | 8,716,366 |
| yo | Yoruba | 229KB | 34,468 | 120KB | 17,487 |
| zh | Chinese | 500GB | 10,118,381,906 | 266GB | 3,898,987,727 |
</details>
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText).
OSCAR 21.09 follows the [OSCAR Schema v1.1](https://oscar-corpus.com/post/oscar-schema-v1-1/), which adds metadata to each entry while staying backwards-compatible with OSCAR.
The order of operations is similar to that of the goclassy pipeline, with optimisations regarding IO and finer-grained multithreading.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering is done at line-level, removing lines shorter than 100 UTF-8 codepoints. While invalid UTF-8 characters are detected, they are not removed, but rather replaced with the [Replacement character](https://en.wikipedia.org/wiki/Special_(Unicode_block)#Replacement_character).
After all files are processed, the deduplicated versions are constructed, and everything is then split into shards and compressed.
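The line-level filtering described above can be sketched as follows. This is a minimal illustration, not the actual `Ungoliant` Rust code; the 100-codepoint threshold and the replacement-character behaviour are as stated in this section, while the function name is hypothetical:

```python
MIN_CODEPOINTS = 100  # threshold stated in the card

def clean_lines(raw: bytes) -> list[str]:
    # Decode with errors="replace" so invalid UTF-8 bytes become
    # the Replacement character U+FFFD instead of being removed.
    text = raw.decode("utf-8", errors="replace")
    # Keep only lines of at least 100 Unicode codepoints.
    return [line for line in text.split("\n") if len(line) >= MIN_CODEPOINTS]

# One short line, one long line, and a line with an invalid UTF-8 byte:
sample = ("short\n" + "x" * 120 + "\n").encode("utf-8") + b"\xff bad\n"
print(clean_lines(sample))
# → only the 120-character line survives
```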
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain text from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **February 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not thoroughly filtered yet, and this can be reflected in models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@ARTICLE{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^\i}t Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox), [@Uinelj](https://github.com/Uinelj) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
pie/tacred | 2023-09-27T14:43:54.000Z | [
"region:us"
] | pie | null | null | null | 0 | 226 | Entry not found |
zxvix/pubmed_nonbiomedicalrap_2 | 2023-09-11T12:33:32.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 226 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
- name: title
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3906927.162
num_examples: 999
download_size: 2127120
dataset_size: 3906927.162
---
# Dataset Card for "pubmed_nonbiomedicalrap_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/self_instruct | 2023-03-27T22:03:01.000Z | [
"task_categories:text-generation",
"license:apache-2.0",
"region:us"
] | HuggingFaceH4 | null | null | null | 3 | 225 | ---
license: apache-2.0
task_categories:
- text-generation
---
This dataset splits the original [Self-instruct dataset](https://huggingface.co/datasets/yizhongw/self_instruct) into training (90%) and test (10%). |
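The 90/10 split described above can be sketched with a deterministic shuffle-and-cut; `ninety_ten_split` is a hypothetical helper for illustration, not part of this dataset's tooling:

```python
import random

def ninety_ten_split(rows, seed=42):
    """Shuffle rows with a fixed seed, then cut 90% train / 10% test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.9)
    return rows[:cut], rows[cut:]

train, test = ninety_ten_split(range(100))
# 90 training rows, 10 test rows
```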
whu9/sts_pretrain | 2023-05-21T21:38:21.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 225 | ---
dataset_info:
features:
- name: entity1
dtype: string
- name: entity2
dtype: string
splits:
- name: train
num_bytes: 2862540
num_examples: 22278
download_size: 0
dataset_size: 2862540
---
# Dataset Card for "sts_pretrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vietgpt/openbookqa_en | 2023-06-03T22:16:08.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"SFT",
"region:us"
] | vietgpt | null | null | null | 0 | 225 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question_stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 895386
num_examples: 4957
- name: validation
num_bytes: 95428
num_examples: 500
- name: test
num_bytes: 91759
num_examples: 500
download_size: 609610
dataset_size: 1082573
task_categories:
- text-classification
language:
- en
tags:
- SFT
size_categories:
- 1K<n<10K
---
# OpenBookQA
- Source: https://huggingface.co/datasets/openbookqa
- Num examples:
- 4,957 (train)
- 500 (validation)
- 500 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/openbookqa_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
    # Pull out the question stem, the four answer options and the gold key.
    question_stem = sample['question_stem']
    choices = sample['choices']['text']
    answerKey = sample['answerKey']
    # List the correct option first, then the three distractors; the final
    # branch handles answerKey == 'D'.
    if answerKey == 'A':
        output = f'\n<|correct|> {choices[0]}\n<|incorrect|> {choices[1]}\n<|incorrect|> {choices[2]}\n<|incorrect|> {choices[3]}'
    elif answerKey == 'B':
        output = f'\n<|correct|> {choices[1]}\n<|incorrect|> {choices[0]}\n<|incorrect|> {choices[2]}\n<|incorrect|> {choices[3]}'
    elif answerKey == 'C':
        output = f'\n<|correct|> {choices[2]}\n<|incorrect|> {choices[0]}\n<|incorrect|> {choices[1]}\n<|incorrect|> {choices[3]}'
    else:
        output = f'\n<|correct|> {choices[3]}\n<|incorrect|> {choices[0]}\n<|incorrect|> {choices[1]}\n<|incorrect|> {choices[2]}'
    return {'text': f'<|startoftext|><|context|> {question_stem} <|answer|> {output} <|endoftext|>'}
"""
<|startoftext|><|context|> The sun is responsible for <|answer|>
<|correct|> plants sprouting, blooming and wilting
<|incorrect|> puppies learning new tricks
<|incorrect|> children growing up and getting old
<|incorrect|> flowers wilting in a vase <|endoftext|>
"""
``` |
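The if/elif chain in `preprocess_gpt3` above can also be written by deriving the correct-choice index from the answer letter; `order_choices` is a hypothetical helper shown only as a sketch:

```python
def order_choices(choices, answer_key):
    """Return (correct choice, remaining choices) for a letter key 'A'-'D'."""
    idx = ord(answer_key) - ord("A")  # 'A' -> 0, 'B' -> 1, ...
    wrong = [c for i, c in enumerate(choices) if i != idx]
    return choices[idx], wrong

correct, wrong = order_choices(["w", "x", "y", "z"], "C")
# correct == "y"; wrong == ["w", "x", "z"]
```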
truehealth/medicationqa | 2023-06-12T14:24:14.000Z | [
"region:us"
] | truehealth | null | null | null | 0 | 225 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Focus (Drug)
dtype: string
- name: Question Type
dtype: string
- name: Answer
dtype: string
- name: Section Title
dtype: string
- name: URL
dtype: string
splits:
- name: train
num_bytes: 403030
num_examples: 690
download_size: 0
dataset_size: 403030
---
# Dataset Card for "medicationqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zishuod/pokemon-icons | 2022-09-24T15:35:39.000Z | [
"task_categories:image-classification",
"license:mit",
"pokemon",
"region:us"
] | zishuod | null | null | null | 2 | 224 | ---
annotations_creators: []
language: []
language_creators: []
license:
- mit
multilinguality: []
pretty_name: pokemon-icons
size_categories: []
source_datasets: []
tags:
- pokemon
task_categories:
- image-classification
task_ids: []
---
# Dataset Card for pokemon-icons
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Pokémon icons. Most of them were collected and cropped from screenshots captured in Pokémon Sword and Shield.
### Supported Tasks and Leaderboards
Image classification |
katarinagresova/Genomic_Benchmarks_human_nontata_promoters | 2023-03-13T19:33:47.000Z | [
"region:us"
] | katarinagresova | null | null | null | 0 | 224 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 7126511
num_examples: 27097
- name: test
num_bytes: 2375942
num_examples: 9034
download_size: 0
dataset_size: 9502453
---
# Dataset Card for "Genomic_Benchmarks_human_nontata_promoters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pppppppppp2/planeperturbed | 2023-10-10T09:01:30.000Z | [
"region:us"
] | pppppppppp2 | null | null | null | 1 | 224 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1044598491.4
num_examples: 8800
download_size: 994515901
dataset_size: 1044598491.4
---
# Dataset Card for "planeperturbed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jason-lee08/TinyStoriesExclamationValidation2 | 2023-09-15T20:28:30.000Z | [
"region:us"
] | jason-lee08 | null | null | null | 0 | 224 | ---
dataset_info:
features:
- name: validation
dtype: string
splits:
- name: train
num_bytes: 168184
num_examples: 220
download_size: 89488
dataset_size: 168184
---
# Dataset Card for "TinyStoriesExclamationValidation2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indic_glue | 2023-06-09T13:57:14.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-analysis",
"task_ids:semantic-similarity-scoring",
"task_ids:named-entity-recognition",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:as",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:other",
"discourse-mode-classification",
"paraphrase-identification",
"cross-lingual-similarity",
"headline-classification",
"region:us"
] | null | IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide
variety of tasks and covers 11 major Indian languages - as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. | @inproceedings{kakwani2020indicnlpsuite,
title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}},
author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
booktitle={Findings of EMNLP},
} | null | 4 | 223 | ---
annotations_creators:
- other
language_creators:
- found
language:
- as
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- multiple-choice
task_ids:
- topic-classification
- natural-language-inference
- sentiment-analysis
- semantic-similarity-scoring
- named-entity-recognition
- multiple-choice-qa
pretty_name: IndicGLUE
tags:
- discourse-mode-classification
- paraphrase-identification
- cross-lingual-similarity
- headline-classification
dataset_info:
- config_name: wnli.en
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 104577
num_examples: 635
- name: validation
num_bytes: 11886
num_examples: 71
- name: test
num_bytes: 37305
num_examples: 146
download_size: 591249
dataset_size: 153768
- config_name: wnli.hi
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 253342
num_examples: 635
- name: validation
num_bytes: 28684
num_examples: 71
- name: test
num_bytes: 90831
num_examples: 146
download_size: 591249
dataset_size: 372857
- config_name: wnli.gu
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 251562
num_examples: 635
- name: validation
num_bytes: 28183
num_examples: 71
- name: test
num_bytes: 94586
num_examples: 146
download_size: 591249
dataset_size: 374331
- config_name: wnli.mr
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 256657
num_examples: 635
- name: validation
num_bytes: 29226
num_examples: 71
- name: test
num_bytes: 97136
num_examples: 146
download_size: 591249
dataset_size: 383019
- config_name: copa.en
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 46049
num_examples: 400
- name: validation
num_bytes: 11695
num_examples: 100
- name: test
num_bytes: 55862
num_examples: 500
download_size: 757679
dataset_size: 113606
- config_name: copa.hi
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93392
num_examples: 362
- name: validation
num_bytes: 23575
num_examples: 88
- name: test
num_bytes: 112846
num_examples: 449
download_size: 757679
dataset_size: 229813
- config_name: copa.gu
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 92113
num_examples: 362
- name: validation
num_bytes: 23466
num_examples: 88
- name: test
num_bytes: 110013
num_examples: 448
download_size: 757679
dataset_size: 225592
- config_name: copa.mr
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93457
num_examples: 362
- name: validation
num_bytes: 23890
num_examples: 88
- name: test
num_bytes: 112071
num_examples: 449
download_size: 757679
dataset_size: 229418
- config_name: sna.bn
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': kolkata
'1': state
'2': national
'3': sports
'4': entertainment
'5': international
splits:
- name: train
num_bytes: 46070054
num_examples: 11284
- name: validation
num_bytes: 5648130
num_examples: 1411
- name: test
num_bytes: 5799983
num_examples: 1411
download_size: 11803096
dataset_size: 57518167
- config_name: csqa.as
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 3800555
num_examples: 2942
download_size: 65099316
dataset_size: 3800555
- config_name: csqa.bn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 54671146
num_examples: 38845
download_size: 65099316
dataset_size: 54671146
- config_name: csqa.gu
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 29131703
num_examples: 22861
download_size: 65099316
dataset_size: 29131703
- config_name: csqa.hi
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 40409475
num_examples: 35140
download_size: 65099316
dataset_size: 40409475
- config_name: csqa.kn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 21199880
num_examples: 13666
download_size: 65099316
dataset_size: 21199880
- config_name: csqa.ml
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 47220932
num_examples: 26537
download_size: 65099316
dataset_size: 47220932
- config_name: csqa.mr
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 13667238
num_examples: 11370
download_size: 65099316
dataset_size: 13667238
- config_name: csqa.or
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 2562397
num_examples: 1975
download_size: 65099316
dataset_size: 2562397
- config_name: csqa.pa
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 5806129
num_examples: 5667
download_size: 65099316
dataset_size: 5806129
- config_name: csqa.ta
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 61868609
num_examples: 38590
download_size: 65099316
dataset_size: 61868609
- config_name: csqa.te
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 58785157
num_examples: 41338
download_size: 65099316
dataset_size: 58785157
- config_name: wstp.as
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 13581364
num_examples: 5000
- name: validation
num_bytes: 1698996
num_examples: 625
- name: test
num_bytes: 1697678
num_examples: 626
download_size: 242008091
dataset_size: 16978038
- config_name: wstp.bn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 143340597
num_examples: 47580
- name: validation
num_bytes: 17759264
num_examples: 5947
- name: test
num_bytes: 17633893
num_examples: 5948
download_size: 242008091
dataset_size: 178733754
- config_name: wstp.gu
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 39353520
num_examples: 10004
- name: validation
num_bytes: 4887780
num_examples: 1251
- name: test
num_bytes: 4699186
num_examples: 1251
download_size: 242008091
dataset_size: 48940486
- config_name: wstp.hi
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 158529718
num_examples: 44069
- name: validation
num_bytes: 19371932
num_examples: 5509
- name: test
num_bytes: 19593029
num_examples: 5509
download_size: 242008091
dataset_size: 197494679
- config_name: wstp.kn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 139950425
num_examples: 35379
- name: validation
num_bytes: 17789810
num_examples: 4422
- name: test
num_bytes: 17897059
num_examples: 4423
download_size: 242008091
dataset_size: 175637294
- config_name: wstp.ml
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 88360588
num_examples: 27527
- name: validation
num_bytes: 11193368
num_examples: 3441
- name: test
num_bytes: 11150942
num_examples: 3441
download_size: 242008091
dataset_size: 110704898
- config_name: wstp.mr
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 28302397
num_examples: 10446
- name: validation
num_bytes: 3328826
num_examples: 1306
- name: test
num_bytes: 3631712
num_examples: 1306
download_size: 242008091
dataset_size: 35262935
- config_name: wstp.or
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 10900034
num_examples: 4015
- name: validation
num_bytes: 1264963
num_examples: 502
- name: test
num_bytes: 1344680
num_examples: 502
download_size: 242008091
dataset_size: 13509677
- config_name: wstp.pa
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 22189758
num_examples: 8772
- name: validation
num_bytes: 2789214
num_examples: 1097
- name: test
num_bytes: 2685795
num_examples: 1097
download_size: 242008091
dataset_size: 27664767
- config_name: wstp.ta
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151929358
num_examples: 48940
- name: validation
num_bytes: 18817195
num_examples: 6117
- name: test
num_bytes: 18815099
num_examples: 6118
download_size: 242008091
dataset_size: 189561652
- config_name: wstp.te
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151696915
num_examples: 80000
- name: validation
num_bytes: 19003197
num_examples: 10000
- name: test
num_bytes: 18991941
num_examples: 10000
download_size: 242008091
dataset_size: 189692053
- config_name: inltkh.gu
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 883067
num_examples: 5269
- name: validation
num_bytes: 111205
num_examples: 659
- name: test
num_bytes: 110761
num_examples: 659
download_size: 2054771
dataset_size: 1105033
- config_name: inltkh.ml
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1108149
num_examples: 5036
- name: validation
num_bytes: 140059
num_examples: 630
- name: test
num_bytes: 138851
num_examples: 630
download_size: 2054771
dataset_size: 1387059
- config_name: inltkh.mr
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1462618
num_examples: 9672
- name: validation
num_bytes: 180310
num_examples: 1210
- name: test
num_bytes: 180562
num_examples: 1210
download_size: 2054771
dataset_size: 1823490
- config_name: inltkh.ta
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 2659573
num_examples: 5346
- name: validation
num_bytes: 316087
num_examples: 669
- name: test
num_bytes: 320469
num_examples: 669
download_size: 2054771
dataset_size: 3296129
- config_name: inltkh.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1361671
num_examples: 4328
- name: validation
num_bytes: 170475
num_examples: 541
- name: test
num_bytes: 173153
num_examples: 541
download_size: 2054771
dataset_size: 1705299
- config_name: bbca.hi
features:
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22126213
num_examples: 3467
- name: test
num_bytes: 5501156
num_examples: 866
download_size: 5770136
dataset_size: 27627369
- config_name: cvit-mkb-clsr.en-bn
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2002009
num_examples: 5522
download_size: 3702442
dataset_size: 2002009
- config_name: cvit-mkb-clsr.en-gu
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2316311
num_examples: 6463
download_size: 3702442
dataset_size: 2316311
- config_name: cvit-mkb-clsr.en-hi
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1866335
num_examples: 5169
download_size: 3702442
dataset_size: 1866335
- config_name: cvit-mkb-clsr.en-ml
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1999869
num_examples: 4886
download_size: 3702442
dataset_size: 1999869
- config_name: cvit-mkb-clsr.en-mr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2142129
num_examples: 5760
download_size: 3702442
dataset_size: 2142129
- config_name: cvit-mkb-clsr.en-or
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 276385
num_examples: 752
download_size: 3702442
dataset_size: 276385
- config_name: cvit-mkb-clsr.en-ta
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2576460
num_examples: 5637
download_size: 3702442
dataset_size: 2576460
- config_name: cvit-mkb-clsr.en-te
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1781235
num_examples: 5049
download_size: 3702442
dataset_size: 1781235
- config_name: cvit-mkb-clsr.en-ur
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 290450
num_examples: 1006
download_size: 3702442
dataset_size: 290450
- config_name: iitp-mr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 6704909
num_examples: 2480
- name: validation
num_bytes: 822222
num_examples: 310
- name: test
num_bytes: 702377
num_examples: 310
download_size: 1742048
dataset_size: 8229508
- config_name: iitp-pr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 945593
num_examples: 4182
- name: validation
num_bytes: 120104
num_examples: 523
- name: test
num_bytes: 121914
num_examples: 523
download_size: 266545
dataset_size: 1187611
- config_name: actsa-sc.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 1370911
num_examples: 4328
- name: validation
num_bytes: 166093
num_examples: 541
- name: test
num_bytes: 168295
num_examples: 541
download_size: 378882
dataset_size: 1705299
- config_name: md.hi
features:
- name: sentence
dtype: string
- name: discourse_mode
dtype: string
- name: story_number
dtype: int32
- name: id
dtype: int32
splits:
- name: train
num_bytes: 1672117
num_examples: 7974
- name: validation
num_bytes: 211195
num_examples: 997
- name: test
num_bytes: 210183
num_examples: 997
download_size: 1048441
dataset_size: 2093495
- config_name: wiki-ner.as
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 375007
num_examples: 1021
- name: validation
num_bytes: 49336
num_examples: 157
- name: test
num_bytes: 50480
num_examples: 160
download_size: 5980272
dataset_size: 474823
- config_name: wiki-ner.bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7502896
num_examples: 20223
- name: validation
num_bytes: 988707
num_examples: 2985
- name: test
num_bytes: 985965
num_examples: 2690
download_size: 5980272
dataset_size: 9477568
- config_name: wiki-ner.gu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1571612
num_examples: 2343
- name: validation
num_bytes: 192828
num_examples: 297
- name: test
num_bytes: 197901
num_examples: 255
download_size: 5980272
dataset_size: 1962341
- config_name: wiki-ner.hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3762529
num_examples: 9463
- name: validation
num_bytes: 468702
num_examples: 1114
- name: test
num_bytes: 475277
num_examples: 1256
download_size: 5980272
dataset_size: 4706508
- config_name: wiki-ner.kn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1352051
num_examples: 2679
- name: validation
num_bytes: 179562
num_examples: 412
- name: test
num_bytes: 180815
num_examples: 476
download_size: 5980272
dataset_size: 1712428
- config_name: wiki-ner.ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7678935
num_examples: 15620
- name: validation
num_bytes: 969971
num_examples: 2067
- name: test
num_bytes: 991126
num_examples: 2042
download_size: 5980272
dataset_size: 9640032
- config_name: wiki-ner.mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 5431537
num_examples: 12151
- name: validation
num_bytes: 701661
num_examples: 1498
- name: test
num_bytes: 655706
num_examples: 1329
download_size: 5980272
dataset_size: 6788904
- config_name: wiki-ner.or
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 493782
num_examples: 1077
- name: validation
num_bytes: 58592
num_examples: 132
- name: test
num_bytes: 62235
num_examples: 153
download_size: 5980272
dataset_size: 614609
- config_name: wiki-ner.pa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 520268
num_examples: 1408
- name: validation
num_bytes: 61194
num_examples: 186
- name: test
num_bytes: 61812
num_examples: 179
download_size: 5980272
dataset_size: 643274
- config_name: wiki-ner.ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 10117152
num_examples: 20466
- name: validation
num_bytes: 1267212
num_examples: 2586
- name: test
num_bytes: 1321650
num_examples: 2611
download_size: 5980272
dataset_size: 12706014
- config_name: wiki-ner.te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3881235
num_examples: 7978
- name: validation
num_bytes: 458533
num_examples: 841
- name: test
num_bytes: 507830
num_examples: 1110
download_size: 5980272
dataset_size: 4847598
---
# Dataset Card for "indic_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.iitm.ac.in/indic-glue
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages](https://aclanthology.org/2020.findings-emnlp.445/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.51 GB
- **Size of the generated dataset:** 1.65 GB
- **Total amount of disk used:** 5.16 GB
### Dataset Summary
IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide
variety of tasks and covers 11 major Indian languages - as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task
in which a system must read a sentence with a pronoun and select the referent of that pronoun from
a list of choices. The examples are manually constructed to foil simple statistical methods: Each
one is contingent on contextual information provided by a single word or phrase in the sentence.
To convert the problem into sentence pair classification, we construct sentence pairs by replacing
the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the
pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of
new examples derived from fiction books that was shared privately by the authors of the original
corpus. While the included training set is balanced between two classes, the test set is imbalanced
between them (65% not entailment). Also, due to a data quirk, the development set is adversarial:
hypotheses are sometimes shared between training and development examples, so if a model memorizes the
training examples, it will predict the wrong label on the corresponding development set
examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence
between a model's score on this task and its score on the unconverted original task. We
call the converted dataset WNLI (Winograd NLI). This dataset has been translated and publicly released for 3
Indian languages by AI4Bharat.
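The pronoun-substitution step described above can be sketched in a few lines of Python. This is an illustration only; the function and variable names are ours and are not part of the dataset tooling:

```python
import re

def make_wnli_pairs(sentence, pronoun, referents):
    """Return (premise, hypothesis) pairs, one per candidate referent.

    The first whole-word occurrence of the pronoun is replaced, so that
    substrings inside other words (e.g. "it" in "fit") are left alone.
    """
    pattern = re.compile(r"\b%s\b" % re.escape(pronoun))
    return [(sentence, pattern.sub(referent, sentence, count=1))
            for referent in referents]

pairs = make_wnli_pairs(
    "The trophy didn't fit in the suitcase because it was too big.",
    "it",
    ["the trophy", "the suitcase"],
)
# Each pair keeps the original sentence as the premise; the hypothesis has
# one candidate referent substituted for the ambiguous pronoun.
print(pairs[0][1])
```

A classifier is then asked, for each pair, whether the substituted hypothesis is entailed by the original premise.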
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### actsa-sc.te
- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 1.71 MB
- **Total amount of disk used:** 2.09 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"label": 0,
"text": "\"ప్రయాణాల్లో ఉన్నవారికోసం బస్ స్టేషన్లు, రైల్వే స్టేషన్లలో పల్స్పోలియో బూతులను ఏర్పాటు చేసి చిన్నారులకు పోలియో చుక్కలు వేసేలా ఏర..."
}
```
#### bbca.hi
- **Size of downloaded dataset files:** 5.77 MB
- **Size of the generated dataset:** 27.63 MB
- **Total amount of disk used:** 33.40 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"label": "pakistan",
"text": "\"नेटिजन यानि इंटरनेट पर सक्रिय नागरिक अब ट्विटर पर सरकार द्वारा लगाए प्रतिबंधों के समर्थन या विरोध में अपने विचार व्यक्त करते है..."
}
```
#### copa.en
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.87 MB
An example of 'validation' looks as follows.
```
{
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"label": 1,
"premise": "I wanted to conserve energy.",
"question": "effect"
}
```
#### copa.gu
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"choice1": "\"સ્ત્રી જાણતી હતી કે તેનો મિત્ર મુશ્કેલ સમયમાંથી પસાર થઈ રહ્યો છે.\"...",
"choice2": "\"મહિલાને લાગ્યું કે તેના મિત્રએ તેની દયાળુ લાભ લીધો છે.\"...",
"label": 0,
"premise": "મહિલાએ તેના મિત્રની મુશ્કેલ વર્તન સહન કરી.",
"question": "cause"
}
```
#### copa.hi
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'validation' looks as follows.
```
{
"choice1": "मैंने उसका प्रस्ताव ठुकरा दिया।",
"choice2": "उन्होंने मुझे उत्पाद खरीदने के लिए राजी किया।",
"label": 0,
"premise": "मैंने सेल्समैन की पिच पर शक किया।",
"question": "effect"
}
```
### Data Fields
The data fields are the same among all splits.
#### actsa-sc.te
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (0), `negative` (1).
#### bbca.hi
- `label`: a `string` feature.
- `text`: a `string` feature.
#### copa.en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
#### copa.gu
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
#### copa.hi
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
### Data Splits
#### actsa-sc.te
| |train|validation|test|
|-----------|----:|---------:|---:|
|actsa-sc.te| 4328| 541| 541|
#### bbca.hi
| |train|test|
|-------|----:|---:|
|bbca.hi| 3467| 866|
#### copa.en
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.en| 400| 100| 500|
#### copa.gu
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.gu| 362| 88| 448|
#### copa.hi
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.hi| 362| 88| 449|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{kakwani-etal-2020-indicnlpsuite,
title = "{I}ndic{NLPS}uite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for {I}ndian Languages",
author = "Kakwani, Divyanshu and
Kunchukuttan, Anoop and
Golla, Satish and
N.C., Gokul and
Bhattacharyya, Avik and
Khapra, Mitesh M. and
Kumar, Pratyush",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.445",
doi = "10.18653/v1/2020.findings-emnlp.445",
pages = "4948--4961",
}
@inproceedings{Levesque2011TheWS,
title={The Winograd Schema Challenge},
author={H. Levesque and E. Davis and L. Morgenstern},
booktitle={KR},
year={2011}
}
```
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
jxie/imagenet-100 | 2023-03-24T21:18:13.000Z | [
"license:mit",
"region:us"
] | jxie | null | null | null | 0 | 223 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': n01558993
'1': n01692333
'2': n01729322
'3': n01735189
'4': n01749939
'5': n01773797
'6': n01820546
'7': n01855672
'8': n01978455
'9': n01980166
'10': n01983481
'11': n02009229
'12': n02018207
'13': n02085620
'14': n02086240
'15': n02086910
'16': n02087046
'17': n02089867
'18': n02089973
'19': n02090622
'20': n02091831
'21': n02093428
'22': n02099849
'23': n02100583
'24': n02104029
'25': n02105505
'26': n02106550
'27': n02107142
'28': n02108089
'29': n02109047
'30': n02113799
'31': n02113978
'32': n02114855
'33': n02116738
'34': n02119022
'35': n02123045
'36': n02138441
'37': n02172182
'38': n02231487
'39': n02259212
'40': n02326432
'41': n02396427
'42': n02483362
'43': n02488291
'44': n02701002
'45': n02788148
'46': n02804414
'47': n02859443
'48': n02869837
'49': n02877765
'50': n02974003
'51': n03017168
'52': n03032252
'53': n03062245
'54': n03085013
'55': n03259280
'56': n03379051
'57': n03424325
'58': n03492542
'59': n03494278
'60': n03530642
'61': n03584829
'62': n03594734
'63': n03637318
'64': n03642806
'65': n03764736
'66': n03775546
'67': n03777754
'68': n03785016
'69': n03787032
'70': n03794056
'71': n03837869
'72': n03891251
'73': n03903868
'74': n03930630
'75': n03947888
'76': n04026417
'77': n04067472
'78': n04099969
'79': n04111531
'80': n04127249
'81': n04136333
'82': n04229816
'83': n04238763
'84': n04336792
'85': n04418357
'86': n04429376
'87': n04435653
'88': n04485082
'89': n04493381
'90': n04517823
'91': n04589890
'92': n04592741
'93': n07714571
'94': n07715103
'95': n07753275
'96': n07831146
'97': n07836838
'98': n13037406
'99': n13040303
splits:
- name: train
num_bytes: 17418307775.035
num_examples: 126689
- name: validation
num_bytes: 1517725690.0
num_examples: 10000
download_size: 15838413847
dataset_size: 18936033465.035
---
|
griffin/chain_of_density | 2023-09-08T00:43:00.000Z | [
"region:us"
] | griffin | null | null | null | 39 | 223 | ---
dataset_info:
- config_name: annotated
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: prediction
sequence: string
- name: missing
sequence: string
- name: model
dtype: string
- name: annotations
sequence: int64
- name: num_tokens
sequence: int64
- name: num_entities
sequence: int64
- name: fusion
sequence: float64
- name: entity_density
sequence: float64
- name: inverse_lead_bias
sequence: float64
- name: extractive_density
sequence: float64
- name: extractive_coverage
sequence: float64
- name: unique_unigrams
sequence: float64
- name: unique_bigrams
sequence: float64
- name: unique_trigrams
sequence: float64
- name: rouge1
sequence: float64
- name: rouge2
sequence: float64
- name: rougeL
sequence: float64
- name: rougeLsum
sequence: float64
- name: gpt4_informative
sequence: float64
- name: gpt4_quality
sequence: float64
- name: gpt4_attributable
sequence: float64
- name: gpt4_coherence
sequence: float64
- name: gpt4_overall
sequence: float64
splits:
- name: test
num_bytes: 750471
num_examples: 100
download_size: 452599
dataset_size: 750471
- config_name: unannotated
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: prediction
sequence: string
- name: missing
sequence: string
- name: model
dtype: string
- name: num_tokens
sequence: int64
- name: num_entities
sequence: int64
- name: fusion
sequence: float64
- name: entity_density
sequence: float64
- name: inverse_lead_bias
sequence: float64
- name: extractive_density
sequence: float64
- name: extractive_coverage
sequence: float64
- name: unique_unigrams
sequence: float64
- name: unique_bigrams
sequence: float64
- name: unique_trigrams
sequence: float64
- name: rouge1
sequence: float64
- name: rouge2
sequence: float64
- name: rougeL
sequence: float64
- name: rougeLsum
sequence: float64
splits:
- name: train
num_bytes: 6948744
num_examples: 1000
download_size: 3719092
dataset_size: 6948744
configs:
- config_name: annotated
data_files:
- split: test
path: annotated/test-*
- config_name: unannotated
data_files:
- split: train
path: unannotated/train-*
---
# Dataset Card for "chain_of_density"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
che111/laion256 | 2022-10-21T13:52:40.000Z | [
"license:openrail",
"region:us"
] | che111 | null | null | null | 0 | 222 | ---
license: openrail
---
|
Confirm-Labs/pythia-12b-neuron-dataset-examples | 2023-08-16T00:43:14.000Z | [
"region:us"
] | Confirm-Labs | null | null | null | 1 | 222 | # pythia-12b-neuron-dataset-examples
This dataset contains the top 64 highest activating dataset examples for each
MLP neuron in Pythia-12b. The dataset examples are all 16 tokens long. See
https://confirmlabs.org/posts/dreaming.html for details.
Columns:
- `layer`: the layer of the neuron
- `neuron`: the index of the neuron
- `rank`: the rank of the example
- `activation`: the activation of the neuron on the example
- `position`: the token position for which the neuron is maximally activated.
- `text`: the text of the example: `tokenizer.decode(ids[:, :position+1])`
- `id#`: the token id at position `#`, for `#` in 0 to 15. |
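A minimal sketch of how these columns might be consumed once the table is loaded. The tiny in-memory sample below is a made-up stand-in for real rows, and `top_examples` is our own helper, not part of the dataset:

```python
# Hypothetical rows using the column names documented above.
rows = [
    {"layer": 3, "neuron": 17, "rank": 1, "activation": 4.2,
     "position": 7, "text": "example a"},
    {"layer": 3, "neuron": 17, "rank": 0, "activation": 9.1,
     "position": 3, "text": "example b"},
    {"layer": 5, "neuron": 2, "rank": 0, "activation": 6.5,
     "position": 11, "text": "example c"},
]

def top_examples(rows, layer, neuron):
    """Return one neuron's examples ordered by rank (rank 0 = strongest)."""
    hits = [r for r in rows if r["layer"] == layer and r["neuron"] == neuron]
    return sorted(hits, key=lambda r: r["rank"])

best = top_examples(rows, layer=3, neuron=17)[0]
print(best["text"])  # prints "example b"
```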
JasiekKaczmarczyk/maestro-sustain-quantized | 2023-09-15T10:26:58.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | null | 0 | 222 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: dstart_bin
sequence: int8
length: 128
- name: duration_bin
sequence: int8
length: 128
- name: velocity_bin
sequence: int8
length: 128
splits:
- name: train
num_bytes: 89689142
num_examples: 43727
- name: validation
num_bytes: 10114654
num_examples: 4929
- name: test
num_bytes: 11695068
num_examples: 5695
download_size: 0
dataset_size: 111498864
---
# Dataset Card for "maestro-sustain-quantized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
choco9966/requests | 2023-09-14T15:15:04.000Z | [
"region:us"
] | choco9966 | null | null | null | 0 | 222 | Entry not found |
enriched_web_nlg | 2023-06-01T14:59:50.000Z | [
"task_categories:tabular-to-text",
"task_ids:rdf-to-text",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-web-nlg",
"language:de",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | null | WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation. | @InProceedings{ferreiraetal2018,
author = "Castro Ferreira, Thiago and Moussallem, Diego and Wubben, Sander and Krahmer, Emiel",
title = "Enriching the WebNLG corpus",
booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
year = "2018",
series = {INLG'18},
publisher = "Association for Computational Linguistics",
address = "Tilburg, The Netherlands",
} | null | 1 | 221 | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- de
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-web-nlg
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: null
pretty_name: Enriched WebNLG
dataset_info:
- config_name: en
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: template
dtype: string
- name: sorted_triple_sets
sequence: string
- name: lexicalization
dtype: string
splits:
- name: train
num_bytes: 14665155
num_examples: 6940
- name: dev
num_bytes: 1843787
num_examples: 872
- name: test
num_bytes: 3931381
num_examples: 1862
download_size: 44284508
dataset_size: 20440323
- config_name: de
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: template
dtype: string
- name: sorted_triple_sets
sequence: string
splits:
- name: train
num_bytes: 9748193
num_examples: 6940
- name: dev
num_bytes: 1238609
num_examples: 872
download_size: 44284508
dataset_size: 10986802
config_names:
- de
- en
---
# Dataset Card for WebNLG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
- **Repository:** [Enriched WebNLG Github repository](https://github.com/ThiagoCF05/webnlg)
- **Paper:** [Enriching the WebNLG corpus](https://www.aclweb.org/anthology/W18-6521/)
### Dataset Summary
The WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a
set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3
DBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
### Supported Tasks and Leaderboards
The dataset supports an `other-rdf-to-text` task in which a model takes as input a set of RDF (Resource Description
Framework) triples from a database (DBpedia), each of the form (subject, property, object), and writes out a natural
language sentence expressing the information contained in the triples.
### Languages
The dataset is presented in two versions: English (config `en`) and German (config `de`)
## Dataset Structure
### Data Instances
A typical example contains the original RDF triples in the set, a modified version which presented to crowd workers, and
a set of possible verbalizations for this set of triples:
```
{ 'category': 'Politician',
'eid': 'Id10',
'lex': {'comment': ['good', 'good', 'good'],
'lid': ['Id1', 'Id2', 'Id3'],
'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
'World_War_II | commander | Chiang_Kai-shek',
'Abner_W._Sibal | militaryBranch | United_States_Army']]},
'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'],
['Abner_W._Sibal | militaryBranch | United_States_Army',
'Abner_W._Sibal | battles | World_War_II',
'World_War_II | commander | Chiang_Kai-shek']]},
'shape': '(X (X) (X (X)))',
'shape_type': 'mixed',
'size': 3}
```
### Data Fields
The following fields can be found in the instances:
- `category`: the category of the DBpedia entities present in the RDF triples.
- `eid`: an example ID, only unique per split per category.
- `size`: number of RDF triples in the set.
- `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape`
is a string representation of the tree with nested parentheses where X is a node (
see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the
subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training
set or not.
- `lex`: the lexicalizations, with:
  - `text`: the text to be predicted.
  - `lid`: a lexicalization ID, unique per example.
  - `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`
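Each triple in `otriple_set`/`mtriple_set` is a single pipe-delimited string. A small illustrative helper (ours, not shipped with the dataset) shows how such a string splits into its three components, assuming the entity and property names themselves contain no `" | "` separator:

```python
def parse_triple(triple):
    """Split a 'subject | property | object' string into a 3-tuple."""
    subject, prop, obj = (part.strip() for part in triple.split(" | "))
    return subject, prop, obj

s, p, o = parse_triple("Abner_W._Sibal | battle | World_War_II")
print(s, p, o)
```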
### Data Splits
The `en` version has `train`, `test` and `dev` splits; the `de` version, only `train` and `dev`.
## Dataset Creation
### Curation Rationale
Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources, such as the E2E (Novikova et al., 2017), ROTOWIRE (Wiseman et al., 2017) and WebNLG (Gardent et al., 2017a,b) corpora. Although these recent releases are highly valuable resources for the NLG community in general, all of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available that researchers can straightforwardly use to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more generally, are only available in English, limiting the development of NLG applications in other languages.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1`
licenses.
### Citation Information
- If you use the Enriched WebNLG corpus, cite:
```
@InProceedings{ferreiraetal2018,
author = "Castro Ferreira, Thiago
and Moussallem, Diego
and Wubben, Sander
and Krahmer, Emiel",
title = "Enriching the WebNLG corpus",
booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
year = "2018",
series = {INLG'18},
publisher = "Association for Computational Linguistics",
address = "Tilburg, The Netherlands",
}
@inproceedings{web_nlg,
author = {Claire Gardent and
Anastasia Shimorina and
Shashi Narayan and
Laura Perez{-}Beltrachini},
editor = {Regina Barzilay and
Min{-}Yen Kan},
title = {Creating Training Corpora for {NLG} Micro-Planners},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
1: Long Papers},
pages = {179--188},
publisher = {Association for Computational Linguistics},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-1017},
doi = {10.18653/v1/P17-1017}
}
```
### Contributions
Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset. |
C-MTEB/EcomRetrieval | 2023-07-28T09:37:55.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 221 | ---
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 9930587
num_examples: 100902
- name: queries
num_bytes: 32376
num_examples: 1000
download_size: 8448455
dataset_size: 9962963
---
# Dataset Card for "EcomRetrieval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/MedicalRetrieval | 2023-07-28T09:33:59.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 220 | ---
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 37393271
num_examples: 100999
- name: queries
num_bytes: 63649
num_examples: 1000
download_size: 25077981
dataset_size: 37456920
---
# Dataset Card for "MedicalRetrieval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
keremberke/license-plate-object-detection | 2023-01-18T20:37:51.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Self Driving",
"Anpr",
"region:us"
] | keremberke | null | @misc{ vehicle-registration-plates-trudk_dataset,
title = { Vehicle Registration Plates Dataset },
type = { Open Source Dataset },
author = { Augmented Startups },
howpublished = { \\url{ https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk } },
url = { https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-18 },
} | null | 7 | 219 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Self Driving
- Anpr
---
<div align="center">
<img width="640" alt="keremberke/license-plate-object-detection" src="https://huggingface.co/datasets/keremberke/license-plate-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['license_plate']
```
### Number of Images
```json
{'train': 6176, 'valid': 1765, 'test': 882}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/license-plate-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk/dataset/1](https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ vehicle-registration-plates-trudk_dataset,
title = { Vehicle Registration Plates Dataset },
type = { Open Source Dataset },
author = { Augmented Startups },
howpublished = { \\url{ https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk } },
url = { https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on January 13, 2022 at 5:20 PM GMT
It includes 8823 images.
VRP are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
fcakyon/pokemon-classification | 2023-01-14T13:06:55.000Z | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
] | fcakyon | null | @misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
} | null | 1 | 219 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Gaming
---
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="https://huggingface.co/datasets/fcakyon/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Golbat', 'Machoke', 'Omastar', 'Diglett', 'Lapras', 'Kabuto', 'Persian', 'Weepinbell', 'Golem', 'Dodrio', 'Raichu', 'Zapdos', 'Raticate', 'Magnemite', 'Ivysaur', 'Growlithe', 'Tangela', 'Drowzee', 'Rapidash', 'Venonat', 'Pidgeot', 'Nidorino', 'Porygon', 'Lickitung', 'Rattata', 'Machop', 'Charmeleon', 'Slowbro', 'Parasect', 'Eevee', 'Starmie', 'Staryu', 'Psyduck', 'Dragonair', 'Magikarp', 'Vileplume', 'Marowak', 'Pidgeotto', 'Shellder', 'Mewtwo', 'Farfetchd', 'Kingler', 'Seel', 'Kakuna', 'Doduo', 'Electabuzz', 'Charmander', 'Rhyhorn', 'Tauros', 'Dugtrio', 'Poliwrath', 'Gengar', 'Exeggutor', 'Dewgong', 'Jigglypuff', 'Geodude', 'Kadabra', 'Nidorina', 'Sandshrew', 'Grimer', 'MrMime', 'Pidgey', 'Koffing', 'Ekans', 'Alolan Sandslash', 'Venusaur', 'Snorlax', 'Paras', 'Jynx', 'Chansey', 'Hitmonchan', 'Gastly', 'Kangaskhan', 'Oddish', 'Wigglytuff', 'Graveler', 'Arcanine', 'Clefairy', 'Articuno', 'Poliwag', 'Abra', 'Squirtle', 'Voltorb', 'Ponyta', 'Moltres', 'Nidoqueen', 'Magmar', 'Onix', 'Vulpix', 'Butterfree', 'Krabby', 'Arbok', 'Clefable', 'Goldeen', 'Magneton', 'Dratini', 'Caterpie', 'Jolteon', 'Nidoking', 'Alakazam', 'Dragonite', 'Fearow', 'Slowpoke', 'Weezing', 'Beedrill', 'Weedle', 'Cloyster', 'Vaporeon', 'Gyarados', 'Golduck', 'Machamp', 'Hitmonlee', 'Primeape', 'Cubone', 'Sandslash', 'Scyther', 'Haunter', 'Metapod', 'Tentacruel', 'Aerodactyl', 'Kabutops', 'Ninetales', 'Zubat', 'Rhydon', 'Mew', 'Pinsir', 'Ditto', 'Victreebel', 'Omanyte', 'Horsea', 'Pikachu', 'Blastoise', 'Venomoth', 'Charizard', 'Seadra', 'Muk', 'Spearow', 'Bulbasaur', 'Bellsprout', 'Electrode', 'Gloom', 'Poliwhirl', 'Flareon', 'Seaking', 'Hypno', 'Wartortle', 'Mankey', 'Tentacool', 'Exeggcute', 'Meowth']
```
### Number of Images
```json
{'train': 4869, 'test': 732, 'valid': 1390}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/pokemon-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
|
djstrong/oscar-small | 2023-03-07T19:57:38.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | djstrong | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | null | 1 | 219 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Using this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's one](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one; a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
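The line-level pre-filter described above can be sketched as follows (a simplified illustration in Python; the actual goclassy implementation is written in Go):

```python
MIN_LINE_LENGTH = 100  # minimum number of UTF-8 characters per line

def keep_line(raw: bytes) -> bool:
    """Return True if a raw line passes the goclassy-style pre-filter."""
    try:
        text = raw.decode("utf-8")  # lines with invalid UTF-8 are discarded
    except UnicodeDecodeError:
        return False
    return len(text) >= MIN_LINE_LENGTH  # short lines are discarded

# Only sufficiently long, valid-UTF-8 lines would reach the classifier:
lines = [b"too short", b"a" * 150, b"\xff" + b"a" * 150]
kept = [line for line in lines if keep_line(line)]
```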
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
project-sloth/captcha-images | 2023-06-15T21:02:50.000Z | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"license:wtfpl",
"captcha",
"ocr",
"region:us"
] | project-sloth | Captcha images dataset. | null | null | 0 | 219 | ---
dataset_info:
features:
- name: image
dtype: image
- name: solution
dtype: string
splits:
- name: train
num_bytes: 24564698
num_examples: 6000
- name: validation
num_bytes: 8195367
num_examples: 2000
- name: test
num_bytes: 8186295
num_examples: 2000
download_size: 28857965
dataset_size: 40946360
license: wtfpl
task_categories:
- image-to-text
tags:
- captcha
- ocr
size_categories:
- 1K<n<10K
---
# Captcha dataset
## Data
Captcha images with solutions consisting of exactly six digits
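Since every label follows the same six-digit pattern, a simple sanity check can validate loaded solutions (a hypothetical helper for illustration, not part of the dataset itself):

```python
import re

SOLUTION_PATTERN = re.compile(r"\d{6}")  # exactly six digits

def is_valid_solution(solution: str) -> bool:
    """Check that a captcha solution is exactly a six-digit number."""
    return SOLUTION_PATTERN.fullmatch(solution) is not None

# fullmatch (unlike search) rejects strings that are too short or too long:
valid = is_valid_solution("042137")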
## Splits
* Train: 6000 images
* Validation: 2000 images
* Test: 2000 images
## Example
 |
IlyaGusev/ru_turbo_alpaca_evol_instruct | 2023-06-02T11:19:37.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | IlyaGusev | null | null | null | 6 | 218 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: iteration
dtype: uint32
splits:
- name: train
num_bytes: 105428021
num_examples: 47793
download_size: 27572163
dataset_size: 105428021
license: cc-by-4.0
task_categories:
- text-generation
language:
- ru
size_categories:
- 10K<n<100K
---
|
C-MTEB/VideoRetrieval | 2023-07-28T08:45:16.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 218 | ---
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 8176771
num_examples: 100930
- name: queries
num_bytes: 34156
num_examples: 1000
download_size: 7287165
dataset_size: 8210927
---
# Dataset Card for "VideoRetrieval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/VideoRetrieval-qrels | 2023-07-28T09:22:40.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 218 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 27968
num_examples: 1000
download_size: 17369
dataset_size: 27968
---
# Dataset Card for "VideoRetrieval-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arpelarpe/nota | 2022-10-11T07:56:49.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:da",
"license:cc0-1.0",
"region:us"
] | arpelarpe | Nota audio and text data
The dataset contains both text and speech data from selected parts of Nota's audiobook library. The dataset consists of
over 500 hours of recorded readings with accompanying transcriptions in Danish. All audio data is in .wav format, while text data
is in .txt format.
The data includes readings of Nota's own magazines "Inspiration" and "Radio/TV", published between 2007 and 2022.
Nota is credited for the work of structuring the data so that text and audio are aligned.
Nota is an institution under the Danish Ministry of Culture that makes printed texts available in digital formats for people
with visual impairments and reading difficulties, e.g. via the production of audiobooks and read-aloud newspapers, magazines, etc. | null | null | 2 | 217 | ---
pretty_name: Nota
license:
- cc0-1.0
language:
- da
multilinguality:
- monolingual
task_categories:
- automatic-speech-recognition
---
# Dataset Card Nota Lyd- og tekstdata
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Disclaimer](#disclaimer)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://sprogteknologi.dk/dataset/notalyd-ogtekstdata
- **Data Storage Url:** https://sprogtek-ressources.digst.govcloud.dk/nota/
- **Point of Contact:** info@sprogteknologi.dk
### Dataset Summary
This data was created by the public institution Nota (https://nota.dk/), which is part of the Danish Ministry of Culture. Nota has a library of audiobooks and audio magazines for people with reading or sight disabilities. Nota also produces a number of audiobooks and audio magazines itself.
The dataset consists of .wav and .txt files from Nota's audiomagazines "Inspiration" and "Radio/TV".
The dataset has been published as a part of the initiative sprogteknologi.dk, within the Danish Agency for Digital Government (www.digst.dk).
The dataset comprises 336 GB of data, containing voice recordings and accompanying transcripts.
Each publication has been segmented into .wav clips of 2-50 seconds, each with an accompanying transcription.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Danish
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (`path`) and its sentence.
```
{'path': '<path_to_clip>.wav', 'sentence': 'Dette er et eksempel', 'audio': {'path': '<path_to_clip>.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100}}
```
### Data Fields
path: The path to the audio file
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence that was read by the speaker
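The note above about preferring `dataset[0]["audio"]` over `dataset["audio"][0]` can be illustrated with a toy mock that mimics lazy decode-on-access. This is a simplified illustration, not the real `datasets` implementation: `MockAudioDataset` and the sample file names are made up for the example.

```python
# Toy illustration of why dataset[0]["audio"] is preferred over
# dataset["audio"][0]: audio cells are decoded lazily on access, so
# indexing the full "audio" column decodes every file before you can
# take element 0. This mock mimics that behaviour; it is NOT the
# real `datasets` API.

class MockAudioDataset:
    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0  # how many files have been decoded

    def _decode(self, path):
        self.decode_count += 1
        return {"path": path, "array": [0.0], "sampling_rate": 44100}

    def __getitem__(self, key):
        if isinstance(key, int):   # row access: decode one file
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":         # column access: decode all files
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = MockAudioDataset([f"clip_{i}.wav" for i in range(100)])

sample = ds[0]["audio"]        # decodes exactly 1 file
fast_decodes = ds.decode_count

ds.decode_count = 0
sample2 = ds["audio"][0]       # decodes all 100 files first
slow_decodes = ds.decode_count
```

With a real `datasets.Dataset`, the same access pattern applies: query the sample index first, then the `"audio"` column.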
### Data Splits
For now, the material has only a train split. As the dataset is at a very early stage, additional splits may be introduced later.
## Dataset Creation
### Disclaimer
There may be minor discrepancies between the .wav and .txt files. As a result, there may be issues in the alignment of timestamps, text and sound files.
There are no strict rules as to how readers read non-letter characters aloud (i.e. numbers, €, $, !, ?). These symbols may be read differently throughout the dataset.
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset is made public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings.
Other names appearing in the dataset belong to already publicly known individuals (e.g. TV or radio host names). Their names are not to be treated as sensitive or personal data in the context of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://sprogteknologi.dk/
Contact info@sprogteknologi.dk if you have questions regarding use of data.
They gladly receive inputs and ideas on how to distribute the data.
### Licensing Information
[CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/)
### |
luigisaetta/atco2_atcosim | 2023-03-02T09:09:43.000Z | [
"region:us"
] | luigisaetta | null | null | null | 0 | 217 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2049253684.428
num_examples: 8142
- name: test
num_bytes: 483912622.003
num_examples: 1957
download_size: 2521597292
dataset_size: 2533166306.4309998
---
# Dataset Card for "atco2_atcosim"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/MedicalRetrieval-qrels | 2023-07-28T09:34:03.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 217 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 26893
num_examples: 1000
download_size: 12201
dataset_size: 26893
---
# Dataset Card for "MedicalRetrieval-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roszcz/giant-midi-sustain | 2023-08-15T18:55:06.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 217 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: midi_filename
dtype: string
splits:
- name: train
num_bytes: 1548922542
num_examples: 10853
download_size: 483630029
dataset_size: 1548922542
---
# Dataset Card for "giant-midi-sustain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
notrichardren/HaluEval | 2023-09-11T21:09:44.000Z | [
"region:us"
] | notrichardren | null | null | null | 0 | 217 | ---
dataset_info:
- config_name: dialogue
features:
- name: knowledge
dtype: string
- name: dialogue_history
dtype: string
- name: right_response
dtype: string
- name: hallucinated_response
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 6332598
num_examples: 10000
download_size: 3451421
dataset_size: 6332598
- config_name: general
features:
- name: user_query
dtype: string
- name: chatgpt_response
dtype: string
- name: hallucination_label
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 3010941
num_examples: 5000
download_size: 1849332
dataset_size: 3010941
- config_name: qa
features:
- name: knowledge
dtype: string
- name: question
dtype: string
- name: right_answer
dtype: string
- name: hallucinated_answer
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 5546422
num_examples: 10000
download_size: 3753464
dataset_size: 5546422
- config_name: summarization
features:
- name: document
dtype: string
- name: right_summary
dtype: string
- name: hallucinated_summary
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 46578787
num_examples: 10000
download_size: 27986765
dataset_size: 46578787
configs:
- config_name: dialogue
data_files:
- split: train
path: dialogue/train-*
- config_name: general
data_files:
- split: train
path: general/train-*
- config_name: qa
data_files:
- split: train
path: qa/train-*
- config_name: summarization
data_files:
- split: train
path: summarization/train-*
---
# Dataset Card for "HaluEval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/dc1d52d8 | 2023-10-01T15:17:52.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 217 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1321
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dc1d52d8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
covid_qa_deepset | 2022-11-03T16:31:16.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | null | COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. | @inproceedings{moller2020covid,
title={COVID-QA: A Question Answering Dataset for COVID-19},
author={M{\"o}ller, Timo and Reina, Anthony and Jayakumar, Raghavan and Pietsch, Malte},
booktitle={Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020},
year={2020}
} | null | 1 | 216 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
paperswithcode_id: null
pretty_name: COVID-QA
dataset_info:
features:
- name: document_id
dtype: int32
- name: context
dtype: string
- name: question
dtype: string
- name: is_impossible
dtype: bool
- name: id
dtype: int32
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: covid_qa_deepset
splits:
- name: train
num_bytes: 65151262
num_examples: 2019
download_size: 4418117
dataset_size: 65151262
---
# Dataset Card for COVID-QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/deepset-ai/COVID-QA
- **Paper:** https://openreview.net/forum?id=JENSKEEzsoU
- **Point of Contact:** [deepset AI](https://github.com/deepset-ai)
### Dataset Summary
COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
A total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
**What do the instances that comprise the dataset represent?**
Each represents a question, a context (document passage from the CORD19 dataset) and an answer.
**How many instances are there in total?**
2019 instances
**What data does each instance consist of?**
Each instance is a question, a set of answers, and an id associated with each answer.
[More Information Needed]
### Data Fields
The data was annotated in SQuAD-style fashion, where each row contains:
* **question**: Query question
* **context**: Context text to obtain the answer from
* **document_id** The document ID of the context text
* **answer**: Dictionary containing the answer string and the start index
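The SQuAD-style layout means each answer can be recovered from the context via its start offset. The following is a minimal sketch of that invariant; the record shown is illustrative, not taken from the dataset.

```python
# Minimal sketch of the SQuAD-style record layout described above.
# The example record is made up for illustration.

record = {
    "question": "What is the incubation period?",
    "context": "Studies estimate the incubation period at 5 days on average.",
    "answers": {"text": ["5 days"], "answer_start": [42]},
}

def answer_spans(record):
    """Recover each answer string from the context using its start offset."""
    ctx = record["context"]
    return [
        ctx[start:start + len(text)]
        for text, start in zip(record["answers"]["text"],
                               record["answers"]["answer_start"])
    ]

spans = answer_spans(record)  # each span should equal the annotated text
```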
### Data Splits
**data/COVID-QA.json**: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The initial data comes from 147 scientific articles in the CORD-19 dataset. Questions and answers were
annotated afterwards.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
While annotators were volunteers, they were required to have at least a Master’s degree in biomedical sciences.
The annotation team was led by a medical doctor (G.A.R.) who vetted the volunteer’s credentials and
manually verified each question/answer pair produced. We used an existing, web-based annotation tool that had been
created by deepset and is available at their Neural Search framework [haystack](https://github.com/deepset-ai/haystack).
#### Who are the annotators?
The annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers.
These QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
## Additional Information
The authors listed on the homepage maintain and support the dataset.
### Dataset Curators
[More Information Needed]
### Licensing Information
The COVID-QA dataset is licensed under the [Apache License 2.0](https://github.com/deepset-ai/COVID-QA/blob/master/LICENSE)
### Citation Information
```
@inproceedings{moller2020covid,
title={COVID-QA: A Question Answering Dataset for COVID-19},
author={M{\"o}ller, Timo and Reina, Anthony and Jayakumar, Raghavan and Pietsch, Malte},
booktitle={Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020},
year={2020}
}
```
### Contributions
Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset. |
wiki_summary | 2022-11-18T22:00:55.000Z | [
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:question-answering",
"task_categories:summarization",
"task_ids:abstractive-qa",
"task_ids:explanation-generation",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"task_ids:open-domain-abstractive-qa",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fa",
"license:apache-2.0",
"region:us"
] | null | \
The dataset was extracted from Persian Wikipedia as pairs of articles and highlights; the data was cleaned, and the articles' length (only version 1.0.0) and highlights' length were capped at a maximum of 512 and 128, respectively, to suit parsBERT. | \
@misc{Bert2BertWikiSummaryPersian,
author = {Mehrdad Farahani},
title = {Summarization using Bert2Bert model on WikiSummary dataset},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/m3hrdadfi/wiki-summary},
} | null | 4 | 216 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- fa
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
- translation
- question-answering
- summarization
task_ids:
- abstractive-qa
- explanation-generation
- extractive-qa
- open-domain-qa
- open-domain-abstractive-qa
- text-simplification
pretty_name: WikiSummary
dataset_info:
features:
- name: id
dtype: string
- name: link
dtype: string
- name: title
dtype: string
- name: article
dtype: string
- name: highlights
dtype: string
splits:
- name: train
num_bytes: 207186608
num_examples: 45654
- name: test
num_bytes: 25693509
num_examples: 5638
- name: validation
num_bytes: 23130954
num_examples: 5074
download_size: 255168504
dataset_size: 256011071
---
# Dataset Card for WikiSummary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/m3hrdadfi/wiki-summary
- **Repository:** https://github.com/m3hrdadfi/wiki-summary
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadphi@gmail.com)
### Dataset Summary
The dataset was extracted from Persian Wikipedia as pairs of articles and highlights. The data was cleaned, and the articles' length (only in version 1.0.0) and highlights' length were capped at a maximum of 512 and 128, respectively, to suit parsBERT. The dataset was created to achieve state-of-the-art results on NLP tasks such as text summarization.
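The length capping mentioned in the summary can be sketched as follows. This is an illustration only: it uses whitespace tokenisation, whereas the authors presumably applied parsBERT's tokenizer; the helper name `cap_length` is ours.

```python
# Sketch of the length capping described in the summary: articles are
# capped at 512 tokens and highlights at 128. Whitespace tokenisation
# is used here for illustration, not parsBERT's actual tokenizer.

def cap_length(text: str, max_tokens: int) -> str:
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

# Synthetic over-length inputs for demonstration.
article = " ".join(f"w{i}" for i in range(600))
highlight = " ".join(f"h{i}" for i in range(200))

capped_article = cap_length(article, 512)
capped_highlight = cap_length(highlight, 128)
```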
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Persian.
## Dataset Structure
### Data Instances
```
{
'id' :'0598cfd2ac491a928615945054ab7602034a8f4f',
'link': 'https://fa.wikipedia.org/wiki/انقلاب_1917_روسیه',
'title': 'انقلاب 1917 روسیه',
'article': 'نخست انقلاب فوریه ۱۹۱۷ رخ داد . در این انقلاب پس از یکسری اعتصابات ، تظاهرات و درگیریها ، نیکولای دوم ، آخرین تزار روسیه از سلطنت خلع شد و یک دولت موقت به قدرت رسید . دولت موقت زیر نظر گئورگی لووف و الکساندر کرنسکی تشکیل شد . اکثر اعضای دولت موقت ، از شاخه منشویک حزب سوسیال دموکرات کارگری روسیه بودند . دومین مرحله ، انقلاب اکتبر ۱۹۱۷ بود . انقلاب اکتبر ، تحت نظارت حزب بلشویک (شاخه رادیکال از حزب سوسیال دموکرات کارگری روسیه) و به رهبری ولادیمیر لنین به پیش رفت و طی یک یورش نظامی همهجانبه به کاخ زمستانی سن پترزبورگ و سایر اماکن مهم ، قدرت را از دولت موقت گرفت . در این انقلاب افراد بسیار کمی کشته شدند . از زمان شکست روسیه در جنگ ۱۹۰۵ با ژاپن ، اوضاع بد اقتصادی ، گرسنگی ، عقبماندگی و سرمایهداری و نارضایتیهای گوناگون در بین مردم ، سربازان ، کارگران ، کشاورزان و نخبگان روسیه بهوجود آمدهبود . سرکوبهای تزار و ایجاد مجلس دوما نظام مشروطه حاصل آن دوران است . حزب سوسیال دموکرات ، اصلیترین معترض به سیاستهای نیکلای دوم بود که بهطور گسترده بین دهقانان کشاورزان و کارگران کارخانجات صنعتی علیه سیاستهای سیستم تزار فعالیت داشت . در اوت ۱۹۱۴ میلادی ، امپراتوری روسیه به دستور تزار وقت و به منظور حمایت از اسلاوهای صربستان وارد جنگ جهانی اول در برابر امپراتوری آلمان و امپراتوری اتریش-مجارستان شد . نخست فقط بلشویکها ، مخالف ورود روسیه به این جنگ بودند و میگفتند که این جنگ ، سبب بدتر شدن اوضاع نابسامان اقتصادی و اجتماعی روسیه خواهد شد . در سال ۱۹۱۴ میلادی ، یعنی در آغاز جنگ جهانی اول ، روسیه بزرگترین ارتش جهان را داشت ، حدود ۱۲ میلیون سرباز و ۶ میلیون سرباز ذخیره ؛ ولی در پایان سال ۱۹۱۶ میلادی ، پنج میلیون نفر از سربازان روسیه کشته ، زخمی یا اسیر شده بودند . حدود دو میلیون سرباز نیز محل خدمت خود را ترک کرده و غالبا با اسلحه به شهر و دیار خود بازگشته بودند . در میان ۱۰ یا ۱۱ میلیون سرباز باقیمانده نیز ، اعتبار تزار و سلسله مراتب ارتش و اتوریته افسران بالا دست از بین رفته بود . عوامل نابسامان داخلی اعم از اجتماعی کشاورزی و فرماندهی نظامی در شکستهای روسیه بسیار مؤثر بود . شکستهای روسیه در جنگ جهانی اول ، حامیان نیکلای دوم در روسیه را به حداقل خود رساند . 
در اوایل فوریه ۱۹۱۷ میلادی اکثر کارگران صنعتی در پتروگراد و مسکو دست به اعتصاب زدند . سپس شورش به پادگانها و سربازان رسید . اعتراضات دهقانان نیز گسترش یافت . سوسیال دموکراتها هدایت اعتراضات را در دست گرفتند . در ۱۱ مارس ۱۹۱۷ میلادی ، تزار وقت روسیه ، نیکلای دوم ، فرمان انحلال مجلس روسیه را صادر کرد ، اما اکثر نمایندگان مجلس متفرق نشدند و با تصمیمات نیکلای دوم مخالفت کردند . سرانجام در پی تظاهرات گسترده کارگران و سپس نافرمانی سربازان در سرکوب تظاهرکنندگان در پتروگراد ، نیکلای دوم از مقام خود استعفا داد . بدین ترتیب حکمرانی دودمان رومانوفها بر روسیه پس از حدود سیصد سال پایان یافت .',
'highlights': 'انقلاب ۱۹۱۷ روسیه ، جنبشی اعتراضی ، ضد امپراتوری روسیه بود که در سال ۱۹۱۷ رخ داد و به سرنگونی حکومت تزارها و برپایی اتحاد جماهیر شوروی انجامید . مبانی انقلاب بر پایه صلح-نان-زمین استوار بود . این انقلاب در دو مرحله صورت گرفت : در طول این انقلاب در شهرهای اصلی روسیه همانند مسکو و سن پترزبورگ رویدادهای تاریخی برجستهای رخ داد . انقلاب در مناطق روستایی و رعیتی نیز پا به پای مناطق شهری در حال پیشروی بود و دهقانان زمینها را تصرف کرده و در حال بازتوزیع آن در میان خود بودند .'
}
```
### Data Fields
- `id`: Article id
- `link`: Article link
- `title`: Title of the article
- `article`: Full text content in the article
- `highlights`: Summary of the article
### Data Splits
| Train | Test | Validation |
|-------------|-------------|-------------|
| 45,654 | 5,638 | 5,074 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Mehrdad Farahani.
### Licensing Information
[Apache License 2.0](https://github.com/m3hrdadfi/wiki-summary/blob/master/LICENSE)
### Citation Information
```
@misc{Bert2BertWikiSummaryPersian,
author = {Mehrdad Farahani},
title = {Summarization using Bert2Bert model on WikiSummary dataset},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/m3hrdadfi/wiki-summary},
}
```
### Contributions
Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset. |
roszcz/maestro-v1 | 2023-04-23T12:18:27.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 216 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: control_changes
struct:
- name: number
sequence: int64
- name: time
sequence: float64
- name: value
sequence: int64
- name: composer
dtype: string
- name: title
dtype: string
- name: year
dtype: int64
- name: midi_filename
dtype: string
splits:
- name: validation
num_bytes: 59070511.71238244
num_examples: 137
- name: test
num_bytes: 76317376.44592476
num_examples: 177
- name: train
num_bytes: 414787096.8416928
num_examples: 962
download_size: 155533838
dataset_size: 550174985.0
---
# Dataset Card for "maestro-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/EcomRetrieval-qrels | 2023-07-28T09:37:58.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 216 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 27890
num_examples: 1000
download_size: 14540
dataset_size: 27890
---
# Dataset Card for "EcomRetrieval-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/pubmed_rapnonbiomedical_2 | 2023-09-13T02:23:56.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 216 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
- name: title
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 4112162.316
num_examples: 982
download_size: 2354929
dataset_size: 4112162.316
---
# Dataset Card for "pubmed_rapnonbiomedical_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sproos/arxiv-embeddings | 2023-09-20T18:34:36.000Z | [
"license:apache-2.0",
"region:us"
] | sproos | null | null | null | 0 | 216 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: abstract
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 3585049145.8887267
num_examples: 266311
- name: test
num_bytes: 398350760.11127335
num_examples: 29591
download_size: 3783925189
dataset_size: 3983399906.0
---
|
clips/mfaq | 2022-10-20T11:32:50.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:cs",
"language:da",
"language:de",
"language:en",
"language:es",
"language:fi",
"language:fr",
"language:he",
"language:hr",
"language:hu",
"language:id",
"language:it",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sv",
"language:tr",
"language:vi",
"license:cc0-1.0",
"arxiv:2109.12870",
"region:us"
] | clips | We present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages. | @InProceedings{mfaq_a_multilingual_dataset,
title={MFAQ: a Multilingual FAQ Dataset},
author={Maxime {De Bruyn} and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans},
year={2021},
booktitle={MRQA @ EMNLP 2021}
} | null | 26 | 215 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- cs
- da
- de
- en
- es
- fi
- fr
- he
- hr
- hu
- id
- it
- nl
- 'no'
- pl
- pt
- ro
- ru
- sv
- tr
- vi
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: MFAQ - a Multilingual FAQ Dataset
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# MFAQ
🚨 See [MQA](https://huggingface.co/datasets/clips/mqa) or [MFAQ Light](maximedb/mfaq_light) for an updated version of the dataset.
MFAQ is a multilingual corpus of *Frequently Asked Questions* parsed from the [Common Crawl](https://commoncrawl.org/).
```
from datasets import load_dataset
load_dataset("clips/mfaq", "en")
{
"qa_pairs": [
{
"question": "Do I need a rental Car in Cork?",
"answer": "If you plan on travelling outside of Cork City, for instance to Kinsale [...]"
},
...
]
}
```
## Languages
We collected around 6M pairs of questions and answers in 21 different languages. To download a language specific subset you need to specify the language key as configuration. See below for an example.
```
load_dataset("clips/mfaq", "en") # replace "en" by any language listed below
```
| Language | Key | Pairs | Pages |
|------------|-----|-----------|-----------|
| All | all | 6,346,693 | 1,035,649 |
| English | en | 3,719,484 | 608,796 |
| German | de | 829,098 | 111,618 |
| Spanish | es | 482,818 | 75,489 |
| French | fr | 351,458 | 56,317 |
| Italian | it | 155,296 | 24,562 |
| Dutch | nl | 150,819 | 32,574 |
| Portuguese | pt | 138,778 | 26,169 |
| Turkish | tr | 102,373 | 19,002 |
| Russian | ru | 91,771 | 22,643 |
| Polish | pl | 65,182 | 10,695 |
| Indonesian | id | 45,839 | 7,910 |
| Norwegian | no | 37,711 | 5,143 |
| Swedish | sv | 37,003 | 5,270 |
| Danish | da | 32,655 | 5,279 |
| Vietnamese | vi | 27,157 | 5,261 |
| Finnish | fi | 20,485 | 2,795 |
| Romanian | ro | 17,066 | 3,554 |
| Czech | cs | 16,675 | 2,568 |
| Hebrew | he | 11,212 | 1,921 |
| Hungarian | hu | 8,598 | 1,264 |
| Croatian | hr | 5,215 | 819 |
## Data Fields
#### Nested (per page - default)
The data is organized by page. Each page contains a list of questions and answers.
- **id**
- **language**
- **num_pairs**: the number of FAQs on the page
- **domain**: source web domain of the FAQs
- **qa_pairs**: a list of questions and answers
- **question**
- **answer**
- **language**
#### Flattened
The data is organized by pair (i.e. pages are flattened). You can access the flat version of any language by appending `_flat` to the configuration (e.g. `en_flat`). The data will be returned pair-by-pair instead of page-by-page.
- **domain_id**
- **pair_id**
- **language**
- **domain**: source web domain of the FAQs
- **question**
- **answer**
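The relationship between the nested (per-page) and flattened (per-pair) formats can be sketched as below. The field names follow the card; the sample page and the `flatten_pages` helper are illustrative, not part of the dataset's tooling.

```python
# Sketch of flattening the page-level (nested) format into the
# pair-level ("_flat") format described above. The sample page is
# made up for illustration.

def flatten_pages(pages):
    flat = []
    for domain_id, page in enumerate(pages):
        for pair_id, qa in enumerate(page["qa_pairs"]):
            flat.append({
                "domain_id": domain_id,
                "pair_id": pair_id,
                "language": page["language"],
                "domain": page["domain"],
                "question": qa["question"],
                "answer": qa["answer"],
            })
    return flat

pages = [{
    "language": "en",
    "domain": "example.com",
    "qa_pairs": [
        {"question": "Do I need a rental car in Cork?",
         "answer": "If you plan on travelling outside of Cork City..."},
        {"question": "Is the city walkable?", "answer": "Yes."},
    ],
}]

flat = flatten_pages(pages)  # one record per question/answer pair
```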
## Source Data
This section was adapted from the source data description of [OSCAR](https://huggingface.co/datasets/oscar#source-data)
Common Crawl is a non-profit foundation which produces and maintains an open repository of web-crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies.
To construct MFAQ, the WARC files of Common Crawl were used. We looked for `FAQPage` markup in the HTML and subsequently parsed the `FAQItem` from the page.
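For illustration, `FAQPage` markup is commonly embedded as schema.org JSON-LD, and extracting question/answer pairs from it can be sketched as below. Note that this is only an approximation of the idea: the actual MFAQ pipeline parsed `FAQItem` markup from WARC HTML, and its details may differ.

```python
import json
import re

def extract_faq_jsonld(html):
    """Pull question/answer pairs out of schema.org FAQPage JSON-LD blocks."""
    pairs = []
    # Grab the contents of every <script type="application/ld+json"> block.
    for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    ):
        data = json.loads(block)
        if data.get("@type") != "FAQPage":
            continue
        for item in data.get("mainEntity", []):
            if item.get("@type") == "Question":
                pairs.append((item["name"], item["acceptedAnswer"]["text"]))
    return pairs

# A minimal mock page carrying one FAQPage annotation.
html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage",
 "mainEntity": [{"@type": "Question", "name": "What is MFAQ?",
   "acceptedAnswer": {"@type": "Answer", "text": "A multilingual FAQ dataset."}}]}
</script></head><body></body></html>"""

print(extract_faq_jsonld(html))
```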
## People
This dataset was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
## Licensing Information
```
These data are released under this licensing scheme.
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
```
## Citation information
```
@misc{debruyn2021mfaq,
title={MFAQ: a Multilingual FAQ Dataset},
author={Maxime {De Bruyn} and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans},
year={2021},
eprint={2109.12870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
vblagoje/wikipedia_snippets_streamed | 2021-07-01T15:32:09.000Z | [
"region:us"
] | vblagoje | The dataset was built from the Wikipedia dump (https://dumps.wikimedia.org/).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | null | 0 | 214 | Entry not found |
rguo123/trump_tweets | 2023-08-07T14:11:46.000Z | [
"region:us"
] | rguo123 | null | null | null | 0 | 214 | Entry not found |
zxvix/c4_academicbiomedical_2 | 2023-09-13T03:58:39.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 214 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 2352052.0
num_examples: 986
download_size: 1376270
dataset_size: 2352052.0
---
# Dataset Card for "c4_academicbiomedical_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kmaksatk/cn_data | 2023-09-21T09:54:29.000Z | [
"region:us"
] | kmaksatk | null | null | null | 1 | 214 | Entry not found |
allenai/scicite | 2023-01-25T14:43:39.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1904.01608",
"region:us"
] | allenai | This is a dataset for classifying citation intents in academic papers.
The main citation intent label for each JSON object is specified with the label
key, while the citation context is specified with a context key. Example:
{
'string': 'In chacma baboons, male-infant relationships can be linked to both
formation of friendships and paternity success [30,31].'
'sectionName': 'Introduction',
'label': 'background',
'citingPaperId': '7a6b2d4b405439',
'citedPaperId': '9d1abadc55b5e0',
...
}
You may obtain the full information about the paper using the provided paper ids
with the Semantic Scholar API (https://api.semanticscholar.org/).
The labels are:
Method, Background, Result | @InProceedings{Cohan2019Structural,
author={Arman Cohan and Waleed Ammar and Madeleine Van Zuylen and Field Cady},
title={Structural Scaffolds for Citation Intent Classification in Scientific Publications},
booktitle={NAACL},
year={2019}
} | null | 3 | 213 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: scicite
pretty_name: SciCite
dataset_info:
features:
- name: string
dtype: string
- name: sectionName
dtype: string
- name: label
dtype:
class_label:
names:
'0': method
'1': background
'2': result
- name: citingPaperId
dtype: string
- name: citedPaperId
dtype: string
- name: excerpt_index
dtype: int32
- name: isKeyCitation
dtype: bool
- name: label2
dtype:
class_label:
names:
'0': supportive
'1': not_supportive
'2': cant_determine
'3': none
- name: citeEnd
dtype: int64
- name: citeStart
dtype: int64
- name: source
dtype:
class_label:
names:
'0': properNoun
'1': andPhrase
'2': acronym
'3': etAlPhrase
'4': explicit
'5': acronymParen
'6': nan
- name: label_confidence
dtype: float32
- name: label2_confidence
dtype: float32
- name: id
dtype: string
splits:
- name: test
num_bytes: 870809
num_examples: 1859
- name: train
num_bytes: 3843904
num_examples: 8194
- name: validation
num_bytes: 430296
num_examples: 916
download_size: 23189911
dataset_size: 5145009
---
# Dataset Card for "scicite"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/allenai/scicite
- **Paper:** [Structural Scaffolds for Citation Intent Classification in Scientific Publications](https://arxiv.org/abs/1904.01608)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 22.12 MB
- **Size of the generated dataset:** 4.91 MB
- **Total amount of disk used:** 27.02 MB
### Dataset Summary
This is a dataset for classifying citation intents in academic papers.
The main citation intent label for each JSON object is specified with the label
key, while the citation context is specified with a context key. Example:
{
'string': 'In chacma baboons, male-infant relationships can be linked to both
formation of friendships and paternity success [30,31].'
'sectionName': 'Introduction',
'label': 'background',
'citingPaperId': '7a6b2d4b405439',
'citedPaperId': '9d1abadc55b5e0',
...
}
You may obtain the full information about the paper using the provided paper ids
with the Semantic Scholar API (https://api.semanticscholar.org/).
The labels are:
Method, Background, Result
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 22.12 MB
- **Size of the generated dataset:** 4.91 MB
- **Total amount of disk used:** 27.02 MB
An example of 'validation' looks as follows.
```
{
"citeEnd": 68,
"citeStart": 64,
"citedPaperId": "5e413c7872f5df231bf4a4f694504384560e98ca",
"citingPaperId": "8f1fbe460a901d994e9b81d69f77bfbe32719f4c",
"excerpt_index": 0,
"id": "8f1fbe460a901d994e9b81d69f77bfbe32719f4c>5e413c7872f5df231bf4a4f694504384560e98ca",
"isKeyCitation": false,
"label": 2,
"label2": 0,
"label2_confidence": 0.0,
"label_confidence": 0.0,
"sectionName": "Discussion",
"source": 4,
"string": "These results are in contrast with the findings of Santos et al.(16), who reported a significant association between low sedentary time and healthy CVF among Portuguese"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `string`: a `string` feature.
- `sectionName`: a `string` feature.
- `label`: a classification label, with possible values including `method` (0), `background` (1), `result` (2).
- `citingPaperId`: a `string` feature.
- `citedPaperId`: a `string` feature.
- `excerpt_index`: a `int32` feature.
- `isKeyCitation`: a `bool` feature.
- `label2`: a classification label, with possible values including `supportive` (0), `not_supportive` (1), `cant_determine` (2), `none` (3).
- `citeEnd`: a `int64` feature.
- `citeStart`: a `int64` feature.
- `source`: a classification label, with possible values including `properNoun` (0), `andPhrase` (1), `acronym` (2), `etAlPhrase` (3), `explicit` (4), `acronymParen` (5), `nan` (6).
- `label_confidence`: a `float32` feature.
- `label2_confidence`: a `float32` feature.
- `id`: a `string` feature.
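Because `label`, `label2`, and `source` are stored as integer class ids, a small lookup is enough to decode them for inspection. The name lists below are copied from the schema above; the sample row mirrors the validation example shown earlier:

```python
LABEL_NAMES = ["method", "background", "result"]
LABEL2_NAMES = ["supportive", "not_supportive", "cant_determine", "none"]
SOURCE_NAMES = ["properNoun", "andPhrase", "acronym",
                "etAlPhrase", "explicit", "acronymParen", "nan"]

def decode(example):
    """Replace integer class ids with their string names."""
    out = dict(example)
    out["label"] = LABEL_NAMES[example["label"]]
    out["label2"] = LABEL2_NAMES[example["label2"]]
    out["source"] = SOURCE_NAMES[example["source"]]
    return out

row = {"label": 2, "label2": 0, "source": 4, "sectionName": "Discussion"}
print(decode(row))
```

When loading through `datasets`, the same mapping is also available via the feature's `int2str` method, so this manual table is only needed when working with the raw JSON.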
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8194| 916|1859|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{cohan-etal-2019-structural,
title = "Structural Scaffolds for Citation Intent Classification in Scientific Publications",
author = "Cohan, Arman and
Ammar, Waleed and
van Zuylen, Madeleine and
Cady, Field",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1361",
doi = "10.18653/v1/N19-1361",
pages = "3586--3596",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
Paul/hatecheck | 2022-07-05T10:27:25.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2012.15606",
"region:us"
] | Paul | null | null | null | 4 | 213 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for HateCheck
## Dataset Description
HateCheck is a suite of functional tests for hate speech detection models.
The dataset contains 3,728 validated test cases in 29 functional tests.
19 functional tests correspond to distinct types of hate. The other 11 functional tests cover challenging types of non-hate.
This allows for targeted diagnostic insights into model performance.
In our ACL paper, we found critical weaknesses in all commercial and academic hate speech detection models that we tested with HateCheck.
Please refer to the paper (linked below) for results and further discussion, as well as further information on the dataset and a full data statement.
- **Paper:** Röttger et al. (2021) - HateCheck: Functional Tests for Hate Speech Detection Models. https://aclanthology.org/2021.acl-long.4/ or https://arxiv.org/abs/2012.15606
- **Repository:** https://github.com/paul-rottger/hatecheck-data
- **Point of Contact:** paul.rottger@oii.ox.ac.uk
## Dataset Structure
"test.csv" contains all 3,728 validated test cases. Each test case (row) has the following attributes:
**functionality**
The shorthand for the functionality tested by the test case.
**case_id**
The unique ID of the test case (assigned to each of the 3,901 cases we initially generated)
**test_case**
The text of the test case.
**label_gold**
The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group targeted or referenced by the test case. We cover seven protected groups in the test suite: women, trans people, gay people, black people, disabled people, Muslims and immigrants.
**direction**
For hateful cases, the binary secondary label indicating whether they are *directed* at an individual as part of a protected group or aimed at the group in *general*.
**focus_words**
Where applicable, the key word or phrase in a given test case (e.g. "cut their throats").
**focus_lemma**
Where applicable, the corresponding lemma (e.g. "cut sb. throat").
**ref_case_id**
For hateful cases, where applicable, the ID of the simpler hateful case which was perturbed to generate them.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted.
**ref_templ_id**
The equivalent, but for template IDs.
**templ_id**
The unique ID of the template from which the test case was generated (assigned to each of the 866 cases and templates from which we generated the 3,901 initial cases).
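To get the targeted diagnostic insights the suite is built for, one typically groups the rows of `test.csv` by the `functionality` column and scores a model per group. A minimal sketch of the grouping step, run here on a two-row in-memory mock of the file (so the cases shown are merely illustrative, not actual HateCheck rows):

```python
import csv
import io
from collections import defaultdict

# A tiny stand-in for test.csv with a subset of the columns described above.
mock_csv = """functionality,case_id,test_case,label_gold,target_ident
derog_neg_emote_h,1,I hate [IDENTITY].,hateful,women
ident_neutral_nh,2,We are [IDENTITY].,non-hateful,women
"""

cases_by_functionality = defaultdict(list)
for row in csv.DictReader(io.StringIO(mock_csv)):
    cases_by_functionality[row["functionality"]].append(row)

for func, cases in cases_by_functionality.items():
    gold = {c["label_gold"] for c in cases}
    print(func, len(cases), gold)
```

Since all test cases within a functionality share the same gold label, per-functionality accuracy is simply the fraction of cases in each group that the model labels correctly.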
## Citation Information
When using HateCheck, please cite our ACL paper:
@inproceedings{rottger-etal-2021-hatecheck,
title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
author = {R{\"o}ttger, Paul and
Vidgen, Bertie and
Nguyen, Dong and
Waseem, Zeerak and
Margetts, Helen and
Pierrehumbert, Janet},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.4",
doi = "10.18653/v1/2021.acl-long.4",
pages = "41--58",
abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
}
|
indiejoseph/wikipedia-zh-yue-filtered | 2023-09-13T13:11:54.000Z | [
"license:cc-by-4.0",
"region:us"
] | indiejoseph | null | null | null | 0 | 213 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 90299602
num_examples: 133133
download_size: 56260688
dataset_size: 90299602
---
|
Narsil/asr_dummy | 2023-03-30T14:10:15.000Z | [
"region:us"
] | Narsil | Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .flac format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 212 | Entry not found |
nishita/webnlg-data2text | 2022-07-21T14:24:22.000Z | [
"region:us"
] | nishita | null | null | null | 0 | 212 | Entry not found |
sepidmnorozy/Korean_sentiment | 2022-08-16T09:25:48.000Z | [
"region:us"
] | sepidmnorozy | null | null | null | 1 | 212 | Entry not found |
SLPL/naab | 2022-11-03T06:33:48.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:fa",
"license:mit",
"arxiv:2208.13486",
"region:us"
] | SLPL | Huge corpora of textual data are always known to be a crucial need for training deep models such as transformer-based ones. This issue is emerging more in lower resource languages - like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. | @misc{https://doi.org/10.48550/arxiv.2208.13486,
doi = {10.48550/ARXIV.2208.13486},
url = {https://arxiv.org/abs/2208.13486},
author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {naab: A ready-to-use plug-and-play corpus for Farsi},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | null | 23 | 212 | ---
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
---
# naab: A ready-to-use plug-and-play corpus in Farsi
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
### Dataset Summary
naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. We also provide the raw version of the corpus called naab-raw and an easy-to-use pre-processor that can be employed by those who wanted to make a customized corpus.
You can use this corpus by the commands below:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab")
```
If you only need parts/splits of this corpus, use the command below (you can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab", split="train[:10%]")
```
**Note: make sure that your machine has at least 130 GB of free space, and be aware that the download may take a while. If you are short on disk space or bandwidth, the code snippet below helps you download only selected sections of naab:**
```python
from datasets import load_dataset
# ==========================================================
# You should just change this part in order to download your
# parts of corpus.
indices = {
"train": [5, 1, 2],
"test": [0, 2]
}
# ==========================================================
N_FILES = {
"train": 126,
"test": 3
}
_BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
data_url = {
"train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
"test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
}
for index in indices['train']:
assert index < N_FILES['train']
for index in indices['test']:
assert index < N_FILES['test']
data_files = {
"train": [data_url['train'][i] for i in indices['train']],
"test": [data_url['test'][i] for i in indices['test']]
}
print(data_files)
dataset = load_dataset('text', data_files=data_files, use_auth_token=True)
```
### Supported Tasks and Leaderboards
This corpus can be used for training any language model trained with Masked Language Modeling (MLM) or any other self-supervised objective.
- `language-modeling`
- `masked-language-modeling`
## Dataset Structure
Each row of the dataset will look like something like the below:
```json
{
'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```
+ `text` : the textual paragraph.
### Data Splits
This dataset includes two splits (`train` and `test`). We created them by dividing a randomly permuted version of the corpus into a (95%, 5%) split for (`train`, `test`). Since validation usually takes place during training on the `train` split, we do not propose a separate `validation` split.
| | train | test |
|-------------------------|------:|-----:|
| Input Sentences | 225892925 | 11083849 |
| Average Sentence Length | 61 | 25 |
Below you can see the log-based histogram of word/paragraph over the two splits of the dataset.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png">
</div>
## Dataset Creation
### Curation Rationale
Due to the lack of a huge amount of text data in lower-resource languages - like Farsi - researchers working on these languages have always found it hard to start fine-tuning such models. This can lead to a situation in which the golden opportunity for fine-tuning models is left in the hands of a few companies or countries, which contributes to weakening open science.
The previous largest cleaned, merged textual corpus in Farsi was a 70GB text corpus compiled from 8 big datasets that had been cleaned and could be downloaded directly. Our solution to the discussed issues is called naab. It provides **126GB** (including more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (including nearly **11 million** sequences and nearly **300 million** words) as the test corpus.
### Source Data
The textual corpora that we used as our source data are illustrated in the figure below. It contains 5 corpora which are linked in the coming sections.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
</div>
#### Persian NLP
[This](https://github.com/persiannlp/persian-raw-text) corpus includes eight corpora that are sorted based on their volume as below:
- [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
- [MirasText](https://github.com/miras-tech/MirasText): 12GB
- [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt))
- Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt))
- [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt))
- [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt))
- [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt))
- [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt))
#### AGP
This corpus was formerly a private corpus of ASR Gooyesh Pardaz, which is now published for all users by this project. This corpus contains more than 140 million paragraphs summed up in 23GB (after cleaning). This corpus is a mixture of both formal and informal paragraphs that are crawled from different websites and/or social media.
#### OSCAR-fa
[OSCAR](https://oscar-corpus.com/) or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the go classy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa from this corpus, after cleaning there were about 36GB remaining.
#### Telegram
Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Based on this observation, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from the mentioned channels mainly contains informal data.
#### LSCP
[The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) has 120M sentences from 27M casual Persian sentences with its derivation tree, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. However, we only used the Farsi part of it, and after cleaning we had 2.3GB remaining. Since the dataset is casual, it may help our corpus include more informal sentences, although its proportion to formal paragraphs is not comparable.
#### Initial Data Collection and Normalization
The data collection process was separated into two parts. In the first part, we searched for existing corpora. After downloading these corpora, we started to crawl data from some social networks. Then, thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/), we were provided with enough textual data to start the naab journey.
We used a preprocessor based on stream-based Linux command-line tools so that this process is less time- and memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess).
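The linked preprocessor is built from shell pipelines; purely as an illustration of the kind of streaming clean-up such a pass performs (the rules below are made-up examples, not the actual naab rules), the same idea looks like this in Python:

```python
import re

def clean_stream(lines, min_words=3):
    """Yield normalized, deduplicated paragraphs from an iterable of lines."""
    seen = set()
    for line in lines:
        text = re.sub(r"\s+", " ", line).strip()  # collapse whitespace
        if len(text.split()) < min_words:         # drop very short lines
            continue
        if text in seen:                          # drop exact duplicates
            continue
        seen.add(text)
        yield text

sample = [
    "  سلام   دنیا  \n",            # too short: dropped
    "این یک پاراگراف تستی است\n",    # kept
    "این یک پاراگراف تستی است\n",    # exact duplicate: dropped
]
print(list(clean_stream(sample)))
```

Note that keeping a `seen` set in memory would not scale to 130GB; a real stream-based pipeline would instead rely on sorted deduplication (e.g. `sort -u`-style tools), which is presumably why shell commands were chosen.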
### Personal and Sensitive Information
Since this corpus is largely a compilation of pre-existing corpora, we take no responsibility for personal information included in it. If you detect any such violations, please let us know and we will do our best to remove them from the corpus as soon as possible.
We tried our best to preserve anonymity while keeping the crucial information. We shuffled some parts of the corpus so that information passed through any conversations it contains would not be harmful.
## Additional Information
### Dataset Curators
+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)
### Licensing Information
mit?
### Citation Information
```
@article{sabouri2022naab,
title={naab: A ready-to-use plug-and-play corpus for Farsi},
author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
journal={arXiv preprint arXiv:2208.13486},
year={2022}
}
```
DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
### Contributions
Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.
### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
|
yerevann/coco-karpathy | 2022-10-31T11:24:01.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"language:en",
"coco",
"image-captioning",
"region:us"
] | yerevann | null | null | null | 3 | 212 | ---
language:
- en
task_categories:
- image-to-text
task_ids:
- image-captioning
pretty_name: COCO Karpathy split
tags:
- coco
- image-captioning
---
# Dataset Card for "yerevann/coco-karpathy"
The Karpathy split of COCO for image captioning.
|
jamescalam/youtube-transcriptions | 2022-10-22T01:20:07.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:visual-question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"task_ids:document-retrieval",
"task_ids:visual-question-answering",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"youtube",
"technical",
"speech to text",
"speech",
"video",
"video search",
"audio",
"audio search",
"region:us"
] | jamescalam | null | null | null | 18 | 212 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Youtube Transcriptions
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- youtube
- technical
- speech to text
- speech
- video
- video search
- audio
- audio search
task_categories:
- conversational
- question-answering
- text-retrieval
- visual-question-answering
task_ids:
- open-domain-qa
- extractive-qa
- document-retrieval
- visual-question-answering
---
The YouTube transcriptions dataset contains technical tutorials (currently from [James Briggs](https://www.youtube.com/c/jamesbriggs), [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ), and [AI Coffee Break](https://www.youtube.com/c/aicoffeebreak)) transcribed using [OpenAI's Whisper](https://huggingface.co/openai/whisper-large) (large). Each row represents roughly a sentence-length chunk of text alongside the video URL and timestamp.
Note that each item in the dataset contains just a short chunk of text. For most use cases you will likely need to merge multiple rows to create more substantial chunks of text; if you need to do that, this code snippet will help:
```python
from datasets import load_dataset
# first download the dataset
data = load_dataset(
'jamescalam/youtube-transcriptions',
split='train'
)
new_data = [] # this will store adjusted data
window = 6 # number of sentences to combine
stride = 3 # number of sentences to 'stride' over, used to create overlap
for i in range(0, len(data), stride):
i_end = min(len(data)-1, i+window)
if data[i]['title'] != data[i_end]['title']:
# in this case we skip this entry as we have start/end of two videos
continue
# create larger text chunk
text = ' '.join(data[i:i_end]['text'])
# add to adjusted data list
new_data.append({
'start': data[i]['start'],
'end': data[i_end]['end'],
'title': data[i]['title'],
'text': text,
'id': data[i]['id'],
'url': data[i]['url'],
'published': data[i]['published']
})
``` |
laion/OIG | 2023-03-31T00:06:28.000Z | [
"license:apache-2.0",
"region:us"
] | laion | null | null | null | 252 | 211 | ---
license: apache-2.0
---
# This is the Open Instruction Generalist Dataset
This is our attempt to create a large instruction dataset of medium quality along with a smaller high quality instruction dataset (OIG-small-chip2).
The data is in the form of jsonl objects, with at least a 'text' field. Some datasets may also include a 'metadata' field. The 'text' field contains a string of the form of one or more of:
- \<human\>: instruction\n\<bot\>: response
- \<human\>: instruction\n\<bot\>: response .. \<human\>: instruction\n\<bot\>: response
The purpose of the larger dataset is to perform continued pre-training, followed by a finetune on the smaller high quality dataset.
The purpose of the smaller OIG-small-chip2 dataset is to make it easy to convert a language model pretrained on large amounts of text into an instruction following model using a small amount of additional compute via finetuning or softprompt tuning.
Many additional datasets are being prepared by various community members and will be incorporated into this dataset as we are able to verify the quality and formatting of the data. Our goal is to make helpful and non-toxic instruction tuned models available to everyone.
OIG is currently at 44M. We will continue to publish ever larger diverse instruction datasets with the goal of creating 1 trillion tokens of diverse instructions - enough to pretrain an LLM from scratch.
It is best to download directly the individual jsonl files that you wish to use, instead of loading them through HF `load_dataset`. https://huggingface.co/datasets/laion/OIG/tree/main
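Once downloaded, each jsonl line can be parsed into conversation turns. A minimal sketch, assuming the `<human>:`/`<bot>:` markers described above (real files may deviate, so treat this as a starting point rather than a spec):

```python
import json
import re

TURN_MARKER = re.compile(r"<(human|bot)>:")

def parse_turns(text):
    """Split an OIG 'text' field into (speaker, utterance) pairs.

    Assumes the <human>:/<bot>: markers described above; real files
    may deviate, so treat this as a starting point, not a spec.
    """
    pieces = TURN_MARKER.split(text)
    # pieces looks like ['', 'human', ' ...', 'bot', ' ...']
    return [(speaker, utterance.strip())
            for speaker, utterance in zip(pieces[1::2], pieces[2::2])]

line = '{"text": "<human>: What is 2+2?\\n<bot>: 4"}'
record = json.loads(line)
print(parse_turns(record["text"]))
# [('human', 'What is 2+2?'), ('bot', '4')]
```

The same function handles multi-turn entries, since the regex split alternates speaker labels and utterances for as many turns as the line contains.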
## unified_abstract_infill.jsonl (~232000)
dbpedia and wikipedia snippets combined with a small portion of https://github.com/google-research/dialog-inpainting
## unified_basic.jsonl (30)
## unified_conv_finqa.jsonl (~9000)
https://github.com/czyssrs/ConvFinQA
## unified_cuad.jsonl (~500)
https://www.atticusprojectai.org/cuad
## unified_essays.jsonl (~2000)
- essays available on the public web
## unified_grade_school_math_instructions.jsonl (~9000)
- https://github.com/openai/grade-school-math
## unified_hc3_human.jsonl (~58000)
## unified_image_prompts_instructions.jsonl (~15000)
- A very small subset of LAION-400M
## unified_joke_explanations.jsonl (356)
- Crawled from public internet.
## unified_mathqa_flanv2_kojma_cot.jsonl (~107000)
- https://huggingface.co/datasets/math_qa
## unified_merged_code_xp3.jsonl (~67000)
- https://huggingface.co/datasets/bigscience/xP3
## unified_multi_news.jsonl (~90000)
- https://www.tensorflow.org/datasets/catalog/multi_news
## unified_multi_sum.jsonl (~1700000)
## unified_nq.jsonl (~307000)
## unified_openai_summarize_tldr.jsonl (~233000)
- https://github.com/openai/summarize-from-feedback
## unified_oscar_en_sample_dialog.jsonl (~2670000)
- https://oscar-project.org/
- https://huggingface.co/datasets/TurkuNLP/register_oscar
## unified_plot_screenplay_books_dialog.jsonl (~8000)
- https://github.com/markriedl/WikiPlots extracted from Wikipedia, snippets from the Pile’s https://huggingface.co/datasets/the_pile_books3, and snippets of screenplays available on the public web.
## unified_sqlv1.jsonl (~17000)
- public text 2 sql datasets.
## unified_sqlv2.jsonl(~24000)
- public text 2 sql datasets.
## unified_squad_v2.jsonl (~19000)
- https://rajpurkar.github.io/SQuAD-explorer/
## unified_squad_v2_more_neg.jsonl (~19000)
- https://rajpurkar.github.io/SQuAD-explorer/
## unified_ul2_plus_oscar_en_sample_dialog.jsonl (~2900000)
- https://oscar-project.org/
- https://huggingface.co/datasets/TurkuNLP/register_oscar
## unified_unifiedskg_instructions.jsonl (~223000)
- https://github.com/HKUNLP/UnifiedSKG
## unified_unnatural_instructions.jsonl (~238000)
- https://github.com/orhonovich/unnatural-instructions
## unified_xp3_sample.jsonl (~188000)
- https://huggingface.co/datasets/bigscience/xP3
## unified_canadian_parliament.jsonl(~301000)
- https://openparliament.ca/data-download/
## unified_poetry_2_song.jsonl (~12000)
- https://huggingface.co/datasets/merve/poetry
- https://huggingface.co/datasets/matthh/gutenberg-poetry-corpus
## unified_flan.jsonl (~2700000)
- https://github.com/google-research/FLAN/tree/main/flan/v2
## unified_ni.jsonl (~256000)
- https://github.com/allenai/natural-instructions
## unified_p3.jsonl (~31000000)
- https://huggingface.co/datasets/bigscience/P3
## unified_soda_dialog.jsonl (~1200000)
- https://huggingface.co/datasets/allenai/soda
## unified_rallio_soda_upgraded_2048.jsonl (~210000)
- https://huggingface.co/datasets/allenai/soda
- a newer version of the unified_soda_dialog dataset, with multiple dialogs on one line
- we recommend using either unified_soda_dialog.jsonl or unified_rallio_soda_upgraded_2048.jsonl, and not both.
## unified_rallio_safety_and_prosocial.jsonl (~319000)
- Generated from public datasets and generated from Wiki similar to the chip2 data
- Find a full list in the end of the document
- This dataset also includes https://huggingface.co/datasets/allenai/prosocial-dialog and https://huggingface.co/datasets/Anthropic/hh-rlhf
## unified-chip2.jsonl / OIG-small-chip2 (~210000):
This dataset was created as part of the LAION OA effort by @rallio67 and other members of the LAION contributors. It is a high quality dataset intended to be mixed into a large pre-train dataset and can be used for a final finetune. Chip2 contains:
### Python Code Examples (~6,000):
A set of instruction / response pairs where the User requests the agent to generate a python function. These examples were generated using a large language model and few shot prompting with python code verified to execute. There are also ~3000 examples of manually curated one line python code examples from the Conala publication (see: https://conala-corpus.github.io/)
### Natural Instruction Examples (~124,000):
A balanced set of diverse natural and factual questions and answers made using few shot prompted UL2 20B and an instruction tuned GPT-NeoX-20B model (Chip) and then rejection sampled using multiple automatic evaluations to remove low quality outputs and to filter out factually inaccurate answers. Also includes some filtered natural instructions from Anthropic Helpful instructions (see: https://github.com/anthropics/hh-rlhf).
### Generic Harmless Instruction Examples (~6,500):
A set of instruction / response pairs sourced from the Anthropic redteam paper github (see: https://github.com/anthropics/hh-rlhf). This dataset includes a lot of data regarding real humans trying to make the Anthropic language models say harmful/toxic/trolling things. For this dataset only examples that were rated lowly on the harmful scale (0,1,2 out of 4, where 4 is the most toxic) were included. Again, only the first lines of dialogue (instruction, first_agent_response) were retained.
### Instruction/Responses with Lists (~14,000):
A set of filtered and reformatted instruction / response pairs where the agent response contains a list. Sourced from the Anthropic github (see: https://github.com/anthropics/hh-rlhf). Sourced from wikihow text lists created by b-mc2 (https://huggingface.co/datasets/b-mc2/wikihow_lists). And rejection filtered instruction response pairs generated by Chip20B that contained lists. All lists are formatted in a similar style.
### Follow-up questions (~12,500):
Examples of instructions and responses where an appropriate response is to ask for more information from the prompter. These examples were generated from a combination of few shot prompted UL2 20B (to generate natural questions) and a large dialogue prompted language model to generate the responses containing follow-up questions.
### Wikipedia Toxic Adversarial Questions (~12,000):
Questions and answers generated from wikipedia articles that discuss potentially sensitive topics (flagged as potentially toxic by an early toxicity detection model).
### Grade School Math GSM8K (~9,000):
GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7.5K training problems and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer. A bright middle school student should be able to solve every problem. It can be used for multi-step mathematical reasoning. (https://github.com/openai/grade-school-math)
### Reasoning Instructions (~4,500):
Examples from the Com2Sense and Strategy QA datasets that were reformatted into natural instructions using large language models with few shot prompting and additional quality filtering steps.
### Character and Scene Descriptions (~30,000):
Examples of instructions and responses for the generation of character or scene descriptions. Scenes were sourced from video game wikis and reformatted into instruction / response format using large language models or generated by few shot prompting with large language models.
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/LAION-AI/Open-Instruction-Generalist
Join our Discord to talk with other team members working on this! https://discord.gg/xBPBXfcFHd
## Update: March 20, 2023
- Added the metadata column to all datasets to alleviate issues with HF datasets loader.
- Broke some of the p3 dialogs into parts for ease of loading.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The OIG dataset that is authored by LAION volunteers is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as Wikipedia data which is licensed under CC-BY-SA, or web-crawled data which is used under fair use principles.
## Acknowledgement
- We would like to thank all of our amazing LAION volunteers including: @Rallio, @Jue, @Ce Zhang, @Player-1, @Laurel, @danielpatrickhug, @Jjmachan, @Mylo, @Khalid, @Coco.han, @Jordiclive, @Pszemraj, all volunteers from the Open Assistant project who initially created synthetic data, and many others.
- We would like to thank Together for their tireless dedication to the open source and AI community and their contribution to many of the datasets.
- We would like to thank AI Horde and user @Db0 for their incredible contribution of filtered data that were flagged as unethical.
- Please check out our related project: https://github.com/LAION-AI/Open-Assistant for our work in human feedback gathering and RLHF.
- Lastly, Ontocord.ai’s founders are grateful to have the opportunity to create a portion of the data augmentation and safety-moderation code for this project.
|
zxvix/pubmed_counterfactual | 2023-08-25T06:56:31.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 211 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
- name: title
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 4026750.12
num_examples: 991
download_size: 2241988
dataset_size: 4026750.12
---
# Dataset Card for "pubmed_counterfactual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_10000_clean2_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T04:54:58.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 211 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 111490531
dataset_size: 472880000
---
# Dataset Card for "autotree_pmlb_10000_clean2_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/0eb4c62d | 2023-10-01T19:54:21.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 211 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 175
num_examples: 10
download_size: 1353
dataset_size: 175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "0eb4c62d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
conv_ai | 2022-11-03T16:30:55.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"evaluating-dialogue-systems",
"region:us"
] | null | ConvAI is a dataset of human-to-bot conversations labelled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains the information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers. | null | null | 2 | 210 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
- text-classification
task_ids:
- text-scoring
paperswithcode_id: null
pretty_name: ConvAi
tags:
- evaluating-dialogue-systems
dataset_info:
features:
- name: id
dtype: int32
- name: dialogId
dtype: int32
- name: context
dtype: string
- name: users
list:
- name: userType
dtype: string
- name: id
dtype: string
- name: evaluation
list:
- name: breadth
dtype: int32
- name: userId
dtype: string
- name: quality
dtype: int32
- name: engagement
dtype: int32
- name: thread
list:
- name: evaluation
dtype: int32
- name: text
dtype: string
- name: userId
dtype: string
- name: time
dtype: int32
config_name: conv_ai
splits:
- name: train
num_bytes: 3924265
num_examples: 2778
download_size: 5804611
dataset_size: 3924265
---
# Dataset Card for ConvAi
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
ekinakyurek/ftrace | 2022-10-23T05:56:05.000Z | [
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:TRex",
"source_datasets:Lama",
"language:en",
"license:cc-by-sa-4.0",
"license:cc-by-nc-4.0",
"arxiv:2205.11482",
"region:us"
] | ekinakyurek | Factual Tracing Dataset that contains queries and abstracts, and their corresponding ground truth. | \ | null | 3 | 210 | ---
language:
- en
license:
- cc-by-sa-4.0
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: FTRACE
size_categories:
- 1M<n<10M
source_datasets:
- TRex
- Lama
task_categories:
- influence-attribution
- information-retrieval
- question-answering-retrieval
task_ids:
- influence-attribution
- masked-language-modeling
---
# Dataset Card for "FTRACE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/ekinakyurek/ftrace
- **Repository:** https://github.com/ekinakyurek/influence
- **Paper:** https://arxiv.org/pdf/2205.11482.pdf
- **Point of Contact:** [Ekin Akyürek](mailto:akyurek@mit.edu)
- **Size of downloaded dataset files:** 113.7 MB
- **Size of the generated dataset:** 1006.6 MB
- **Total amount of disk used:** 1120.3 MB
### Dataset Summary
FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model’s predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries for which we trace knowledge are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TRex corpus (Elsahar et al., 2018). We annotate the extracted sentences with their stated facts, and these facts can be matched with the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. However, one can use the same data in other formats, for example auto-regressive completion, by processing the `inputs_pretokenized` and `targets_pretokenized` fields.
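As a rough sketch of that conversion, assuming exactly one `<extra_id_0>` sentinel per example (as in the T5-style examples shown on this card), the masked pair can be turned into a prefix/completion pair:

```python
def to_autoregressive(inputs_pretokenized, targets_pretokenized,
                      sentinel="<extra_id_0>"):
    """Convert a single-sentinel masked-LM pair into (prefix, completion).

    Assumes exactly one sentinel per example, as in the examples on
    this card; multi-sentinel T5 targets would need more handling.
    """
    answer = targets_pretokenized.replace(sentinel, "").strip()
    prefix = inputs_pretokenized.split(sentinel)[0].rstrip()
    return prefix, answer

prefix, answer = to_autoregressive(
    "Paul Ehrlich used to work in <extra_id_0> .",
    "<extra_id_0> Frankfurt",
)
print(prefix)  # Paul Ehrlich used to work in
print(answer)  # Frankfurt
```

An auto-regressive model would then be scored on producing `answer` as the continuation of `prefix`.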
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### Abstracts
- **Size of downloaded dataset files:** 112 MB
- **Size of the generated dataset:** 884 MB
- **Total amount of disk used:** 996 MB
An example of 'abstract' looks as follows.
```
{"inputs_pretokenized": "The name Austroasiatic comes from the Latin words for \"south\" and \"Asia\", hence \"<extra_id_0>\".",
"targets_pretokenized": "<extra_id_0> South Asia",
"page_uri": "Q33199",
"masked_uri": "Q771405",
"masked_type": "subject",
"example_uris": "Q33199-1-Q48-Q771405-1",
"facts": "P361,Q48,Q771405;P30,Q48,Q771405",
"id": 8}
```
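For matching abstracts against queries, the semicolon-separated `facts` field can be unpacked into triples. A small sketch, with the layout (`';'`-separated facts, each a `','`-separated triple of a predicate ID and two entity IDs) inferred from the example above rather than stated by the authors:

```python
def parse_facts(facts):
    """Split the 'facts' string into triples.

    Layout inferred from the example above: facts are ';'-separated,
    and each fact is a ','-separated triple of one Wikidata predicate
    ID and two entity IDs.
    """
    return [tuple(fact.split(",")) for fact in facts.split(";")]

print(parse_facts("P361,Q48,Q771405;P30,Q48,Q771405"))
# [('P361', 'Q48', 'Q771405'), ('P30', 'Q48', 'Q771405')]
```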
#### Queries
- **Size of downloaded dataset files:** 1.7 MB
- **Size of the generated dataset:** 8.9 MB
- **Total amount of disk used:** 10.6 MB
An example of 'query' looks as follows.
```
{"inputs_pretokenized": "Paul Ehrlich used to work in <extra_id_0> .",
"targets_pretokenized": "<extra_id_0> Frankfurt",
"uuid": "5b063008-a8ba-4064-9f59-e70102bb8c50",
"obj_uri": "Q1794",
"sub_uri": "Q57089",
"predicate_id": "P937",
"obj_surface": "Frankfurt",
"sub_surface": "Paul Ehrlich"}
```
### Data Fields
The data fields are the same among all splits.
#### Abstracts
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `masked_uri`: a `string` feature.
- `masked_type`: a `string` feature.
- `facts`: a `string` feature.
- `id`: a `string` feature.
- `example_uris`: a `string` feature.
- `page_uri`: a `string` feature.
#### Queries
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `obj_surface`: a `string` feature.
- `sub_surface`: a `string` feature.
- `obj_uri`: a `string` feature.
- `sub_uri`: a `string` feature.
- `predicate_id`: a `string` feature.
- `uuid`: a `string` feature.
### Data Splits
| name | train |
|-----------|------:|
|Abstracts |1560453|
|Queries |31479 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
LAMA: https://github.com/facebookresearch/LAMA
TRex: https://hadyelsahar.github.io/t-rex/
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The parts of this dataset are available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) and [The Creative Commons Attribution-Noncommercial 4.0 International License](https://github.com/facebookresearch/LAMA/blob/master/LICENSE)
### Citation Information
The main paper should be cited as follow:
```
@misc{https://doi.org/10.48550/arxiv.2205.11482,
doi = {10.48550/ARXIV.2205.11482},
url = {https://arxiv.org/abs/2205.11482},
author = {Akyürek, Ekin and Bolukbasi, Tolga and Liu, Frederick and Xiong, Binbin and Tenney, Ian and Andreas, Jacob and Guu, Kelvin},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Tracing Knowledge in Language Models Back to the Training Data},
publisher = {arXiv},
year = {2022},
}
```
Please also cite Petroni et al., 2019 for the query set, and Elsahar et al., 2018 for the abstract set.
```
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
```
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
### Contributions |
IlyaGusev/ru_turbo_saiga | 2023-09-04T13:26:47.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"chat",
"region:us"
] | IlyaGusev | null | null | null | 10 | 210 | ---
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: seed
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 87316730
num_examples: 37731
download_size: 21742388
dataset_size: 87316730
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- ru
tags:
- chat
size_categories:
- 10K<n<100K
---
# Saiga
Dataset of ChatGPT-generated chats in Russian.
<img src="https://cdn.midjourney.com/0db33d04-9d39-45f3-acb2-e5c789852e23/0_3.png" >
Based on the [Baize](https://github.com/project-baize/baize-chatbot) paper.
Code: [link](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_chat.py).
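As a minimal sketch of working with the `messages` field described in the metadata above (the role names `user`/`bot` here are an assumption and should be checked against the loaded data), each chat can be flattened into plain text:

```python
def render_chat(messages):
    """Join a list of {role, content} turns into a readable transcript."""
    role_tags = {"user": "[Пользователь]", "bot": "[Ассистент]"}
    lines = []
    for turn in messages:
        # Fall back to the raw role name for any unexpected role value.
        tag = role_tags.get(turn["role"], f"[{turn['role']}]")
        lines.append(f"{tag} {turn['content']}")
    return "\n".join(lines)

chat = [
    {"role": "user", "content": "Привет!"},
    {"role": "bot", "content": "Привет! Чем я могу помочь?"},
]
print(render_chat(chat))
```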
Prompt:
```
Идёт диалог между пользователем и ИИ ассистентом.
Пользователь и ассистент общаются на тему: {{seed}}
Реплики человека начинаются с [Пользователь], реплики ассистента начинаются с [Ассистент].
Пользователь задаёт вопросы на основе темы и предыдущих сообщений.
Пользователь обрывает беседу, когда у него не остается вопросов.
Ассистент даёт максимально полные, информативные, точные и творческие ответы.
Ассистент старается не задавать вопросов, за исключением уточняющих.
Ассистент может отвечать несколькими абзацами.
Ассистент может использовать Markdown.
Закончи диалог точно в таком же формате.
[Пользователь] Привет!
[Ассистент] Привет! Чем я могу помочь?
```
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose [terms of use](https://openai.com/policies/terms-of-use) prohibit us from developing models that compete with OpenAI. That restriction applies to us, not to you. |
bigcode/ta-prompt | 2023-05-04T12:20:22.000Z | [
"language:code",
"license:apache-2.0",
"region:us"
] | bigcode | null | null | null | 151 | 210 | ---
license: apache-2.0
language:
- code
programming_language:
- Java
- JavaScript
- Python
---
# Dataset summary
This repository is dedicated to prompts used to perform in-context learning with [starcoder](https://huggingface.co/bigcode/starcoder). StarCoder is an autoregressive language model trained on both code and natural language text, and it can be turned into an AI-powered technical assistant by prepending conversations to
its 8192-token context window.
# Format
The prompt is a .txt file which contains multiple conversations between a human and the assistant. Here is the format
```
-----
Human: <instruction>
Assistant: <answer>
-----
Human: <instruction>
Assistant: <answer>
Human: <instruction>
Assistant: <answer>
.
.
.
-----
```
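As a sketch of how a prompt file in this format might be used (the separator handling below is an assumption based on the format shown, not tooling shipped with this repository), a new instruction can be appended as one more `Human:` turn for the model to complete:

```python
def build_prompt(ta_prompt: str, instruction: str) -> str:
    """Append a new Human turn so the model continues after 'Assistant:'."""
    return f"{ta_prompt.rstrip()}\n-----\nHuman: {instruction}\nAssistant:"

# A toy prompt standing in for the real .txt file.
ta_prompt = "-----\nHuman: What are you?\nAssistant: I am a technical assistant."
print(build_prompt(ta_prompt, "Write a function to reverse a string."))
```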
# Use cases
We want the technical assistant to cover a diverse set of use cases
- **Code-to-text**:
- `What is the purpose of the following code?<code>`
- `What is the bug in the following code?<code>`
- **Text-to-code**:
- `Write/Design/Implement a function to <task>`
- **Code-to-code**:
- `Translate this <code> from <programming language> to <programming language>.`
- **Text-to-text**:
- `What is <technical concept>`
- **General-purpose Q&A**
- `What are you?`
- `What is your purpose?`
# Scope of the work
As a model designed for coding tasks, the user should not expect the model to output relevant answers when prompted with a general-purpose question. When it comes to coding
requests, the output of the model should be post-processed before it is tested. |
readerbench/ro-offense-sequences | 2023-09-23T18:28:19.000Z | [
"task_categories:token-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:readerbench/ro-offense",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"region:us"
] | readerbench | null | null | null | 0 | 210 | ---
license: apache-2.0
annotations_creators:
- expert-generated
language_creators:
- found
task_categories:
- token-classification
language:
- ro
multilinguality:
- monolingual
source_datasets:
- readerbench/ro-offense
tags:
- hate-speech-detection
task_ids:
- hate-speech-detection
pretty_name: RO-Offense-Sequences
size_categories:
- 1K<n<10K
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
hate speech).'
---
# Dataset Card for "RO-Offense-Sequences"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
<!--
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
-->
- **Homepage:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Repository:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Point of Contact:** [Teodora-Andreea Ion](mailto:theoion21.andr@gmail.com)
### Dataset Summary
A novel Romanian-language dataset for offensive sequence detection, with manually
annotated offensive sequences from the comments of a local Romanian sports news website (gsp.ro),
resulting in 4,800 annotated messages.
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': 5,
'text':'PLACEHOLDER TEXT',
'offensive_substrings': ['substr1','substr2'],
'offensive_sequences': [(0,10), (16,20)]
}
```
### Data Fields
- `id`: The unique comment ID, corresponding to the ID in [RO Offense](https://huggingface.co/datasets/readerbench/ro-offense)
- `text`: full comment text
- `offensive_substrings`: a list of offensive substrings. Can contain duplicates if some offensive substring appears twice
- `offensive_sequences`: a list of tuples with (start, end) position of the offensive sequences
*Attention*: the sequences are computed with `\n` as the line separator! Git might convert the csv to `\r\n`.
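As a sketch of how the two annotation fields line up (using a toy instance, and assuming `\n` separators as cautioned above), each `(start, end)` pair should slice the comment text to the matching substring:

```python
def check_alignment(text, substrings, sequences):
    """Verify each (start, end) span slices out the corresponding substring."""
    return all(
        text[start:end] == substr
        for (start, end), substr in zip(sequences, substrings)
    )

# Toy instance with two flagged spans.
text = "foo BADWORD bar WORSE"
print(check_alignment(text, ["BADWORD", "WORSE"], [(4, 11), (16, 21)]))  # True
```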
### Data Splits
| name |train|validate|test|
|---------|----:|---:|---:|
|ro|4,000|400|400|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for the Romanian language.
### Source Data
Comments on sports news articles.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Sports News Article readers
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under Apache-2.0 license
### Citation Information
```
tbd
```
### Contributions
|
tucan-ai/summaries-de-v1 | 2023-09-29T05:59:39.000Z | [
"region:us"
] | tucan-ai | null | null | null | 0 | 210 | Entry not found |
result-kand2-sdxl-wuerst-karlo/5e4a199e | 2023-10-01T22:19:01.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 210 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 247
num_examples: 10
download_size: 1418
dataset_size: 247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "5e4a199e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl/clean_prs2 | 2023-09-15T17:58:59.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 209 | ---
dataset_info:
features:
- name: bucket
dtype: string
- name: pull_request_info
struct:
- name: org.id
dtype: int64
- name: public
dtype: bool
- name: pull_request.additions
dtype: int64
- name: pull_request.body
dtype: string
- name: pull_request.changed_files
dtype: int64
- name: pull_request.closed_at
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.commits
dtype: int64
- name: pull_request.created_at
dtype: string
- name: pull_request.deletions
dtype: int64
- name: pull_request.guid
dtype: string
- name: pull_request.id
dtype: int64
- name: pull_request.merged_at
dtype: string
- name: pull_request.merged_by.login
dtype: string
- name: pull_request.milestone.description
dtype: string
- name: pull_request.milestone.number
dtype: int64
- name: pull_request.milestone.title
dtype: string
- name: pull_request.number
dtype: int64
- name: pull_request.review_comments
dtype: int64
- name: pull_request.state
dtype: string
- name: pull_request.title
dtype: string
- name: pull_request.user.id
dtype: int64
- name: pull_request.user.login
dtype: string
- name: repo.id
dtype: int64
- name: repo.name
dtype: string
- name: head_repo_info
struct:
- name: pull_request.head.label
dtype: string
- name: pull_request.head.ref
dtype: string
- name: pull_request.head.repo.default_branch
dtype: string
- name: pull_request.head.repo.description
dtype: string
- name: pull_request.head.repo.homepage
dtype: string
- name: pull_request.head.repo.language
dtype: string
- name: pull_request.head.repo.license.name
dtype: string
- name: pull_request.head.repo.name
dtype: string
- name: pull_request.head.repo.owner.login
dtype: string
- name: pull_request.head.repo.owner.type
dtype: string
- name: pull_request.head.repo.private
dtype: bool
- name: pull_request.head.repo.stargazers_count
dtype: int64
- name: pull_request.head.sha
dtype: string
- name: pull_request.head.user.login
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: base_repo_info
struct:
- name: pull_request.base.label
dtype: string
- name: pull_request.base.ref
dtype: string
- name: pull_request.base.repo.default_branch
dtype: string
- name: pull_request.base.repo.description
dtype: string
- name: pull_request.base.repo.forks_count
dtype: int64
- name: pull_request.base.repo.homepage
dtype: string
- name: pull_request.base.repo.language
dtype: string
- name: pull_request.base.repo.license.name
dtype: string
- name: pull_request.base.repo.name
dtype: string
- name: pull_request.base.repo.open_issues_count
dtype: int64
- name: pull_request.base.repo.owner.login
dtype: string
- name: pull_request.base.repo.owner.type
dtype: string
- name: pull_request.base.repo.private
dtype: bool
- name: pull_request.base.repo.stargazers_count
dtype: int64
- name: pull_request.base.repo.watchers_count
dtype: int64
- name: pull_request.base.sha
dtype: string
- name: pull_request.base.user.login
dtype: string
- name: pull_request.base.user.type
dtype: string
- name: pull_request.comments
dtype: int64
- name: pull_request.label.name
dtype: 'null'
- name: pull_request.review_comments
dtype: int64
- name: events
list:
- name: action
dtype: string
- name: actor.id
dtype: int64
- name: actor.login
dtype: string
- name: comment.author_association
dtype: string
- name: comment.body
dtype: string
- name: comment.commit_id
dtype: string
- name: comment.created_at
dtype: string
- name: comment.diff_hunk
dtype: string
- name: comment.id
dtype: int64
- name: comment.in_reply_to_id
dtype: int64
- name: comment.line
dtype: int64
- name: comment.original_commit_id
dtype: string
- name: comment.original_line
dtype: int64
- name: comment.original_position
dtype: int64
- name: comment.original_start_line
dtype: int64
- name: comment.path
dtype: string
- name: comment.position
dtype: int64
- name: comment.side
dtype: string
- name: comment.start_line
dtype: int64
- name: comment.start_side
dtype: string
- name: comment.updated_at
dtype: string
- name: created_at
dtype: timestamp[us, tz=UTC]
- name: issue.author
dtype: string
- name: issue.comment
dtype: string
- name: issue.comment_id
dtype: float64
- name: review.author_association
dtype: string
- name: review.body
dtype: string
- name: review.commit_id
dtype: string
- name: review.id
dtype: int64
- name: review.state
dtype: string
- name: review.submitted_at
dtype: string
- name: type
dtype: string
- name: user.login
dtype: string
- name: user.type
dtype: string
splits:
- name: train
num_bytes: 54214029
num_examples: 10000
download_size: 16095878
dataset_size: 54214029
---
# Dataset Card for "clean_prs2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
germaner | 2023-01-25T14:30:52.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"license:apache-2.0",
"region:us"
] | null | GermaNER is a freely available statistical German Named Entity Tagger based on conditional random fields(CRF). The tagger is trained and evaluated on the NoSta-D Named Entity dataset, which was used in the GermEval 2014 for named entity recognition. The tagger comes close to the performance of the best (proprietary) system in the competition with 77% F-measure (this is the latest result; the one reported in the paper is 76%) test set performance on the four standard NER classes (PERson, LOCation, ORGanisation and OTHer).
We describe a range of features and their influence on German NER classification and provide a comparative evaluation and some analysis of the results. The software components, the training data and all data used for feature generation are distributed under permissive licenses, thus this tagger can be used in academic and commercial settings without restrictions or fees. The tagger is available as a command-line tool and as an Apache UIMA component. | @inproceedings{Benikova2015GermaNERFO,
title={GermaNER: Free Open German Named Entity Recognition Tool},
author={Darina Benikova and S. Yimam and Prabhakaran Santhanam and Chris Biemann},
booktitle={GSCL},
year={2015}
} | null | 0 | 207 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- de
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: GermaNER
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-OTH
'3': B-PER
'4': I-LOC
'5': I-ORG
'6': I-OTH
'7': I-PER
'8': O
splits:
- name: train
num_bytes: 9059606
num_examples: 26200
download_size: 4363657
dataset_size: 9059606
---
# Dataset Card for GermaNER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/tudarmstadt-lt/GermaNER
- **Paper:** https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf
- **Point of Contact:** [Darina Benikova](mailto:benikova@aiphes.tu-darmstadt.de)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
German
## Dataset Structure
### Data Instances
An example instance looks as follows:
```
{
'id': '3',
'ner_tags': [1, 5, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
'tokens': ['Bayern', 'München', 'ist', 'wieder', 'alleiniger', 'Top-', 'Favorit', 'auf', 'den', 'Gewinn', 'der', 'deutschen', 'Fußball-Meisterschaft', '.']
}
```
### Data Fields
Each instance in the dataset has:
- `id`: an id as a string
- `tokens`: sequence of tokens
- `ner_tags`: NER tags for each token (encoded as IOB)
NER tags can be: 'B-LOC' (0), 'B-ORG' (1), 'B-OTH' (2), 'B-PER' (3), 'I-LOC' (4), 'I-ORG' (5), 'I-OTH' (6), 'I-PER' (7), 'O' (8)
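A minimal sketch of decoding integer `ner_tags` into the label strings listed above (the label order mirrors this card's metadata; verify it against the `features` of the loaded dataset):

```python
# IOB label order as declared in the dataset metadata above.
LABELS = ["B-LOC", "B-ORG", "B-OTH", "B-PER", "I-LOC", "I-ORG", "I-OTH", "I-PER", "O"]

def decode_tags(ner_tags):
    """Map integer class ids back to IOB label strings."""
    return [LABELS[t] for t in ner_tags]

# First three tags of the example instance shown earlier:
print(decode_tags([1, 5, 8]))  # ['B-ORG', 'I-ORG', 'O']
```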
### Data Splits
The dataset provides only a train split (26,200 instances).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License of GermaNER:
```
GermaNER is licensed under ASL 2.0 and other lenient licenses, allowing its use for academic and commercial purposes without restrictions. The licenses of its components are mixed and are individually listed in Data/Licenses.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
```
### Citation Information
```bibtex
@inproceedings{Benikova2015GermaNERFO,
title={GermaNER: Free Open German Named Entity Recognition Tool},
author={Darina Benikova and Seid Muhie Yimam and P. Santhanam and Chris Biemann},
booktitle={GSCL},
year={2015}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
MLCommons/peoples_speech | 2023-05-16T16:11:10.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1T<n",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"license:cc-by-2.5",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
"license:cc-by-sa-4.0",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2111.09344",
"region:us"
] | MLCommons | The People's Speech is a free-to-download 30,000-hour and growing supervised
conversational English speech recognition dataset licensed for academic and
commercial usage under CC-BY-SA (with a CC-BY subset). | @article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Ceron and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: A Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 24 | 207 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: People's Speech
tags:
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
---
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{
  "id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
  "audio": {
    "path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
    "array": array([-6.10351562e-05, ...]),
    "sampling_rate": 16000
  },
  "duration_ms": 14490,
  "text": "contends that the suspension clause requires a [...]"
}
```
### Data Fields
```python
{
    "id": datasets.Value("string"),
    "audio": datasets.Audio(sampling_rate=16_000),
    "duration_ms": datasets.Value("int32"),
    "text": datasets.Value("string"),
}
```
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
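Given the corpus size, streaming is the practical way to inspect any of the configurations above. The sketch below is illustrative rather than official usage: the repo id and config name are assumptions taken from this card, and `ms_to_seconds` is a hypothetical helper for the `duration_ms` field.

```python
def ms_to_seconds(duration_ms: int) -> float:
    """Convert the card's integer duration_ms field to seconds."""
    return duration_ms / 1000.0


if __name__ == "__main__":
    # Third-party dependency; streaming avoids downloading the full corpus.
    from datasets import load_dataset

    # Repo id and config name are assumptions based on this card.
    ds = load_dataset("MLCommons/peoples_speech", "cc-by-clean",
                      split="train", streaming=True)
    for example in ds:
        print(example["id"], ms_to_seconds(example["duration_ms"]),
              example["text"][:60])
        break  # inspect only the first streamed example
```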
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native speakers of American English to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of the data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript but not in the audio, or in the audio but not in the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
joelniklaus/eurlex_resources | 2023-05-10T08:04:28.000Z | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-4.0",
"region:us"
] | joelniklaus | null | 4 | 207 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "EurlexResources: A Corpus Covering the Largest EURLEX Resources"
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/eurlex)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
config = "de_caselaw" # {lang}_{resource}
dataset = load_dataset("joelito/eurlex_resources", config, split='train', streaming=True)
```
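Config names follow the `{lang}_{resource}` pattern shown above. As a hedged, unofficial sketch, a small helper can validate a language/resource pair against the lists on this card before hitting the Hub:

```python
# Language and resource lists are taken from this dataset card.
LANGS = {"bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga",
         "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk",
         "sl", "sv"}
RESOURCES = {"caselaw", "decision", "directive", "intagr", "proposal",
             "recommendation", "regulation"}


def config_name(lang: str, resource: str) -> str:
    """Build a '{lang}_{resource}' config name, validating both parts."""
    if lang not in LANGS:
        raise ValueError(f"unknown language: {lang!r}")
    if resource not in RESOURCES:
        raise ValueError(f"unknown resource: {resource!r}")
    return f"{lang}_{resource}"


if __name__ == "__main__":
    from datasets import load_dataset

    dataset = load_dataset("joelito/eurlex_resources",
                           config_name("de", "caselaw"),
                           split="train", streaming=True)
```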
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The following resource types are supported: caselaw, decision, directive, intagr, proposal, recommendation, regulation
More information about the resource types can be found here:
- Caselaw: [EU](https://eur-lex.europa.eu/collection/eu-law/eu-case-law.html)
- Decision: [EU](https://eur-lex.europa.eu/EN/legal-content/summary/european-union-decisions.html), [Wikipedia](https://en.wikipedia.org/wiki/Decision_(European_Union))
- Directive: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Directive_(European_Union))
- Recommendation: [EU](https://eur-lex.europa.eu/EN/legal-content/glossary/recommendation.html), [Wikipedia](https://en.wikipedia.org/wiki/Recommendation_(European_Union))
- Regulation: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Regulation_(European_Union))
- Intagr: [EU](https://eur-lex.europa.eu/collection/eu-law/inter-agree.html), [Wikipedia](https://en.wikipedia.org/wiki/Treaties_of_the_European_Union)
- Proposal: No resource found
| Source | Size (MB) | Words | Documents | Words/Document |
|:-------------------|------------:|------------:|------------:|-----------------:|
| all_all | 180668 | 12106556233 | 8306749 | 1457 |
| all_caselaw | 34939 | 3413551598 | 2487794 | 1372 |
| all_decision | 28519 | 1698585620 | 1267402 | 1340 |
| all_directive | 4786 | 368577940 | 104187 | 3537 |
| all_intagr | 11421 | 743271516 | 274485 | 2707 |
| all_proposal | 26526 | 2087989530 | 702392 | 2972 |
| all_recommendation | 1886 | 164979037 | 80277 | 2055 |
| all_regulation | 72590 | 3629600992 | 3390212 | 1070 |
| bg_all | 7819 | 398067053 | 348691 | 1141 |
| bg_caselaw | 1588 | 109749174 | 104434 | 1050 |
| bg_decision | 1248 | 58817972 | 54075 | 1087 |
| bg_directive | 263 | 15731608 | 4388 | 3585 |
| bg_intagr | 603 | 31292848 | 11581 | 2702 |
| bg_proposal | 1083 | 60674956 | 29251 | 2074 |
| bg_recommendation | 89 | 5588991 | 3321 | 1682 |
| bg_regulation | 2943 | 116211504 | 141641 | 820 |
| cs_all | 8360 | 471961631 | 449793 | 1049 |
| cs_caselaw | 1163 | 110005022 | 104519 | 1052 |
| cs_decision | 1102 | 58921128 | 54075 | 1089 |
| cs_directive | 186 | 13951134 | 4388 | 3179 |
| cs_intagr | 449 | 28106332 | 11581 | 2426 |
| cs_proposal | 840 | 61838692 | 29252 | 2113 |
| cs_recommendation | 64 | 5416549 | 3323 | 1630 |
| cs_regulation | 4557 | 193722774 | 242655 | 798 |
| da_all | 8932 | 671484862 | 332500 | 2019 |
| da_caselaw | 1746 | 185589641 | 88234 | 2103 |
| da_decision | 1356 | 89498535 | 54085 | 1654 |
| da_directive | 207 | 17525792 | 4388 | 3994 |
| da_intagr | 506 | 35596169 | 11582 | 3073 |
| da_proposal | 1399 | 119759476 | 29257 | 4093 |
| da_recommendation | 100 | 9463897 | 3352 | 2823 |
| da_regulation | 3618 | 214051352 | 141602 | 1511 |
| de_all | 9607 | 695512401 | 348290 | 1996 |
| de_caselaw | 1930 | 193232441 | 104228 | 1853 |
| de_decision | 1449 | 93688222 | 53980 | 1735 |
| de_directive | 218 | 17337760 | 4385 | 3953 |
| de_intagr | 531 | 36791153 | 11580 | 3177 |
| de_proposal | 1556 | 126987454 | 29219 | 4346 |
| de_recommendation | 109 | 9608034 | 3318 | 2895 |
| de_regulation | 3813 | 217867337 | 141580 | 1538 |
| el_all | 12469 | 696216541 | 349667 | 1991 |
| el_caselaw | 2951 | 202027703 | 105138 | 1921 |
| el_decision | 1823 | 94919886 | 54150 | 1752 |
| el_directive | 321 | 19411959 | 4390 | 4421 |
| el_intagr | 701 | 38965777 | 11584 | 3363 |
| el_proposal | 2085 | 128005737 | 29290 | 4370 |
| el_recommendation | 145 | 9344866 | 3357 | 2783 |
| el_regulation | 4443 | 203540613 | 141758 | 1435 |
| en_all | 9217 | 769465561 | 348641 | 2207 |
| en_caselaw | 1846 | 222891827 | 104422 | 2134 |
| en_decision | 1504 | 114626013 | 54054 | 2120 |
| en_directive | 204 | 18860876 | 4388 | 4298 |
| en_intagr | 499 | 39029843 | 11581 | 3370 |
| en_proposal | 1538 | 140781768 | 29242 | 4814 |
| en_recommendation | 97 | 10091809 | 3320 | 3039 |
| en_regulation | 3530 | 223183425 | 141634 | 1575 |
| es_all | 8588 | 725125274 | 348443 | 2081 |
| es_caselaw | 1870 | 220621730 | 104312 | 2115 |
| es_decision | 1334 | 98163499 | 54001 | 1817 |
| es_directive | 221 | 21484479 | 4385 | 4899 |
| es_intagr | 516 | 41841805 | 11581 | 3612 |
| es_proposal | 1366 | 133674486 | 29224 | 4574 |
| es_recommendation | 82 | 8864018 | 3319 | 2670 |
| es_regulation | 3199 | 200475257 | 141621 | 1415 |
| et_all | 6090 | 328068754 | 349615 | 938 |
| et_caselaw | 1074 | 93096396 | 105111 | 885 |
| et_decision | 1069 | 50752324 | 54159 | 937 |
| et_directive | 177 | 11555930 | 4390 | 2632 |
| et_intagr | 436 | 24018147 | 11584 | 2073 |
| et_proposal | 810 | 51600852 | 29283 | 1762 |
| et_recommendation | 61 | 4451369 | 3355 | 1326 |
| et_regulation | 2464 | 92593736 | 141733 | 653 |
| fi_all | 7346 | 404265224 | 349633 | 1156 |
| fi_caselaw | 1596 | 126525296 | 105119 | 1203 |
| fi_decision | 1227 | 59659475 | 54163 | 1101 |
| fi_directive | 204 | 12766491 | 4389 | 2908 |
| fi_intagr | 463 | 25392311 | 11584 | 2192 |
| fi_proposal | 1075 | 69198401 | 29288 | 2362 |
| fi_recommendation | 73 | 5070392 | 3356 | 1510 |
| fi_regulation | 2707 | 105652858 | 141734 | 745 |
| fr_all | 9937 | 828959218 | 348295 | 2380 |
| fr_caselaw | 2158 | 246262666 | 104228 | 2362 |
| fr_decision | 1473 | 108648744 | 53981 | 2012 |
| fr_directive | 222 | 20308801 | 4385 | 4631 |
| fr_intagr | 536 | 41986012 | 11580 | 3625 |
| fr_proposal | 1592 | 149134298 | 29218 | 5104 |
| fr_recommendation | 112 | 11510415 | 3318 | 3469 |
| fr_regulation | 3845 | 251108282 | 141585 | 1773 |
| ga_all | 1028 | 65030095 | 349778 | 185 |
| ga_caselaw | 11 | 696305 | 105205 | 6 |
| ga_decision | 87 | 4415457 | 54189 | 81 |
| ga_directive | 18 | 1512027 | 4390 | 344 |
| ga_intagr | 19 | 1820723 | 11586 | 157 |
| ga_proposal | 289 | 26106889 | 29298 | 891 |
| ga_recommendation | 10 | 902390 | 3361 | 268 |
| ga_regulation | 594 | 29576304 | 141749 | 208 |
| hr_all | 4594 | 258816068 | 348691 | 742 |
| hr_caselaw | 617 | 62432734 | 104434 | 597 |
| hr_decision | 596 | 31911903 | 54075 | 590 |
| hr_directive | 156 | 10855913 | 4388 | 2474 |
| hr_intagr | 450 | 24962086 | 11581 | 2155 |
| hr_proposal | 552 | 33437815 | 29251 | 1143 |
| hr_recommendation | 40 | 3612247 | 3321 | 1087 |
| hr_regulation | 2183 | 91603370 | 141641 | 646 |
| hu_all | 6653 | 375253894 | 349605 | 1073 |
| hu_caselaw | 1278 | 110179375 | 105144 | 1047 |
| hu_decision | 1147 | 57108172 | 54156 | 1054 |
| hu_directive | 200 | 13568304 | 4389 | 3091 |
| hu_intagr | 470 | 27258501 | 11586 | 2352 |
| hu_proposal | 912 | 60882750 | 29291 | 2078 |
| hu_recommendation | 70 | 5312868 | 3357 | 1582 |
| hu_regulation | 2576 | 100943924 | 141682 | 712 |
| it_all | 9586 | 768605772 | 333631 | 2303 |
| it_caselaw | 1889 | 206117726 | 89560 | 2301 |
| it_decision | 1445 | 102848859 | 53983 | 1905 |
| it_directive | 217 | 19687773 | 4385 | 4489 |
| it_intagr | 528 | 40134330 | 11580 | 3465 |
| it_proposal | 1533 | 140713925 | 29218 | 4816 |
| it_recommendation | 109 | 10923431 | 3318 | 3292 |
| it_regulation | 3865 | 248179728 | 141587 | 1752 |
| lt_all | 6400 | 364361783 | 200565 | 1816 |
| lt_caselaw | 1137 | 101808706 | 105477 | 965 |
| lt_decision | 1096 | 55850308 | 21990 | 2539 |
| lt_directive | 185 | 13078983 | 3239 | 4037 |
| lt_intagr | 452 | 27009631 | 7481 | 3610 |
| lt_proposal | 850 | 58553579 | 29272 | 2000 |
| lt_recommendation | 64 | 5121089 | 3363 | 1522 |
| lt_regulation | 2617 | 102939487 | 29743 | 3460 |
| lv_all | 6349 | 363239195 | 349919 | 1038 |
| lv_caselaw | 1153 | 103456811 | 105242 | 983 |
| lv_decision | 1103 | 55512944 | 54224 | 1023 |
| lv_directive | 186 | 13023024 | 4392 | 2965 |
| lv_intagr | 452 | 26693107 | 11630 | 2295 |
| lv_proposal | 96 | 58176216 | 29298 | 1985 |
| lv_recommendation | 64 | 5074494 | 3361 | 1509 |
| lv_regulation | 2545 | 101302599 | 141772 | 714 |
| mt_all | 6540 | 367834815 | 350292 | 1050 |
| mt_caselaw | 1164 | 100423543 | 105479 | 952 |
| mt_decision | 1109 | 55239141 | 54280 | 1017 |
| mt_directive | 203 | 14355266 | 4392 | 3268 |
| mt_intagr | 470 | 27701991 | 11675 | 2372 |
| mt_proposal | 878 | 59749277 | 29274 | 2041 |
| mt_recommendation | 65 | 5039600 | 3363 | 1498 |
| mt_regulation | 2650 | 105325997 | 141829 | 742 |
| nl_all | 9586 | 770312808 | 349407 | 2204 |
| nl_caselaw | 1847 | 206271837 | 105005 | 1964 |
| nl_decision | 1456 | 104060901 | 54152 | 1921 |
| nl_directive | 217 | 19529361 | 4388 | 4450 |
| nl_intagr | 529 | 40247634 | 11584 | 3474 |
| nl_proposal | 1540 | 141258274 | 29279 | 4824 |
| nl_recommendation | 111 | 11002405 | 3355 | 3279 |
| nl_regulation | 3886 | 247942396 | 141644 | 1750 |
| pl_all | 6677 | 406648795 | 350349 | 1160 |
| pl_caselaw | 1231 | 115824759 | 105479 | 1098 |
| pl_decision | 1125 | 60407576 | 54287 | 1112 |
| pl_directive | 197 | 14672157 | 4392 | 3340 |
| pl_intagr | 466 | 28543668 | 11680 | 2443 |
| pl_proposal | 886 | 64728230 | 29317 | 2207 |
| pl_recommendation | 68 | 5769893 | 3363 | 1715 |
| pl_regulation | 2703 | 116702512 | 141831 | 822 |
| pt_all | 8450 | 675152149 | 348449 | 1937 |
| pt_caselaw | 1763 | 198084937 | 104312 | 1898 |
| pt_decision | 1327 | 93278293 | 54007 | 1727 |
| pt_directive | 217 | 19831549 | 4385 | 4522 |
| pt_intagr | 504 | 37999753 | 11581 | 3281 |
| pt_proposal | 1361 | 127461782 | 29224 | 4361 |
| pt_recommendation | 81 | 8396661 | 3319 | 2529 |
| pt_regulation | 3197 | 190099174 | 141621 | 1342 |
| ro_all | 6315 | 415038571 | 350300 | 1184 |
| ro_caselaw | 1110 | 114780999 | 105516 | 1087 |
| ro_decision | 1047 | 59479553 | 54281 | 1095 |
| ro_directive | 206 | 16101628 | 4392 | 3666 |
| ro_intagr | 481 | 31497000 | 11675 | 2697 |
| ro_proposal | 805 | 62130419 | 29274 | 2122 |
| ro_recommendation | 63 | 5977913 | 3363 | 1777 |
| ro_regulation | 2603 | 125071059 | 141799 | 882 |
| sk_all | 6484 | 392235510 | 350570 | 1118 |
| sk_caselaw | 1160 | 110125141 | 105608 | 1042 |
| sk_decision | 1111 | 59576875 | 54349 | 1096 |
| sk_directive | 188 | 14132755 | 4393 | 3217 |
| sk_intagr | 458 | 28298155 | 11676 | 2423 |
| sk_proposal | 859 | 63726047 | 29290 | 2175 |
| sk_recommendation | 66 | 5654790 | 3364 | 1680 |
| sk_regulation | 2642 | 110721747 | 141890 | 780 |
| sl_all | 6222 | 394814289 | 350574 | 1126 |
| sl_caselaw | 1071 | 111238184 | 105608 | 1053 |
| sl_decision | 1075 | 59454906 | 54349 | 1093 |
| sl_directive | 176 | 13908097 | 4393 | 3165 |
| sl_intagr | 441 | 28239078 | 11676 | 2418 |
| sl_proposal | 812 | 63391970 | 29290 | 2164 |
| sl_recommendation | 62 | 5628775 | 3364 | 1673 |
| sl_regulation | 2585 | 112953279 | 141894 | 796 |
| sv_all | 7419 | 500085970 | 351051 | 1424 |
| sv_caselaw | 1585 | 162108645 | 105980 | 1529 |
| sv_decision | 1213 | 71744934 | 54357 | 1319 |
| sv_directive | 195 | 15386273 | 4393 | 3502 |
| sv_intagr | 463 | 29845462 | 11676 | 2556 |
| sv_proposal | 1059 | 86016237 | 29292 | 2936 |
| sv_recommendation | 79 | 7152141 | 3366 | 2124 |
| sv_regulation | 2825 | 127832278 | 141987 | 900 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data has been downloaded using the R package [eurlex](https://cran.r-project.org/web/packages/eurlex/vignettes/eurlexpkg.html) between June and August 2022.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[see also the legal notice](https://eur-lex.europa.eu/content/legal-notice/legal-notice.html)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| ||
DongfuTingle/FeTaQA | 2023-05-08T15:52:42.000Z | [
"task_categories:table-question-answering",
"task_categories:table-to-text",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | DongfuTingle | null | null | null | 3 | 207 | ---
license: mit
task_categories:
- table-question-answering
- table-to-text
- question-answering
language:
- en
pretty_name: fetaqa
size_categories:
- 1K<n<10K
---
This repo is the unofficial FeTaQA dataset from the paper [FeTaQA: Free-form Table Question Answering](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00446/109273/FeTaQA-Free-form-Table-Question-Answering).
The original purpose is to make it easier for users to download and use the dataset. All the data is publicly available on [their official GitHub site](https://github.com/Yale-LILY/FeTaQA).
If there is anything wrong, please raise an issue in the community and I will fix it if I am available. |
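FeTaQA pairs a question with a table and a free-form answer. As an unofficial sketch: models typically consume a linearized table string, and the helper below shows one simple way to build it. The field names `question` and `table_array` are assumptions and may not match this repo's actual schema.

```python
def linearize_table(rows: list[list[str]]) -> str:
    """Flatten a table (first row = header) into a single string."""
    header, *body = rows
    parts = ["[HEADER] " + " | ".join(header)]
    for row in body:
        parts.append("[ROW] " + " | ".join(row))
    return " ".join(parts)


if __name__ == "__main__":
    from datasets import load_dataset

    ds = load_dataset("DongfuTingle/FeTaQA", split="train")  # repo id from this card
    ex = ds[0]
    print(ex["question"])                       # field name is an assumption
    print(linearize_table(ex["table_array"]))   # field name is an assumption
```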
nuprl/MultiPL-T | 2023-09-13T12:57:50.000Z | [
"license:bigcode-openrail-m",
"arxiv:2308.09895",
"region:us"
] | nuprl | null | null | null | 1 | 207 | ---
license: bigcode-openrail-m
dataset_info:
features:
- name: content
dtype: string
splits:
- name: racket
num_bytes: 14482516
num_examples: 40510
- name: ocaml
num_bytes: 19240207
num_examples: 43401
- name: lua
num_bytes: 25917278
num_examples: 48194
download_size: 7491686
dataset_size: 59640001
---
# MultiPL-T fine-tuning sets
This dataset contains the MultiPL-T fine-tuning sets described in the paper "Knowledge Transfer from High-Resource to Low-Resource
Programming Languages for Code LLMs": [arXiv](https://arxiv.org/abs/2308.09895).
## MultiPL-T tuned models
StarCoderBase-1b: https://huggingface.co/nuprl/MultiPLCoder-1b
StarCoderBase-15b: https://huggingface.co/nuprl/MultiPLCoder-15b
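Per the metadata above, each language's fine-tuning set is exposed as its own split (`racket`, `ocaml`, `lua`) with a single `content` string field. A minimal sketch of pulling one split; the split-name check is a hypothetical guard, not part of the dataset's API:

```python
SPLITS = ("racket", "ocaml", "lua")  # splits listed in this card's metadata


def validate_split(name: str) -> str:
    """Guard against split-name typos before hitting the Hub."""
    if name not in SPLITS:
        raise ValueError(f"unknown split {name!r}; expected one of {SPLITS}")
    return name


if __name__ == "__main__":
    from datasets import load_dataset

    ds = load_dataset("nuprl/MultiPL-T", split=validate_split("lua"))
    print(len(ds), "examples;", len(ds[0]["content"]), "chars in the first one")
```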
|
yzhuang/autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T01:51:46.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 207 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 154520000
num_examples: 10000
- name: validation
num_bytes: 154520000
num_examples: 10000
download_size: 50636856
dataset_size: 309040000
---
# Dataset Card for "autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kmyoo/cnn-dailymail-v1-tiny | 2022-12-02T14:00:12.000Z | [
"region:us"
] | kmyoo | null | null | null | 0 | 206 | Entry not found |
BI55/MedText | 2023-07-25T09:30:17.000Z | [
"license:cc-by-4.0",
"region:us"
] | BI55 | null | null | null | 49 | 206 | ---
license: cc-by-4.0
---
This is the shuffled version of medtext_1, so the datapoints are in random order and not sorted by category. This is to prevent catastrophic forgetting by category.
This is a medical diagnosis dataset containing over 1000 top-notch, textbook-quality patient presentations and diagnoses/treatments. The 100 most common diseases and the 30 most common injuries people go to the hospital with are, among others, fully captured in the dataset, with multiple datapoints for each, ranging from mild to complicated to severe. A full list is below. The dataset also contains completions about the nature of the AI itself (that it can never replace a doctor and always emphasizes seeing a professional), as well as some nonsensical or doubtful presentations. A model trained on this dataset explicitly says when it CANNOT answer with confidence or when the presentation is insufficient. This is to prevent hallucinations.
MedText is a free-to-use (CC BY 4.0) dataset of over 1000 patient presentations and their diagnosis/treatment plans.
This is original data, converted into uniform datapoints using GPT-4.
We then pulled 10 random examples from the dataset and showed them to 3 different doctors, 2 of them involved and 1 of them uninvolved, and they all categorized the quality as "textbook quality".
Its content includes:
NOISE/DATA POLLUTION
*Dismissing of non-medical or non-psychological issues
*specifically asking for more information / admitting no possible diagnosis with confidence if insufficient data
*conflicting/contradicting and irrelevant information
*cases where symptoms are misleading to seemingly obvious diagnosis but actually being something different
*information about the model (What are you? What can you do? Are you able to replace a doctor? This is to make the model humble and always emphasize that it can never replace a professional and it is just there to do some substitute analysis)
MISC
*emergency cases / first aid / almost fatal injuries that require emergency surgery
*injuries from crimes
*sexual injuries and STDs
*Infant specific cases
*Gynecological and urological cases
*genetic anomalies
*Previous medical mishandling
*Abuse/Overdosing/Misuse of drugs
*Cross side effects of drugs
ANALYSIS
*Textual analysis of blood tests, ultrasound, CT, MRI and X-ray examinations.
INJURIES:
* Sprains and strains
* Fractures
* Contusions (bruises)
* Cuts and lacerations
* Concussions
* Burns
* Dislocations
* Abrasions (scrapes)
* Whiplash injuries
* Eye injuries
* Puncture wounds
* Bites and stings
* Back injuries
* Broken nose
* Knee injuries
* Ankle injuries
* Shoulder injuries
* Wrist injuries
* Chest injuries
* Head injuries
DISEASES:
* Acne
* Allergies
* Alzheimer's Disease
* Anemia
* Angina
* Anxiety Disorders
* Arthritis
* Asthma
* Atherosclerosis
* Athlete's Foot
* Attention Deficit Hyperactivity Disorder (ADHD)
* Autism Spectrum Disorder
* Back Pain
* Bipolar Disorder
* Bronchitis
* Cataracts
* Chickenpox
* Chronic Obstructive Pulmonary Disease (COPD)
* Common Cold
* Conjunctivitis (Pink Eye)
* Constipation
* Coronary Heart Disease
* Cystitis
* Dementia
* Depression
* Diabetes Type 1
* Diabetes Type 2
* Diarrhea
* Diverticulitis
* Dizziness (Vertigo)
* Ear Infections
* Eczema
* Endometriosis
* Erectile Dysfunction
* Fibromyalgia
* Flu (Influenza)
* Food Poisoning
* Gallstones
* Gastroenteritis
* Gastroesophageal Reflux Disease (GERD)
* Gout
* Hay Fever (Allergic Rhinitis)
* Headaches
* Heart Failure
* Hemorrhoids
* Hepatitis B
* Hepatitis C
* Herpes Simplex Virus (HSV)
* High Blood Pressure (Hypertension)
* High Cholesterol (Hypercholesterolemia)
* HIV/AIDS
* Hyperthyroidism (Overactive Thyroid)
* Hypothyroidism (Underactive Thyroid)
* Inflammatory Bowel Disease (Including Crohn's and Ulcerative Colitis)
* Insomnia
* Iron Deficiency Anemia
* Irritable Bowel Syndrome (IBS)
* Kidney Stones
* Lactose Intolerance
* Lyme Disease
* Macular Degeneration
* Malaria
* Menopause
* Migraine
* Multiple Sclerosis
* Obesity
* Osteoarthritis
* Osteoporosis
* Otitis Media (Middle Ear Infection)
* Pancreatitis
* Parkinson's Disease
* Peptic Ulcers
* Periodontal Disease
* Pneumonia
* Polycystic Ovary Syndrome (PCOS)
* Prostate Enlargement (Benign Prostatic Hyperplasia)
* Psoriasis
* Pulmonary Embolism
* Restless Legs Syndrome
* Rheumatoid Arthritis
* Rosacea
* Schizophrenia
* Sciatica
* Scoliosis
* Seasonal Affective Disorder (SAD)
* Sinusitis
* Skin Cancer
* Sleep Apnea
* Strokes
* Tendonitis
* Tonsillitis
* Tuberculosis
* Urinary Tract Infection (UTI)
* Varicose Veins
* Vitiligo
* Yeast Infection (Candidiasis)
* Zika Virus |
nielsr/eurosat-demo | 2022-04-04T15:48:08.000Z | [
"region:us"
] | nielsr | null | null | null | 1 | 205 | Entry not found |