id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
w8ay/security-paper-datasets | w8ay | 2023-10-16T10:34:13Z | 121 | 8 | null | [
"region:us"
] | 2023-10-16T10:34:13Z | 2023-08-25T02:11:45.000Z | 2023-08-25T02:11:45 | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1690579945
num_examples: 428155
download_size: 751689097
dataset_size: 1690579945
---
# Dataset Card for "security-paper-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4237462282180786,
-0.2716887295246124,
0.23081201314926147,
0.029698863625526428,
-0.27944597601890564,
0.17115163803100586,
0.4557952880859375,
-0.15722039341926575,
0.8448206186294556,
0.5142261981964111,
-0.6095712780952454,
-0.8568670749664307,
-0.7412126064300537,
-0.27817904949188... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shariqfarooq/lcn_bd | shariqfarooq | 2023-11-16T12:06:47Z | 121 | 0 | null | [
"region:us"
] | 2023-11-16T12:06:47Z | 2023-11-16T10:40:58.000Z | 2023-11-16T10:40:58 | ---
dataset_info:
features:
- name: caption
dtype: string
- name: condition
dtype: image
- name: controlnet
dtype: image
- name: ours
dtype: image
- name: idd
dtype: string
splits:
- name: train
num_bytes: 14336782.0
num_examples: 17
download_size: 14350234
dataset_size: 14336782.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lcn_bd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5305697321891785,
-0.3848077356815338,
0.10520287603139877,
0.3757150173187256,
-0.3749704957008362,
-0.002501737792044878,
0.15849249064922333,
-0.09909025579690933,
0.7340660095214844,
0.6780939698219299,
-0.9259713888168335,
-0.9175178408622742,
-0.46025294065475464,
-0.2565973997116... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aps/dynahate | aps | 2022-05-18T00:11:13Z | 120 | 1 | null | [
"region:us"
] | 2022-05-18T00:11:13Z | 2022-04-29T18:50:55.000Z | 2022-04-29T18:50:55 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/quora | BeIR | 2022-10-23T06:03:40Z | 120 | 1 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:03:40Z | 2022-06-05T16:53:54.000Z | 2022-06-05T16:53:54 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Sketch (assumption, not from the original card): loading a preprocessed
# dataset with the Hugging Face `datasets` library; config names may vary
# per dataset.
from datasets import load_dataset

corpus = load_dataset("BeIR/quora", "corpus")
queries = load_dataset("BeIR/quora", "queries")
```
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models on each task in a zero-shot setup, primarily with nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` (JSON Lines) file that contains a list of dictionaries, each with three fields: `_id` (a unique document identifier), `title` (the document title, optional) and `text` (a document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file that contains a list of dictionaries, each with two fields: `_id` (a unique query identifier) and `text` (the query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file that contains three columns, i.e. `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
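The three files above use only standard formats, so they can be written and parsed with the Python standard library alone. A minimal sketch (file names come from the format description; contents are the toy example from this card):

```python
# Sketch, not from the card: produce and re-read the three BEIR files.
import csv
import json

# corpus.jsonl — one JSON object per line
with open("corpus.jsonl", "w") as f:
    f.write(json.dumps({"_id": "doc1", "title": "Albert Einstein",
                        "text": "Albert Einstein was a German-born...."}) + "\n")

# queries.jsonl — one JSON object per line
with open("queries.jsonl", "w") as f:
    f.write(json.dumps({"_id": "q1",
                        "text": "Who developed the mass-energy equivalence formula?"}) + "\n")

# qrels.tsv — tab-separated, first row is a header
with open("qrels.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["query-id", "corpus-id", "score"])
    writer.writerow(["q1", "doc1", 1])

# Read everything back into in-memory dicts keyed by id
corpus = {d["_id"]: d for d in map(json.loads, open("corpus.jsonl"))}
with open("qrels.tsv") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))
qrels = {r["query-id"]: {r["corpus-id"]: int(r["score"])} for r in rows}
```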
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
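Given dicts shaped like the example above, a simple retrieval metric only needs to compare a system's ranked document ids against `qrels`. A minimal recall@k sketch (the rankings are invented for illustration):

```python
def recall_at_k(ranking, relevant, k):
    """Fraction of judged-relevant docs found in the top-k of a ranking."""
    top_k = set(ranking[:k])
    return sum(1 for doc_id in relevant if doc_id in top_k) / len(relevant)

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
# Hypothetical system output: ranked document ids per query
rankings = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}

scores = {q: recall_at_k(rankings[q], qrels[q], k=1) for q in qrels}
# q1's relevant document is ranked first, q2's is not
```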
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgments, made up of:
	- `query-id`: a `string` feature representing the query id
	- `corpus-id`: a `string` feature, denoting the document id.
	- `score`: an `int32` feature, denoting the relevance judgment between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nexdata/Microphone_Collecting_Radio_Frequency_Noise_Data | Nexdata | 2023-08-30T10:40:13Z | 120 | 0 | null | [
"region:us"
] | 2023-08-30T10:40:13Z | 2022-06-22T06:23:39.000Z | 2022-06-22T06:23:39 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Microphone_Collecting_Radio_Frequency_Noise_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/34?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data was collected in 66 rooms, with 2-4 recording points per room. Depending on the relative position of the sound source and the recording point, 2-5 sets of data were collected at each point, yielding 20 hours of valid audio. The recordings cover a wide range of conditions and can be used for smart-home product development.
For more details, please refer to the link: https://www.nexdata.ai/datasets/34?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, noisy-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Noise data
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions | [
-0.7046657204627991,
-0.4995056390762329,
-0.07739588618278503,
0.2624078392982483,
-0.029780518263578415,
-0.09875854104757309,
-0.25178608298301697,
-0.46017932891845703,
0.47914209961891174,
0.4842832684516907,
-1.0274105072021484,
-0.9627717137336731,
-0.4936935603618622,
0.09691567718... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-750000-800000 | tomekkorbak | 2022-10-04T22:48:41Z | 120 | 0 | null | [
"region:us"
] | 2022-10-04T22:48:41Z | 2022-10-04T17:52:22.000Z | 2022-10-04T17:52:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
albertvillanova/universal_dependencies | albertvillanova | 2023-11-24T13:31:54Z | 120 | 6 | universal-dependencies | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"language:aii",
"language:ajp",
"language:akk",
"languag... | 2023-11-24T13:31:54Z | 2022-12-14T17:34:02.000Z | 2022-12-14T17:34:02 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- af
- aii
- ajp
- akk
- am
- apu
- aqz
- ar
- be
- bg
- bho
- bm
- br
- bxr
- ca
- ckt
- cop
- cs
- cu
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- fro
- ga
- gd
- gl
- got
- grc
- gsw
- gun
- gv
- he
- hi
- hr
- hsb
- hu
- hy
- id
- is
- it
- ja
- kfm
- kk
- kmr
- ko
- koi
- kpv
- krl
- la
- lt
- lv
- lzh
- mdf
- mr
- mt
- myu
- myv
- nl
- 'no'
- nyq
- olo
- orv
- otk
- pcm
- pl
- pt
- ro
- ru
- sa
- sk
- sl
- sme
- sms
- soj
- sq
- sr
- sv
- swl
- ta
- te
- th
- tl
- tpn
- tr
- ug
- uk
- ur
- vi
- wbp
- wo
- yo
- yue
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
paperswithcode_id: universal-dependencies
pretty_name: Universal Dependencies Treebank
config_names:
- af_afribooms
- aii_as
- ajp_madar
- akk_pisandub
- akk_riao
- am_att
- apu_ufpa
- aqz_tudet
- ar_nyuad
- ar_padt
- ar_pud
- be_hse
- bg_btb
- bho_bhtb
- bm_crb
- br_keb
- bxr_bdt
- ca_ancora
- ckt_hse
- cop_scriptorium
- cs_cac
- cs_cltt
- cs_fictree
- cs_pdt
- cs_pud
- cu_proiel
- cy_ccg
- da_ddt
- de_gsd
- de_hdt
- de_lit
- de_pud
- el_gdt
- en_esl
- en_ewt
- en_gum
- en_gumreddit
- en_lines
- en_partut
- en_pronouns
- en_pud
- es_ancora
- es_gsd
- es_pud
- et_edt
- et_ewt
- eu_bdt
- fa_perdt
- fa_seraji
- fi_ftb
- fi_ood
- fi_pud
- fi_tdt
- fo_farpahc
- fo_oft
- fr_fqb
- fr_ftb
- fr_gsd
- fr_partut
- fr_pud
- fr_sequoia
- fr_spoken
- fro_srcmf
- ga_idt
- gd_arcosg
- gl_ctg
- gl_treegal
- got_proiel
- grc_perseus
- grc_proiel
- gsw_uzh
- gun_dooley
- gun_thomas
- gv_cadhan
- he_htb
- hi_hdtb
- hi_pud
- hr_set
- hsb_ufal
- hu_szeged
- hy_armtdp
- id_csui
- id_gsd
- id_pud
- is_icepahc
- is_pud
- it_isdt
- it_partut
- it_postwita
- it_pud
- it_twittiro
- it_vit
- ja_bccwj
- ja_gsd
- ja_modern
- ja_pud
- kfm_aha
- kk_ktb
- kmr_mg
- ko_gsd
- ko_kaist
- ko_pud
- koi_uh
- kpv_ikdp
- kpv_lattice
- krl_kkpp
- la_ittb
- la_llct
- la_perseus
- la_proiel
- lt_alksnis
- lt_hse
- lv_lvtb
- lzh_kyoto
- mdf_jr
- mr_ufal
- mt_mudt
- myu_tudet
- myv_jr
- nl_alpino
- nl_lassysmall
- no_bokmaal
- no_nynorsk
- no_nynorsklia
- nyq_aha
- olo_kkpp
- orv_rnc
- orv_torot
- otk_tonqq
- pcm_nsc
- pl_lfg
- pl_pdb
- pl_pud
- pt_bosque
- pt_gsd
- pt_pud
- qhe_hiencs
- qtd_sagt
- ro_nonstandard
- ro_rrt
- ro_simonero
- ru_gsd
- ru_pud
- ru_syntagrus
- ru_taiga
- sa_ufal
- sa_vedic
- sk_snk
- sl_ssj
- sl_sst
- sme_giella
- sms_giellagas
- soj_aha
- sq_tsa
- sr_set
- sv_lines
- sv_pud
- sv_talbanken
- swl_sslc
- ta_mwtt
- ta_ttb
- te_mtg
- th_pud
- tl_trg
- tl_ugnayan
- tpn_tudet
- tr_boun
- tr_gb
- tr_imst
- tr_pud
- ug_udt
- uk_iu
- ur_udtb
- vi_vtb
- wbp_ufal
- wo_wtb
- yo_ytb
- yue_hk
- zh_cfl
- zh_gsd
- zh_gsdsimp
- zh_hk
- zh_pud
tags:
- constituency-parsing
- dependency-parsing
dataset_info:
- config_name: af_afribooms
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3523113
num_examples: 1315
- name: validation
num_bytes: 547285
num_examples: 194
- name: test
num_bytes: 1050299
num_examples: 425
download_size: 3088237
dataset_size: 5120697
- config_name: akk_pisandub
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 153470
num_examples: 101
download_size: 101789
dataset_size: 153470
- config_name: akk_riao
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3374577
num_examples: 1804
download_size: 2022357
dataset_size: 3374577
- config_name: aqz_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8286
num_examples: 24
download_size: 5683
dataset_size: 8286
- config_name: sq_tsa
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 116034
num_examples: 60
download_size: 68875
dataset_size: 116034
- config_name: am_att
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1554859
num_examples: 1074
download_size: 1019607
dataset_size: 1554859
- config_name: grc_perseus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22611612
num_examples: 11476
- name: validation
num_bytes: 3152233
num_examples: 1137
- name: test
num_bytes: 3004502
num_examples: 1306
download_size: 18898313
dataset_size: 28768347
- config_name: grc_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 30938089
num_examples: 15014
- name: validation
num_bytes: 2264551
num_examples: 1019
- name: test
num_bytes: 2192289
num_examples: 1047
download_size: 23715831
dataset_size: 35394929
- config_name: apu_ufpa
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 75578
num_examples: 76
download_size: 69565
dataset_size: 75578
- config_name: ar_nyuad
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 79064476
num_examples: 15789
- name: validation
num_bytes: 9859912
num_examples: 1986
- name: test
num_bytes: 9880240
num_examples: 1963
download_size: 58583673
dataset_size: 98804628
- config_name: ar_padt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 58537298
num_examples: 6075
- name: validation
num_bytes: 7787253
num_examples: 909
- name: test
num_bytes: 7428063
num_examples: 680
download_size: 51208169
dataset_size: 73752614
- config_name: ar_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2816625
num_examples: 1000
download_size: 2084082
dataset_size: 2816625
- config_name: hy_armtdp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 7697891
num_examples: 1975
- name: validation
num_bytes: 988849
num_examples: 249
- name: test
num_bytes: 947287
num_examples: 278
download_size: 6886567
dataset_size: 9634027
- config_name: aii_as
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 52540
num_examples: 57
download_size: 32639
dataset_size: 52540
- config_name: bm_crb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1502886
num_examples: 1026
download_size: 892924
dataset_size: 1502886
- config_name: eu_bdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8199861
num_examples: 5396
- name: validation
num_bytes: 2701073
num_examples: 1798
- name: test
num_bytes: 2734601
num_examples: 1799
download_size: 8213576
dataset_size: 13635535
- config_name: be_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 34880663
num_examples: 21555
- name: validation
num_bytes: 1745668
num_examples: 1090
- name: test
num_bytes: 1818113
num_examples: 889
download_size: 26433402
dataset_size: 38444444
- config_name: bho_bhtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 947740
num_examples: 357
download_size: 614159
dataset_size: 947740
- config_name: br_keb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1026257
num_examples: 888
download_size: 679680
dataset_size: 1026257
- config_name: bg_btb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18545312
num_examples: 8907
- name: validation
num_bytes: 2393174
num_examples: 1115
- name: test
num_bytes: 2344136
num_examples: 1116
download_size: 14910603
dataset_size: 23282622
- config_name: bxr_bdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 17364
num_examples: 19
- name: test
num_bytes: 1116630
num_examples: 908
download_size: 726053
dataset_size: 1133994
- config_name: yue_hk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1242850
num_examples: 1004
download_size: 710060
dataset_size: 1242850
- config_name: ca_ancora
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 46502842
num_examples: 13123
- name: validation
num_bytes: 6282364
num_examples: 1709
- name: test
num_bytes: 6441038
num_examples: 1846
download_size: 35924146
dataset_size: 59226244
- config_name: zh_cfl
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 660584
num_examples: 451
download_size: 384725
dataset_size: 660584
- config_name: zh_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9268661
num_examples: 3997
- name: validation
num_bytes: 1188371
num_examples: 500
- name: test
num_bytes: 1130467
num_examples: 500
download_size: 6828367
dataset_size: 11587499
- config_name: zh_gsdsimp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9268663
num_examples: 3997
- name: validation
num_bytes: 1188383
num_examples: 500
- name: test
num_bytes: 1130459
num_examples: 500
download_size: 6828419
dataset_size: 11587505
- config_name: zh_hk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 880193
num_examples: 1004
download_size: 494447
dataset_size: 880193
- config_name: zh_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2425817
num_examples: 1000
download_size: 1606982
dataset_size: 2425817
- config_name: ckt_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 808669
num_examples: 1004
download_size: 771943
dataset_size: 808669
- config_name: lzh_kyoto
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 26615708
num_examples: 38669
- name: validation
num_bytes: 3770507
num_examples: 5296
- name: test
num_bytes: 3155207
num_examples: 4469
download_size: 22658287
dataset_size: 33541422
- config_name: cop_scriptorium
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3944468
num_examples: 1089
- name: validation
num_bytes: 1566786
num_examples: 381
- name: test
num_bytes: 1487709
num_examples: 403
download_size: 4502996
dataset_size: 6998963
- config_name: hr_set
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 19104315
num_examples: 6914
- name: validation
num_bytes: 2787184
num_examples: 960
- name: test
num_bytes: 3035797
num_examples: 1136
download_size: 15103034
dataset_size: 24927296
- config_name: cs_cac
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 81527862
num_examples: 23478
- name: validation
num_bytes: 1898678
num_examples: 603
- name: test
num_bytes: 1878841
num_examples: 628
download_size: 55990235
dataset_size: 85305381
- config_name: cs_cltt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4277239
num_examples: 860
- name: validation
num_bytes: 752253
num_examples: 129
- name: test
num_bytes: 646103
num_examples: 136
download_size: 3745656
dataset_size: 5675595
- config_name: cs_fictree
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 21490020
num_examples: 10160
- name: validation
num_bytes: 2677727
num_examples: 1309
- name: test
num_bytes: 2679930
num_examples: 1291
download_size: 17464342
dataset_size: 26847677
- config_name: cs_pdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 201356662
num_examples: 68495
- name: validation
num_bytes: 27366981
num_examples: 9270
- name: test
num_bytes: 29817339
num_examples: 10148
download_size: 171506068
dataset_size: 258540982
- config_name: cs_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3195818
num_examples: 1000
download_size: 2231853
dataset_size: 3195818
- config_name: da_ddt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8689809
num_examples: 4383
- name: validation
num_bytes: 1117939
num_examples: 564
- name: test
num_bytes: 1082651
num_examples: 565
download_size: 6425281
dataset_size: 10890399
- config_name: nl_alpino
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22503950
num_examples: 12264
- name: validation
num_bytes: 1411253
num_examples: 718
- name: test
num_bytes: 1354908
num_examples: 596
download_size: 16858557
dataset_size: 25270111
- config_name: nl_lassysmall
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9001614
num_examples: 5787
- name: validation
num_bytes: 1361552
num_examples: 676
- name: test
num_bytes: 1391136
num_examples: 875
download_size: 8034396
dataset_size: 11754302
- config_name: en_esl
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5335977
num_examples: 4124
- name: validation
num_bytes: 648562
num_examples: 500
- name: test
num_bytes: 651829
num_examples: 500
download_size: 3351548
dataset_size: 6636368
- config_name: en_ewt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22755753
num_examples: 12543
- name: validation
num_bytes: 2829889
num_examples: 2002
- name: test
num_bytes: 2820398
num_examples: 2077
download_size: 16893922
dataset_size: 28406040
- config_name: en_gum
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8999554
num_examples: 4287
- name: validation
num_bytes: 1704949
num_examples: 784
- name: test
num_bytes: 1743317
num_examples: 890
download_size: 7702761
dataset_size: 12447820
- config_name: en_gumreddit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1365930
num_examples: 587
- name: validation
num_bytes: 317546
num_examples: 150
- name: test
num_bytes: 374707
num_examples: 158
download_size: 1195979
dataset_size: 2058183
- config_name: en_lines
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5728898
num_examples: 3176
- name: validation
num_bytes: 1911762
num_examples: 1032
- name: test
num_bytes: 1766797
num_examples: 1035
download_size: 5522254
dataset_size: 9407457
- config_name: en_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4133445
num_examples: 1781
- name: validation
num_bytes: 265039
num_examples: 156
- name: test
num_bytes: 326834
num_examples: 153
download_size: 2720286
dataset_size: 4725318
- config_name: en_pronouns
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 207364
num_examples: 285
download_size: 147181
dataset_size: 207364
- config_name: en_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2282027
num_examples: 1000
download_size: 1340563
dataset_size: 2282027
- config_name: myv_jr
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2763297
num_examples: 1690
download_size: 1945981
dataset_size: 2763297
- config_name: et_edt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 42901059
num_examples: 24633
- name: validation
num_bytes: 5551620
num_examples: 3125
- name: test
num_bytes: 5994421
num_examples: 3214
download_size: 32393618
dataset_size: 54447100
- config_name: et_ewt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4199896
num_examples: 2837
- name: validation
num_bytes: 1089459
num_examples: 743
- name: test
num_bytes: 1600116
num_examples: 913
download_size: 4044147
dataset_size: 6889471
- config_name: fo_farpahc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2114958
num_examples: 1020
- name: validation
num_bytes: 809707
num_examples: 300
- name: test
num_bytes: 798245
num_examples: 301
download_size: 2186706
dataset_size: 3722910
- config_name: fo_oft
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1220792
num_examples: 1208
download_size: 802681
dataset_size: 1220792
- config_name: fi_ftb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16800109
num_examples: 14981
- name: validation
num_bytes: 2074201
num_examples: 1875
- name: test
num_bytes: 2144908
num_examples: 1867
download_size: 13132466
dataset_size: 21019218
- config_name: fi_ood
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2366923
num_examples: 2122
download_size: 1480506
dataset_size: 2366923
- config_name: fi_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2086421
num_examples: 1000
download_size: 1411514
dataset_size: 2086421
- config_name: fi_tdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22065448
num_examples: 12217
- name: validation
num_bytes: 2483303
num_examples: 1364
- name: test
num_bytes: 2855263
num_examples: 1555
download_size: 16692242
dataset_size: 27404014
- config_name: fr_fqb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2674644
num_examples: 2289
download_size: 1556235
dataset_size: 2674644
- config_name: fr_ftb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 44714315
num_examples: 14759
- name: validation
num_bytes: 3929428
num_examples: 1235
- name: test
num_bytes: 7583038
num_examples: 2541
download_size: 30926802
dataset_size: 56226781
- config_name: fr_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 38329902
num_examples: 14449
- name: validation
num_bytes: 3861548
num_examples: 1476
- name: test
num_bytes: 1086926
num_examples: 416
download_size: 25492044
dataset_size: 43278376
- config_name: fr_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2620477
num_examples: 803
- name: validation
num_bytes: 205839
num_examples: 107
- name: test
num_bytes: 288829
num_examples: 110
download_size: 1817897
dataset_size: 3115145
- config_name: fr_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2660405
num_examples: 1000
download_size: 1685033
dataset_size: 2660405
- config_name: fr_sequoia
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5370647
num_examples: 2231
- name: validation
num_bytes: 1065411
num_examples: 412
- name: test
num_bytes: 1067676
num_examples: 456
download_size: 4415282
dataset_size: 7503734
- config_name: fr_spoken
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1625626
num_examples: 1167
- name: validation
num_bytes: 1091750
num_examples: 909
- name: test
num_bytes: 1078438
num_examples: 730
download_size: 2483341
dataset_size: 3795814
- config_name: gl_ctg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8157432
num_examples: 2272
- name: validation
num_bytes: 3057483
num_examples: 860
- name: test
num_bytes: 3053764
num_examples: 861
download_size: 8230649
dataset_size: 14268679
- config_name: gl_treegal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1804389
num_examples: 600
- name: test
num_bytes: 1174023
num_examples: 400
download_size: 1741471
dataset_size: 2978412
- config_name: de_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 32297384
num_examples: 13814
- name: validation
num_bytes: 1504189
num_examples: 799
- name: test
num_bytes: 2000117
num_examples: 977
download_size: 21507364
dataset_size: 35801690
- config_name: de_hdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 334214761
num_examples: 153035
- name: validation
num_bytes: 39099013
num_examples: 18434
- name: test
num_bytes: 39519143
num_examples: 18459
download_size: 249243037
dataset_size: 412832917
- config_name: de_lit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3327891
num_examples: 1922
download_size: 2060988
dataset_size: 3327891
- config_name: de_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2684407
num_examples: 1000
download_size: 1731875
dataset_size: 2684407
- config_name: got_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5175361
num_examples: 3387
- name: validation
num_bytes: 1498101
num_examples: 985
- name: test
num_bytes: 1518642
num_examples: 1029
download_size: 5225655
dataset_size: 8192104
- config_name: el_gdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6028077
num_examples: 1662
- name: validation
num_bytes: 1492610
num_examples: 403
- name: test
num_bytes: 1521094
num_examples: 456
download_size: 5788161
dataset_size: 9041781
- config_name: he_htb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 17324640
num_examples: 5241
- name: validation
num_bytes: 1440985
num_examples: 484
- name: test
num_bytes: 1550465
num_examples: 491
download_size: 12054025
dataset_size: 20316090
- config_name: qhe_hiencs
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1510145
num_examples: 1448
- name: validation
num_bytes: 244129
num_examples: 225
- name: test
num_bytes: 236291
num_examples: 225
download_size: 914584
dataset_size: 1990565
- config_name: hi_hdtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 61893814
num_examples: 13304
- name: validation
num_bytes: 7748544
num_examples: 1659
- name: test
num_bytes: 7786343
num_examples: 1684
download_size: 51589681
dataset_size: 77428701
- config_name: hi_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3384789
num_examples: 1000
download_size: 2303495
dataset_size: 3384789
- config_name: hu_szeged
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2822934
num_examples: 910
- name: validation
num_bytes: 1584932
num_examples: 441
- name: test
num_bytes: 1419130
num_examples: 449
download_size: 3687905
dataset_size: 5826996
- config_name: is_icepahc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 97197159
num_examples: 34007
- name: validation
num_bytes: 18931295
num_examples: 4865
- name: test
num_bytes: 19039838
num_examples: 5157
download_size: 85106126
dataset_size: 135168292
- config_name: is_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2304432
num_examples: 1000
download_size: 1525635
dataset_size: 2304432
- config_name: id_csui
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1611334
num_examples: 656
- name: test
num_bytes: 888832
num_examples: 374
download_size: 1448601
dataset_size: 2500166
- config_name: id_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 11728948
num_examples: 4477
- name: validation
num_bytes: 1513894
num_examples: 559
- name: test
num_bytes: 1417208
num_examples: 557
download_size: 9487349
dataset_size: 14660050
- config_name: id_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1768596
num_examples: 1000
download_size: 1149692
dataset_size: 1768596
- config_name: ga_idt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10327215
num_examples: 4005
- name: validation
num_bytes: 1057313
num_examples: 451
- name: test
num_bytes: 1109028
num_examples: 454
download_size: 7417728
dataset_size: 12493556
- config_name: it_isdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 33510781
num_examples: 13121
- name: validation
num_bytes: 1439348
num_examples: 564
- name: test
num_bytes: 1267932
num_examples: 482
download_size: 20998527
dataset_size: 36218061
- config_name: it_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5428686
num_examples: 1781
- name: validation
num_bytes: 335085
num_examples: 156
- name: test
num_bytes: 413752
num_examples: 153
download_size: 3582155
dataset_size: 6177523
- config_name: it_postwita
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10523322
num_examples: 5368
- name: validation
num_bytes: 1299818
num_examples: 671
- name: test
num_bytes: 1344079
num_examples: 674
download_size: 7611319
dataset_size: 13167219
- config_name: it_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2612838
num_examples: 1000
download_size: 1641073
dataset_size: 2612838
- config_name: it_twittiro
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2536429
num_examples: 1138
- name: validation
num_bytes: 323504
num_examples: 144
- name: test
num_bytes: 316211
num_examples: 142
download_size: 1894686
dataset_size: 3176144
- config_name: it_vit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 24536095
num_examples: 8277
- name: validation
num_bytes: 3144507
num_examples: 743
- name: test
num_bytes: 2870355
num_examples: 1067
download_size: 17605311
dataset_size: 30550957
- config_name: ja_bccwj
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 119164443
num_examples: 40740
- name: validation
num_bytes: 23390188
num_examples: 8417
- name: test
num_bytes: 21904413
num_examples: 7871
download_size: 87340125
dataset_size: 164459044
- config_name: ja_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 36905139
num_examples: 7027
- name: validation
num_bytes: 2662999
num_examples: 501
- name: test
num_bytes: 2858141
num_examples: 543
download_size: 30397358
dataset_size: 42426279
- config_name: ja_modern
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3062149
num_examples: 822
download_size: 2163988
dataset_size: 3062149
- config_name: ja_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 6322307
num_examples: 1000
download_size: 4661525
dataset_size: 6322307
- config_name: krl_kkpp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 370378
num_examples: 228
download_size: 226103
dataset_size: 370378
- config_name: kk_ktb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 64737
num_examples: 31
- name: test
num_bytes: 1263246
num_examples: 1047
download_size: 849300
dataset_size: 1327983
- config_name: kfm_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8464
num_examples: 10
download_size: 6290
dataset_size: 8464
- config_name: koi_uh
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 117629
num_examples: 81
download_size: 91509
dataset_size: 117629
- config_name: kpv_ikdp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 182189
num_examples: 132
download_size: 121684
dataset_size: 182189
- config_name: kpv_lattice
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 685683
num_examples: 435
download_size: 467085
dataset_size: 685683
- config_name: ko_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5480313
num_examples: 4400
- name: validation
num_bytes: 1156603
num_examples: 950
- name: test
num_bytes: 1129555
num_examples: 989
download_size: 4882238
dataset_size: 7766471
- config_name: ko_kaist
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 29037654
num_examples: 23010
- name: validation
num_bytes: 2511880
num_examples: 2066
- name: test
num_bytes: 2792215
num_examples: 2287
download_size: 21855177
dataset_size: 34341749
- config_name: ko_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2511856
num_examples: 1000
download_size: 2024810
dataset_size: 2511856
- config_name: kmr_mg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 30374
num_examples: 20
- name: test
num_bytes: 1248564
num_examples: 734
download_size: 765158
dataset_size: 1278938
- config_name: la_ittb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 54306304
num_examples: 22775
- name: validation
num_bytes: 4236222
num_examples: 2101
- name: test
num_bytes: 4221459
num_examples: 2101
download_size: 40247546
dataset_size: 62763985
- config_name: la_llct
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 26885433
num_examples: 7289
- name: validation
num_bytes: 3363915
num_examples: 850
- name: test
num_bytes: 3352500
num_examples: 884
download_size: 21975884
dataset_size: 33601848
- config_name: la_perseus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2542043
num_examples: 1334
- name: test
num_bytes: 1575350
num_examples: 939
download_size: 2573703
dataset_size: 4117393
- config_name: la_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 24956038
num_examples: 15917
- name: validation
num_bytes: 2020476
num_examples: 1234
- name: test
num_bytes: 2029828
num_examples: 1260
download_size: 18434442
dataset_size: 29006342
- config_name: lv_lvtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 29167529
num_examples: 10156
- name: validation
num_bytes: 4501172
num_examples: 1664
- name: test
num_bytes: 4565919
num_examples: 1823
download_size: 25227301
dataset_size: 38234620
- config_name: lt_alksnis
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 7272501
num_examples: 2341
- name: validation
num_bytes: 1763901
num_examples: 617
- name: test
num_bytes: 1648521
num_examples: 684
download_size: 7008248
dataset_size: 10684923
- config_name: lt_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 433214
num_examples: 153
- name: validation
num_bytes: 433214
num_examples: 153
- name: test
num_bytes: 433214
num_examples: 153
download_size: 265619
dataset_size: 1299642
- config_name: olo_kkpp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18096
num_examples: 19
- name: test
num_bytes: 175355
num_examples: 106
download_size: 121837
dataset_size: 193451
- config_name: mt_mudt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1858001
num_examples: 1123
- name: validation
num_bytes: 826004
num_examples: 433
- name: test
num_bytes: 892629
num_examples: 518
download_size: 2011753
dataset_size: 3576634
- config_name: gv_cadhan
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 483042
num_examples: 291
download_size: 287206
dataset_size: 483042
- config_name: mr_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 420345
num_examples: 373
- name: validation
num_bytes: 60791
num_examples: 46
- name: test
num_bytes: 56582
num_examples: 47
download_size: 339354
dataset_size: 537718
- config_name: gun_dooley
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1037858
num_examples: 1046
download_size: 571571
dataset_size: 1037858
- config_name: gun_thomas
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 143111
num_examples: 98
download_size: 92963
dataset_size: 143111
- config_name: mdf_jr
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 234147
num_examples: 167
download_size: 162330
dataset_size: 234147
- config_name: myu_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 26202
num_examples: 62
download_size: 20315
dataset_size: 26202
- config_name: pcm_nsc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16079391
num_examples: 7279
- name: validation
num_bytes: 2099571
num_examples: 991
- name: test
num_bytes: 2063685
num_examples: 972
download_size: 14907410
dataset_size: 20242647
- config_name: nyq_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8723
num_examples: 10
download_size: 6387
dataset_size: 8723
- config_name: sme_giella
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1987666
num_examples: 2257
- name: test
num_bytes: 1142396
num_examples: 865
download_size: 1862302
dataset_size: 3130062
- config_name: no_bokmaal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 25647647
num_examples: 15696
- name: validation
num_bytes: 3828310
num_examples: 2409
- name: test
num_bytes: 3151638
num_examples: 1939
download_size: 19177350
dataset_size: 32627595
- config_name: no_nynorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 25630539
num_examples: 14174
- name: validation
num_bytes: 3277649
num_examples: 1890
- name: test
num_bytes: 2601676
num_examples: 1511
download_size: 18532495
dataset_size: 31509864
- config_name: no_nynorsklia
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3500907
num_examples: 3412
- name: validation
num_bytes: 1003845
num_examples: 881
- name: test
num_bytes: 999943
num_examples: 957
download_size: 3349676
dataset_size: 5504695
- config_name: cu_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6106144
num_examples: 4124
- name: validation
num_bytes: 1639912
num_examples: 1073
- name: test
num_bytes: 1648459
num_examples: 1141
download_size: 6239839
dataset_size: 9394515
- config_name: fro_srcmf
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 11959859
num_examples: 13909
- name: validation
num_bytes: 1526574
num_examples: 1842
- name: test
num_bytes: 1535923
num_examples: 1927
download_size: 9043098
dataset_size: 15022356
- config_name: orv_rnc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1527306
num_examples: 320
- name: test
num_bytes: 2552216
num_examples: 637
download_size: 2627398
dataset_size: 4079522
- config_name: orv_torot
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18077991
num_examples: 13336
- name: validation
num_bytes: 2408313
num_examples: 1852
- name: test
num_bytes: 2347934
num_examples: 1756
download_size: 15296362
dataset_size: 22834238
- config_name: otk_tonqq
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 22829
num_examples: 18
download_size: 14389
dataset_size: 22829
- config_name: fa_perdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 48654947
num_examples: 26196
- name: validation
num_bytes: 2687750
num_examples: 1456
- name: test
num_bytes: 2600303
num_examples: 1455
download_size: 33606395
dataset_size: 53943000
- config_name: fa_seraji
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12627691
num_examples: 4798
- name: validation
num_bytes: 1634327
num_examples: 599
- name: test
num_bytes: 1675134
num_examples: 600
download_size: 9890107
dataset_size: 15937152
- config_name: pl_lfg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16810910
num_examples: 13774
- name: validation
num_bytes: 2093712
num_examples: 1745
- name: test
num_bytes: 2100915
num_examples: 1727
download_size: 14865541
dataset_size: 21005537
- config_name: pl_pdb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 44652289
num_examples: 17722
- name: validation
num_bytes: 5494883
num_examples: 2215
- name: test
num_bytes: 5322608
num_examples: 2215
download_size: 36340919
dataset_size: 55469780
- config_name: pl_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2943603
num_examples: 1000
download_size: 1943983
dataset_size: 2943603
- config_name: pt_bosque
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22808617
num_examples: 8328
- name: validation
num_bytes: 1201577
num_examples: 560
- name: test
num_bytes: 1131511
num_examples: 476
download_size: 15201503
dataset_size: 25141705
- config_name: pt_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22208385
num_examples: 9664
- name: validation
num_bytes: 2805628
num_examples: 1210
- name: test
num_bytes: 2732063
num_examples: 1204
download_size: 15300844
dataset_size: 27746076
- config_name: pt_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2431942
num_examples: 1000
download_size: 1516883
dataset_size: 2431942
- config_name: ro_nonstandard
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 74489083
num_examples: 24121
- name: validation
num_bytes: 2663152
num_examples: 1052
- name: test
num_bytes: 3017162
num_examples: 1052
download_size: 50345748
dataset_size: 80169397
- config_name: ro_rrt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 23695399
num_examples: 8043
- name: validation
num_bytes: 2190973
num_examples: 752
- name: test
num_bytes: 2092520
num_examples: 729
download_size: 17187956
dataset_size: 27978892
- config_name: ro_simonero
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 15390734
num_examples: 3747
- name: validation
num_bytes: 1926639
num_examples: 443
- name: test
num_bytes: 1940787
num_examples: 491
download_size: 11409378
dataset_size: 19258160
- config_name: ru_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10504099
num_examples: 3850
- name: validation
num_bytes: 1635884
num_examples: 579
- name: test
num_bytes: 1597603
num_examples: 601
download_size: 8830986
dataset_size: 13737586
- config_name: ru_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2695958
num_examples: 1000
download_size: 1869304
dataset_size: 2695958
- config_name: ru_syntagrus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 126305584
num_examples: 48814
- name: validation
num_bytes: 17043673
num_examples: 6584
- name: test
num_bytes: 16880203
num_examples: 6491
download_size: 102745164
dataset_size: 160229460
- config_name: ru_taiga
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5802733
num_examples: 3138
- name: validation
num_bytes: 1382140
num_examples: 945
- name: test
num_bytes: 1314084
num_examples: 881
download_size: 5491427
dataset_size: 8498957
- config_name: sa_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 431697
num_examples: 230
download_size: 424675
dataset_size: 431697
- config_name: sa_vedic
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2179608
num_examples: 2524
- name: test
num_bytes: 1209605
num_examples: 1473
download_size: 2041583
dataset_size: 3389213
- config_name: gd_arcosg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3952356
num_examples: 1990
- name: validation
num_bytes: 1038211
num_examples: 645
- name: test
num_bytes: 1034788
num_examples: 538
download_size: 3474087
dataset_size: 6025355
- config_name: sr_set
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9309552
num_examples: 3328
- name: validation
num_bytes: 1503953
num_examples: 536
- name: test
num_bytes: 1432672
num_examples: 520
download_size: 7414381
dataset_size: 12246177
- config_name: sms_giellagas
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 174744
num_examples: 104
download_size: 116491
dataset_size: 174744
- config_name: sk_snk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12017312
num_examples: 8483
- name: validation
num_bytes: 1863926
num_examples: 1060
- name: test
num_bytes: 1943012
num_examples: 1061
download_size: 10013420
dataset_size: 15824250
- config_name: sl_ssj
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16713639
num_examples: 6478
- name: validation
num_bytes: 2070847
num_examples: 734
- name: test
num_bytes: 2083062
num_examples: 788
download_size: 12455962
dataset_size: 20867548
- config_name: sl_sst
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2903675
num_examples: 2078
- name: test
num_bytes: 1493885
num_examples: 1110
download_size: 2655777
dataset_size: 4397560
- config_name: soj_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 6218
num_examples: 8
download_size: 4577
dataset_size: 6218
- config_name: ajp_madar
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 71956
num_examples: 100
download_size: 43174
dataset_size: 71956
- config_name: es_ancora
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 50101327
num_examples: 14305
- name: validation
num_bytes: 5883940
num_examples: 1654
- name: test
num_bytes: 5928986
num_examples: 1721
download_size: 37668083
dataset_size: 61914253
- config_name: es_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 39582074
num_examples: 14187
- name: validation
num_bytes: 3834443
num_examples: 1400
- name: test
num_bytes: 1253720
num_examples: 426
download_size: 26073760
dataset_size: 44670237
- config_name: es_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2595946
num_examples: 1000
download_size: 1628475
dataset_size: 2595946
- config_name: swl_sslc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 57443
num_examples: 87
- name: validation
num_bytes: 59002
num_examples: 82
- name: test
num_bytes: 24542
num_examples: 34
download_size: 81699
dataset_size: 140987
- config_name: sv_lines
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6731662
num_examples: 3176
- name: validation
num_bytes: 2239951
num_examples: 1032
- name: test
num_bytes: 2070626
num_examples: 1035
download_size: 7245283
dataset_size: 11042239
- config_name: sv_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2554725
num_examples: 1000
download_size: 1722516
dataset_size: 2554725
- config_name: sv_talbanken
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9287256
num_examples: 4303
- name: validation
num_bytes: 1361535
num_examples: 504
- name: test
num_bytes: 2835742
num_examples: 1219
download_size: 8476012
dataset_size: 13484533
- config_name: gsw_uzh
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 111357
num_examples: 100
download_size: 59675
dataset_size: 111357
- config_name: tl_trg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 86696
num_examples: 128
download_size: 61344
dataset_size: 86696
- config_name: tl_ugnayan
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 90863
num_examples: 94
download_size: 55207
dataset_size: 90863
- config_name: ta_mwtt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 522349
num_examples: 534
download_size: 414263
dataset_size: 522349
- config_name: ta_ttb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1538780
num_examples: 400
- name: validation
num_bytes: 305206
num_examples: 80
- name: test
num_bytes: 478941
num_examples: 120
download_size: 1753448
dataset_size: 2322927
- config_name: te_mtg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 703512
num_examples: 1051
- name: validation
num_bytes: 91547
num_examples: 131
- name: test
num_bytes: 99757
num_examples: 146
download_size: 643764
dataset_size: 894816
- config_name: th_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2341697
num_examples: 1000
download_size: 1606517
dataset_size: 2341697
- config_name: tpn_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8089
num_examples: 8
download_size: 5447
dataset_size: 8089
- config_name: qtd_sagt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 583697
num_examples: 285
- name: validation
num_bytes: 1564765
num_examples: 801
- name: test
num_bytes: 1710777
num_examples: 805
download_size: 2299611
dataset_size: 3859239
- config_name: tr_boun
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12827173
num_examples: 7803
- name: validation
num_bytes: 1577760
num_examples: 979
- name: test
num_bytes: 1580727
num_examples: 979
download_size: 9742035
dataset_size: 15985660
- config_name: tr_gb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2146729
num_examples: 2880
download_size: 1474083
dataset_size: 2146729
- config_name: tr_imst
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5063905
num_examples: 3664
- name: validation
num_bytes: 1342351
num_examples: 988
- name: test
num_bytes: 1347524
num_examples: 983
download_size: 4711018
dataset_size: 7753780
- config_name: tr_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2021772
num_examples: 1000
download_size: 1359487
dataset_size: 2021772
- config_name: uk_iu
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18886802
num_examples: 5496
- name: validation
num_bytes: 2592721
num_examples: 672
- name: test
num_bytes: 3561164
num_examples: 892
download_size: 17344586
dataset_size: 25040687
- config_name: hsb_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 54257
num_examples: 23
- name: test
num_bytes: 1246592
num_examples: 623
download_size: 781067
dataset_size: 1300849
- config_name: ur_udtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 19808745
num_examples: 4043
- name: validation
num_bytes: 2652349
num_examples: 552
- name: test
num_bytes: 2702596
num_examples: 535
download_size: 15901007
dataset_size: 25163690
- config_name: ug_udt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2570856
num_examples: 1656
- name: validation
num_bytes: 1406032
num_examples: 900
- name: test
num_bytes: 1371993
num_examples: 900
download_size: 3455092
dataset_size: 5348881
- config_name: vi_vtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1689772
num_examples: 1400
- name: validation
num_bytes: 948019
num_examples: 800
- name: test
num_bytes: 987207
num_examples: 800
download_size: 2055529
dataset_size: 3624998
- config_name: wbp_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 48533
num_examples: 55
download_size: 38326
dataset_size: 48533
- config_name: cy_ccg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1629465
num_examples: 704
- name: test
num_bytes: 1779002
num_examples: 953
download_size: 1984759
dataset_size: 3408467
- config_name: wo_wtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2781883
num_examples: 1188
- name: validation
num_bytes: 1204839
num_examples: 449
- name: test
num_bytes: 1227124
num_examples: 470
download_size: 3042699
dataset_size: 5213846
- config_name: yo_ytb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 905766
num_examples: 318
download_size: 567955
dataset_size: 905766
---
# Dataset Card for Universal Dependencies Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Universal Dependencies](https://universaldependencies.org/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
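The card leaves this section open, but the configuration metadata above declares the field layout for every treebank. As an illustration only, a data instance follows the CoNLL-U column structure; the sentence and annotations below are invented for the sketch and are not taken from any UD treebank:

```python
# A hypothetical data instance illustrating the field layout declared in the
# dataset metadata above. The sentence and its annotations are invented for
# illustration and are NOT drawn from any actual UD treebank.
example = {
    "idx": "example-0001",
    "text": "Dogs bark.",
    "tokens": ["Dogs", "bark", "."],
    "lemmas": ["dog", "bark", "."],
    "upos": [0, 16, 1],                  # class-label ids: NOUN, VERB, PUNCT
    "xpos": ["NNS", "VBP", "."],
    "feats": ["Number=Plur", "Tense=Pres", "_"],
    "head": ["2", "0", "2"],             # heads are stored as strings in this schema
    "deprel": ["nsubj", "root", "punct"],
    "deps": ["_", "_", "_"],
    "misc": ["_", "_", "_"],
}

# Every token-level annotation sequence is aligned with `tokens`.
aligned_fields = ("lemmas", "upos", "xpos", "feats", "head", "deprel", "deps", "misc")
assert all(len(example[f]) == len(example["tokens"]) for f in aligned_fields)
```

Note that `head` (and the other CoNLL-U columns besides `upos`) are kept as plain strings rather than integers, matching the `sequence: string` declarations in the metadata.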
### Data Fields
[More Information Needed]
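One detail worth calling out from the metadata above: the `upos` field is encoded as integer class-label ids rather than tag strings. A minimal sketch of decoding those ids, assuming the `names` ordering shown in the YAML (which is identical across all configurations in this dump):

```python
# Id-to-tag mapping copied from the `names` list declared in the dataset
# metadata above; the ordering is the same for every configuration.
UPOS_NAMES = [
    "NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET",
    "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ", "VERB", "AUX",
]

def decode_upos(ids):
    """Turn a sequence of integer class-label ids into UPOS tag strings."""
    return [UPOS_NAMES[i] for i in ids]

print(decode_upos([10, 16, 1]))  # ['PROPN', 'VERB', 'PUNCT']
```

When loading through the `datasets` library, the same mapping is also recoverable from the feature object itself (e.g. via the `upos` feature's `int2str`), so hard-coding the list as above is only needed when working with the raw exported ids.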
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@jplu](https://github.com/jplu) for adding this dataset. | [
-0.461434930562973,
-0.3389219641685486,
0.18518134951591492,
0.3428705930709839,
-0.16032497584819794,
0.13135145604610443,
-0.19626978039741516,
-0.6820873618125916,
0.5088189244270325,
0.861587405204773,
-0.7749984860420227,
-1.1194077730178833,
-0.7278755903244019,
0.11514592170715332,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pospos12/core50 | pospos12 | 2023-05-07T05:36:50Z | 120 | 0 | null | [
"region:us"
] | 2023-05-07T05:36:50Z | 2023-05-07T05:29:13.000Z | 2023-05-07T05:29:13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': o1
'1': o10
'2': o11
'3': o12
'4': o13
'5': o14
'6': o15
'7': o16
'8': o17
'9': o18
'10': o19
'11': o2
'12': o20
'13': o21
'14': o22
'15': o23
'16': o24
'17': o25
'18': o26
'19': o27
'20': o28
'21': o29
'22': o3
'23': o30
'24': o31
'25': o32
'26': o33
'27': o34
'28': o35
'29': o36
'30': o37
'31': o38
'32': o39
'33': o4
'34': o40
'35': o41
'36': o42
'37': o43
'38': o44
'39': o45
'40': o46
'41': o47
'42': o48
'43': o49
'44': o5
'45': o50
'46': o6
'47': o7
'48': o8
'49': o9
splits:
- name: train
num_bytes: 4679767790.178506
num_examples: 131892
- name: test
num_bytes: 1167433089.5734935
num_examples: 32974
download_size: 5860983180
dataset_size: 5847200879.751999
---
# Dataset Card for "core50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.803902268409729,
-0.13022758066654205,
0.163681760430336,
0.2027997523546219,
-0.13329176604747772,
0.0013457314344123006,
0.15023241937160492,
-0.22178229689598083,
0.7353982329368591,
0.5284774899482727,
-0.8965074419975281,
-0.7651601433753967,
-0.46621251106262207,
-0.25027677416801... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zjkarina/matreshka | zjkarina | 2023-05-13T15:38:52Z | 120 | 11 | null | [
"task_categories:conversational",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | 2023-05-13T15:38:52Z | 2023-05-07T20:31:03.000Z | 2023-05-07T20:31:03 | ---
dataset_info:
features:
- name: role
sequence: string
- name: dialog
sequence: string
- name: persona
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 7320311
num_examples: 6655
- name: validation
num_bytes: 1806432
num_examples: 1664
download_size: 4092810
dataset_size: 9126743
language:
- ru
pretty_name: matreshka
size_categories:
- 1K<n<10K
task_categories:
- conversational
- summarization
- text-generation
license: cc-by-4.0
---
# Dataset Card for "matreshka"

(image generated by Kandinsky-2.1 neural network)
Russian dialogues, the persona of the first interlocutor, and a summary of the dialogue generated by GPT-3.5, starting with the first phrase given in the prompt.
The matreshka dataset is a multi-task dataset: you can use it for summarizing a dialogue or for generating a dialogue. It contains real-life dialogues and is also filled with facts about the world. The dataset was designed to give the interlocutor a human manner of communication.
After generation, some of the data was in a format that did not match the request, so we cleaned it with regular expressions. Next, we checked each line for the correct data type and converted it to the correct format where necessary.
authors' telegram channels: [@nadlskom](https://t.me/nadlskom), [@lovedeathtransformers](https://t.me/lovedeathtransformers) | [
-0.33363789319992065,
-0.46306559443473816,
0.25404807925224304,
0.07309623062610626,
-0.6915947198867798,
0.16162845492362976,
-0.036333415657281876,
-0.00449051009491086,
0.4134086072444916,
0.5969874858856201,
-1.0566022396087646,
-0.5730614066123962,
-0.5220522284507751,
-0.01899379491... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leeseeun/tokenzied_news_2gb_data | leeseeun | 2023-10-24T06:05:04Z | 120 | 0 | null | [
"region:us"
] | 2023-10-24T06:05:04Z | 2023-10-24T06:03:53.000Z | 2023-10-24T06:03:53 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2230572200
num_examples: 544042
download_size: 989285251
dataset_size: 2230572200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tokenzied_news_2gb_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.36985328793525696,
-0.4939858615398407,
0.11100784689188004,
0.37303322553634644,
-0.5159765481948853,
-0.05944215506315231,
0.18345394730567932,
-0.2282593548297882,
1.0052276849746704,
0.5153529047966003,
-0.7161084413528442,
-0.7222430109977722,
-0.6356489062309265,
-0.56242823600769... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm-updates-llama2-7b | coastalcph | 2023-11-21T16:57:59Z | 120 | 0 | null | [
"region:us"
] | 2023-11-21T16:57:59Z | 2023-11-13T11:08:09.000Z | 2023-11-13T11:08:09 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: query
struct:
- name: label
dtype: string
- name: objects
list:
- name: aliases
sequence: string
- name: label
dtype: string
- name: qid
dtype: string
- name: qid
dtype: string
- name: rel_id
dtype: string
- name: relation
dtype: string
- name: prediction
struct:
- name: predictions
list:
- name: answer
dtype: string
- name: first_token_probability
dtype: float64
- name: per_token_probability
sequence: float64
- name: perplexity
dtype: float64
- name: query
dtype: string
- name: f1
dtype: float64
- name: relation
dtype: string
- name: type
dtype: string
- name: original_answer
dtype: string
- name: updates
sequence: string
splits:
- name: test
num_bytes: 442069.6082474227
num_examples: 492
- name: validation
num_bytes: 48519.83505154639
num_examples: 54
download_size: 385039
dataset_size: 490589.44329896907
---
# Dataset Card for "fm-updates-llama2-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.34190499782562256,
-0.12133333832025528,
0.3644507825374603,
0.6858110427856445,
-0.48471900820732117,
0.08880866318941116,
0.35918840765953064,
-0.36136624217033386,
0.7049558758735657,
0.4441378116607666,
-0.9241505861282349,
-0.7504207491874695,
-0.7290946841239929,
-0.08274052292108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yaxin/SemEval2016Task5Raw | Yaxin | 2022-08-15T08:19:35Z | 119 | 2 | null | [
"region:us"
] | 2022-08-15T08:19:35Z | 2022-04-20T14:39:38.000Z | 2022-04-20T14:39:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lccc | null | 2022-11-18T22:07:56Z | 119 | 14 | lccc | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:zh",
"license:mit",
"arxiv:2008.03946",
"region:us"
] | 2022-11-18T22:07:56Z | 2022-06-14T18:05:32.000Z | 2022-06-14T18:05:32 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: lccc
pretty_name: 'LCCC: Large-scale Cleaned Chinese Conversation corpus'
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
dataset_info:
- config_name: large
features:
- name: dialog
list: string
splits:
- name: train
num_bytes: 1530827965
num_examples: 12007759
download_size: 607605643
dataset_size: 1530827965
- config_name: base
features:
- name: dialog
list: string
splits:
- name: train
num_bytes: 932634902
num_examples: 6820506
- name: test
num_bytes: 1498216
num_examples: 10000
- name: validation
num_bytes: 2922731
num_examples: 20000
download_size: 371475095
dataset_size: 937055849
---
# Dataset Card for LCCC
## Table of Contents
- [Dataset Card for LCCC](#dataset-card-for-lccc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/thu-coai/CDial-GPT
- **Paper:** https://arxiv.org/abs/2008.03946
### Dataset Summary
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data cleaning pipeline was designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noise such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations is filtered out.
LCCC是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。
### Supported Tasks and Leaderboards
- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker model that can be used to implement a retrieval-based dialogue model.
### Languages
LCCC is in Chinese
LCCC中的对话是中文的
## Dataset Structure
### Data Instances
```json
{
"dialog": ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
}
```
### Data Fields
- `dialog` (list of strings): the list of utterances constituting a dialogue.
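For the dialogue-generation task, each `dialog` list can be unrolled into (context, response) training pairs. A minimal sketch — not the official preprocessing; the sample dialog is the instance shown in Data Instances above:

```python
def dialog_to_pairs(dialog):
    """Turn a list of utterances into (context, response) pairs,
    where each utterance is treated as a response to all utterances
    before it."""
    pairs = []
    for i in range(1, len(dialog)):
        pairs.append(("\n".join(dialog[:i]), dialog[i]))
    return pairs

dialog = [
    "火锅 我 在 重庆 成都 吃 了 七八 顿 火锅",
    "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !",
    "不会 的 就是 好 油腻",
]
pairs = dialog_to_pairs(dialog)  # 2 pairs from a 3-turn dialog
```

The same unrolling also yields (context, positive response) pairs for training a retrieval-based reranker.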
### Data Splits
We do not provide an official split for LCCC-large,
but we do provide one for LCCC-base:
|train|valid|test|
|---:|---:|---:|
|6,820,506 | 20,000 | 10,000|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT License
Copyright (c) 2020 lemon234071
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Citation Information
```bibtex
@inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
}
```
### Contributions
Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset. | [
-0.44474127888679504,
-0.7051114439964294,
0.1477804034948349,
0.16282500326633453,
-0.2598066031932831,
0.08428255468606949,
-0.4287908971309662,
-0.3089574873447418,
0.27018555998802185,
0.6879869699478149,
-0.7164800763130188,
-0.9362083077430725,
-0.3799017369747162,
0.0706883668899536... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bingsu/zeroth-korean | Bingsu | 2022-08-15T10:30:30Z | 119 | 10 | null | [
"task_categories:automatic-speech-recognition",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|kresnik/zeroth_korean",
"language:ko",
"license:cc-by-4.0",
"region:us"
] | 2022-08-15T10:30:30Z | 2022-08-14T08:50:33.000Z | 2022-08-14T08:50:33 | ---
language:
- ko
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: zeroth-korean
source_datasets:
- extended|kresnik/zeroth_korean
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
---
# Zeroth-Korean
## Dataset Description
- **Homepage:** [OpenSLR](https://www.openslr.org/40/)
- **Repository:** [goodatlas/zeroth](https://github.com/goodatlas/zeroth)
- **Download Size** 2.68 GiB
- **Generated Size** 2.85 GiB
- **Total Size** 5.52 GiB
## Zeroth-Korean
The dataset contains transcribed Korean audio: 51.6 hours for training (22,263 utterances, 105 speakers, 3,000 sentences) and 1.2 hours for testing (457 utterances, 10 speakers). The corpus also contains a pre-trained/designed language model, a lexicon, and a morpheme-based segmenter (Morfessor).
The Zeroth project introduces a free Korean speech corpus and aims to make Korean speech recognition more broadly accessible to everyone.
This project was developed in collaboration between Lucas Jo(@Atlas Guide Inc.) and Wonkyum Lee(@Gridspace Inc.).
Contact: Lucas Jo(lucasjo@goodatlas.com), Wonkyum Lee(wonkyum@gridspace.com)
### License
CC BY 4.0
## Dataset Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/zeroth-korean")
>>> dataset
DatasetDict({
train: Dataset({
features: ['audio', 'text'],
num_rows: 22263
})
test: Dataset({
features: ['text', 'audio'],
num_rows: 457
})
})
```
### Data Size
download: 2.68 GiB<br>
generated: 2.85 GiB<br>
total: 5.52 GiB
### Data Fields
- audio: `audio`, sampling rate = 16000
- A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- text: `string`
```pycon
>>> dataset["train"][0]
{'audio': {'path': None,
'array': array([-3.0517578e-05, 0.0000000e+00, -3.0517578e-05, ...,
0.0000000e+00, 0.0000000e+00, -6.1035156e-05], dtype=float32),
'sampling_rate': 16000},
'text': '인사를 결정하는 과정에서 당 지도부가 우 원내대표 및 원내지도부와 충분한 상의를 거치지 않은 채 일방적으로 인사를 했다는 불만도 원내지도부를 중심으로 흘러나왔다'}
```
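As a quick sketch of working with the decoded `audio` dict (the sample below is synthetic silence with the same structure as the instance above, not real corpus audio), the clip duration follows from the array length and the sampling rate:

```python
# Synthetic sample mimicking the structure shown above.
sample = {
    "audio": {
        "path": None,
        "array": [0.0] * 32000,   # 32,000 samples of silence
        "sampling_rate": 16000,
    },
    "text": "예시 문장",  # placeholder transcript
}

# Duration in seconds = number of samples / sampling rate.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
# → 2.0
```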
### Data Splits
| | train | test |
| ---------- | -------- | ----- |
| # of data | 22263 | 457 |
| [
-0.4957588016986847,
-0.43070971965789795,
0.24920138716697693,
0.3647570312023163,
-0.4675835371017456,
-0.07679662853479385,
-0.2831723392009735,
-0.16357791423797607,
0.5098061561584473,
0.31279465556144714,
-0.514872670173645,
-0.8751562237739563,
-0.4489762485027313,
0.113836847245693... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-800000-850000 | tomekkorbak | 2022-10-04T22:47:07Z | 119 | 0 | null | [
"region:us"
] | 2022-10-04T22:47:07Z | 2022-10-04T17:47:49.000Z | 2022-10-04T17:47:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EdwardLin2023/MELD-Audio | EdwardLin2023 | 2023-04-24T04:04:52Z | 119 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-24T04:04:52Z | 2023-04-21T02:47:11.000Z | 2023-04-21T02:47:11 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OpenGVLab/InternVid | OpenGVLab | 2023-11-28T12:19:24Z | 119 | 22 | null | [
"task_categories:feature-extraction",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2307.06942",
"region:us"
] | 2023-11-28T12:19:24Z | 2023-07-14T07:24:39.000Z | 2023-07-14T07:24:39 | ---
license: cc-by-nc-sa-4.0
task_categories:
- feature-extraction
language:
- en
size_categories:
- 10M<n<100M
extra_gated_prompt: "You agree to not use the data to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Name: text
Company/Organization: text
E-Mail: text
---
# InternVid
## Dataset Description
- **Homepage:** [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid)
- **Repository:** [OpenGVLab](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid)
- **Paper:** [2307.06942](https://arxiv.org/pdf/2307.06942.pdf)
- **Point of Contact:** mailto:[InternVideo](gvx-sh@pjlab.org.cn)
## InternVid-10M-FLT
We present InternVid-10M-FLT, a subset of this dataset consisting of 10 million clips from publicly available web videos, each paired with a high-quality generated caption.
## Download
The 10M samples are provided in a jsonlines file. Columns include the video ID, timestamps, the generated caption, and its UMT similarity score.
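A hedged sketch of parsing such a jsonlines file locally — the field names used below (`YoutubeID`, `Start_timestamp`, `End_timestamp`, `Caption`, `UMT_Score`) are illustrative assumptions, not confirmed keys, so inspect one line of the real file first:

```python
import json

# Two made-up records mimicking the described columns.
lines = [
    '{"YoutubeID": "abc123", "Start_timestamp": "00:00:01.000", '
    '"End_timestamp": "00:00:05.000", "Caption": "a person cooking", '
    '"UMT_Score": 0.41}',
    '{"YoutubeID": "def456", "Start_timestamp": "00:00:10.000", '
    '"End_timestamp": "00:00:14.000", "Caption": "a dog running", '
    '"UMT_Score": 0.18}',
]
records = [json.loads(line) for line in lines]

# Keep only clips whose caption-video similarity passes a threshold.
high_sim = [r for r in records if r["UMT_Score"] >= 0.3]  # 1 record kept
```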
## How to Use
```
from datasets import load_dataset
dataset = load_dataset("OpenGVLab/InternVid")
```
## Method

## Citation
If you find this work useful for your research, please consider citing InternVid. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2023internvid,
title={InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation},
author={Wang, Yi and He, Yinan and Li, Yizhuo and Li, Kunchang and Yu, Jiashuo and Ma, Xin and Chen, Xinyuan and Wang, Yaohui and Luo, Ping and Liu, Ziwei and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2307.06942},
year={2023}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
```
| [
-0.5177204608917236,
-0.7515518665313721,
-0.014647592790424824,
0.19996564090251923,
-0.30729907751083374,
-0.2594218850135803,
-0.4599100947380066,
0.13348820805549622,
-0.19901014864444733,
0.0016743880696594715,
-0.5116037726402283,
-0.6640249490737915,
-0.5418635606765747,
0.173496797... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shibing624/sharegpt_gpt4 | shibing624 | 2023-08-07T14:27:34Z | 119 | 37 | LLM | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:https://huggingface.co/datasets/openchat/openchat_sharegpt4... | 2023-08-07T14:27:34Z | 2023-07-27T05:45:49.000Z | 2023-07-27T05:45:49 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
- en
- gl
- ko
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/tree/main
task_categories:
- text-classification
- text-generation
task_ids:
- text-scoring
paperswithcode_id: LLM
pretty_name: GPT4 sharegpt
---
# Dataset Card
## Dataset Description
- **Repository:** [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
- **Dataset:** [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- **Size of downloaded dataset files:** 772 M
- **Total amount of disk used:** 772 M
### Dataset Summary
Multi-turn GPT-4 question-answering data selected from ShareGPT; multilingual Q&A.
### Languages
The dataset is multilingual, covering common languages such as Chinese, English, Japanese, and others.
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- `conversations`: a list of messages; each message is a dict with a `from` field (`human` or `gpt`) and a `value` field (the utterance text), as in the example below.
```shell
head -n 1 sharegpt_gpt4.jsonl
{"conversations":[
{'from': 'human',
'value': '採用優雅現代中文,用中文繁體字型,回答以下問題。為所有標題或專用字詞提供對應的英語翻譯:Using scholarly style, summarize in detail James Barr\'s book "Semantics of Biblical Language". Provide examples supplied in the book and give corresponding page numbers wherever applicable.'},
{'from': 'gpt',
'value': '《聖經語言的語義學》(Semantics of Biblical Language)是詹姆斯·巴爾(James Barr)於1961年出版的一本研究聖經語言的專著。在這部作品中,巴爾對聖經希伯來語和希臘語的語言學特徵進行了深入的探討,他批判了當時盛行的聖經研究方法,主張應該將語言學原理應用於聖經文本的詮釋,以便更準確地理解聖經的意思。\n\n巴爾在《聖經語言的語義學》中提出了幾個重要的觀點。首先,他主張語言學和語言學家在聖經研究中的角色應該受到重視。他認為,應該避免將聖經語言學和其他古典語言學區分開來,因為聖經語言和其他古代語言一樣,都有其固有的語法、語義和詞彙特點(第18-23頁)。\n\n其次,巴爾批評了當時流行的"字根意義論"(root-meaning fallacy),即過分強調單詞的字根在詮釋聖經含義時的作用。他指出,詞義是由語境決定的,而非僅僅由字根決定。巴爾舉了希伯來語單詞 "כפר"(kāpar)為例,它在不同的語境下可以表示"遮蓋"、"贖回"或"和解",而不是僅僅依賴其字根意義(第183-184頁)。\n\n第三,巴爾強調了詞彙意義在詮釋聖經時的重要性。他認為,了解單詞的意義和用法對於恰當理解聖經文本至關重要。巴爾引用了希伯來語單詞 "ברית"(berîṯ,英文:covenant)和希臘語單詞 "διαθήκη"(diathēkē,英文:testament)為例,解釋了這兩個詞在聖經中的不'},
{'from': 'human', 'value': '繼續'},
{'from': 'gpt',
'value': '同含義和用法。巴爾認為,了解這些詞語在不同語境下的意義對於理解聖經中的盟約和遺囑概念至關重要(第50-53頁)。\n\n第四,巴爾強調語境在詮釋聖經時的重要性。他認為,要理解聖經的含義,就必須考慮到文本的歷史背景、文化環境和語言風格。此外,巴爾提倡從語言學的角度研究聖經,以便更好地理解聖經的結構和風格。他以希伯來詩篇的平行主義(Parallelism)為例,闡述了語言學原理如何有助於揭示詩篇中的意象和修辭結構(第250-252頁)。\n\n總之,《聖經語言的語義學》一書對聖經研究領域產生了深遠的影響。詹姆斯·巴爾通過對聖經語言學的深入研究,提出了許多富有洞察力的觀點,尤其是關於語言學在聖經詮釋中的重要性。這些觀點對後來的聖經研究者產生了很大的啟示,使他們在研究聖經時更加重視語言學原理,從而更加準確地理解聖經的含義。'}
]
}
```
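Each record's `conversations` list can be split back into (prompt, reply) pairs for fine-tuning. A minimal sketch — the record below is a toy stand-in, not real data:

```python
def split_turns(conversations):
    """Pair each human utterance with the gpt reply that follows it."""
    pairs = []
    for i in range(len(conversations) - 1):
        cur, nxt = conversations[i], conversations[i + 1]
        if cur["from"] == "human" and nxt["from"] == "gpt":
            pairs.append((cur["value"], nxt["value"]))
    return pairs

record = {
    "conversations": [
        {"from": "human", "value": "Summarize the book."},
        {"from": "gpt", "value": "Here is a summary..."},
        {"from": "human", "value": "繼續"},
        {"from": "gpt", "value": "Continuing the summary..."},
    ]
}
pairs = split_turns(record["conversations"])  # 2 (prompt, reply) pairs
```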
### Data Splits
```shell
> wc -l *
6206 sharegpt_gpt4.jsonl
58674 sharegpt_V3_format.jsonl
38535 sharegpt_zh_38K_format.jsonl
103415 total
```
#### Who are the annotators?
The original authors.
### Licensing Information
Same as ShareGPT.
### Contributions
[shibing624](https://github.com/shibing624) added this dataset. | [
-0.6313626170158386,
-0.7936276793479919,
0.3095603585243225,
0.37756696343421936,
-0.6669445633888245,
-0.35222896933555603,
-0.3253909647464752,
-0.31836384534835815,
0.494589239358902,
0.4504951536655426,
-0.3840998709201813,
-0.7300581336021423,
-0.8421399593353271,
0.2325824350118637,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
godoyj/wikilingua | godoyj | 2023-09-08T17:36:48Z | 119 | 0 | null | [
"task_categories:summarization",
"language:pt",
"region:us"
] | 2023-09-08T17:36:48Z | 2023-09-07T17:09:14.000Z | 2023-09-07T17:09:14 | ---
language:
- pt
task_categories:
- summarization
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vlsp-2023-vllm/grade_12_exams | vlsp-2023-vllm | 2023-09-30T08:28:29Z | 119 | 0 | null | [
"region:us"
] | 2023-09-30T08:28:29Z | 2023-09-10T19:54:48.000Z | 2023-09-10T19:54:48 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: metadata
struct:
- name: grade
dtype: int64
- name: language
dtype: string
- name: subject
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 921887
num_examples: 1955
- name: validation
num_bytes: 224168
num_examples: 488
download_size: 461705
dataset_size: 1146055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "grade_12_exams"
Reference: https://huggingface.co/datasets/exams | [
-0.3206786513328552,
-0.24431821703910828,
-0.05865519866347313,
0.6245863437652588,
-0.2162141054868698,
-0.17277580499649048,
0.3821353316307068,
-0.018394434824585915,
0.392557293176651,
0.33071058988571167,
-0.6336362361907959,
-0.9971066117286682,
-0.3243061602115631,
-0.2149658799171... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/ai-arxiv | jamescalam | 2023-10-10T12:57:37Z | 119 | 10 | null | [
"region:us"
] | 2023-10-10T12:57:37Z | 2023-10-09T21:07:32.000Z | 2023-10-09T21:07:32 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/WildChat | allenai | 2023-11-15T16:16:44Z | 119 | 19 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"not-for-all-audiences",
"instruction-finetuning",
"region:us"
] | 2023-11-15T16:16:44Z | 2023-10-27T23:53:36.000Z | 2023-10-27T23:53:36 | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: timestamp
dtype: timestamp[s, tz=UTC]
- name: conversation
list:
- name: content
dtype: string
- name: language
dtype: string
- name: redacted
dtype: bool
- name: role
dtype: string
- name: toxic
dtype: bool
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: detoxify_moderation
list:
- name: identity_attack
dtype: float32
- name: insult
dtype: float32
- name: obscene
dtype: float32
- name: severe_toxicity
dtype: float32
- name: sexual_explicit
dtype: float32
- name: threat
dtype: float32
- name: toxicity
dtype: float32
- name: toxic
dtype: bool
- name: redacted
dtype: bool
splits:
- name: train
num_bytes: 3900538458
num_examples: 652139
download_size: 2102684185
dataset_size: 3900538458
pretty_name: WildChat
extra_gated_prompt: >-
Access to this dataset is automatically granted upon accepting the [**AI2
ImpACT License - Low Risk Artifacts (“LR
Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all
fields below.
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I AGREE to the terms and conditions of the LR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
tags:
- not-for-all-audiences
- instruction-finetuning
size_categories:
- 100K<n<1M
task_categories:
- conversational
- text-generation
- question-answering
---
# Dataset Card for WildChat
## Dataset Description
- **Paper:** https://wenting-zhao.github.io/papers/wildchat.pdf
- **License:** https://allenai.org/licenses/impact-lr
- **Language(s) (NLP):** multi-lingual
- **Point of Contact:** [Yuntian Deng](mailto:yuntiand@allenai.org)
### Dataset Summary
WildChat is a collection of 650K conversations between human users and ChatGPT. We collected WildChat by offering online users free access to OpenAI's GPT-3.5 and GPT-4. The dataset contains a broad spectrum of user-chatbot interactions not previously covered by other instruction fine-tuning datasets: for example, interactions include ambiguous user requests, code-switching, topic-switching, political discussions, etc. WildChat can serve both as a dataset for instruction fine-tuning and as a valuable resource for studying user behaviors. Note that this dataset contains toxic user inputs/ChatGPT responses. A nontoxic subset of this dataset can be found [here](https://huggingface.co/datasets/allenai/WildChat-nontoxic).
WildChat has been openly released under AI2's ImpACT license as a low-risk artifact. The use of WildChat to cause harm is strictly prohibited.
### Languages
66 languages were detected in WildChat.
### Personal and Sensitive Information
The data has been de-identified with Microsoft Presidio and hand-written rules by the authors.
### Data Fields
- `conversation_id` (string): Each conversation has a unique id.
- `model` (string): The underlying OpenAI model, such as gpt-3.5-turbo or gpt-4.
- `timestamp` (timestamp): The timestamp of the last turn in the conversation in UTC.
- `conversation` (list): A list of user/assistant utterances. Each utterance is a dictionary containing the `role` of the speaker (user or assistant), the `content` of the utterance, the detected `language` of the utterance, whether the content of the utterance is considered `toxic`, and whether PII has been detected and anonymized (`redacted`).
- `turn` (int): The number of turns in the conversation. A turn refers to one round of user-assistant interaction.
- `language` (string): The language of the conversation. Note that this is the most frequently detected language in the utterances of the conversation.
- `openai_moderation` (list): A list of OpenAI Moderation results. Each element in the list corresponds to one utterance in the conversation.
- `detoxify_moderation` (list): A list of Detoxify results. Each element in the list corresponds to one utterance in the conversation.
- `toxic` (bool): Whether this conversation contains any utterances considered to be toxic by either OpenAI Moderation or Detoxify.
- `redacted` (bool): Whether this conversation contains any utterances in which PII is detected and anonymized.
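As a sketch of filtering on these fields (the records below are hypothetical stand-ins, not actual WildChat data), one might keep only non-toxic, non-redacted conversations and extract the user turns:

```python
records = [
    {"conversation_id": "a1", "model": "gpt-4", "language": "English",
     "toxic": False, "redacted": False, "turn": 1,
     "conversation": [
         {"role": "user", "content": "Hi there", "language": "English",
          "toxic": False, "redacted": False},
         {"role": "assistant", "content": "Hello!", "language": "English",
          "toxic": False, "redacted": False},
     ]},
    {"conversation_id": "b2", "model": "gpt-3.5-turbo", "language": "English",
     "toxic": True, "redacted": False, "turn": 1,
     "conversation": []},
]

# Keep conversations flagged neither toxic nor redacted.
clean = [r for r in records if not r["toxic"] and not r["redacted"]]

# Pull out the user-side utterances from the surviving conversations.
user_turns = [u["content"]
              for r in clean
              for u in r["conversation"] if u["role"] == "user"]
```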
### Licensing Information
WildChat is made available under the [**AI2
ImpACT License - Low Risk Artifacts ("LR
Agreement")**](https://allenai.org/licenses/impact-lr)
### Citation Information
Please cite [our paper](https://wenting-zhao.github.io/papers/wildchat.pdf) when using this dataset:
```
@misc{zhao2023wildchat,
title={(InThe)WildChat: 650K ChatGPT Interaction Logs in the Wild},
  author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng},
year={2023},
eprint={},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.09981856495141983,
-0.9177632927894592,
-0.014598449692130089,
0.2977793514728546,
-0.3220016360282898,
-0.19550421833992004,
-0.3582015931606293,
-0.6653554439544678,
0.36811643838882446,
0.47156718373298645,
-0.7227122783660889,
-0.4933764934539795,
-0.3370210528373718,
-0.08980520814... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sebastiandizon/spotify-million-song | sebastiandizon | 2023-11-02T17:41:50Z | 119 | 0 | null | [
"region:us"
] | 2023-11-02T17:41:50Z | 2023-11-02T17:19:49.000Z | 2023-11-02T17:19:49 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
A dataset containing song titles, artist names, links to the songs, and lyrics.
## Dataset Details
Dataset retrieved from https://www.kaggle.com/datasets/notshrirang/spotify-million-song-dataset
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This is the Spotify Million Song Dataset. It contains song names, artist names, links to the songs, and lyrics. It can be used for recommending, classifying, or clustering songs.
- **Curated by:** Shrirang Mahajan
- **Language(s) (NLP):** English
- **License:** CC0: Public Domain
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://www.kaggle.com/datasets/notshrirang/spotify-million-song-dataset
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
dataset_info:
features:
- name: artist
dtype: string
- name: song
dtype: string
- name: link
dtype: string
- name: text # song lyrics
dtype: string
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Dataset Card Contact
[More Information Needed] | [
-0.5353251695632935,
-0.4669644832611084,
0.1680678427219391,
0.30003875494003296,
-0.17152920365333557,
0.019148288294672966,
-0.15652920305728912,
-0.5142108798027039,
0.5449243187904358,
0.9470329880714417,
-0.9678361415863037,
-0.9364017844200134,
-0.5456644892692566,
-0.00083105324301... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shariqfarooq/lcn_box | shariqfarooq | 2023-11-16T12:07:04Z | 119 | 0 | null | [
"region:us"
] | 2023-11-16T12:07:04Z | 2023-11-16T10:43:29.000Z | 2023-11-16T10:43:29 | ---
dataset_info:
features:
- name: caption
dtype: string
- name: condition
dtype: image
- name: controlnet
dtype: image
- name: ours
dtype: image
- name: idd
dtype: string
splits:
- name: train
num_bytes: 18748870.0
num_examples: 21
download_size: 18762411
dataset_size: 18748870.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lcn_box"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7174791693687439,
-0.45717310905456543,
0.08565951883792877,
0.13612188398838043,
-0.3154834806919098,
0.03681120648980141,
0.33922144770622253,
-0.08780435472726822,
0.7393777370452881,
0.8116378784179688,
-0.9855732917785645,
-0.7973358035087585,
-0.3568774461746216,
-0.09465099126100... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tungkho178/NLLB_translations_Vietnamese_51.8K | tungkho178 | 2023-11-21T19:14:26Z | 119 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-21T19:14:26Z | 2023-11-21T19:14:00.000Z | 2023-11-21T19:14:00 | ---
license: apache-2.0
---
| [
-0.1285337507724762,
-0.18616773188114166,
0.6529127359390259,
0.4943627715110779,
-0.193193256855011,
0.23607444763183594,
0.36071985960006714,
0.050563156604766846,
0.5793652534484863,
0.7400138974189758,
-0.6508103013038635,
-0.23783966898918152,
-0.7102247476577759,
-0.0478259548544883... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Splend1dchan/slue-text | Splend1dchan | 2022-04-13T10:04:41Z | 118 | 0 | null | [
"region:us"
] | 2022-04-13T10:04:41Z | 2022-04-13T03:39:05.000Z | 2022-04-13T03:39:05 | Entry not found | [
-0.32276496291160583,
-0.22568435966968536,
0.8622260093688965,
0.43461480736732483,
-0.5282987952232361,
0.7012965083122253,
0.7915714979171753,
0.07618625462055206,
0.7746025323867798,
0.25632181763648987,
-0.7852815389633179,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
statworx/haiku | statworx | 2022-07-02T13:25:45Z | 118 | 2 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2022-07-02T13:25:45Z | 2022-05-19T09:40:41.000Z | 2022-05-19T09:40:41 | ---
annotations_creators: []
language_creators: []
language:
- en
license: []
multilinguality:
- monolingual
pretty_name: Haiku
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Haiku Data
| [
-0.0443510040640831,
0.0644911453127861,
-0.06194019317626953,
0.16966097056865692,
-0.7859717011451721,
-0.008119925856590271,
-0.18467028439044952,
-0.1032465472817421,
0.40365323424339294,
0.4216769337654114,
-0.29175645112991333,
-0.9996718168258667,
-0.46317505836486816,
-0.3157483935... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-900000-950000 | tomekkorbak | 2022-10-04T23:47:24Z | 118 | 0 | null | [
"region:us"
] | 2022-10-04T23:47:24Z | 2022-10-04T17:53:32.000Z | 2022-10-04T17:53:32 | Entry not found | [
-0.32276496291160583,
-0.22568435966968536,
0.8622260093688965,
0.43461480736732483,
-0.5282987952232361,
0.7012965083122253,
0.7915714979171753,
0.07618625462055206,
0.7746025323867798,
0.25632181763648987,
-0.7852815389633179,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-850000-900000 | tomekkorbak | 2022-10-04T23:55:21Z | 118 | 0 | null | [
"region:us"
] | 2022-10-04T23:55:21Z | 2022-10-04T17:55:29.000Z | 2022-10-04T17:55:29 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/StanfordCars_test | Multimodal-Fatima | 2023-06-12T02:33:45Z | 118 | 0 | null | [
"region:us"
] | 2023-06-12T02:33:45Z | 2023-01-28T02:30:24.000Z | 2023-01-28T02:30:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': am general hummer suv 2000
'1': acura rl sedan 2012
'2': acura tl sedan 2012
'3': acura tl type-s 2008
'4': acura tsx sedan 2012
'5': acura integra type r 2001
'6': acura zdx hatchback 2012
'7': aston martin v8 vantage convertible 2012
'8': aston martin v8 vantage coupe 2012
'9': aston martin virage convertible 2012
'10': aston martin virage coupe 2012
'11': audi rs 4 convertible 2008
'12': audi a5 coupe 2012
'13': audi tts coupe 2012
'14': audi r8 coupe 2012
'15': audi v8 sedan 1994
'16': audi 100 sedan 1994
'17': audi 100 wagon 1994
'18': audi tt hatchback 2011
'19': audi s6 sedan 2011
'20': audi s5 convertible 2012
'21': audi s5 coupe 2012
'22': audi s4 sedan 2012
'23': audi s4 sedan 2007
'24': audi tt rs coupe 2012
'25': bmw activehybrid 5 sedan 2012
'26': bmw 1 series convertible 2012
'27': bmw 1 series coupe 2012
'28': bmw 3 series sedan 2012
'29': bmw 3 series wagon 2012
'30': bmw 6 series convertible 2007
'31': bmw x5 suv 2007
'32': bmw x6 suv 2012
'33': bmw m3 coupe 2012
'34': bmw m5 sedan 2010
'35': bmw m6 convertible 2010
'36': bmw x3 suv 2012
'37': bmw z4 convertible 2012
'38': bentley continental supersports conv. convertible 2012
'39': bentley arnage sedan 2009
'40': bentley mulsanne sedan 2011
'41': bentley continental gt coupe 2012
'42': bentley continental gt coupe 2007
'43': bentley continental flying spur sedan 2007
'44': bugatti veyron 16.4 convertible 2009
'45': bugatti veyron 16.4 coupe 2009
'46': buick regal gs 2012
'47': buick rainier suv 2007
'48': buick verano sedan 2012
'49': buick enclave suv 2012
'50': cadillac cts-v sedan 2012
'51': cadillac srx suv 2012
'52': cadillac escalade ext crew cab 2007
'53': chevrolet silverado 1500 hybrid crew cab 2012
'54': chevrolet corvette convertible 2012
'55': chevrolet corvette zr1 2012
'56': chevrolet corvette ron fellows edition z06 2007
'57': chevrolet traverse suv 2012
'58': chevrolet camaro convertible 2012
'59': chevrolet hhr ss 2010
'60': chevrolet impala sedan 2007
'61': chevrolet tahoe hybrid suv 2012
'62': chevrolet sonic sedan 2012
'63': chevrolet express cargo van 2007
'64': chevrolet avalanche crew cab 2012
'65': chevrolet cobalt ss 2010
'66': chevrolet malibu hybrid sedan 2010
'67': chevrolet trailblazer ss 2009
'68': chevrolet silverado 2500hd regular cab 2012
'69': chevrolet silverado 1500 classic extended cab 2007
'70': chevrolet express van 2007
'71': chevrolet monte carlo coupe 2007
'72': chevrolet malibu sedan 2007
'73': chevrolet silverado 1500 extended cab 2012
'74': chevrolet silverado 1500 regular cab 2012
'75': chrysler aspen suv 2009
'76': chrysler sebring convertible 2010
'77': chrysler town and country minivan 2012
'78': chrysler 300 srt-8 2010
'79': chrysler crossfire convertible 2008
'80': chrysler pt cruiser convertible 2008
'81': daewoo nubira wagon 2002
'82': dodge caliber wagon 2012
'83': dodge caliber wagon 2007
'84': dodge caravan minivan 1997
'85': dodge ram pickup 3500 crew cab 2010
'86': dodge ram pickup 3500 quad cab 2009
'87': dodge sprinter cargo van 2009
'88': dodge journey suv 2012
'89': dodge dakota crew cab 2010
'90': dodge dakota club cab 2007
'91': dodge magnum wagon 2008
'92': dodge challenger srt8 2011
'93': dodge durango suv 2012
'94': dodge durango suv 2007
'95': dodge charger sedan 2012
'96': dodge charger srt-8 2009
'97': eagle talon hatchback 1998
'98': fiat 500 abarth 2012
'99': fiat 500 convertible 2012
'100': ferrari ff coupe 2012
'101': ferrari california convertible 2012
'102': ferrari 458 italia convertible 2012
'103': ferrari 458 italia coupe 2012
'104': fisker karma sedan 2012
'105': ford f-450 super duty crew cab 2012
'106': ford mustang convertible 2007
'107': ford freestar minivan 2007
'108': ford expedition el suv 2009
'109': ford edge suv 2012
'110': ford ranger supercab 2011
'111': ford gt coupe 2006
'112': ford f-150 regular cab 2012
'113': ford f-150 regular cab 2007
'114': ford focus sedan 2007
'115': ford e-series wagon van 2012
'116': ford fiesta sedan 2012
'117': gmc terrain suv 2012
'118': gmc savana van 2012
'119': gmc yukon hybrid suv 2012
'120': gmc acadia suv 2012
'121': gmc canyon extended cab 2012
'122': geo metro convertible 1993
'123': hummer h3t crew cab 2010
'124': hummer h2 sut crew cab 2009
'125': honda odyssey minivan 2012
'126': honda odyssey minivan 2007
'127': honda accord coupe 2012
'128': honda accord sedan 2012
'129': hyundai veloster hatchback 2012
'130': hyundai santa fe suv 2012
'131': hyundai tucson suv 2012
'132': hyundai veracruz suv 2012
'133': hyundai sonata hybrid sedan 2012
'134': hyundai elantra sedan 2007
'135': hyundai accent sedan 2012
'136': hyundai genesis sedan 2012
'137': hyundai sonata sedan 2012
'138': hyundai elantra touring hatchback 2012
'139': hyundai azera sedan 2012
'140': infiniti g coupe ipl 2012
'141': infiniti qx56 suv 2011
'142': isuzu ascender suv 2008
'143': jaguar xk xkr 2012
'144': jeep patriot suv 2012
'145': jeep wrangler suv 2012
'146': jeep liberty suv 2012
'147': jeep grand cherokee suv 2012
'148': jeep compass suv 2012
'149': lamborghini reventon coupe 2008
'150': lamborghini aventador coupe 2012
'151': lamborghini gallardo lp 570-4 superleggera 2012
'152': lamborghini diablo coupe 2001
'153': land rover range rover suv 2012
'154': land rover lr2 suv 2012
'155': lincoln town car sedan 2011
'156': mini cooper roadster convertible 2012
'157': maybach landaulet convertible 2012
'158': mazda tribute suv 2011
'159': mclaren mp4-12c coupe 2012
'160': mercedes-benz 300-class convertible 1993
'161': mercedes-benz c-class sedan 2012
'162': mercedes-benz sl-class coupe 2009
'163': mercedes-benz e-class sedan 2012
'164': mercedes-benz s-class sedan 2012
'165': mercedes-benz sprinter van 2012
'166': mitsubishi lancer sedan 2012
'167': nissan leaf hatchback 2012
'168': nissan nv passenger van 2012
'169': nissan juke hatchback 2012
'170': nissan 240sx coupe 1998
'171': plymouth neon coupe 1999
'172': porsche panamera sedan 2012
'173': ram c/v cargo van minivan 2012
'174': rolls-royce phantom drophead coupe convertible 2012
'175': rolls-royce ghost sedan 2012
'176': rolls-royce phantom sedan 2012
'177': scion xd hatchback 2012
'178': spyker c8 convertible 2009
'179': spyker c8 coupe 2009
'180': suzuki aerio sedan 2007
'181': suzuki kizashi sedan 2012
'182': suzuki sx4 hatchback 2012
'183': suzuki sx4 sedan 2012
'184': tesla model s sedan 2012
'185': toyota sequoia suv 2012
'186': toyota camry sedan 2012
'187': toyota corolla sedan 2012
'188': toyota 4runner suv 2012
'189': volkswagen golf hatchback 2012
'190': volkswagen golf hatchback 1991
'191': volkswagen beetle hatchback 2012
'192': volvo c30 hatchback 2012
'193': volvo 240 sedan 1993
'194': volvo xc90 suv 2007
'195': smart fortwo convertible 2012
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_stanfordcars
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 1016320238.0
num_examples: 8041
download_size: 989991348
dataset_size: 1016320238.0
---
# Dataset Card for "StanfordCars_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6414772272109985,
-0.33927708864212036,
0.21383070945739746,
0.4039938747882843,
-0.10911750793457031,
-0.11057016253471375,
0.14595885574817657,
-0.22216984629631042,
0.4364734888076782,
0.285363107919693,
-0.8956985473632812,
-0.6781929135322571,
-0.20239776372909546,
-0.3705653846263... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
musabg/wikipedia-tr | musabg | 2023-05-16T20:32:53Z | 118 | 5 | null | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
"license:cc-by-sa-3.0",
"license:gfdl",
"wikipedia,... | 2023-05-16T20:32:53Z | 2023-02-24T03:02:31.000Z | 2023-02-24T03:02:31 | ---
annotations_creators:
- no-annotation
language:
- tr
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
- gfdl
multilinguality: []
pretty_name: Turkish Wikipedia 2023
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- wikipedia
- wiki
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 956353353
num_examples: 520542
download_size: 529875169
dataset_size: 956353353
---
# 📖 Türkçe Vikipedi Mayıs 2023
Bu veri kümesi, Türkçe Vikipedi'den alınan makalelerin bir derlemesi olup, maskeleme dil modelleme ve metin oluşturma görevleri için tasarlanmıştır.
## 🗣️ Etiketlemeler
Bu veri kümesindeki makaleler, özellikle belirli bir görev için etiketlenmemiş olup, veri kümesi etiketsizdir.
## 🌐 Dil
Bu veri kümesi Türkçe yazılmış olup, gönüllülerden oluşan bir ekip tarafından topluluk katılımı yöntemleri ile oluşturulmuştur.
## 📜 Lisans
CC-BY-SA 3.0 ve GFDL
## 💻 Kaynak Veri Kümeleri
Bu veri kümesi, Türkçe Vikipedi'den oluşturulan orijinal bir veri kümesidir.
Türkçe Vikipedi veri kümesini kullandığınız için teşekkürler! Dil modelleme ve metin oluşturma görevleriniz için faydalı olmasını umuyoruz.
---
# 📖 Wikipedia Turkish 2023
This dataset is a collection of articles from the Turkish Wikipedia and is designed to be used for masked language modeling and text generation tasks.
## 📚 Dataset Info
Processed and cleaned using the Hugging Face Wikipedia cleaner.
## 🗣️ Annotations
The articles in this dataset were not specifically annotated for any particular task, meaning that the dataset is unlabeled.
## 🌐 Language
This dataset is written in Turkish and was created using crowdsourcing methods by a team of volunteers.
## 📜 License
CC-BY-SA 3.0 and GFDL
## 💻 Source Datasets
This dataset is an original dataset created from the Turkish Wikipedia.
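Since the card positions this corpus for masked language modelling, here is a minimal, illustrative sketch of the random-masking step (the `[MASK]` token and 15% rate are assumptions for illustration, not properties of this dataset):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", rate=0.15, seed=0):
    """Randomly replace roughly `rate` of the tokens with a mask token."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < rate else tok for tok in tokens]

# Illustrative Turkish sentence split into whitespace tokens.
sentence = "Bu veri kümesi Türkçe Vikipedi'den alınmıştır".split()
masked = mask_tokens(sentence)
print(masked)
```

Real MLM pipelines (e.g. `DataCollatorForLanguageModeling` in `transformers`) additionally substitute random tokens and leave some selected tokens unchanged; this sketch only shows the masking decision itself.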
| [
-0.7504981160163879,
-0.745168924331665,
0.20415443181991577,
0.13307644426822662,
-0.4646275043487549,
-0.35798197984695435,
-0.11452461034059525,
-0.4530443549156189,
0.33496594429016113,
0.7439340353012085,
-0.5393769145011902,
-0.6475006341934204,
-0.52961665391922,
0.3216691017150879,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lamini/open_llms | lamini | 2023-07-24T03:48:24Z | 118 | 4 | null | [
"region:us"
] | 2023-07-24T03:48:24Z | 2023-07-24T03:48:21.000Z | 2023-07-24T03:48:21 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 914763.8050314465
num_examples: 1001
- name: test
num_bytes: 102351.19496855346
num_examples: 112
download_size: 184863
dataset_size: 1017115.0
---
# Dataset Card for "open_llms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47360149025917053,
-0.2392563372850418,
0.29698267579078674,
0.1372629702091217,
-0.2116527259349823,
-0.19314929842948914,
0.08416295796632767,
-0.029782403260469437,
0.7807105779647827,
0.6667147874832153,
-0.9605262279510498,
-1.0466877222061157,
-0.49512651562690735,
-0.213857531547... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bibidentuhanoi/BMO_BASE_FUNCTION_TEXT | bibidentuhanoi | 2023-11-21T18:05:38Z | 118 | 0 | null | [
"region:us"
] | 2023-11-21T18:05:38Z | 2023-10-30T15:26:57.000Z | 2023-10-30T15:26:57 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 279596
num_examples: 354
download_size: 88700
dataset_size: 279596
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BMO_BASE_FUNCTION_TEXT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2884880602359772,
-0.52781081199646,
0.27258288860321045,
0.30431073904037476,
-0.43116164207458496,
-0.21409223973751068,
0.05030343681573868,
-0.17671585083007812,
0.4323920011520386,
0.6391192078590393,
-0.7407854795455933,
-0.7378790378570557,
-0.7349531650543213,
-0.084874615073204... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vwxyzjn/cai-conversation-dev1-h4 | vwxyzjn | 2023-11-27T19:41:37Z | 118 | 0 | null | [
"region:us"
] | 2023-11-27T19:41:37Z | 2023-11-07T18:10:41.000Z | 2023-11-07T18:10:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: init_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: init_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 35674961.23498535
num_examples: 13107
- name: test
num_bytes: 8919420.765014648
num_examples: 3277
download_size: 20673986
dataset_size: 44594382.0
---
# Dataset Card for "cai-conversation-dev1-h4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6666560173034668,
-0.43111297488212585,
0.09596007317304611,
0.4455260634422302,
-0.15220864117145538,
0.22760134935379028,
0.24734367430210114,
-0.17579859495162964,
0.9746602177619934,
0.36510801315307617,
-0.8358112573623657,
-0.7753100991249084,
-0.46996986865997314,
-0.354529350996... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liangzid/prompts | liangzid | 2023-11-16T09:30:36Z | 118 | 0 | null | [
"region:us"
] | 2023-11-16T09:30:36Z | 2023-11-16T09:26:29.000Z | 2023-11-16T09:26:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
transformersbook/codeparrot-valid | transformersbook | 2022-02-05T16:23:18Z | 117 | 0 | null | [
"region:us"
] | 2022-02-05T16:23:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # CodeParrot Dataset
This is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information. | [
-0.6774494647979736,
-0.2909165918827057,
-0.31013014912605286,
0.049038760364055634,
-0.16488417983055115,
0.45857682824134827,
0.030280834063887596,
0.13386309146881104,
0.07882755249738693,
0.6950255036354065,
-0.9460088014602661,
-0.3154359757900238,
-0.3490622639656067,
0.361669838428... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
winvoker/turkish-sentiment-analysis-dataset | winvoker | 2023-07-19T13:15:13Z | 117 | 21 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-07-19T13:15:13Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Turkish Sentiment Dataset
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset
This dataset contains positive, negative and notr (neutral) sentences from several data sources given in the references. Most sentiment models use only two labels, positive and negative; however, user input can be a completely neutral sentence, and I could not find any Turkish data covering such cases. Therefore I created this dataset with 3 classes. The sources of the positive and negative sentences are listed in the references below. Notr examples are extracted from the Turkish Wikipedia dump. In addition, some random text inputs such as "Lorem ipsum dolor sit amet." were added.
There are 492,782 labeled sentences; 10% of them are used for testing.
# Türkçe Duygu Analizi Veriseti
Bu veriseti , farklı kaynaklardan derlenmiş pozitif , negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla , 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: "Lorem ipsum dolor sit amet.".
There are 492,782 labeled sentences; 10% of them are used for testing.
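The 90/10 split described above implies the following approximate counts (a sketch; the exact rounding used in the released splits is an assumption):

```python
total = 492_782               # labeled sentences reported in this card
n_test = round(total * 0.10)  # 10% held out for testing
n_train = total - n_test
print(n_train, n_test)        # 443504 49278
```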
# References
- https://www.kaggle.com/burhanbilenn/duygu-analizi-icin-urun-yorumlari
- https://github.com/fthbrmnby/turkish-text-data
- https://www.kaggle.com/mustfkeskin/turkish-wikipedia-dump
- https://github.com/ezgisubasi/turkish-tweets-sentiment-analysis
- http://humirapps.cs.hacettepe.edu.tr/
You can reach me via LinkedIn. https://www.linkedin.com/in/batuhanayhan/ | [
-0.5025810599327087,
-0.7441498041152954,
0.24508556723594666,
0.35719379782676697,
-0.2950817942619324,
-0.460263192653656,
-0.14760209619998932,
-0.2649473547935486,
0.27280354499816895,
0.4642588794231415,
-0.4438236951828003,
-0.7757515907287598,
-0.6103596687316895,
0.5082183480262756... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Team-PIXEL/rendered-bookcorpus | Team-PIXEL | 2022-08-03T12:03:32Z | 117 | 4 | bookcorpus | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:rendered|BookCorpusOpen",
"language:en",
"license:unknown",
"arxiv:1506.06724",
"arxiv:2207.06991",
"arxiv:2105.05241",
"region:us"
] | 2022-08-03T12:03:32Z | 2022-05-11T14:41:02.000Z | 2022-05-11T14:41:02 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-bookcorpus
size_categories:
- 1M<n<10M
source_datasets:
- rendered|BookCorpusOpen
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modeling
paperswithcode_id: bookcorpus
---
# Dataset Card for Team-PIXEL/rendered-bookcorpus
## Dataset Description
- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Papers:** [Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724), [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
### Dataset Summary
This dataset is a version of the BookCorpus available at [https://huggingface.co/datasets/bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) with examples rendered as images with resolution 16x8464 pixels.
The original BookCorpus was introduced by Zhu et al. (2015) in [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724) and contains 17868 books of various genres. The rendered BookCorpus was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.
The BookCorpusOpen dataset was rendered book-by-book into 5.4M examples containing approximately 1.1B words in total. The dataset is stored as a collection of 162 parquet files. It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the BookCorpus have not been rendered accurately.
Each example consists of a "pixel_values" field which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer value "num_patches" which stores how many image patches (when splitting the image into 529 non-overlapping patches of resolution 16x16 pixels) in the associated images contain actual text, i.e. are neither blank (fully white) nor are the fully black end-of-sequence patch.
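The 529-patch count quoted above follows directly from the image geometry; a quick sanity check (not code from the PIXEL repository):

```python
def n_patches(width=8464, height=16, patch=16):
    """Number of non-overlapping patch x patch tiles in a width x height image."""
    assert width % patch == 0 and height % patch == 0
    return (width // patch) * (height // patch)

print(n_patches())  # 529 patches of 16x16 pixels per rendered example
```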
The rendered BookCorpus can be loaded via the datasets library as follows:
```python
from datasets import load_dataset
# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train")
# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True)
```
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
An example of 'train' looks as follows.
```
{
  "pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16>,
"num_patches": "498"
}
```
### Data Fields
The data fields are the same among all splits.
- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
### Data Splits
|train|
|:----|
|5400000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information.
A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241)
### Citation Information
```bibtex
@InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}
```
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip) | [
-0.5171654224395752,
-0.48877379298210144,
-0.026574349030852318,
-0.04251910001039505,
-0.28431662917137146,
0.01717509888112545,
-0.2093980610370636,
-0.41384953260421753,
0.2294222116470337,
0.46222832798957825,
-0.6869438886642456,
-0.7695372104644775,
-0.3878677785396576,
-0.012863811... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AhmedSSoliman/CoNaLa-Large | AhmedSSoliman | 2022-08-14T20:18:08Z | 117 | 0 | null | [
"region:us"
] | 2022-08-14T20:18:08Z | 2022-08-14T20:17:00.000Z | 2022-08-14T20:17:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-650000-700000 | tomekkorbak | 2022-10-04T18:03:56Z | 117 | 0 | null | [
"region:us"
] | 2022-10-04T18:03:56Z | 2022-10-04T18:03:45.000Z | 2022-10-04T18:03:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NeelNanda/c4-10k | NeelNanda | 2022-12-26T23:12:52Z | 117 | 0 | null | [
"region:us"
] | 2022-12-26T23:12:52Z | 2022-12-26T23:12:45.000Z | 2022-12-26T23:12:45 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
splits:
- name: train
num_bytes: 21970889
num_examples: 10000
download_size: 13645542
dataset_size: 21970889
---
# Dataset Card for "c4-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6631172895431519,
0.0029973930213600397,
0.3037472665309906,
0.4165838062763214,
-0.2723380923271179,
0.17270918190479279,
0.27595213055610657,
-0.5029117465019226,
0.831780731678009,
0.44202059507369995,
-0.7898090481758118,
-0.7579014897346497,
-0.6155948638916016,
0.00954250711947679... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tuana/presidents | Tuana | 2023-02-28T01:06:47Z | 117 | 1 | null | [
"region:us"
] | 2023-02-28T01:06:47Z | 2023-02-28T00:51:03.000Z | 2023-02-28T00:51:03 | ---
dataset_info:
features:
- name: id
dtype: string
- name: content
dtype: string
- name: content_type
dtype: string
- name: meta
struct:
- name: url
dtype: string
- name: _split_id
dtype: int64
- name: id_hash_keys
sequence: string
- name: score
dtype: 'null'
- name: embedding
dtype: 'null'
splits:
- name: train
num_bytes: 9366886
num_examples: 5529
download_size: 4997888
dataset_size: 9366886
---
# Dataset Card for "presidents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6082457304000854,
-0.4532470405101776,
0.3487613797187805,
0.20160813629627228,
-0.19608592987060547,
0.18333245813846588,
0.1805211901664734,
-0.08363556861877441,
0.8869959115982056,
0.6201968193054199,
-0.8909289836883545,
-0.7233744263648987,
-0.6341902613639832,
-0.2569296956062317... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceH4/cherry_picked_prompts | HuggingFaceH4 | 2023-03-08T21:24:46Z | 117 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-08T21:24:46Z | 2023-03-08T12:49:42.000Z | 2023-03-08T12:49:42 | ---
license: apache-2.0
---
# Dataset Card for Cherry Picked Prompts 🍒
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.642457902431488,
-0.5464490056037903,
0.20741470158100128,
0.28377264738082886,
-0.36571019887924194,
0.2142883688211441,
-0.19062678515911102,
0.006446003448218107,
0.5613669157028198,
0.7354578971862793,
-0.994861364364624,
-1.0611025094985962,
-0.5867118239402771,
0.04561761394143104... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NicolaiSivesind/human-vs-machine | NicolaiSivesind | 2023-05-11T13:03:54Z | 117 | 6 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"chatgpt",
"gpt",
"research abstracts",
"wikipedia introductions",
"region:us"
] | 2023-05-11T13:03:54Z | 2023-04-14T12:24:29.000Z | 2023-04-14T12:24:29 | ---
license: cc
task_categories:
- text-classification
pretty_name: Human vs Machine - Labeled text segments produced by humans and LLMs
size_categories:
- 100K<n<1M
language:
- en
tags:
- chatgpt
- gpt
- research abstracts
- wikipedia introductions
---
# Human-vs-Machine
This is a dataset collection created in relation to a bachelor's thesis written by Nicolai Thorer Sivesind and Andreas Bentzen Winje. It contains human-produced and machine-generated text samples from two domains: Wikipedia introductions and scientific research abstracts.
Each of the two domains is an already existing dataset reformatted for text classification:
[GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro):
+ Generated samples are produced using the GPT-3 model, _text-curie-001_
+ Target content set by the title of the real Wikipedia introduction and a starter sentence.
+ Target word count of 200 words each.
+ Contains 150k data points of each class.
+ Created by Aaditya Bhat
[ChatGPT-Research-Abstracts](https://huggingface.co/datasets/NicolaiSivesind/ChatGPT-Research-Abstracts):
+ Generated samples are produced using the GPT-3.5 model, _GPT-3.5-turbo-0301_ (Snapshot of the model used in ChatGPT 1st of March, 2023).
+ Target content set by the title of the real abstract.
+ Target word count equal to that of the human-produced abstract.
+ Contains 10k data points of each class.
+ Created by Nicolai Thorer Sivesind
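The word-count constraint described above can be sketched as a simple check. This is an illustrative sketch only: the function names and the 10% tolerance are assumptions for demonstration, not part of the dataset's actual generation pipeline.

```python
# Illustrative sketch: checking that a generated abstract's word count
# matches the paired human-produced abstract. The 10% tolerance is an
# assumption for demonstration, not a documented property of the dataset.
def word_count(text: str) -> int:
    return len(text.split())

def within_target(human: str, generated: str, tolerance: float = 0.1) -> bool:
    """True if `generated` is within `tolerance` (as a fraction) of the
    human text's word count."""
    target = word_count(human)
    return abs(word_count(generated) - target) <= tolerance * target

human = " ".join(["word"] * 200)
print(within_target(human, " ".join(["word"] * 190)))  # True  (within 10%)
print(within_target(human, " ".join(["word"] * 150)))  # False (off by 25%)
```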
### Credits
+ [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
### Citation
Please use the following citation:
```
@misc {sivesind_2023,
author = { {Nicolai Thorer Sivesind}, {Andreas Bentzen Winje}},
title = { Human-vs-Machine },
year = 2023,
publisher = { Hugging Face }
}
```
More information about the dataset will be added once the thesis is finished (end of May 2023). | [
-0.5047453045845032,
-0.6418343186378479,
0.09485967457294464,
-0.21438419818878174,
-0.11631142348051071,
0.04227733984589577,
-0.13861395418643951,
-0.5939226150512695,
0.16134928166866302,
0.38821324706077576,
-0.6227350234985352,
-0.4313771724700928,
-0.5141825079917908,
0.354619354009... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MadVoyager/stable_diffusion_instructional_dataset | MadVoyager | 2023-04-30T09:55:41Z | 117 | 16 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:conversational",
"language:en",
"stable diffusion",
"llama",
"chatgpt",
"alpaca",
"llm",
"dataset",
"region:us"
] | 2023-04-30T09:55:41Z | 2023-04-30T09:41:01.000Z | 2023-04-30T09:41:01 | ---
task_categories:
- question-answering
- text2text-generation
- conversational
language:
- en
tags:
- stable diffusion
- llama
- chatgpt
- alpaca
- llm
- dataset
pretty_name: sd_instruc
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
d0rj/OpenOrca-ru | d0rj | 2023-07-26T15:18:17Z | 117 | 6 | orca-progressive-learning-from-complex | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | 2023-07-26T15:18:17Z | 2023-07-19T21:29:12.000Z | 2023-07-19T21:29:12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 11568757682
num_examples: 4233923
download_size: 5699482220
dataset_size: 11568757682
size_categories:
- 1M<n<10M
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
pretty_name: Dolphin (ru)
source_datasets:
- Open-Orca/OpenOrca
license: mit
tags:
- ChatGPT
- instruct
- instruct-tune
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
paperswithcode_id: orca-progressive-learning-from-complex
---
# OpenOrca-ru
## Dataset Description
- **Paper:** https://arxiv.org/abs/2306.02707
This is a translated version of [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) into Russian. | [
0.048157885670661926,
-0.33299773931503296,
0.07286295294761658,
0.09597451239824295,
-0.6729267239570618,
-0.2630929946899414,
0.004218655172735453,
-0.5550985932350159,
0.5574052333831787,
0.6785401105880737,
-0.4935365915298462,
-0.8994669914245605,
-0.48649224638938904,
-0.187074512243... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LDJnr/Verified-Camel | LDJnr | 2023-11-21T17:55:57Z | 117 | 24 | null | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"region:us"
] | 2023-11-21T17:55:57Z | 2023-09-26T02:20:36.000Z | 2023-09-26T02:20:36 | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
pretty_name: Verified-Camel
size_categories:
- n<1K
---
## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon!
- Composed of over 100 highly filtered and curated examples from specific portions of the CamelAI STEM datasets.
- These examples are verified as correct by experts in the relevant field, each holding at least a bachelor's degree in the subject.
- Roughly 30-40% of the originally curated data from CamelAI was found to contain at least minor errors and/or incoherent questions (as determined by experts in the field).
## Purpose?
- This dataset is not intended to be trained on by itself (besides perhaps for interesting research purposes); however, its size and quality can work wonderfully as a supplementary addition to virtually any multi-turn-compatible dataset. I encourage this use; all I ask is that proper credit is given!
## Quality filtering and cleaning.
- Extensive cleaning was done to make sure there are no instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021".
- This cleaning was needed for the initial curation because the responses were originally created by GPT-4.
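A minimal sketch of the kind of phrase-based cleaning described above (the phrase list and function name are illustrative assumptions based on the examples given in this card, not the curators' actual pipeline):

```python
# Illustrative sketch of filtering out overt AI artifacts; the phrase
# list below is an assumption based on the examples given in the card.
AI_ARTIFACT_PHRASES = [
    "as an ai language model",
    "september 2021",
]

def has_ai_artifacts(response: str) -> bool:
    lowered = response.lower()
    return any(phrase in lowered for phrase in AI_ARTIFACT_PHRASES)

print(has_ai_artifacts("As an AI language model, I cannot..."))  # True
print(has_ai_artifacts("Newton's second law states F = ma."))    # False
```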
## Future Plans & How you can help!
This is a relatively early build in the grand scheme of what I plan to work on!
In the near future we plan on leveraging the help of even more domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at least a bachelor's degree in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise, please contact LDJ on Discord!
Citation:
```
@article{daniele2023amplify-instruct,
title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.},
author={Daniele, Luigi and Suphavadeeprasit},
journal={arXiv preprint arXiv:(comming soon)},
year={2023}
}
``` | [
-0.29918161034584045,
-0.736346960067749,
0.08010762929916382,
0.22379831969738007,
-0.08290594816207886,
-0.05776244401931763,
-0.035605669021606445,
-0.5144330859184265,
0.08518164604902267,
0.4467771351337433,
-0.7639418840408325,
-0.3508851230144501,
-0.32147809863090515,
0.00549064669... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/mutability_classifier-1-1 | coastalcph | 2023-11-04T11:14:08Z | 117 | 0 | null | [
"region:us"
] | 2023-11-04T11:14:08Z | 2023-11-04T11:14:00.000Z | 2023-11-04T11:14:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1095030.2883583691
num_examples: 6230
- name: validation
num_bytes: 995487.3818577483
num_examples: 5783
- name: test
num_bytes: 858144.5198522622
num_examples: 4360
download_size: 1062216
dataset_size: 2948662.19006838
---
# Dataset Card for "mutability_classifier-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6242280602455139,
-0.2734483778476715,
0.11033657193183899,
0.32805103063583374,
-0.13031667470932007,
0.0309305377304554,
0.3231324255466461,
-0.06591584533452988,
0.8400402069091797,
0.3185979127883911,
-0.77740079164505,
-0.5729749202728271,
-0.6843487024307251,
-0.3796825408935547,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
2ndBestKiller/DrugTest | 2ndBestKiller | 2023-11-11T11:35:07Z | 117 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-11T11:35:07Z | 2023-11-11T11:34:39.000Z | 2023-11-11T11:34:39 | ---
license: unknown
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/IndicHeadlineGeneration | ai4bharat | 2022-10-13T06:08:20Z | 116 | 0 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:27K<n<341K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
... | 2022-10-13T06:08:20Z | 2022-03-10T09:58:27.000Z | 2022-03-10T09:58:27 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicHeadlineGeneration
size_categories:
- 27K<n<341K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-headline-generation
---
# Dataset Card for "IndicHeadlineGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of the IndicNLG Suite. Each input news article is paired with its headline as the target output. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total size of the dataset is 1.4M examples.
### Supported Tasks and Leaderboards
**Tasks:** Headline Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '14',
'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन",
'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल',
'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: News article as input.
- `target (strings)`: Output as headline of the news article.
- `url (string)`: Source web link of the news article.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 29,631 | 14,592 | 14,808 |
Bengali | bn | 113,424 | 14,739 | 14,568 |
Gujarati | gu | 199,972 | 31,270 | 31,215 |
Hindi | hi | 208,221 | 44,738 | 44,514 |
Kannada | kn | 132,380 | 19,416 | 3,261 |
Malayalam | ml | 10,358 | 5,388 | 5,220 |
Marathi | mr | 114,042 | 14,253 | 14,340 |
Oriya | or | 58,225 | 7,484 | 7,137 |
Punjabi | pa | 48,441 | 6,108 | 6,086 |
Tamil | ta | 60,650 | 7,616 | 7,688 |
Telugu | te | 21,352 | 2,690 | 2,675 |
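As a quick sanity check, the split counts above can be totalled in a few lines (the grand total comes to roughly 1.3M examples, in the same ballpark as the ~1.4M figure quoted in the summary):

```python
# Recompute per-split and overall totals from the published counts
# in the table above (train, dev, test per language).
splits = {
    "as": (29631, 14592, 14808),
    "bn": (113424, 14739, 14568),
    "gu": (199972, 31270, 31215),
    "hi": (208221, 44738, 44514),
    "kn": (132380, 19416, 3261),
    "ml": (10358, 5388, 5220),
    "mr": (114042, 14253, 14340),
    "or": (58225, 7484, 7137),
    "pa": (48441, 6108, 6086),
    "ta": (60650, 7616, 7688),
    "te": (21352, 2690, 2675),
}
train_total = sum(t for t, _, _ in splits.values())
dev_total = sum(d for _, d, _ in splits.values())
test_total = sum(te for _, _, te in splits.values())
grand_total = train_total + dev_total + test_total
print(train_total, dev_total, test_total, grand_total)
# 996696 168294 151512 1316502
```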
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
For Hindi, web sources such as [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/) were used. For the other languages, the modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset was used.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | [
-0.45708125829696655,
-0.6483364701271057,
-0.09851973503828049,
0.35613906383514404,
-0.3828602135181427,
0.24752014875411987,
-0.39865702390670776,
-0.30685216188430786,
0.3296849727630615,
0.17692603170871735,
-0.5444332957267761,
-0.6945350766181946,
-0.644375741481781,
0.2634645104408... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/reddit_china | ruanchaves | 2022-03-10T20:10:55Z | 116 | 0 | null | [
"region:us"
] | 2022-03-10T20:10:55Z | 2022-03-10T19:39:58.000Z | 2022-03-10T19:39:58 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/cartoon-faces | huggan | 2022-03-24T09:25:10Z | 116 | 0 | null | [
"region:us"
] | 2022-03-24T09:25:10Z | 2022-03-24T09:25:07.000Z | 2022-03-24T09:25:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ceyda/fashion-products-small | ceyda | 2022-07-21T08:24:03Z | 116 | 6 | null | [
"region:us"
] | 2022-07-21T08:24:03Z | 2022-07-16T21:04:41.000Z | 2022-07-16T21:04:41 | For test purposes!
Preprocessed version of https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-dataset
Images resized to a maximum size of 512 pixels. | [
-0.8358604907989502,
-0.38256868720054626,
-0.0033287573605775833,
0.7550428509712219,
-0.6021174788475037,
-0.08722188323736191,
-0.2862164080142975,
-0.43063196539878845,
0.45351895689964294,
0.767833411693573,
-0.8550748229026794,
-0.8528563380241394,
-0.37737688422203064,
0.15834636986... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jakartaresearch/semeval-absa | jakartaresearch | 2022-08-14T05:38:21Z | 116 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"aspect-based-sentiment-analysis",
"seme... | 2022-08-14T05:38:21Z | 2022-08-14T05:35:35.000Z | 2022-08-14T05:35:35 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'SemEval 2015: Aspect-based Sentiment Analysis'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- aspect-based-sentiment-analysis
- semeval
- semeval2015
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for SemEval Task 12: Aspect-based Sentiment Analysis
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is originally from [SemEval-2015 Task 12](https://alt.qcri.org/semeval2015/task12/).
From the page:
> SE-ABSA15 will focus on the same domains as SE-ABSA14 (restaurants and laptops). However, unlike SE-ABSA14, the input datasets of SE-ABSA15 will contain entire reviews, not isolated (potentially out of context) sentences. SE-ABSA15 consolidates the four subtasks of SE-ABSA14 within a unified framework. In addition, SE-ABSA15 will include an out-of-domain ABSA subtask, involving test data from a domain unknown to the participants, other than the domains that will be considered during training. In particular, SE-ABSA15 consists of the following two subtasks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | [
-0.6116175651550293,
-0.4946225583553314,
0.3222719132900238,
0.2647465169429779,
-0.29867151379585266,
-0.0026186401955783367,
-0.20519037544727325,
-0.35065990686416626,
0.48512566089630127,
0.6564902067184448,
-0.9895658493041992,
-0.9637671113014221,
-0.6420037150382996,
0.174191668629... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yhavinga/xsum_dutch | yhavinga | 2022-08-21T20:50:08Z | 116 | 0 | xsum_dutch | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"language:nl",
"region:us"
] | 2022-08-21T20:50:08Z | 2022-08-21T20:29:43.000Z | 2022-08-21T20:29:43 | ---
pretty_name: Extreme Summarization (XSum) in Dutch
language:
- nl
paperswithcode_id: xsum_dutch
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
# Dataset Card for "xsum_dutch" 🇳🇱🇧🇪 Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The XSum Dutch 🇳🇱🇧🇪 Dataset is the English-language XSum dataset machine-translated to Dutch.
*This dataset currently (Aug '22) has a single config, which is
config `default` of [xsum](https://huggingface.co/datasets/xsum) translated to Dutch
with [yhavinga/t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi).*
- **Homepage:** [https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
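As a quick sanity check, the schema above can be verified on a raw instance with plain Python — a minimal sketch, assuming examples arrive as dicts like the one shown under Data Instances:

```python
# The three fields every split shares, per the schema above.
REQUIRED_FIELDS = {"document", "summary", "id"}

def is_valid_instance(example: dict) -> bool:
    """Check that an example carries exactly the three string fields."""
    return (set(example) == REQUIRED_FIELDS
            and all(isinstance(example[f], str) for f in REQUIRED_FIELDS))

sample = {"document": "some-body", "id": "29750031", "summary": "some-sentence"}
print(is_valid_instance(sample))  # True
```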
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding the English version of this dataset.
The dataset was translated on Cloud TPU compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
| [
-0.6291293501853943,
-0.42724862694740295,
0.04394497349858284,
0.10655414313077927,
-0.33498072624206543,
-0.02826119214296341,
-0.45336008071899414,
-0.44436872005462646,
0.7982985973358154,
0.4349170923233032,
-0.7209619283676147,
-0.9035648703575134,
-0.5963584184646606,
0.020508801564... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-950000-1000000 | tomekkorbak | 2022-10-04T22:55:50Z | 116 | 0 | null | [
"region:us"
] | 2022-10-04T22:55:50Z | 2022-10-04T18:01:11.000Z | 2022-10-04T18:01:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1200000-1250000 | tomekkorbak | 2022-10-04T23:47:33Z | 116 | 0 | null | [
"region:us"
] | 2022-10-04T23:47:33Z | 2022-10-04T23:47:25.000Z | 2022-10-04T23:47:25 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1100000-1150000 | tomekkorbak | 2022-10-04T23:49:53Z | 116 | 0 | null | [
"region:us"
] | 2022-10-04T23:49:53Z | 2022-10-04T23:49:46.000Z | 2022-10-04T23:49:46 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medalpaca/medical_meadow_mmmlu | medalpaca | 2023-04-06T17:49:48Z | 116 | 0 | null | [
"region:us"
] | 2023-04-06T17:49:48Z | 2023-04-06T17:49:34.000Z | 2023-04-06T17:49:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
climatebert/climate_detection | climatebert | 2023-04-18T14:39:49Z | 116 | 2 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-04-18T14:39:49Z | 2023-04-11T13:06:20.000Z | 2023-04-11T13:06:20 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ClimateTalkDetection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 638487
num_examples: 1300
- name: test
num_bytes: 222330
num_examples: 400
download_size: 492038
dataset_size: 860817
---
# Dataset Card for climate_detection
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for detecting climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given paragraph is climate-related or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> not climate-related, 1 -> climate-related)
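The integer label can be decoded back to its class name by indexing the `class_label` order declared in the YAML header — a minimal sketch in plain Python:

```python
# Class names in the order declared under `class_label` above:
# 0 -> "no" (not climate-related), 1 -> "yes" (climate-related).
LABEL_NAMES = ["no", "yes"]

def decode_label(label: int) -> str:
    """Return the human-readable class name for an integer label."""
    return LABEL_NAMES[label]

example = {"text": "− Scope 3: Optional scope ...", "label": 1}
print(decode_label(example["label"]))  # "yes"
```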
### Data Splits
The dataset is split into:
- train: 1,300
- test: 400
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | [
-0.3148379921913147,
-0.3766123354434967,
0.2576292157173157,
0.16745761036872864,
-0.3922991454601288,
-0.03938449174165726,
-0.28360775113105774,
-0.6275699138641357,
0.2626118063926697,
0.37668299674987793,
-0.549292266368866,
-0.8253455758094788,
-0.5496020317077637,
-0.011522741988301... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bloyal/small-uniref30 | bloyal | 2023-05-04T22:13:06Z | 116 | 0 | null | [
"task_categories:fill-mask",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"region:us"
] | 2023-05-04T22:13:06Z | 2023-05-04T21:50:38.000Z | 2023-05-04T21:50:38 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: num
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1067207.070393368
num_examples: 4096
- name: test
num_bytes: 167427.70557437633
num_examples: 640
- name: validation
num_bytes: 169382.9274292743
num_examples: 640
download_size: 1368501
dataset_size: 1404017.7033970184
task_categories:
- fill-mask
size_categories:
- 1K<n<10K
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
grantprice/CriticalRoleTranscripts | grantprice | 2023-06-14T18:56:45Z | 116 | 0 | null | [
"region:us"
] | 2023-06-14T18:56:45Z | 2023-06-14T18:56:33.000Z | 2023-06-14T18:56:33 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
augtoma/medmcqa | augtoma | 2023-08-11T20:44:27Z | 116 | 1 | null | [
"region:us"
] | 2023-08-11T20:44:27Z | 2023-08-11T20:44:11.000Z | 2023-08-11T20:44:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 136988451
num_examples: 182822
- name: test
num_bytes: 2350095
num_examples: 4183
download_size: 90978864
dataset_size: 139338546
---
# Dataset Card for "medmcqa"
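The YAML schema above is enough to work with an example: `cop` is a class label over the letters a–d, and `options` holds the corresponding answer texts. A minimal decoding sketch (the sample values below are hypothetical, for illustration only):

```python
# Letters in the order declared under the `cop` class_label above.
COP_NAMES = ["a", "b", "c", "d"]

def resolve_answer(example: dict) -> tuple:
    """Return (answer letter, answer text) for an example, following
    the `cop` class_label and the `options` struct from the schema."""
    letter = COP_NAMES[example["cop"]]
    text = example["options"][letter.upper()]
    return letter, text

sample = {  # hypothetical values for illustration
    "cop": 1,
    "options": {"A": "10 mg", "B": "20 mg", "C": "30 mg", "D": "40 mg"},
}
print(resolve_answer(sample))  # ('b', '20 mg')
```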
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.607702374458313,
-0.11872135102748871,
0.44417157769203186,
-0.051720451563596725,
-0.20060914754867554,
0.18420054018497467,
0.5614520311355591,
0.010334053076803684,
0.7862076163291931,
0.6124122142791748,
-0.9880490899085999,
-0.8480109572410583,
-0.5603548884391785,
-0.2960230410099... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lgaalves/camel-ai-physics | lgaalves | 2023-10-17T19:27:21Z | 116 | 1 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | 2023-10-17T19:27:21Z | 2023-09-05T14:51:49.000Z | 2023-09-05T14:51:49 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 51650490
num_examples: 20000
download_size: 23872398
dataset_size: 51650490
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Physics
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The physics dataset is composed of 20K problem-solution pairs obtained using gpt-4.
The problem-solution pairs were generated from 25 physics topics, with 25 subtopics per topic and 32 problems for each (topic, subtopic) pair.
## Data Fields
**The data fields are as follows:**
* `role_1`: assistant role
* `topic`: physics topic
* `sub_topic`: physics subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```python
from datasets import load_dataset
dataset = load_dataset("lgaalves/camel-ai-physics")
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is provided for research purposes only.
| [
-0.3829326927661896,
-0.9442813396453857,
0.3544104993343353,
0.0673450455069542,
-0.029208507388830185,
0.04816504195332527,
-0.3649360239505768,
-0.3552941083908081,
0.27545350790023804,
0.24767932295799255,
-0.5088491439819336,
-0.25422096252441406,
-0.6316516995429993,
-0.0227009262889... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JosephLee/science_textbook_elementary_kor | JosephLee | 2023-11-07T07:25:44Z | 116 | 0 | null | [
"task_categories:question-answering",
"language:ko",
"testbook",
"elementary",
"science",
"region:us"
] | 2023-11-07T07:25:44Z | 2023-11-06T06:06:36.000Z | 2023-11-06T06:06:36 | ---
language:
- ko
task_categories:
- question-answering
tags:
- testbook
- elementary
- science
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dglover1/mapa-eur-lex | dglover1 | 2023-11-08T15:25:00Z | 116 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:joelniklaus/mapa",
"language:multilingual",
"language:bg",
"language:cs",
"language:d... | 2023-11-08T15:25:00Z | 2023-11-07T10:11:34.000Z | 2023-11-07T10:11:34 | ---
annotations_creators:
- other
language_creators:
- found
language:
- multilingual
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pt
- ro
- sk
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- joelniklaus/mapa
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Spanish Datasets for Sensitive Entity Detection in the Legal Domain
tags:
- named-entity-recognition-and-classification
---
# Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Split Distribution](#split-distribution)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36)
- **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive
Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June,
3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset is a completed version of the [MAPA](https://huggingface.co/datasets/joelniklaus/mapa) EUR-LEX dataset, originally converted to Huggingface format by [joelniklaus](https://huggingface.co/datasets/joelniklaus). See the [dataset card](https://huggingface.co/datasets/joelniklaus/mapa) for more information about MAPA.
3 of the (Spanish) EUR-LEX WebAnno TSV files in the source MAPA repository are malformed, so they were omitted from the [original conversion](https://huggingface.co/datasets/joelniklaus/mapa), causing under-representation of the Spanish language.
These files were repaired manually, and the whole dataset reparsed using joelniklaus' [conversion script](https://huggingface.co/datasets/joelniklaus/mapa/blob/main/convert_to_hf_dataset.py). The script was modified slightly to include the original sentence of each example in the "sentence" column.
### Split Distribution
For all languages other than Spanish, [joelniklaus](https://huggingface.co/datasets/joelniklaus)' dataset splits have been preserved for consistency. The split of Spanish samples has changed due to the availability of more data.
Optionally, to create balanced splits with improved distribution of labelled entities, use the following:
```python
from datasets import load_dataset, concatenate_datasets
mapa = load_dataset("dglover1/mapa-eur-lex")
mapa = concatenate_datasets((mapa["train"], mapa["validation"], mapa["test"]))
mapa = mapa.train_test_split(test_size=0.2, seed=1)
mapa = mapa.flatten_indices()
```
Note that this only creates train/test splits. For train/test/validation, you can further split either train or test and rename accordingly.
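The same two-step idea can be illustrated without the `datasets` library — a stdlib-only sketch of an 80/10/10 shuffle-and-slice split (the function name and fractions here are illustrative, not part of the dataset):

```python
import random

def three_way_split(examples, seed=1, test_frac=0.1, val_frac=0.1):
    """Shuffle a list of examples and slice it into train/validation/test,
    mirroring the further split described above."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    return {
        "test": shuffled[:n_test],
        "validation": shuffled[n_test:n_test + n_val],
        "train": shuffled[n_test + n_val:],
    }

splits = three_way_split(range(100))
print({name: len(part) for name, part in splits.items()})
# {'test': 10, 'validation': 10, 'train': 80}
```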
### Licensing Information
[Attribution 4.0 International (CC BY 4.0) ](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{DeGibertBonet2022,
author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite},
journal = {Proceedings of the Language Resources and Evaluation Conference},
number = {June},
pages = {3751--3760},
title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}},
url = {https://aclanthology.org/2022.lrec-1.400},
year = {2022}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this
dataset. | [
-0.4117625057697296,
-0.5249287486076355,
0.28748616576194763,
0.31497228145599365,
-0.35070401430130005,
-0.009107488207519054,
-0.4811130464076996,
-0.6473863124847412,
0.3223026692867279,
0.40607979893684387,
-0.43746745586395264,
-1.0506958961486816,
-0.6613870859146118,
0.344525277614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/multitacred | DFKI-SLT | 2023-11-06T12:19:37Z | 115 | 1 | multitacred | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:100K<n<1M",
"source_datasets:DFKI-NLP/tacred",
"language:ar",
"language:de",
"language:es",
"lan... | 2023-11-06T12:19:37Z | 2022-09-30T11:31:31.000Z | 2022-09-30T11:31:31 | ---
language:
- ar
- de
- es
- fi
- fr
- hi
- hu
- ja
- pl
- ru
- tr
- zh
license: other
license_details: https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf
tags:
- relation extraction
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
pretty_name: MultiTACRED - Multilingual TAC Relation Extraction Dataset
size_categories:
- 100K<n<1M
source_datasets:
- DFKI-NLP/tacred
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: multitacred
dataset_info:
- config_name: original-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 32371641
num_examples: 67736
- name: test
num_bytes: 6895001
num_examples: 15425
- name: validation
num_bytes: 10353930
num_examples: 22502
- name: backtranslated_test
num_bytes: 5687302
num_examples: 15425
download_size: 0
dataset_size: 55307874
- config_name: revisited-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 32371641
num_examples: 67736
- name: test
num_bytes: 6895001
num_examples: 15425
- name: validation
num_bytes: 10353930
num_examples: 22502
- name: backtranslated_test
num_bytes: 5687302
num_examples: 15425
download_size: 157165
dataset_size: 55307874
- config_name: retacred-ar
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 27777106
num_examples: 58171
- name: test
num_bytes: 5950395
num_examples: 13348
- name: validation
num_bytes: 8941018
num_examples: 19480
- name: backtranslated_test
num_bytes: 4906896
num_examples: 13348
download_size: 3702157
dataset_size: 47575415
- config_name: original-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27810245
num_examples: 67253
- name: test
num_bytes: 6043815
num_examples: 15282
- name: validation
num_bytes: 9007367
num_examples: 22343
- name: backtranslated_test
num_bytes: 5467635
num_examples: 15079
download_size: 0
dataset_size: 48329062
- config_name: revisited-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27810245
num_examples: 67253
- name: test
num_bytes: 6043815
num_examples: 15282
- name: validation
num_bytes: 9007367
num_examples: 22343
- name: backtranslated_test
num_bytes: 5467635
num_examples: 15079
download_size: 157165
dataset_size: 48329062
- config_name: retacred-de
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23935820
num_examples: 57792
- name: test
num_bytes: 5219772
num_examples: 13227
- name: validation
num_bytes: 7794542
num_examples: 19365
- name: backtranslated_test
num_bytes: 4715329
num_examples: 13046
download_size: 3702157
dataset_size: 41665463
- config_name: original-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27586822
num_examples: 65247
- name: test
num_bytes: 5941821
num_examples: 14908
- name: validation
num_bytes: 8921047
num_examples: 21697
- name: backtranslated_test
num_bytes: 5414680
num_examples: 14688
download_size: 0
dataset_size: 47864370
- config_name: revisited-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 27586822
num_examples: 65247
- name: test
num_bytes: 5941821
num_examples: 14908
- name: validation
num_bytes: 8921047
num_examples: 21697
- name: backtranslated_test
num_bytes: 5414680
num_examples: 14688
download_size: 157165
dataset_size: 47864370
- config_name: retacred-es
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23707989
num_examples: 55998
- name: test
num_bytes: 5139146
num_examples: 12907
- name: validation
num_bytes: 7711621
num_examples: 18788
- name: backtranslated_test
num_bytes: 4676107
num_examples: 12722
download_size: 3702157
dataset_size: 41234863
- config_name: original-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 25394979
num_examples: 66751
- name: test
num_bytes: 5478260
num_examples: 15083
- name: validation
num_bytes: 8205629
num_examples: 22268
- name: backtranslated_test
num_bytes: 5204235
num_examples: 14462
download_size: 0
dataset_size: 44283103
- config_name: revisited-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 25394979
num_examples: 66751
- name: test
num_bytes: 5478260
num_examples: 15083
- name: validation
num_bytes: 8205629
num_examples: 22268
- name: backtranslated_test
num_bytes: 5204235
num_examples: 14462
download_size: 157165
dataset_size: 44283103
- config_name: retacred-fi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 21807425
num_examples: 57332
- name: test
num_bytes: 4724204
num_examples: 13046
- name: validation
num_bytes: 7084020
num_examples: 19278
- name: backtranslated_test
num_bytes: 4475178
num_examples: 12480
download_size: 3702157
dataset_size: 38090827
- config_name: original-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 29580179
num_examples: 66856
- name: test
num_bytes: 6409145
num_examples: 15237
- name: validation
num_bytes: 9601199
num_examples: 22298
- name: backtranslated_test
num_bytes: 5535658
num_examples: 15088
download_size: 0
dataset_size: 51126181
- config_name: revisited-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 29580179
num_examples: 66856
- name: test
num_bytes: 6409145
num_examples: 15237
- name: validation
num_bytes: 9601199
num_examples: 22298
- name: backtranslated_test
num_bytes: 5535658
num_examples: 15088
download_size: 157165
dataset_size: 51126181
- config_name: retacred-fr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 25484188
num_examples: 57466
- name: test
num_bytes: 5553110
num_examples: 13209
- name: validation
num_bytes: 8323210
num_examples: 19341
- name: backtranslated_test
num_bytes: 4786142
num_examples: 13078
download_size: 3702157
dataset_size: 44146650
- config_name: original-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 47358490
num_examples: 67751
- name: test
num_bytes: 10235547
num_examples: 15440
- name: validation
num_bytes: 15362616
num_examples: 22511
- name: backtranslated_test
num_bytes: 5654198
num_examples: 15440
download_size: 0
dataset_size: 78610851
- config_name: revisited-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 47358490
num_examples: 67751
- name: test
num_bytes: 10235547
num_examples: 15440
- name: validation
num_bytes: 15362616
num_examples: 22511
- name: backtranslated_test
num_bytes: 5654198
num_examples: 15440
download_size: 157165
dataset_size: 78610851
- config_name: retacred-hi
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 40764637
num_examples: 58186
- name: test
num_bytes: 8839508
num_examples: 13363
- name: validation
num_bytes: 13280435
num_examples: 19488
- name: backtranslated_test
num_bytes: 4878649
num_examples: 13363
download_size: 3702157
dataset_size: 67763229
- config_name: original-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26869925
num_examples: 67766
- name: test
num_bytes: 5810768
num_examples: 15436
- name: validation
num_bytes: 8658082
num_examples: 22519
- name: backtranslated_test
num_bytes: 5695172
num_examples: 15436
download_size: 0
dataset_size: 47033947
- config_name: revisited-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26869925
num_examples: 67766
- name: test
num_bytes: 5810768
num_examples: 15436
- name: validation
num_bytes: 8658082
num_examples: 22519
- name: backtranslated_test
num_bytes: 5695172
num_examples: 15436
download_size: 157165
dataset_size: 47033947
- config_name: retacred-hu
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23084933
num_examples: 58200
- name: test
num_bytes: 5011087
num_examples: 13357
- name: validation
num_bytes: 7476013
num_examples: 19495
- name: backtranslated_test
num_bytes: 4912553
num_examples: 13357
download_size: 3702157
dataset_size: 40484586
- config_name: original-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 31425001
num_examples: 61571
- name: test
num_bytes: 6560885
num_examples: 13701
- name: validation
num_bytes: 9996196
num_examples: 20290
- name: backtranslated_test
num_bytes: 4706581
num_examples: 12913
download_size: 0
dataset_size: 52688663
- config_name: revisited-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 31425001
num_examples: 61571
- name: test
num_bytes: 6560885
num_examples: 13701
- name: validation
num_bytes: 9996196
num_examples: 20290
- name: backtranslated_test
num_bytes: 4706581
num_examples: 12913
download_size: 157165
dataset_size: 52688663
- config_name: retacred-ja
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 26944316
num_examples: 52748
- name: test
num_bytes: 5627890
num_examples: 11815
- name: validation
num_bytes: 8591269
num_examples: 17470
- name: backtranslated_test
num_bytes: 4032503
num_examples: 11138
download_size: 3702157
dataset_size: 45195978
- config_name: original-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26989666
num_examples: 68124
- name: test
num_bytes: 5845988
num_examples: 15509
- name: validation
num_bytes: 8728082
num_examples: 22631
- name: backtranslated_test
num_bytes: 5594933
num_examples: 15509
download_size: 0
dataset_size: 47158669
- config_name: revisited-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26989666
num_examples: 68124
- name: test
num_bytes: 5845988
num_examples: 15509
- name: validation
num_bytes: 8728082
num_examples: 22631
- name: backtranslated_test
num_bytes: 5594933
num_examples: 15509
download_size: 157165
dataset_size: 47158669
- config_name: retacred-pl
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 23161229
num_examples: 58465
- name: test
num_bytes: 5044812
num_examples: 13418
- name: validation
num_bytes: 7535491
num_examples: 19584
- name: backtranslated_test
num_bytes: 4824801
num_examples: 13418
download_size: 3702157
dataset_size: 40566333
- config_name: original-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 36546830
num_examples: 66413
- name: test
num_bytes: 7846828
num_examples: 14995
- name: validation
num_bytes: 11847712
num_examples: 21998
- name: backtranslated_test
num_bytes: 5335337
num_examples: 14703
download_size: 0
dataset_size: 61576707
- config_name: revisited-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 36546830
num_examples: 66413
- name: test
num_bytes: 7846828
num_examples: 14995
- name: validation
num_bytes: 11847712
num_examples: 21998
- name: backtranslated_test
num_bytes: 5335337
num_examples: 14703
download_size: 157165
dataset_size: 61576707
- config_name: retacred-ru
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 31523203
num_examples: 57060
- name: test
num_bytes: 6793985
num_examples: 12975
- name: validation
num_bytes: 10263742
num_examples: 19052
- name: backtranslated_test
num_bytes: 4603168
num_examples: 12724
download_size: 3702157
dataset_size: 53184098
- config_name: original-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26093320
num_examples: 67749
- name: test
num_bytes: 5633846
num_examples: 15429
- name: validation
num_bytes: 8403271
num_examples: 22510
- name: backtranslated_test
num_bytes: 5571104
num_examples: 15429
download_size: 0
dataset_size: 45701541
- config_name: revisited-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26093320
num_examples: 67749
- name: test
num_bytes: 5633846
num_examples: 15429
- name: validation
num_bytes: 8403271
num_examples: 22510
- name: backtranslated_test
num_bytes: 5571104
num_examples: 15429
download_size: 157165
dataset_size: 45701541
- config_name: retacred-tr
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 22386009
num_examples: 58183
- name: test
num_bytes: 4857933
num_examples: 13352
- name: validation
num_bytes: 7257304
num_examples: 19488
- name: backtranslated_test
num_bytes: 4805734
num_examples: 13352
download_size: 3702157
dataset_size: 39306980
- config_name: original-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26159615
num_examples: 65260
- name: test
num_bytes: 5483795
num_examples: 14694
- name: validation
num_bytes: 8348430
num_examples: 21538
- name: backtranslated_test
num_bytes: 5155679
num_examples: 14021
download_size: 0
dataset_size: 45147519
- config_name: revisited-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_headquarters
'3': org:country_of_headquarters
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:parents
'11': org:political/religious_affiliation
'12': org:shareholders
'13': org:stateorprovince_of_headquarters
'14': org:subsidiaries
'15': org:top_members/employees
'16': org:website
'17': per:age
'18': per:alternate_names
'19': per:cause_of_death
'20': per:charges
'21': per:children
'22': per:cities_of_residence
'23': per:city_of_birth
'24': per:city_of_death
'25': per:countries_of_residence
'26': per:country_of_birth
'27': per:country_of_death
'28': per:date_of_birth
'29': per:date_of_death
'30': per:employee_of
'31': per:origin
'32': per:other_family
'33': per:parents
'34': per:religion
'35': per:schools_attended
'36': per:siblings
'37': per:spouse
'38': per:stateorprovince_of_birth
'39': per:stateorprovince_of_death
'40': per:stateorprovinces_of_residence
'41': per:title
splits:
- name: train
num_bytes: 26159615
num_examples: 65260
- name: test
num_bytes: 5483795
num_examples: 14694
- name: validation
num_bytes: 8348430
num_examples: 21538
- name: backtranslated_test
num_bytes: 5155679
num_examples: 14021
download_size: 157165
dataset_size: 45147519
- config_name: retacred-zh
features:
- name: id
dtype: string
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: subj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: obj_type
dtype:
class_label:
names:
'0': LOCATION
'1': ORGANIZATION
'2': PERSON
'3': DATE
'4': MONEY
'5': PERCENT
'6': TIME
'7': CAUSE_OF_DEATH
'8': CITY
'9': COUNTRY
'10': CRIMINAL_CHARGE
'11': EMAIL
'12': HANDLE
'13': IDEOLOGY
'14': NATIONALITY
'15': RELIGION
'16': STATE_OR_PROVINCE
'17': TITLE
'18': URL
'19': NUMBER
'20': ORDINAL
'21': MISC
'22': DURATION
'23': O
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names
'2': org:city_of_branch
'3': org:country_of_branch
'4': org:dissolved
'5': org:founded
'6': org:founded_by
'7': org:member_of
'8': org:members
'9': org:number_of_employees/members
'10': org:political/religious_affiliation
'11': org:shareholders
'12': org:stateorprovince_of_branch
'13': org:top_members/employees
'14': org:website
'15': per:age
'16': per:cause_of_death
'17': per:charges
'18': per:children
'19': per:cities_of_residence
'20': per:city_of_birth
'21': per:city_of_death
'22': per:countries_of_residence
'23': per:country_of_birth
'24': per:country_of_death
'25': per:date_of_birth
'26': per:date_of_death
'27': per:employee_of
'28': per:identity
'29': per:origin
'30': per:other_family
'31': per:parents
'32': per:religion
'33': per:schools_attended
'34': per:siblings
'35': per:spouse
'36': per:stateorprovince_of_birth
'37': per:stateorprovince_of_death
'38': per:stateorprovinces_of_residence
'39': per:title
splits:
- name: train
num_bytes: 22440419
num_examples: 56049
- name: test
num_bytes: 4717593
num_examples: 12718
- name: validation
num_bytes: 7200681
num_examples: 18642
- name: backtranslated_test
num_bytes: 4441386
num_examples: 12127
download_size: 3702157
dataset_size: 38800079
---
# Dataset Card for "MultiTACRED"
## Dataset Description
- **Homepage:** [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Paper:** [MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset](https://arxiv.org/abs/2305.04582)
- **Point of Contact:** See [https://github.com/DFKI-NLP/MultiTACRED](https://github.com/DFKI-NLP/MultiTACRED)
- **Size of downloaded dataset files:** 15.4KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
### Dataset Summary
MultiTACRED is a multilingual version of the large-scale [TAC Relation Extraction Dataset](https://nlp.stanford.edu/projects/tacred).
It covers 12 typologically diverse languages from 9 language families, and was created by the
[Speech & Language Technology group of DFKI](https://www.dfki.de/slt) by machine-translating the instances of the
original TACRED dataset and automatically projecting their entity annotations. For details of the original TACRED's
data collection and annotation process, see the [Stanford paper](https://aclanthology.org/D17-1004/). Translations are
syntactically validated by checking the correctness of the XML tag markup. Any translations with an invalid tag
structure, e.g. missing or invalid head or tail tag pairs, are discarded (on average, 2.3% of the instances).
Languages covered are: Arabic, Chinese, Finnish, French, German, Hindi, Hungarian, Japanese, Polish,
Russian, Spanish, Turkish. The intended use is supervised relation classification; the intended audience is researchers.
Please see [our ACL paper](https://arxiv.org/abs/2305.04582) for full details.
NOTE: This DatasetReader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is to support additional languages for which these fields were not required
or available. The reader expects a language-specific configuration name specifying the variant
(original, revisited, or retacred) and the language (as a two-letter ISO code).
The DatasetReader changes the offsets of the following fields, to conform with standard Python usage (see
_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
NOTE 2: The MultiTACRED dataset offers an additional 'split', namely the backtranslated test data (translated to a
target language and then back to English). To access this split, use dataset['backtranslated_test'].
You can find the TACRED dataset reader for the English version of the dataset at
[https://huggingface.co/datasets/DFKI-SLT/tacred](https://huggingface.co/datasets/DFKI-SLT/tacred).
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-multitacred](https://paperswithcode.com/sota/relation-extraction-on-multitacred)
### Languages
The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
All languages except English are machine-translated using either Deepl's or Google's translation APIs.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 15.4KB (TACRED-Revisited), 3.7 MB (Re-TACRED)
- **Size of the generated dataset:** 1.7 GB (all languages, all versions)
- **Total amount of disk used:** 1.7 GB (all languages, all versions)
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"token": ["Tom", "Thabane", "trat", "im", "Oktober", "letzten", "Jahres", "zurück", ",", "um", "die", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", "zu", "gründen", ",", "die", "mit", "17", "Abgeordneten", "das", "Wort", "ergriff", ",", "woraufhin", "der", "konstitutionelle", "Monarch", "König", "Letsie", "III.", "das", "Parlament", "auflöste", "und", "Neuwahlen", "ansetzte", "."],
"relation": "org:founded_by",
"subj_start": 11,
"subj_end": 13,
"obj_start": 0,
"obj_end": 1,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON"
}
```
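The printed end offsets above follow the original TACRED inclusive convention; as noted earlier, the DatasetReader shifts them by one so that standard Python slicing applies. A minimal sketch (an illustration of that convention, not the official reader code) recovering both entity mentions from this instance:

```python
# Tokens of the German train example printed above.
tokens = ["Tom", "Thabane", "trat", "im", "Oktober", "letzten", "Jahres",
          "zurück", ",", "um", "die", "All", "Basotho", "Convention",
          "-LRB-", "ABC", "-RRB-", "zu", "gründen", ",", "die", "mit",
          "17", "Abgeordneten", "das", "Wort", "ergriff", ",", "woraufhin",
          "der", "konstitutionelle", "Monarch", "König", "Letsie", "III.",
          "das", "Parlament", "auflöste", "und", "Neuwahlen", "ansetzte", "."]

subj_start, subj_end = 11, 13  # inclusive end offsets, as printed above
obj_start, obj_end = 0, 1

# The reader's convention: make end offsets exclusive (end -> end + 1).
subj_end, obj_end = subj_end + 1, obj_end + 1

subject = " ".join(tokens[subj_start:subj_end])  # the ORGANIZATION mention
founder = " ".join(tokens[obj_start:obj_end])    # the PERSON mention
print(subject, "| org:founded_by |", founder)
```

With exclusive offsets, no off-by-one arithmetic is needed anywhere downstream: the slice `tokens[start:end]` is the mention.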
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, among the types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
### Data Splits
To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run.
Language statistics for the splits differ because not all instances could be translated with the
subject and object entity markup intact; such instances were discarded.
| Language | Train | Dev | Test | Backtranslated Test | Translation Engine |
| ----- | ------ | ----- | ---- | ---- | ---- |
| en | 68,124 | 22,631 | 15,509 | - | - |
| ar | 67,736 | 22,502 | 15,425 | 15,425 | Google |
| de | 67,253 | 22,343 | 15,282 | 15,079 | DeepL |
| es | 65,247 | 21,697 | 14,908 | 14,688 | DeepL |
| fi | 66,751 | 22,268 | 15,083 | 14,462 | DeepL |
| fr | 66,856 | 22,298 | 15,237 | 15,088 | DeepL |
| hi | 67,751 | 22,511 | 15,440 | 15,440 | Google |
| hu | 67,766 | 22,519 | 15,436 | 15,436 | Google |
| ja | 61,571 | 20,290 | 13,701 | 12,913 | DeepL |
| pl | 68,124 | 22,631 | 15,509 | 15,509 | Google |
| ru | 66,413 | 21,998 | 14,995 | 14,703 | DeepL |
| tr | 67,749 | 22,510 | 15,429 | 15,429 | Google |
| zh | 65,260 | 21,538 | 14,694 | 14,021 | DeepL |
## Dataset Creation
### Curation Rationale
To enable more research on multilingual Relation Extraction, we generate translations of the TAC relation extraction
dataset using DeepL and Google Translate.
### Source Data
#### Initial Data Collection and Normalization
The instances of this dataset are sentences from the
[original TACRED dataset](https://nlp.stanford.edu/projects/tacred/), which in turn
are sampled from the [corpus](https://catalog.ldc.upenn.edu/LDC2018T03) used in the yearly
[TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
#### Who are the source language producers?
Newswire and web texts collected for the [TAC Knowledge Base Population (TAC KBP) challenges](https://tac.nist.gov/2017/KBP/index.html).
### Annotations
#### Annotation process
See the Stanford paper, the TACRED Revisited paper, and the Re-TACRED paper, plus their appendices, for
details on the original annotation process. The translated versions do not change the original labels.
Translations were tokenized with language-specific Spacy models (Spacy 3.1, 'core_news/web_sm' models)
or Trankit (Trankit 1.1.0) when there was no Spacy model for a given language (Hungarian, Turkish, Arabic, Hindi).
#### Who are the annotators?
The original TACRED dataset was annotated by crowd workers, see the [TACRED paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf).
### Personal and Sensitive Information
The [authors](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) of the original TACRED dataset
have not stated measures that prevent collecting sensitive or offensive text. Therefore, we do
not rule out the possible risk of sensitive/offensive content in the translated data.
## Considerations for Using the Data
### Social Impact of Dataset
not applicable
### Discussion of Biases
The dataset is drawn from web and newswire text, and thus reflects any biases of these original
texts, as well as biases introduced by the MT models.
### Other Known Limitations
not applicable
## Additional Information
### Dataset Curators
The dataset was created by members of the
[DFKI SLT team: Leonhard Hennig, Philippe Thomas, Sebastian Möller, Gabriel Kressin](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology/speech-and-language-technology-staff-members)
### Licensing Information
To respect the copyright of the underlying TACRED dataset, MultiTACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download MultiTACRED from the [LDC MultiTACRED webpage](https://catalog.ldc.upenn.edu/TODO).
If you are an LDC member, the access will be free; otherwise, an access fee of $25 is needed.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
For the Re-TACRED version, please also cite:
```
@inproceedings{DBLP:conf/aaai/StoicaPP21,
author = {George Stoica and
Emmanouil Antonios Platanios and
Barnab{\'{a}}s P{\'{o}}czos},
title = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI}
2021, Thirty-Third Conference on Innovative Applications of Artificial
Intelligence, {IAAI} 2021, The Eleventh Symposium on Educational Advances
in Artificial Intelligence, {EAAI} 2021, Virtual Event, February 2-9,
2021},
pages = {13843--13850},
publisher = {{AAAI} Press},
year = {2021},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17631},
}
```
### Contributions
Thanks to [@leonhardhennig](https://github.com/leonhardhennig) for adding this dataset. | [
-0.595138430595398,
-0.534724771976471,
0.2645217776298523,
0.3375717103481293,
-0.24134127795696259,
0.015483894385397434,
-0.5128241777420044,
-0.3987366557121277,
0.3416058123111725,
0.4176124036312103,
-0.5615899562835693,
-0.7517154216766357,
-0.6049544811248779,
0.38157686591148376,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1150000-1200000 | tomekkorbak | 2022-10-04T23:45:42Z | 115 | 0 | null | [
"region:us"
] | 2022-10-04T23:45:42Z | 2022-10-04T23:45:34.000Z | 2022-10-04T23:45:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bmd1905/error-correction-vi | bmd1905 | 2023-03-07T07:30:51Z | 115 | 1 | null | [
"language:vi",
"license:apache-2.0",
"region:us"
] | 2023-03-07T07:30:51Z | 2023-03-07T07:02:30.000Z | 2023-03-07T07:02:30 | ---
license: apache-2.0
language:
- vi
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jiacheng-ye/logiqa-zh | jiacheng-ye | 2023-04-21T00:56:28Z | 115 | 14 | logiqa | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | 2023-04-21T00:56:28Z | 2023-04-17T12:39:52.000Z | 2023-04-17T12:39:52 | ---
task_categories:
- question-answering
language:
- zh
pretty_name: LogiQA-zh
size_categories:
- 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
features:
- name: context
dtype: string
- name: query
dtype: string
- name: options
sequence:
dtype: string
- name: correct_option
dtype: string
splits:
- name: train
num_examples: 7376
- name: validation
num_examples: 651
- name: test
num_examples: 651
---
# Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test civil servant candidates' critical thinking and problem-solving skills. This dataset includes the Chinese versions only.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': '有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.',
'query': '以下哪项能保证上述论证的成立?',
'options': ['有些广东人爱吃辣椒',
'爱吃辣椒的有些是南方人',
'所有的广东人都是南方人',
'有些广东人不爱吃辣椒也不爱吃甜食'],
'correct_option': 2}
```
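Since `correct_option` is an index into the `options` list, the gold answer text can be recovered directly. A minimal sketch (a hypothetical helper, not part of the dataset loader) using the instance above:

```python
# The train instance shown above, reduced to the fields we need.
example = {
    "options": ["有些广东人爱吃辣椒",
                "爱吃辣椒的有些是南方人",
                "所有的广东人都是南方人",
                "有些广东人不爱吃辣椒也不爱吃甜食"],
    "correct_option": 2,
}

def gold_answer(instance):
    """Return the text of the correct option. The index may be stored as
    an int or a numeric string depending on the loader, so cast to int."""
    return instance["options"][int(instance["correct_option"])]

print(gold_answer(example))  # 所有的广东人都是南方人
```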
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a `list` feature containing `string` features.
- `correct_option`: a `string` feature.
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@jiacheng-ye](https://github.com/jiacheng-ye) added this Chinese dataset.
[@lucasmccabe](https://github.com/lucasmccabe) added the English dataset. | [
-0.12892037630081177,
-0.492971807718277,
0.27889084815979004,
0.10694780945777893,
-0.38330549001693726,
-0.18603505194187164,
0.10141955316066742,
-0.25557154417037964,
0.3299902081489563,
0.5451594591140747,
-0.7254401445388794,
-0.6978040337562561,
-0.30922678112983704,
0.1222071200609... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
oyxy2019/THUCNewsText | oyxy2019 | 2023-05-10T03:05:21Z | 115 | 1 | null | [
"region:us"
] | 2023-05-10T03:05:21Z | 2023-05-10T02:59:44.000Z | 2023-05-10T02:59:44 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': education
'1': entertainment
'2': fashion
'3': finance
'4': game
'5': politic
'6': society
'7': sport
'8': stock
'9': technology
splits:
- name: train
num_bytes: 126435258
num_examples: 50000
- name: validation
num_bytes: 12851939
num_examples: 5000
- name: test
num_bytes: 25321290
num_examples: 9890
download_size: 110495565
dataset_size: 164608487
---
# Dataset Card for "THUCNewsText"
This is a clone of [seamew/THUCNewsText](https://huggingface.co/datasets/seamew/THUCNewsText), created to work around the error 443 that occurs because Google Drive is inaccessible from mainland China.
```python
from datasets import load_dataset
datasets = load_dataset("seamew/THUCNewsText")
datasets.push_to_hub("oyxy2019/THUCNewsText")
``` | [
-0.23055937886238098,
-0.5062924027442932,
-0.008744772523641586,
0.8937299847602844,
-0.7151626944541931,
-0.02297462522983551,
-0.1602761596441269,
-0.16592903435230255,
0.5061676502227783,
0.5296176075935364,
-0.6718019247055054,
-0.6586214900016785,
-0.46558332443237305,
0.141283079981... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GHonem/fashion_image_caption-3500 | GHonem | 2023-07-09T11:33:56Z | 115 | 2 | null | [
"region:us"
] | 2023-07-09T11:33:56Z | 2023-07-09T11:29:24.000Z | 2023-07-09T11:29:24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2469968504.75
num_examples: 3506
download_size: 2469379841
dataset_size: 2469968504.75
---
# Dataset Card for "fashion_image_caption-3500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6528957486152649,
-0.03548353910446167,
0.09387736022472382,
0.4302857518196106,
-0.44570204615592957,
0.0025970854330807924,
0.3908863067626953,
-0.2883845865726471,
0.7298318147659302,
0.5263701677322388,
-0.9868161678314209,
-0.6864657402038574,
-0.4873964786529541,
-0.14569844305515... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VMware/open-instruct | VMware | 2023-07-12T15:01:23Z | 115 | 10 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2023-07-12T15:01:23Z | 2023-07-11T21:54:42.000Z | 2023-07-11T21:54:42 | ---
dataset_info:
features:
- name: alpaca_prompt
dtype: string
- name: response
dtype: string
- name: instruction
dtype: string
- name: source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 125656035
num_examples: 142622
download_size: 57912402
dataset_size: 125656035
license: cc-by-3.0
task_categories:
- text-generation
- conversational
- text2text-generation
language:
- en
pretty_name: T
size_categories:
- 100K<n<1M
---
# Dataset Card for "open-instruct"
This dataset is a combination of:
1. Filtered subset of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
2. train split of [Mosaic-dolly-hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) (consists of [Databrick's dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf)).
3. Filtered subset of [conceptofmind/cot_submix_original](https://huggingface.co/datasets/conceptofmind/cot_submix_original)
## Dataset
The dataset consists of 6 columns:
1. instruction: The natural language instruction without any prompt templates (we extracted these from the Alpaca-format prompts in Mosaic-dolly-hhrlhf)
2. alpaca_prompt: Alpaca prompt template versions of instruction
3. response: The response to the instruction
4. source: Dataset source
5. task_name
6. template_type: flan template used (zeroshot or fewshot)
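The mapping from the bare `instruction` column to its `alpaca_prompt` counterpart can be sketched with the standard Alpaca template. The exact template wording below is an assumption, so verify it against the dataset's own `alpaca_prompt` values:

```python
# Render a bare instruction into an Alpaca-style prompt. The preamble text
# below is the standard Alpaca template -- an assumption; the dataset's
# `alpaca_prompt` column may use a slightly different variant.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_alpaca_prompt(instruction: str) -> str:
    """Wrap a plain-language instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = to_alpaca_prompt("Summarize the benefits of instruction tuning.")
print(prompt)
```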
## License
- It is usable for commercial purposes so long as you follow the terms of the license.
### Dataset subset licenses:
- Open-instruct-v1-dolly-hhrlhf-oasst1 (Mosaic/Dolly-HHRLHF + filtered OASST1) - CC BY 3.0
Subset of CoT Submix (from FLAN v2), zero-shot examples:
- ESNLI - MIT
- ECQA - CDLA-Sharing 1.0
- Strategy - MIT
- CREAK - MIT
- gsm8k - MIT
- aqua - MIT
- qasc - Apache 2.0
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
- Copyright © Wikipedia editors and contributors.
Databricks (https://www.databricks.com)
- Copyright © Databricks
Mosaic ML (https://www.mosaicml.com/)
- Copyright © Mosaic ML
VMware
- Copyright © VMware
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6990383863449097,
-0.552272379398346,
-0.0217954870313406,
0.3809458911418915,
-0.3445734679698944,
-0.31470921635627747,
0.05252482369542122,
-0.31101706624031067,
0.34289970993995667,
0.7355167269706726,
-0.936560869216919,
-0.5571737289428711,
-0.5471493005752563,
0.02326084673404693... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/earnings21 | distil-whisper | 2023-10-13T10:33:58Z | 115 | 0 | null | [
"region:us"
] | 2023-10-13T10:33:58Z | 2023-10-13T10:33:24.000Z | 2023-10-13T10:33:24 | ---
dataset_info:
config_name: full
features:
- name: audio
dtype: audio
- name: file_id
dtype: string
- name: audio_length
dtype: string
- name: sample_rate
dtype: string
- name: company_name
dtype: string
- name: financial_quarter
dtype: string
- name: sector
dtype: string
- name: speaker_switches
dtype: string
- name: unique_speakers
dtype: string
- name: curator_id
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 778199575.0
num_examples: 44
download_size: 772949298
dataset_size: 778199575.0
configs:
- config_name: full
data_files:
- split: test
path: full/test-*
---
# Dataset Card for "earnings21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.29467517137527466,
-0.20913104712963104,
0.20394553244113922,
0.45816317200660706,
-0.10789594799280167,
0.21469981968402863,
0.2889665961265564,
-0.5782439708709717,
1.0058423280715942,
0.6037084460258484,
-0.8693020343780518,
-0.6629698872566223,
-0.6965805888175964,
-0.32798588275909... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuvalkirstain/task_prediction_train | yuvalkirstain | 2023-10-31T18:44:28Z | 115 | 0 | null | [
"region:us"
] | 2023-10-31T18:44:28Z | 2023-10-31T06:18:08.000Z | 2023-10-31T06:18:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: path
dtype: string
- name: text
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 659890949
num_examples: 5663600
- name: validation
num_bytes: 7823929
num_examples: 60002
download_size: 0
dataset_size: 667714878
---
# Dataset Card for "task_prediction_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.36804699897766113,
-0.05255821719765663,
0.2208973914384842,
0.2796056866645813,
-0.041638344526290894,
-0.16010083258152008,
0.13691021502017975,
-0.1875731199979782,
0.6107326149940491,
0.38758161664009094,
-1.044571876525879,
-0.5287175178527832,
-0.7876326441764832,
-0.4235304594039... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1050000-1100000 | tomekkorbak | 2022-10-04T23:53:15Z | 114 | 0 | null | [
"region:us"
] | 2022-10-04T23:53:15Z | 2022-10-04T23:53:07.000Z | 2022-10-04T23:53:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/Million-AID | jonathan-roberts1 | 2023-03-31T15:46:07Z | 114 | 1 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T15:46:07Z | 2023-02-14T12:49:18.000Z | 2023-02-14T12:49:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label_1
dtype:
class_label:
names:
'0': unutilized land
'1': commercial land
'2': public service land
'3': transportation land
'4': industrial land
'5': water area
'6': residential land
'7': agriculture land
- name: label_2
dtype:
class_label:
names:
'0': dam
'1': religious land
'2': rock land
'3': sparse shrub land
'4': arable land
'5': factory area
'6': detached house
'7': desert
'8': lake
'9': power station
'10': beach
'11': ice land
'12': bare land
'13': island
'14': woodland
'15': mobile home park
'16': railway area
'17': river
'18': grassland
'19': apartment
'20': special land
'21': port area
'22': commercial area
'23': highway area
'24': mining area
'25': sports land
'26': airport area
'27': leisure land
- name: label_3
dtype:
class_label:
names:
'0': dam
'1': parking lot
'2': greenhouse
'3': pier
'4': bridge
'5': mine
'6': rock land
'7': baseball field
'8': apron
'9': tennis court
'10': sparse shrub land
'11': works
'12': oil field
'13': meadow
'14': ground track field
'15': detached house
'16': golf course
'17': forest
'18': desert
'19': lake
'20': beach
'21': paddy field
'22': ice land
'23': bare land
'24': storage tank
'25': basketball court
'26': island
'27': substation
'28': mobile home park
'29': cemetery
'30': quarry
'31': solar power plant
'32': helipad
'33': roundabout
'34': runway
'35': wastewater plant
'36': river
'37': apartment
'38': dry field
'39': intersection
'40': swimming pool
'41': commercial area
'42': church
'43': road
'44': orchard
'45': terraced field
'46': stadium
'47': train station
'48': railway
'49': viaduct
'50': wind turbine
splits:
- name: train
num_bytes: 871962498
num_examples: 10000
download_size: 871644115
dataset_size: 871962498
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "Million-AID"
## Dataset Description
- **Paper** [On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid](https://ieeexplore.ieee.org/iel7/4609443/9314330/09393553.pdf)
- **Split** Train
## Split Information
This HuggingFace dataset repository contains just the Train split.
### Licensing Information
[CC BY-NC-ND 4.0](https://competitions.codalab.org/competitions/35974#learn_the_details-terms-and-conditions)
## Citation Information
[On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid](https://ieeexplore.ieee.org/iel7/4609443/9314330/09393553.pdf)
```
@article{long2021creating,
title = {On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid},
author = {Long, Yang and Xia, Gui-Song and Li, Shengyang and Yang, Wen and Yang, Michael Ying and Zhu, Xiao Xiang and Zhang, Liangpei and Li, Deren},
year = 2021,
journal = {IEEE Journal of selected topics in applied earth observations and remote sensing},
publisher = {IEEE},
volume = 14,
pages = {4205--4230}
}
``` | [
-0.7392362952232361,
-0.06506657600402832,
0.06759834289550781,
0.29469624161720276,
-0.4492397606372833,
-0.20650207996368408,
0.12884363532066345,
-0.4058787524700165,
0.10105843096971512,
0.4425557851791382,
-0.4954979419708252,
-0.49478164315223694,
-0.6281717419624329,
0.0872093737125... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChilleD/LastLetterConcat | ChilleD | 2023-05-11T13:43:26Z | 114 | 0 | null | [
"region:us"
] | 2023-05-11T13:43:26Z | 2023-05-11T13:42:51.000Z | 2023-05-11T13:42:51 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hltcoe/megawika | hltcoe | 2023-10-03T17:24:24Z | 114 | 24 | null | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:af",
"language:ar",
"language:az",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:e... | 2023-10-03T17:24:24Z | 2023-05-17T02:07:50.000Z | 2023-05-17T02:07:50 | ---
license: cc-by-sa-4.0
task_categories:
- summarization
- question-answering
- text-generation
- text2text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- ta
- th
- tr
- uk
- ur
- vi
- xh
- zh
pretty_name: MegaWika
size_categories:
- 10M<n<100M
---
# Dataset Card for MegaWika
## Dataset Description
- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [Coming soon]
- **Leaderboard:** [Coming soon]
- **Point of Contact:** [Samuel Barham](samuel.barham@jhuapl.edu)
### Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the
passages, and FrameNet events occurring in the passages are detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser.
<!---
To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika)
we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages.
-->
### Dataset Creation
The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above),
but the following diagram illustrates the basic approach.

### Supported Tasks and Leaderboards
MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, question answering, etc.
### Languages
MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by their 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhalese (Sri Lankan language)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)
## Dataset Structure
The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files.
Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article.
### Data Instances
Each instance contains the text of the seed Wikipedia article, along with a list of **entries**. Each entry consists essentially of
an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of question/answer pairs extracted from the passage,
and a FrameNet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided.
### Data Fields
The detailed structure of an instance is as follows:
```
{
"article_title": <string : title of original Wikipedia article>
"article_text": <string : text of Wikipedia article>
"entries": [
# Wiki Passage
"id": <string : passage ID>
"passage": {
"text": <string : text of passage in English (possibly via MT)>
"parse": <list of dict : FrameNet parse of English passage text>
"en_tokens": <dict : tokenization of passage in English>
"lang_tokens": <dict : tokenization of original non-English passage>
"en_lang_token_map": <dict : alignment mapping between English and original language token indices>
}
# MT
"original": <string : original language passage>
"original_sents": <list of string : sentencized original language passage>
"translation": <string : machine translation of passage>
"translation_sents": <list of string : sentencized machine translation of passage>
"translation_probs": <list of float : log prob of machine translation by sentence, where available>
"repetitious_translation": <string \in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious>
"source_lang": <string : language ID, 2-character ISO code>
# Source
"source_url": <string : URL of the cited web source>
"source_text": <string : content extracted from the scrape of the source URL>
# Question/Answer Pairs
"qa_pairs": [
...
{
"question": <string : generated question>
"passage_id": <string : passage ID>
"en_answer": <string : English answer>
"lang_answer": <string : aligned original language answer>
"frames": [
...
{
"frame": <string : frame triggered by the question>
"argument": <string : detected frame arguments>
}
...
]
# NB: answer matches can be empty, in the case no matching span exists
"en_matches_in_source": <list of int : start and end index of the English language-answer token(s) in the source document>
"en_match_in_passage": <list of int : start and end index of the English language-answer token(s) in the English language translation of the passage>
"lang_matches_in_source": <list of int : start and end index of the original language-answer token(s) in the source document>
"lang_match_in_passage": <list of int : start and end index of the original language-answer token(s) in the original language passage>
"passage": <list of string : sentencized view of the passage>
"en_answer_tokens": <list of string>
"match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)>
}
...
]
]
}
```
English language instances differ not in structure but in content;
1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`)
2. Since the Wiki passage only exists in English, and has no corresponding non-English "original language" version, answer spans also necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists).
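As a minimal sketch of working with this layout, the snippet below builds one synthetic instance that follows the schema above (the values are placeholders, not real MegaWika data), writes it out as JSON lines, and reads it back the way one would read a real chunk file:

```python
import json
import tempfile

# One synthetic instance mirroring the documented schema (placeholder values).
instance = {
    "article_title": "Example Article",
    "article_text": "Full article text ...",
    "entries": [
        {
            "id": "passage-0",
            "passage": {"text": "An example passage in English."},
            "source_url": "https://example.com/source",
            "source_text": "Scraped content of the cited web source ...",
            "qa_pairs": [
                {"question": "What does the passage show?",
                 "en_answer": "An example passage."}
            ],
        }
    ],
}

# MegaWika chunk files are JSON lines: one instance (one article) per line.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write(json.dumps(instance) + "\n")
    path = f.name

with open(path) as f:
    for line in f:
        inst = json.loads(line)
        for entry in inst["entries"]:
            print(entry["passage"]["text"], "<-", entry["source_url"])
```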
### Data Splits
MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and splitting into train/test splits.
<!---
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
-->
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles.)
We do not own any of the scraped text nor do we claim copyright: text drawn from Wikipedia citations are meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy:
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact the authors.
*Take down*: We will comply to legitimate requests by removing the affected sources from the next release of the dataset.
## Additional Information
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu),
and [Ben van Durme](mailto:vandurme@jhu.edu) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
### Contributions
[More Information Needed]
-->
| [
-0.6139512658119202,
-0.8138614296913147,
0.2445366382598877,
0.14617344737052917,
-0.2059181183576584,
-0.22532518208026886,
-0.3629690408706665,
-0.4612930715084076,
0.6362102031707764,
0.45880329608917236,
-0.6870642900466919,
-0.5060291290283203,
-0.4978933334350586,
0.6614489555358887... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/scifact-pl | clarin-knext | 2023-06-07T10:07:12Z | 114 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T10:07:12Z | 2023-06-02T13:55:34.000Z | 2023-06-02T13:55:34 | ---
language:
- pl
pretty_name: BEIR-PL benchmark Scifact-PL
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl
| [
-0.2209920436143875,
-0.9029766917228699,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.4962919354438782,
-0.01896025240421295,
0.41122618317604065,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175127029419,
-0.048304717987775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yaful/DeepfakeTextDetect | yaful | 2023-07-11T01:59:02Z | 114 | 4 | null | [
"license:apache-2.0",
"arxiv:2305.13242",
"region:us"
] | 2023-07-11T01:59:02Z | 2023-06-27T07:30:58.000Z | 2023-06-27T07:30:58 | ---
license: apache-2.0
---
<div align="center">
<h1>Deepfake Text Detection in the Wild</h1>
<!-- **Authors:** -->
_**Yafu Li<sup>†</sup><sup>‡</sup>, Qintong Li<sup>§</sup>, Leyang Cui<sup>¶</sup>, Wei Bi<sup>¶</sup>,<br>**_
_**Longyue Wang<sup>¶</sup>, Linyi Yang<sup>‡</sup>, Shuming Shi<sup>¶</sup>, Yue Zhang<sup>‡</sup><br>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Zhejiang University,
<sup>‡</sup> Westlake University,
<sup>§</sup> The University of Hong Kong,
<sup>¶</sup> Tencent AI Lab_
Presenting a comprehensive benchmark dataset designed to assess the proficiency of deepfake detectors amidst real-world scenarios.
</div>
## 📌 Table of Contents
- [Introduction](#🚀-introduction)
- [Dataset](#📝-dataset)
- [Try Detection](#🖥%EF%B8%8F-try-detection)
- [Citation](#📚-citation)
## 🚀 Introduction
Recent advances in large language models have enabled them to reach a level of text generation comparable to that of humans.
These models show powerful capabilities across a wide range of content, including news article writing, story generation, and scientific writing.
Such capability further narrows the gap between human-authored and machine-generated texts, highlighting the importance of deepfake text detection to avoid potential risks such as fake news propagation and plagiarism.
In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.
To this end, we build **a comprehensive testbed for deepfake text detection**, by gathering texts from various human writings and deepfake texts generated by different LLMs.
The data in this repository is used to evaluate the effectiveness of deepfake detection methods, as described in our paper titled "Deepfake Text Detection in the Wild" (available at https://arxiv.org/abs/2305.13242). We invite you to test your own detection methods on our testbed and encourage you to star our Github repo at https://github.com/yafuly/DeepfakeTextDetect.
## 📝 Dataset
The dataset consists of **447,674** human-written and machine-generated texts from a wide range of sources in the wild:
- Human-written texts from **10 datasets** covering a wide range of writing tasks, e.g., news article writing, story generation, scientific writing, etc.
- Machine-generated texts generated by **27 mainstream LLMs** from 7 sources, e.g., OpenAI, LLaMA, and EleutherAI, etc.
- **6 systematic testbeds** with increasing wildness and detection difficulty.
- **2 wilder test sets**: (1) texts collected from new datasets and generated by GPT-4; (2) paraphrased texts.
### 📥 How to Get the Data
#### 1. Huggingface
You can access the full dataset, which includes the Cross-domains & Cross-models testbed and two additional wilder test sets, through the Huggingface API:
```python
from datasets import load_dataset
dataset = load_dataset("yaful/DeepfakeTextDetect")
```
which includes traditional splits (train.csv, valid.csv and test.csv) and two wilder test sets (test_ood_set_gpt.csv and test_ood_set_gpt_para.csv).
The CSV files have three columns: text, label (0 for machine-generated and
1 for human-written), and text source information (e.g., "cmv_human" denotes text written by humans,
whereas "roct_machine_continuation_flan_t5_large" denotes text generated by "flan_t5_large" using a continuation prompt).
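A minimal sketch of splitting such a CSV by label with the standard library follows. The rows below are synthetic, and the header name used for the source column — `src` here — is an assumption; check it against the actual files:

```python
import csv
import io

# Two synthetic rows following the documented three-column layout.
sample = io.StringIO(
    "text,label,src\n"
    '"A passage written by a person.",1,cmv_human\n'
    '"A machine continuation of a prompt.",0,roct_machine_continuation_flan_t5_large\n'
)
rows = list(csv.DictReader(sample))

# label: 0 = machine-generated, 1 = human-written
human = [r for r in rows if r["label"] == "1"]
machine = [r for r in rows if r["label"] == "0"]
print(len(human), "human /", len(machine), "machine")
```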
To obtain the 6 testbeds mentioned in our paper, simply apply the provided script:
```shell
python3 deployment/prepare_testbeds.py DATA_PATH
```
Replace "DATA_PATH" with the output data directory where you want to save the 6 testbeds.
#### 2. Cloud Drive
Alternatively, you can access the 6 testbeds by downloading them directly through [Google Drive](https://drive.google.com/drive/folders/1p09vDiEvoA-ZPmpqkB2WApcwMQWiiMRl?usp=sharing)
or [Tencent Weiyun](https://share.weiyun.com/JUWQxF4H):
The folder contains 4 packages:
- testbeds_processed.zip: 6 testbeds based on the "processed" version, which can be directly used to measure in-distribution and out-of-distribution detection performance.
- wilder_testsets.zip: 2 wilder test sets with texts processed, aiming for (1) detecting deepfake text generated by GPT-4, and (2) detecting deepfake text in paraphrased versions.
- source.zip: Source texts of human-written texts and corresponding texts generated by LLMs, without filtering.
- processed.zip: This is a refined version of the "source" that filters out low-quality texts and specifies sources as CSV file names. For example, the "cmv_machine_specified_gpt-3.5-trubo.csv" file contains texts from the CMV domain generated by the "gpt-3.5-trubo" model using specific prompts, while "cmv_human" includes human-written CMV texts.
## 🖥️ Try Detection
### Model Access
Our Longformer detector, which has been trained on the entire dataset, is now accessible through [Huggingface](https://huggingface.co/nealcly/detection-longformer). Additionally, you can try detection directly using our [online demo](https://huggingface.co/spaces/yaful/DeepfakeTextDetect).
### Deployment
We have refined the decision boundary based on out-of-distribution settings. To ensure optimal performance, we recommend preprocessing texts before sending them to the detector.
See 🏃 [Deepfake Text Detection in the Wild](https://github.com/yafuly/DeepfakeTextDetect) for the complete detection pipeline:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from deployment import preprocess, detect

# init
device = 'cpu'  # use 'cuda:0' if GPU is available
model_dir = "nealcly/detection-longformer"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir).to(device)

# preprocess (`text` must hold the candidate passage to classify)
text = "The passage you want to classify goes here."
text = preprocess(text)

# detection
result = detect(text, tokenizer, model, device)
```
## 📚 Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{li2023deepfake,
title={Deepfake Text Detection in the Wild},
author={Yafu Li and Qintong Li and Leyang Cui and Wei Bi and Longyue Wang and Linyi Yang and Shuming Shi and Yue Zhang},
year={2023},
eprint={2305.13242},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We welcome contributions to improve this dataset! If you have any questions or feedback, please feel free to reach out at yafuly@gmail.com.
<!-- # 🤝 Contributing --> | [
-0.4564288854598999,
-0.9356058835983276,
0.4826473593711853,
0.18892303109169006,
-0.10423232614994049,
-0.13677488267421722,
-0.11935566365718842,
-0.4496936500072479,
-0.02328922413289547,
0.37503790855407715,
-0.6647129654884338,
-0.822155773639679,
-0.6196098327636719,
0.2762580811977... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fantasyfish/laion-art | fantasyfish | 2023-06-30T08:55:13Z | 114 | 1 | null | [
"region:us"
] | 2023-06-30T08:55:13Z | 2023-06-30T06:20:14.000Z | 2023-06-30T06:20:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: aesthetic
dtype: float64
splits:
- name: train
num_bytes: 11640624315.8
num_examples: 20072
- name: test
num_bytes: 538961083.0
num_examples: 855
download_size: 12347056207
dataset_size: 12179585398.8
---
# Dataset Card for "laion-art"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.30155980587005615,
-0.14589615166187286,
0.2389000803232193,
0.09272320568561554,
-0.24021188914775848,
-0.05179394409060478,
0.29858025908470154,
-0.22928212583065033,
0.9897010922431946,
0.6843671798706055,
-0.7393572926521301,
-0.8576347827911377,
-0.5566585063934326,
-0.498794108629... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
learn3r/squad_with_test | learn3r | 2023-09-10T12:54:59Z | 114 | 0 | null | [
"region:us"
] | 2023-09-10T12:54:59Z | 2023-09-10T12:54:39.000Z | 2023-09-10T12:54:39 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346108
num_examples: 87599
- name: validation
num_bytes: 5236492.0
num_examples: 5285
- name: test
num_bytes: 5236492.0
num_examples: 5285
download_size: 19827427
dataset_size: 89819092.0
---
# Dataset Card for "squad_with_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5799192786216736,
-0.27200958132743835,
0.015459218062460423,
0.4049034118652344,
-0.04560811445116997,
0.25652867555618286,
0.25389498472213745,
-0.15718382596969604,
0.6975108981132507,
0.24224263429641724,
-1.2144685983657837,
-0.6381192207336426,
-0.4150558114051819,
-0.112820900976... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shrutisingh/dataset_recommendation_mcq_sc | shrutisingh | 2023-10-12T17:14:33Z | 114 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-12T17:14:33Z | 2023-10-11T17:25:45.000Z | 2023-10-11T17:25:45 | ---
license: apache-2.0
---
Task: MCQ with single correct answer.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Given is a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with a single correct answer. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_mc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_mc) dataset. | [
-0.5055800676345825,
-0.5217264890670776,
0.4411121904850006,
-0.09403466433286667,
-0.24720053374767303,
-0.10477585345506668,
0.0788247138261795,
0.0418727733194828,
0.3203723430633545,
0.7151428461074829,
-0.7861489057540894,
-0.5211786031723022,
-0.19945211708545685,
0.2502837181091308... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OsamaBsher/AITA-Reddit-Dataset | OsamaBsher | 2023-11-01T22:19:37Z | 114 | 2 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"arxiv:2310.18336",
"region:us"
] | 2023-11-01T22:19:37Z | 2023-10-20T17:31:34.000Z | 2023-10-20T17:31:34 | ---
task_categories:
- text-generation
- text-classification
size_categories:
- 100K<n<1M
---
# Dataset Card for AITA Reddit Posts and Comments
Posts from the AITA subreddit, with the two top-voted comments that share the post's verdict. Extracted using the Reddit PushShift data dumps (from 2013 to April 2023).
## Dataset Details
The dataset contains 270,709 entries, each of which contains the post title, text, verdict, comment1, comment2, and score (number of upvotes).
For more details see paper: https://arxiv.org/abs/2310.18336
### Dataset Sources
The Reddit PushShift data dumps are part of a data collection effort that crawls Reddit at regular intervals to extract and keep all its data.
## Dataset Card Authors
@OsamaBsher and Ameer Sabri
## Dataset Card Contact
@OsamaBsher | [
-0.5282895565032959,
-0.40560078620910645,
0.39659979939460754,
0.17305639386177063,
-0.5837774872779846,
0.016426296904683113,
0.21205992996692657,
-0.3802144229412079,
0.4524887204170227,
0.774018406867981,
-0.5874873399734497,
-0.43002909421920776,
-0.7453660368919373,
0.318771123886108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iarbel/amazon-product-data-filter | iarbel | 2023-11-12T16:59:36Z | 114 | 2 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-12T16:59:36Z | 2023-10-29T07:30:06.000Z | 2023-10-29T07:30:06 | ---
dataset_info:
features:
- name: asin
dtype: string
- name: category
dtype: string
- name: img_url
dtype: string
- name: title
dtype: string
- name: feature-bullets
sequence: string
- name: tech_data
sequence:
sequence: string
- name: labels
dtype: string
- name: tech_process
dtype: string
splits:
- name: train
num_bytes: 2686223
num_examples: 716
- name: validation
num_bytes: 763820
num_examples: 204
- name: test
num_bytes: 390684
num_examples: 103
download_size: 2162385
dataset_size: 3840727
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "amazon-product-data-filter"
## Dataset Description
- **Homepage:** [τenai.io - AI Consulting](https://www.tenai.io/)
- **Point of Contact:** [Iftach Arbel](mailto:ia@momentum-ai.io)
### Dataset Summary
The Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
Each data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.
### Data Fields
- `asin`: Amazon Standard Identification Number.
- `category`: The product category. This field represents the search string used to obtain the listing; it is not the product category as it appears on Amazon.com.
- `img_url`: Main image URL from the product page.
- `title`: Product title, as appears on the product page.
- `feature-bullets`: Product feature-bullets list, as they appear on the product page.
- `tech_data`: Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).
- `labels`: A processed instance of the `feature-bullets` field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, emojis removed, etc. Finally, the list items were concatenated into a single string with a `\n` separator.
- `tech_process`: A processed instance of `tech_data` field. The original tech data was filtered and transformed from a `(key, value)` structure to a natural language text.
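A rough sketch of the kind of processing described for `labels` and `tech_process` (the exact cleaning rules are an assumption on our part; helper names are ours):

```python
def process_bullets(bullets):
    """Approximate the `labels` field: capitalize each bullet's first
    word and join the items with a newline separator."""
    cleaned = [b.strip().capitalize() for b in bullets if b.strip()]
    return "\n".join(cleaned)

def tech_to_text(tech_data):
    """Approximate the `tech_process` field: turn (key, value) pairs
    into a short natural-language string."""
    return ". ".join(f"{k.strip().capitalize()}: {v.strip()}" for k, v in tech_data)

labels = process_bullets(["durable material", "easy to clean "])
tech = tech_to_text([("Material", "plastic"), ("color", "red")])
```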
### Data Splits
The dataset was randomly split into train (70%), validation (20%), and test (10%). Since the main usage is text generation, the train split is to be used for fine-tuning or as a few-shot context. The validation split can be used for tracking perplexity during fine-tuning. The test split should be used to generate text and inspect the quality of the results.
## Dataset Creation
### Curation Rationale
This dataset was built to provide high-quality data in the e-commerce domain for fine-tuning LLMs on specific tasks. Raw, unstructured data was collected from Amazon.com, parsed, processed, and filtered using various techniques (annotations, rule-based, models).
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by collecting raw HTML data from Amazon.com.
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
There is no personal information in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
To the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.
### Other Known Limitations
The quality of product listings may vary, and may not be accurate.
## Additional Information
### Dataset Curators
The dataset was collected and curated by [Iftach Arbel](mailto:ia@momentum-ai.io).
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{amazon_product_filter,
author = {Iftach Arbel},
title = {Amazon Product Dataset Filtered},
year = {2023},
publisher = {Huggingface},
journal = {Huggingface dataset},
howpublished = {\url{https://huggingface.co/datasets/iarbel/amazon-product-data-filter}},
}
``` | [
-0.5070323348045349,
-0.7860350608825684,
-0.02157423086464405,
0.31897443532943726,
-0.29853561520576477,
0.1341322660446167,
-0.0960051640868187,
-0.5995830297470093,
0.2554541230201721,
0.5026707649230957,
-0.8861336708068848,
-1.1141642332077026,
-0.3845277428627014,
0.0995560958981514... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kiamehr74/CoarseWSD-20 | kiamehr74 | 2021-08-10T09:48:50Z | 113 | 1 | null | [
"region:us"
] | 2021-08-10T09:48:50Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/crowdflower | tasksource | 2023-06-21T12:50:08Z | 113 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"region:us"
] | 2023-06-21T12:50:08Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: ethics
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
- fact-checking
---
```
@inproceedings{van2012designing,
title={Designing a scalable crowdsourcing platform},
author={Van Pelt, Chris and Sorokin, Alex},
booktitle={Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data},
pages={765--766},
year={2012}
}
``` | [
-0.6468431949615479,
-0.0024321621749550104,
0.3438973128795624,
0.5172755122184753,
-0.11919873207807541,
0.2662641704082489,
0.008335898630321026,
-0.6290188431739807,
0.5571566820144653,
0.5246455073356628,
-0.8480353355407715,
-0.3667696714401245,
-0.48071643710136414,
0.07771713286638... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/rct-20k-tlm | yxchar | 2021-11-05T01:18:46Z | 113 | 0 | null | [
"region:us"
] | 2021-11-05T01:18:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juletxara/tydiqa_xtreme | juletxara | 2022-07-01T19:19:05Z | 113 | 1 | tydi-qa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en",
"language:ar",
"language:bn",
"language:fi",
"l... | 2022-07-01T19:19:05Z | 2022-06-08T10:42:42.000Z | 2022-06-08T10:42:42 | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
---
# Dataset Card for "tydiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
We also include "translate-train" and "translate-test" splits for each non-English languages from XTREME (Hu et al., 2020). These splits are the automatic translations from English to each target language used in the XTREME paper [https://arxiv.org/abs/2003.11080]. The "translate-train" split purposefully ignores the non-English TyDiQA-GoldP training data to simulate the transfer learning scenario where original-language data is not available and system builders must rely on labeled English data plus existing machine translation systems.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"annotations": {
"minimal_answers_end_byte": [-1, -1, -1],
"minimal_answers_start_byte": [-1, -1, -1],
"passage_answer_candidate_index": [-1, -1, -1],
"yes_no_answer": ["NONE", "NONE", "NONE"]
},
"document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
"document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
"document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
"language": "thai",
"passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
"question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```
#### secondary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [394],
"text": ["بطولتين"]
},
"context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
"id": "arabic-2387335860751143628-1",
"question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
"title": "قائمة نهائيات كأس العالم"
}
```
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
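The `primary_task` annotation offsets index into the UTF-8 byte representation of `document_plaintext`, not into Python character positions. A minimal sketch of recovering a span from byte offsets (the helper name is ours):

```python
def span_from_bytes(document_plaintext, start_byte, end_byte):
    """Recover a text span from UTF-8 byte offsets, as used by fields
    like plaintext_start_byte / plaintext_end_byte."""
    raw = document_plaintext.encode("utf-8")
    return raw[start_byte:end_byte].decode("utf-8")

doc = "ฝน means rain"  # each Thai character takes 3 bytes in UTF-8
first_word = span_from_bytes(doc, 0, 6)
```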
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
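For `secondary_task`, the gold answer can be sliced out of `context` using `answer_start`. The sketch below assumes `answer_start` is a character offset (SQuAD-style); the explicit consistency check will flag the case where it is actually a byte offset, in which case one would slice the UTF-8 encoding instead:

```python
def extract_answer(context, answer_start, text):
    """Slice the gold answer out of the context, assuming
    `answer_start` is a character offset (SQuAD-style)."""
    span = context[answer_start:answer_start + len(text)]
    if span != text:
        raise ValueError("offset/text mismatch: offsets may be byte-based")
    return span

ctx = "TyDi QA covers 11 typologically diverse languages."
ans = extract_answer(ctx, 15, "11")
```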
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
| secondary_task | 49881 | 5077 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
```
@inproceedings{ruder-etal-2021-xtreme,
title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
author = "Ruder, Sebastian and
Constant, Noah and
Botha, Jan and
Siddhant, Aditya and
Firat, Orhan and
Fu, Jinlan and
Liu, Pengfei and
Hu, Junjie and
Garrette, Dan and
Neubig, Graham and
Johnson, Melvin",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.802",
doi = "10.18653/v1/2021.emnlp-main.802",
pages = "10215--10245",
}
```
| [
-0.6934342980384827,
-0.6740506291389465,
0.26815712451934814,
0.08140195906162262,
-0.21187573671340942,
0.026626024395227432,
-0.34339454770088196,
-0.3125552237033844,
0.5910134315490723,
0.46073779463768005,
-0.7823289632797241,
-0.8528454303741455,
-0.5719126462936401,
0.3013562262058... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fmplaza/offendes | fmplaza | 2023-03-27T08:19:06Z | 113 | 8 | null | [
"language:es",
"license:apache-2.0",
"region:us"
] | 2023-03-27T08:19:06Z | 2022-06-16T14:32:03.000Z | 2022-06-16T14:32:03 | ---
license: apache-2.0
language:
- es
---
# Dataset Card for OffendES
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper: OffendES:** [A New Corpus in Spanish for Offensive Language Research](https://aclanthology.org/2021.ranlp-1.123.pdf)
- **Leaderboard:** [Leaderboard for OffendES / Spanish](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6388)
- **Point of Contact: fmplaza@ujaen.es**
### Dataset Summary
Focusing on young influencers from the well-known social platforms of Twitter, Instagram, and YouTube, we have collected a corpus composed of Spanish comments manually labeled on offensive pre-defined categories. From the total corpus, we selected 30,416 posts to be publicly published; they correspond to the ones used in the MeOffendES competition at IberLEF 2021. The posts are labeled with the following categories:
- Offensive, the target is a person (OFP). Offensive text targeting a specific individual.
- Offensive, the target is a group of people or collective (OFG). Offensive text targeting a group of people belonging to the same ethnic group, gender or sexual orientation, political ideology, religious belief, or other common characteristics.
- Non-offensive, but with expletive language (NOE). A text that contains rude words, blasphemes, or swearwords but without the aim of offending, and usually with a positive connotation.
- Non-offensive (NO). Text that is neither offensive nor contains expletive language.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class offensive classification and binary offensive classification.
Competition [MeOffendES task on offensive detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6388)
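For the binary variant, one plausible grouping (an assumption on our part — check the competition overview for the official mapping) collapses the four labels as follows:

```python
# Collapse the four OffendES labels into a binary offensive task.
BINARY_MAP = {
    "OFP": "offensive",      # offensive, target is a person
    "OFG": "offensive",      # offensive, target is a group
    "NOE": "non-offensive",  # expletive language, not offensive
    "NO": "non-offensive",   # neither offensive nor expletive
}

def to_binary(label):
    return BINARY_MAP[label]
```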
### Languages
- Spanish
## Dataset Structure
### Data Instances
For each instance, there is a string for the comment id, a string for the influencer, a string for the comment text, a string for the offensive gold label, a string for the influencer's gender, and a string for the social media platform.
```
{'comment_id': '8003',
'influencer': 'dalas',
'comment': 'Estupido aburrido',
'label': 'NO',
'influencer_gender': 'man',
'media': 'youtube'
}
```
### Data Fields
- `comment_id`: a string to identify the comment
- `influencer`: a string containing the influencer associated with the comment
- `comment`: a string containing the text of the comment
- `label`: a string containing the offensive gold label
- `influencer_gender`: a string containing the gender of the influencer
- `media`: a string containing the social media platform where the comment has been retrieved
### Data Splits
The OffendES dataset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics for each class.
| Class | Train  | Validation | Test   |
| ----- | -----: | ---------: | -----: |
| NO    | 13,212 | 64         | 9,651  |
| NOE   | 1,235  | 22         | 2,340  |
| OFP   | 2,051  | 10         | 1,404  |
| OFG   | 212    | 4          | 211    |
| Total | 16,710 | 100        | 13,606 |
## Dataset Creation
### Source Data
Twitter, YouTube, Instagram
#### Who are the annotators?
Amazon Mechanical Turk workers
## Additional Information
### Licensing Information
The OffendES dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
@inproceedings{plaza-del-arco-etal-2021-offendes,
title = "{O}ffend{ES}: A New Corpus in {S}panish for Offensive Language Research",
author = "{Plaza-del-Arco}, Flor Miriam and Montejo-R{\'a}ez, Arturo and Ure{\~n}a-L{\'o}pez, L. Alfonso and Mart{\'\i}n-Valdivia, Mar{\'\i}a-Teresa",
    booktitle = "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)",
month = sep,
year = "2021",
address = "Held Online",
url = "https://aclanthology.org/2021.ranlp-1.123.pdf",
language = "English",
pages = "1096--1108"
}
@article{meoffendes2021,
title="{{Overview of MeOffendEs at IberLEF 2021: Offensive Language Detection in Spanish Variants}}",
author="{Flor Miriam Plaza-del-Arco and Casavantes, Marco and Jair Escalante, Hugo and Martín-Valdivia, M. Teresa and Montejo-Ráez, Arturo and {Montes-y-Gómez}, Manuel and Jarquín-Vásquez, Horacio and Villaseñor-Pineda, Luis}",
journal="Procesamiento del Lenguaje Natural",
url = "https://bit.ly/3QpRDfy",
volume="67",
pages="183--194",
year="2021"
} | [
-0.4363255202770233,
-0.7320181131362915,
-0.06367423385381699,
0.29609769582748413,
-0.2361757755279541,
0.09447857737541199,
-0.344471275806427,
-0.6165981888771057,
0.5118223428726196,
0.26726290583610535,
-0.4400266408920288,
-0.7975510358810425,
-0.8265796303749084,
0.4323434829711914... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MicPie/unpredictable_cluster11 | MicPie | 2022-08-04T19:50:50Z | 113 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | 2022-08-04T19:50:50Z | 2022-07-08T17:19:16.000Z | 2022-07-08T17:19:16 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-cluster11
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-cluster11" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide, i.e., it contains thousands of tasks while each task has only a few examples, in contrast to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
* 'task': task identifier
* 'input': column elements of a specific row in the table.
* 'options': for multiple choice classification, it provides the options to choose from.
* 'output': target column element of the same row as input.
* 'pageTitle': the title of the page containing the table.
* 'outputColName': output column name
* 'url': url to the website containing the table
* 'wdcFile': WDC Web Table Corpus file
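The fields above can be concatenated into a few-shot prompt as the card describes. A minimal sketch, using hypothetical records (the field names follow the card; the values are invented):

```python
# Hypothetical records in the format described above (values are invented).
examples = [
    {"task": "table_001", "input": "Team: Arsenal | Year: 2004",
     "options": ["Won", "Lost"], "output": "Won"},
    {"task": "table_001", "input": "Team: Chelsea | Year: 2008",
     "options": ["Won", "Lost"], "output": "Lost"},
]

def build_fewshot_prompt(examples, query_input):
    """Concatenate solved examples, then append the unsolved query."""
    parts = []
    for ex in examples:
        opts = " / ".join(ex["options"])
        parts.append(f"Input: {ex['input']}\nOptions: {opts}\nOutput: {ex['output']}")
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(examples, "Team: Liverpool | Year: 2005")
print(prompt)
```

The model is then expected to complete the final `Output:` line.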
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| [
-0.5860000252723694,
-0.5545541048049927,
0.474488765001297,
0.3139568865299225,
0.09260275214910507,
0.1667506992816925,
-0.14680038392543793,
-0.6086714863777161,
0.5327023863792419,
0.28222569823265076,
-1.0129457712173462,
-0.6818604469299316,
-0.6659947633743286,
0.1980273425579071,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1000000-1050000 | tomekkorbak | 2022-10-04T23:58:45Z | 113 | 0 | null | [
"region:us"
] | 2022-10-04T23:58:45Z | 2022-10-04T23:58:37.000Z | 2022-10-04T23:58:37 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/shoe-classification | keremberke | 2023-01-27T13:46:52Z | 113 | 2 | null | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Sports",
"Retail",
"Benchmark",
"region:us"
] | 2023-01-27T13:46:52Z | 2023-01-27T13:46:37.000Z | 2023-01-27T13:46:37 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Sports
- Retail
- Benchmark
---
<div align="center">
<img width="640" alt="keremberke/shoe-classification" src="https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['converse', 'adidas', 'nike']
```
### Number of Images
```json
{'train': 576, 'test': 83, 'valid': 166}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/shoe-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4](https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4?ref=roboflow2huggingface)
### Citation
```
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on October 28, 2022 at 2:38 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 825 images.
Shoes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| [
-0.43494248390197754,
-0.07015727460384369,
0.06048962473869324,
0.06551040709018707,
-0.4812255799770355,
0.1457926481962204,
-0.1964782476425171,
-0.5143302083015442,
0.15847647190093994,
-0.09099627286195755,
-0.6608752608299255,
-0.8465251922607422,
-0.4954579472541809,
-0.022798977792... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/wikipedia_vi | vietgpt | 2023-09-16T05:11:18Z | 113 | 4 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:vi",
"LM",
"region:us"
] | 2023-09-16T05:11:18Z | 2023-02-21T20:39:38.000Z | 2023-02-21T20:39:38 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: revid
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1053551922.960177
num_examples: 1284930
download_size: 569515706
dataset_size: 1053551922.960177
task_categories:
- text-generation
language:
- vi
size_categories:
- 1M<n<10M
tags:
- LM
---
# Wikipedia
- Source: https://huggingface.co/datasets/wikipedia
- Num examples: 1,281,412
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/wikipedia_vi")
``` | [
-0.5655173659324646,
-0.4780227839946747,
0.08222171664237976,
0.5414257645606995,
-0.34557467699050903,
-0.37018781900405884,
-0.17197398841381073,
-0.05019792169332504,
0.31114593148231506,
0.3841885030269623,
-0.4314413070678711,
-0.5561960339546204,
-0.4160699248313904,
0.5318042039871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
johnrobinsn/alpaca-cleaned | johnrobinsn | 2023-03-30T08:42:40Z | 113 | 0 | null | [
"region:us"
] | 2023-03-30T08:42:40Z | 2023-03-30T08:41:04.000Z | 2023-03-30T08:41:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maykcaldas/smiles-transformers | maykcaldas | 2023-04-04T22:02:47Z | 113 | 2 | null | [
"size_categories:100M<n<1B",
"language:en",
"license:mit",
"region:us"
] | 2023-04-04T22:02:47Z | 2023-04-04T13:10:48.000Z | 2023-04-04T13:10:48 | ---
license: mit
language:
- en
pretty_name: smiles-transformer-dataset
size_categories:
- 100M<n<1B
dataset_info:
features:
- name: text
dtype: string
- name: formula
dtype: string
- name: NumHDonors
dtype: int64
- name: NumHAcceptors
dtype: int64
- name: MolLogP
dtype: float64
- name: NumHeteroatoms
dtype: int64
- name: RingCount
dtype: int64
- name: NumRotatableBonds
dtype: int64
- name: NumAromaticBonds
dtype: int64
- name: NumAcidGroups
dtype: int64
- name: NumBasicGroups
dtype: int64
- name: Apol
dtype: float64
splits:
- name: train
num_bytes: 136431671689
num_examples: 908086717
- name: test
num_bytes: 7437928022
num_examples: 50487919
- name: validation
num_bytes: 7621324737
num_examples: 50605067
download_size: 34998665406
dataset_size: 151490924448
---
# smiles-transformers dataset
TODO: Add references to the datasets we curated
## dataset features
- name: text
- Molecule SMILES : string
- name: formula
- Molecular formula : string
- name: NumHDonors
- Number of hydrogen bond donors : int
- name: NumHAcceptors
- Number of hydrogen bond acceptors : int
- name: MolLogP
- Wildman-Crippen LogP : float
- name: NumHeteroatoms
- Number of heteroatoms : int
- name: RingCount
- Number of rings : int
- name: NumRotatableBonds
- Number of rotatable bonds : int
- name: NumAromaticBonds
- Number of aromatic bonds : int
- name: NumAcidGroups
- Number of acid groups : int
- name: NumBasicGroups
- Number of basic groups : int
- name: Apol
## citation information | [
-0.6160966157913208,
0.15021516382694244,
0.5756543874740601,
-0.09846407175064087,
-0.1675657480955124,
0.25720638036727905,
-0.01487067248672247,
0.1378341019153595,
0.3941274583339691,
0.5960208177566528,
-1.1705628633499146,
-0.7349252700805664,
-0.699586033821106,
0.4738711714744568,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bloyal/deeploc | bloyal | 2023-08-15T13:46:01Z | 113 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-08-15T13:46:01Z | 2023-08-08T21:44:50.000Z | 2023-08-08T21:44:50 | ---
license: cc-by-4.0
---
# DeepLoc-2.0 Training Data
Dataset from https://services.healthtech.dtu.dk/services/DeepLoc-2.0/ used to train the DeepLoc-2.0 model.
## Data preparation
Data downloaded and processed using the following Python script:
```python
import pandas as pd
df = pd.read_csv('https://services.healthtech.dtu.dk/services/DeepLoc-2.0/data/Swissprot_Train_Validation_dataset.csv').drop(['Unnamed: 0', 'Partition'], axis=1)
df['labels'] = df[['Cell membrane', 'Cytoplasm','Endoplasmic reticulum', 'Extracellular', 'Golgi apparatus', 'Lysosome/Vacuole', 'Mitochondrion', 'Nucleus', 'Peroxisome', 'Plastid']].astype('float32').values.tolist()
df['Membrane'] = df['Membrane'].astype('float32')
df = df[['Kingdom', 'ACC', 'Sequence','Membrane','labels']]
train = df.sample(frac=0.8)
df = df.drop(train.index)
val = df.sample(frac=0.5)
test = df.drop(val.index)
train = train.reset_index(drop=True)
val = val.reset_index(drop=True)
test = test.reset_index(drop=True)
train.to_parquet('deeploc-train.parquet', index=False)
val.to_parquet('deeploc-val.parquet', index=False)
test.to_parquet('deeploc-test.parquet', index=False)
```
## Labels
```python
{'Cell membrane': 0,
 'Cytoplasm': 1,
 'Endoplasmic reticulum': 2,
 'Extracellular': 3,
 'Golgi apparatus': 4,
 'Lysosome/Vacuole': 5,
 'Mitochondrion': 6,
 'Nucleus': 7,
 'Peroxisome': 8,
 'Plastid': 9}
```
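Given this index mapping, a multi-hot `labels` vector from the training data can be decoded back to location names. A small sketch (the example vector is invented, not taken from the dataset):

```python
label_index = {
    "Cell membrane": 0, "Cytoplasm": 1, "Endoplasmic reticulum": 2,
    "Extracellular": 3, "Golgi apparatus": 4, "Lysosome/Vacuole": 5,
    "Mitochondrion": 6, "Nucleus": 7, "Peroxisome": 8, "Plastid": 9,
}
index_label = {i: name for name, i in label_index.items()}

def decode_labels(vector):
    """Return the location names whose slot in the multi-hot vector is set."""
    return [index_label[i] for i, v in enumerate(vector) if v]

# A protein annotated as both cytoplasmic and nuclear (hypothetical example).
print(decode_labels([0., 1., 0., 0., 0., 0., 0., 1., 0., 0.]))  # ['Cytoplasm', 'Nucleus']
```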
## Citation
**DeepLoc-2.0:**
```
Vineet Thumuluri and others, DeepLoc 2.0: multi-label subcellular localization prediction using protein language models, Nucleic Acids Research, Volume 50, Issue W1, 5 July 2022, Pages W228–W234, https://doi.org/10.1093/nar/gkac278
```
The DeepLoc data is a derivative of the UniProt dataset:
**UniProt**
```
The UniProt Consortium
UniProt: the Universal Protein Knowledgebase in 2023
Nucleic Acids Res. 51:D523–D531 (2023)
```
| [
-0.2636411488056183,
-0.4250534772872925,
0.4131883382797241,
-0.20891982316970825,
-0.17679305374622345,
0.07904339581727982,
0.09100344777107239,
-0.05136721581220627,
0.1400682032108307,
0.38624030351638794,
-0.5671507120132446,
-0.908426821231842,
-0.4620039463043213,
0.111832119524478... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinaai/code_exercises | jinaai | 2023-09-07T08:18:18Z | 113 | 13 | null | [
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-09-07T08:18:18Z | 2023-08-17T06:38:59.000Z | 2023-08-17T06:38:59 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 1121418005
num_examples: 1468146
download_size: 486193162
dataset_size: 1121418005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- en
size_categories:
- 100M<n<1B
license: cc-by-nc-sa-4.0
---
# Dataset Card for "code_exercises"
# Code exercise
This dataset is composed of a diverse set of \~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the [related GitHub repository](https://github.com/jina-ai/textbook).
The generated exercises follow the format of the [Human Eval benchmark](https://github.com/openai/human-eval). Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.
This approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ [(Gunasekar et al. 2023)](https://doi.org/10.48550/arXiv.2306.11644).
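Each sample therefore resembles a Human Eval problem: concatenating the `problem` field (signature plus docstring) with the `solution` field yields a runnable function. A hypothetical illustration (this exact exercise is invented, not taken from the dataset):

```python
# "problem" field: a signature with a descriptive docstring.
problem = '''def count_vowels(text):
    """Return the number of vowels (a, e, i, o, u) in text, case-insensitively."""
'''

# "solution" field: the body completing the exercise.
solution = '''    return sum(1 for ch in text.lower() if ch in "aeiou")
'''

# Concatenating the two fields yields a runnable function.
namespace = {}
exec(problem + solution, namespace)
print(namespace["count_vowels"]("Textbook"))  # 3
```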
## Disclaimer
* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the [OpenAI Terms of Use](https://openai.com/policies/terms-of-use), in so far as legally applicable.
* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.
## Synthetic exercise creation
Model distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in [Textbooks Are All You Need](https://doi.org/10.48550/arXiv.2306.11644).
Key to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.
For fuller details and implementation code, see the [related GitHub repository](https://github.com/jina-ai/textbook).
### Diversity
The main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.
And even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.
### Using a topic tree to build diverse prompts
We constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.
We generated prompts by randomly selecting two of those roughly two thousand topics and combining them:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
```
To increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
Write it for a {profession}.
```
In principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, this means our method can generate an 8 billion token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 exercises published here are a small random subset of what is possible.
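The arithmetic and the prompt construction can be sketched in a few lines (the topic and profession lists below are invented placeholders, not the real ones):

```python
import math
import random

n_topics, n_professions = 2000, 40
n_pairs = math.comb(n_topics, 2)       # unordered topic pairs: 1,999,000
n_prompts = n_pairs * n_professions    # 79,960,000, i.e. roughly 80 million

# Hypothetical fine-grained topics and professions (placeholders).
topics = ["recursion on binary trees", "regex-based log parsing",
          "heap-backed priority queues"]
professions = ["economist", "engineer", "social worker"]

def make_prompt(rng):
    """Combine two random topics and a random profession into one prompt."""
    t1, t2 = rng.sample(topics, 2)
    prof = rng.choice(professions)
    return (f"Create a code completion exercise on the intersection of {t1} and {t2}. "
            f"Write it for a {prof}.")

print(n_pairs, n_prompts)
print(make_prompt(random.Random(0)))
```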
## Credits
This dataset was developed at [Jina.ai](https://jina.ai/) | [
-0.3706604242324829,
-0.7756279110908508,
0.22182400524616241,
0.1750682294368744,
-0.03100866638123989,
0.2042509913444519,
-0.42155322432518005,
-0.16627679765224457,
-0.12258373200893402,
0.26823946833610535,
-0.4916025400161743,
-0.4298032224178314,
-0.43122926354408264,
0.178921088576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shrutisingh/dataset_recommendation_mcq_mc | shrutisingh | 2023-10-12T17:15:59Z | 113 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-12T17:15:59Z | 2023-10-12T17:02:16.000Z | 2023-10-12T17:02:16 | ---
license: apache-2.0
---
Task: MCQ with multiple correct answers.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Each instance provides a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with multiple correct answers. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) dataset. [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) is a variant of this MCQ question-answering task with only single correct answer. | [
-0.4819428026676178,
-0.5125411152839661,
0.4947871267795563,
0.035988591611385345,
-0.17171205580234528,
-0.09548410028219223,
0.05673525854945183,
-0.010951361618936062,
0.22250239551067352,
0.6700291633605957,
-0.7486115097999573,
-0.46721866726875305,
-0.24625013768672943,
0.3186288177... | null | null | null | null | null | null | null | null | null | null | null | null | null |