id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tomekkorbak/pile-curse-chunk-28 | tomekkorbak | 2022-03-18T22:06:36Z | 21 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:36Z | 2022-03-18T22:06:26.000Z | 2022-03-18T22:06:26 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/metfaces | huggan | 2022-03-25T09:13:25Z | 21 | 0 | null | [
"region:us"
] | 2022-03-25T09:13:25Z | 2022-03-25T09:12:34.000Z | 2022-03-25T09:12:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UrukHan/t5-russian-spell_III | UrukHan | 2022-03-27T13:01:02Z | 21 | 0 | null | [
"region:us"
] | 2022-03-27T13:01:02Z | 2022-03-27T12:59:47.000Z | 2022-03-27T12:59:47 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
h4iku/coconut_c2005 | h4iku | 2023-09-28T23:19:25Z | 21 | 0 | null | [
"code",
"region:us"
] | 2023-09-28T23:19:25Z | 2022-03-30T01:06:36.000Z | 2022-03-30T01:06:36 | ---
tags:
- code
pretty_name: CoCoNuT-C(2005)
---
# Dataset Card for CoCoNuT-C(2005)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset's name is the cutoff year, i.e., the year of the newest commit in the dataset.
### Languages
- C
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
The 4 columns are row-aligned: the i-th row of each column belongs to the same instance.
For example:
First 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
First 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
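Because the columns are row-aligned, these diff views can be reproduced with a small helper. `to_diffs` below is a hypothetical illustration, not part of the dataset tooling:

```python
def to_diffs(rem_rows, add_rows):
    """Pair row-aligned `rem` (buggy) and `add` (fixed) entries into
    diff-style strings. Illustrative helper only."""
    return [f"- {rem}\n+ {add}" for rem, add in zip(rem_rows, add_rows)]
```

For example, `to_diffs(["public Object next() {"], ["public FSEntry next() {"])` reproduces the fifth diff above.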
`context` contains the associated context: the in-lined buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id and `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id (`68a6301301378680519f2b146daec37812a1bc22`), the file name (`StringBuffer.java`), and the original path within the project (`core/src/classpath/java/java/lang/StringBuffer.java`).
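The parsing just described can be sketched as follows. `parse_meta` is a hypothetical helper whose assumptions (a 40-character hex commit id, a `buggy` path marker) are based solely on the single example entry shown above:

```python
def parse_meta(meta: str):
    """Split a `meta` entry into (project_id, commit_id, file_name,
    path_within_project). Sketch based on the example entry above."""
    project_id, abs_path = meta.split(maxsplit=1)
    parts = abs_path.split("/")
    # Assumption: the commit id is the 40-character hex path component.
    commit_id = next(p for p in parts if len(p) == 40)
    file_name = parts[parts.index(commit_id) + 1]
    # Assumption: the original project path follows the "buggy" marker.
    path_within_project = "/".join(parts[parts.index("buggy") + 1:])
    return project_id, commit_id, file_name, path_within_project
```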
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 12,577 | 2,735,506 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| [
-0.3948109745979309,
-0.6973398327827454,
0.21130385994911194,
0.16425931453704834,
-0.3735975921154022,
0.22806422412395477,
-0.23279446363449097,
-0.5321910381317139,
0.27565085887908936,
0.32141244411468506,
-0.4550902247428894,
-0.5760902762413025,
-0.4614473879337311,
0.20886178314685... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/few-shot-pokemon | huggan | 2022-04-12T14:06:36Z | 21 | 4 | null | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-12T14:06:36Z | 2022-04-01T11:56:00.000Z | 2022-04-01T11:56:00 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.5524431467056274,
-0.8028348684310913,
0.018525347113609314,
0.33572760224342346,
-0.09379876405000687,
-0.17921070754528046,
-0.08067688345909119,
-0.28826087713241577,
0.07932980358600616,
-0.0419771634042263,
-0.3548431992530823,
-0.3427698016166687,
-0.3939037621021271,
0.0571840740... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ghomasHudson/hotpotExtendedAno | ghomasHudson | 2022-04-13T11:01:17Z | 21 | 0 | null | [
"region:us"
] | 2022-04-13T11:01:17Z | 2022-04-13T10:55:51.000Z | 2022-04-13T10:55:51 | # hotpotQA-Extended (Annotated)
A version of [HotpotQA-Extended](https://huggingface.co/datasets/ghomasHudson/hotpotExtended) with extra annotations about what part of the input contains the answer. | [
-0.5732613801956177,
-0.7813419103622437,
0.21168646216392517,
0.41923266649246216,
-0.37145280838012695,
-0.0803055539727211,
0.034423261880874634,
-0.22188685834407806,
0.9087356328964233,
0.8273818492889404,
-0.6925815939903259,
-0.3317567706108093,
-0.7313722372055054,
0.14359435439109... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dl4phys/top_tagging | dl4phys | 2022-04-18T07:43:02Z | 21 | 0 | null | [
"license:cc-by-4.0",
"arxiv:1902.09914",
"region:us"
] | 2022-04-18T07:43:02Z | 2022-04-16T09:53:34.000Z | 2022-04-16T09:53:34 | ---
license: cc-by-4.0
---
# Dataset Card for Top Quark Tagging
## Table of Contents
- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka](gregor.kasieczka@uni-hamburg.de)
### Dataset Summary
Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p_x, p_y, p_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.
### Supported Tasks and Leaderboards
- `tabular-classification`: The dataset can be used to train a model for tabular binary classification: predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and AUC score.
## Dataset Structure
### Data Instances
Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momentum of the top quark is also provided, along with a label in the `is_signal_new` column that indicates whether the event stems from a top quark (1) or QCD background (0). An example instance looks as follows:
```
{'E_0': 474.0711364746094,
'PX_0': -250.34703063964844,
'PY_0': -223.65196228027344,
'PZ_0': -334.73809814453125,
...
'E_199': 0.0,
'PX_199': 0.0,
'PY_199': 0.0,
'PZ_199': 0.0,
'truthE': 0.0,
'truthPX': 0.0,
'truthPY': 0.0,
'truthPZ': 0.0,
'ttv': 0,
'is_signal_new': 0}
```
### Data Fields
The fields in the dataset have the following meaning:
- `E_i`: the energy of jet constituent \\(i\\).
- `PX_i`: the \\(x\\) component of the jet constituent's momentum.
- `PY_i`: the \\(y\\) component of the jet constituent's momentum.
- `PZ_i`: the \\(z\\) component of the jet constituent's momentum.
- `truthE`: the energy of the top quark.
- `truthPX`: the \\(x\\) component of the top quark's momentum.
- `truthPY`: the \\(y\\) component of the top quark's momentum.
- `truthPZ`: the \\(z\\) component of the top quark's momentum.
- `ttv`: a flag that indicates which split (train, validation, or test) a jet belongs to. Redundant, since each split is provided as a separate dataset.
- `is_signal_new`: the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.
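As a quick illustration of the zero-padding convention, the non-padded constituents of a jet and their transverse momenta can be recovered from an instance dict. This is an illustrative sketch, not an official loader:

```python
import math

def constituent_pts(instance, n_max=200):
    """Return the transverse momenta (sqrt(px^2 + py^2)) of the real,
    non-padded constituents of one jet. Zero-padded entries (E == 0)
    mark the end of the jet."""
    pts = []
    for i in range(n_max):
        if instance[f"E_{i}"] == 0.0:  # padding starts here
            break
        pts.append(math.hypot(instance[f"PX_{i}"], instance[f"PY_{i}"]))
    return pts
```

Because the constituents are \\(p_T\\)-sorted, the returned list should be non-increasing.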
### Data Splits
| | train | validation | test |
|------------------|--------:|-----------:|-------:|
| Number of events | 1211000 | 403000 | 404000 |
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
```
@dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
| [
-0.5662568211555481,
-0.13094116747379303,
0.1853816956281662,
-0.19277219474315643,
-0.46978574991226196,
0.3241800367832184,
-0.04175647720694542,
0.14693352580070496,
0.24476367235183716,
0.02506265416741371,
-0.6231874227523804,
-0.9027178287506104,
-0.4702358543872833,
-0.167466163635... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/twitter_pos_vcb | strombergnlp | 2022-10-25T21:42:56Z | 21 | 2 | twitter-pos-vcb | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-25T21:42:56Z | 2022-04-28T10:10:59.000Z | 2022-04-28T10:10:59 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech
paperswithcode_id: twitter-pos-vcb
pretty_name: Twitter PoS VCB
---
# Dataset Card for "twitter-pos-vcb"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
- **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
- **Paper:** [https://aclanthology.org/R13-1026.pdf](https://aclanthology.org/R13-1026.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 4.51 MiB
- **Size of the generated dataset:** 26.88 MB
- **Total amount of disk used:** 31.39 MB
### Dataset Summary
Part-of-speech tagging is a basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This data is the vote-constrained bootstrapped data generated to support state-of-the-art results.
It comprises about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. A tweet is added to the dataset only when both taggers' outputs
are completely compatible over the whole tweet.
This data is recommended for use as training data **only**, and not as evaluation data.
For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English, non-region-specific. `bcp47:en`
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### twitter_pos_vcb
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
```
### Data Splits
| name |tokens|sentences|
|---------|----:|---------:|
|twitter-pos-vcb|1,543,126| 159,492|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons Attribution 4.0 (CC-BY)
### Citation Information
```
@inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
pages={198--206},
year={2013}
}
```
### Contributions
Uploaded by the author ([@leondz](https://github.com/leondz)) | [
-0.40417972207069397,
-0.5800196528434753,
0.13377979397773743,
0.3861446976661682,
-0.3768980801105499,
0.2172701358795166,
-0.3944750726222992,
-0.4196683466434479,
0.7080996632575989,
0.2534646689891815,
-0.7843285202980042,
-1.0028108358383179,
-0.6512401700019836,
-0.04666472971439361... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_zh-CN | SetFit | 2022-06-20T14:31:57Z | 21 | 2 | null | [
"region:us"
] | 2022-06-20T14:31:57Z | 2022-05-06T09:12:07.000Z | 2022-05-06T09:12:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taln-ls2n/kptimes | taln-ls2n | 2022-09-23T07:38:28Z | 21 | 1 | null | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-09-23T07:38:28Z | 2022-05-06T09:34:40.000Z | 2022-05-06T09:34:40 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KPTimes
---
# KPTimes Benchmark Dataset for Keyphrase Generation
## About
KPTimes is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 290K news articles in English collected from the [New York Times](https://www.nytimes.com/) and the [Japan
Times](https://www.japantimes.co.jp/).
Keyphrases were annotated by editors in a semi-automated manner (that is, editors revise a set of keyphrases proposed by an algorithm and provide additional keyphrases).
Details about the dataset can be found in the original paper [(Gallina et al., 2019)][gallina-2019].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`. <u>P</u>resent keyphrases are ordered according to their first occurrence position in the text.
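As an illustration of the Present category, a contiguous stemmed match can be sketched as below. The `stem` argument defaults to simple lowercasing for brevity, whereas the actual pipeline uses spaCy tokenization and nltk's Porter stemmer, and `prmu.py` additionally handles the Reordered, Mixed, and Unseen cases:

```python
def is_present(keyphrase, text, stem=str.lower):
    """Check whether a keyphrase occurs contiguously in the text after
    applying `stem` to every word. Simplified sketch of the Present
    category; whitespace splitting stands in for real tokenization."""
    kp = [stem(w) for w in keyphrase.split()]
    tokens = [stem(w) for w in text.split()]
    n = len(kp)
    return any(tokens[i:i + n] == kp for i in range(len(tokens) - n + 1))
```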
## Content and statistics
The dataset contains the following splits:
| Split | # documents | avg. # words | avg. # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 259,923 | 921 | 5.03 | 45.61 | 15.57 | 29.63 | 9.19 |
| Validation | 10,000 | 921 | 5.02 | 45.22 | 15.78 | 29.60 | 9.41 |
| Test | 20,000 | 648 | 5.03 | 60.64 | 8.90 | 18.95 | 11.51 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **date**: publishing date (YYYY/MM/DD)
- **categories**: categories of the article (1 or 2 categories)
## References
- (Gallina et al., 2019) Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019.
[KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents][gallina-2019].
In Proceedings of the 12th International Conference on Natural Language Generation, pages 130–135, Tokyo, Japan. Association for Computational Linguistics.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[gallina-2019]: https://aclanthology.org/W19-8617/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | [
-0.19650548696517944,
-0.5224202871322632,
0.4107019901275635,
0.23341117799282074,
-0.47818079590797424,
0.20159432291984558,
-0.14515642821788788,
-0.08424612879753113,
0.0979401022195816,
0.2661615014076233,
-0.6398125886917114,
-0.7918916344642639,
-0.44539621472358704,
0.5415648221969... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ncats/EpiSet4NER-v2 | ncats | 2022-09-20T15:25:56Z | 21 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_dataset... | 2022-09-20T15:25:56Z | 2022-05-12T08:47:04.000Z | 2022-05-12T08:47:04 | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: EpiSet4NER-v2
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- epidemiology
- rare disease
- named entity recognition
- NER
- NIH
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard)
- **Paper:** Pending
### Dataset Summary
EpiSet4NER-v2 is a gold-standard dataset for epidemiological entity recognition of location, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in [the National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods, then manually validated by biomedical researchers, including a GARD curator (genetic and rare disease expert). This weakly-supervised teaching method allowed us to construct this high quality dataset in an efficient manner and achieve satisfactory performance on a multi-type token classification problem. It was used to train [EpiExtract4GARD-v2](https://huggingface.co/ncats/EpiExtract4GARD-v2), a BioBERT-based model fine-tuned for NER.
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature that indicates sentence number.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4),`B-STAT` (5),`I-STAT` (6).
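For illustration, the integer `ner_tags` of an example can be decoded back into typed entity spans with a small BIO decoder. `decode_entities` is a hypothetical helper and the example sentence in the usage note is invented:

```python
# Label list copied from the card's `ner_tags` description.
LABELS = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

def decode_entities(tokens, ner_tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag_id in zip(tokens, ner_tags):
        tag = LABELS[tag_id]
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], [token]]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]
```

For instance, tokens `["Prevalence", "in", "France", "is", "1.7", "per", "1,000,000", "live", "births"]` with tags `[3, 0, 1, 0, 5, 6, 6, 6, 6]` decode to an EPI, a LOC, and a STAT span.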
### Data Splits
|name |train |validation|test|
|---------|-----:|----:|----:|
|EpiSet \# of abstracts|456|114|50|
|EpiSet \# tokens |117888|31262|13910|
## Dataset Creation

*Figure 1:* Creation of EpiSet4NER by NIH/NCATS
Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
*Table 1:* Programmatic labeling of EpiSet4NER
| Evaluation Level | Entity | Precision | Recall | F1 |
|:----------------:|:------------------------:|:---------:|:------:|:-----:|
| Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
| | Location | 0.597 | 0.661 | 0.627 |
| | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
| Token-Level | Overall | 0.805 | 0.710 | 0.755 |
| | Location | 0.868 | 0.713 | 0.783 |
| | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
| | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
An example of the text labeling:

*Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [\[Figure citation\]](https://pubmed.ncbi.nlm.nih.gov/33649778/)
### Curation Rationale
To train ML/DL models that automate the process of rare disease epidemiological curation. This is crucial information to patients & families, researchers, grantors, and policy makers, primarily for funding purposes.
### Source Data
620 rare disease abstracts, covering 488 diseases, classified as epidemiological by an LSTM RNN rare disease epi classifier. See Figure 1.
#### Initial Data Collection and Normalization
A random sample of 500 disease names was gathered from a list of ~6,061 rare diseases tracked by GARD until ≥50 abstracts had been returned for each disease or the EBI RESTful API results were exhausted. Though ~25,000 abstracts were retrieved from PubMed's database, only 7,699 unique abstracts were returned for 488 diseases. Of those 7,699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
### Annotations
#### Annotation process
Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
#### Who are the annotators?
Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
The test set was manually corrected by 2 more NCATS researchers and a GARD curator (genetic and rare disease expert).
### Personal and Sensitive Information
None. These are freely available abstracts from PubMed.
## Considerations for Using the Data
### Social Impact of Dataset
Assisting the 25-30 million Americans with rare diseases. It can additionally be useful for Orphanet or CDC researchers/curators.
### Discussion of Biases and Limitations
- There were errors in the source file that contained rare disease synonyms of names, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
- The abstracts were gathered through the EBI API and are thus subject to any biases of that API. The NCBI API returns very different results, as shown by an API analysis here.
- The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 of 0.701 (measured against a GARD curator who used full-text articles to determine the truth value of each epidemiological abstract). With 620 epi abstracts filtered from the 7699 original rare disease abstracts, there are likely several false positive and false negative epi abstracts.
- Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
- The programmatic labeling was very imprecise as seen by Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. Because identifying epidemiological information is quite difficult for non-expert humans, this set, and especially a possible future gold-standard dataset, represents a challenging gauntlet for NLP systems, especially those focusing on numeracy, to compete on.
## Additional Information
### Dataset Curators
[NIH GARD](https://rarediseases.info.nih.gov/about-gard/pages/23/about-gard)
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset.
cat-state/mscoco-1st-caption | cat-state | 2022-05-29T20:30:35Z | 21 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-05-29T20:30:35Z | 2022-05-29T19:58:35.000Z | 2022-05-29T19:58:35 | ---
license: cc-by-4.0
---
To reproduce, run `pip install -r requirements.txt` and `download.sh`.
amueller/syntactic_transformations | amueller | 2022-10-23T06:11:48Z | 21 | 0 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:2 languages",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:de",
"license:mit",
"region:us"
] | 2022-10-23T06:11:48Z | 2022-06-13T06:03:08.000Z | 2022-06-13T06:03:08 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- de
license:
- mit
multilinguality:
- 2 languages
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- syntactic-evaluation
task_ids:
- syntactic-transformations
---
# Dataset Card for syntactic_transformations
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/sebschu/multilingual-transformations
- **Paper:** [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Aaron Mueller](mailto:amueller@jhu.edu)
### Dataset Summary
This contains the syntactic transformations datasets used in [Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models](https://aclanthology.org/2022.findings-acl.106/). It consists of English and German question formation and passivization transformations. This dataset also contains zero-shot cross-lingual transfer training and evaluation data.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English and German.
## Dataset Structure
### Data Instances
A typical data point consists of a source sequence ("src"), a target sequence ("tgt"), and a task prefix ("prefix"). The prefix indicates whether a given sequence should be kept the same in the target (indicated by the "decl:" prefix) or transformed into a question/passive ("quest:"/"passiv:", respectively). An example follows:
```
{"src": "the yak has entertained the walruses that have amused the newt.",
 "tgt": "has the yak entertained the walruses that have amused the newt?",
 "prefix": "quest: "}
```
### Data Fields
- src: the original source sequence.
- tgt: the transformed target sequence.
- prefix: indicates which transformation to perform to map from the source to target sequences.
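A minimal sketch of how these fields could be combined into a seq2seq training pair (e.g., for a T5-style prefixed model, as in the paper); the example values come from the data instance above, but the variable names are ours:

```python
# One data point from the dataset, as described in Data Fields.
example = {
    "src": "the yak has entertained the walruses that have amused the newt.",
    "tgt": "has the yak entertained the walruses that have amused the newt?",
    "prefix": "quest: ",
}

# The task prefix is prepended to the source; the target is used as-is.
model_input = example["prefix"] + example["src"]
target = example["tgt"]
```

A "decl: " prefix would instead ask the model to reproduce the source unchanged.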
### Data Splits
The datasets are split into training, dev, test, and gen ("generalization") sets. The training sets are for fine-tuning the model. The dev and test sets are for evaluating model abilities on in-domain transformations. The generalization sets are for evaluating the inductive biases of the model.
NOTE: for the zero-shot cross-lingual transfer datasets, the generalization sets are split into in-domain and out-of-domain syntactic structures. For in-domain transformations, use "gen_rc_o" for question formation or "gen_pp_o" for passivization. For out-of-domain transformations, use "gen_rc_s" for question formation or "gen_pp_s" for passivization.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
ju-resplande/askD | ju-resplande | 2022-10-29T12:19:35Z | 21 | 3 | null | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"sourc... | 2022-10-29T12:19:35Z | 2022-06-27T15:26:30.000Z | 2022-06-27T15:26:30 | ---
annotations_creators:
- no-annotation
language_creators:
- found
- machine-generated
language:
- en
- pt
license:
- lgpl-3.0
multilinguality:
- multilingual
- translation
size_categories:
- 100K<n<1M
source_datasets:
- extended|eli5
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- closed-domain-qa
pretty_name: AskDocs
---
# Dataset Card for askD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/ju-resplande/askD
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[ELI5 dataset](https://huggingface.co/datasets/eli5) adapted on [Medical Questions (AskDocs)](https://www.reddit.com/r/AskDocs/) subreddit.
We additionally translated to Portuguese and used <a href="https://github.com/LasseRegin/medical-question-answer-data">external data from here</a>.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| | Train | Valid | Test | External |
| ----- | ------ | ----- | ---- | -------- |
| en | 24256 | 5198 | 5198 | 166804 |
| pt | 24256 | 5198 | 5198 | 166804 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The dataset questions and answers span a period from January 2013 to December 2019.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{Gomes20202,
author = {GOMES, J. R. S.},
title = {PLUE: Portuguese Language Understanding Evaluation},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/ju-resplande/askD}},
commit = {42060c4402c460e174cbb75a868b429c554ba2b7}
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
Paul/hatecheck-french | Paul | 2022-07-05T10:40:23Z | 21 | 0 | null | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:fr",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | 2022-07-05T10:40:23Z | 2022-07-05T10:39:16.000Z | 2022-07-05T10:39:16 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- fr
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: French HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
embedding-data/SPECTER | embedding-data | 2022-08-02T03:45:52Z | 21 | 0 | embedding-data/SPECTER | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"arxiv:2004.07180",
"region:us"
] | 2022-08-02T03:45:52Z | 2022-07-08T02:41:34.000Z | 2022-07-08T02:41:34 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/SPECTER
pretty_name: SPECTER
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "SPECTER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/allenai/specter](https://github.com/allenai/specter)
- **Repository:** [More Information Needed](https://github.com/allenai/specter/blob/master/README.md)
- **Paper:** [More Information Needed](https://arxiv.org/pdf/2004.07180.pdf)
- **Point of Contact:** [@armancohan](https://github.com/armancohan), [@sergeyf](https://github.com/sergeyf), [@haroldrubio](https://github.com/haroldrubio), [@jinamshah](https://github.com/jinamshah)
### Dataset Summary
Dataset containing triplets (three sentences): anchor, positive, and negative. Each sentence is a paper title.
Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
## Dataset Structure
Each example is a dictionary with a single key, "set", whose value is a list of three sentences (anchor, positive, and negative):
```
{"set": [anchor, positive, negative]}
{"set": [anchor, positive, negative]}
...
{"set": [anchor, positive, negative]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/SPECTER")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 684100
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Curation Rationale
[More Information Needed](https://github.com/allenai/specter)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/allenai/specter)
#### Who are the source language producers?
[More Information Needed](https://github.com/allenai/specter)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/allenai/specter)
#### Who are the annotators?
[More Information Needed](https://github.com/allenai/specter)
### Personal and Sensitive Information
[More Information Needed](https://github.com/allenai/specter)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/allenai/specter)
### Discussion of Biases
[More Information Needed](https://github.com/allenai/specter)
### Other Known Limitations
[More Information Needed](https://github.com/allenai/specter)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/allenai/specter)
### Licensing Information
[More Information Needed](https://github.com/allenai/specter)
### Citation Information
### Contributions
MicPie/unpredictable_cluster14 | MicPie | 2022-08-04T19:53:18Z | 21 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | 2022-08-04T19:53:18Z | 2022-07-08T17:29:56.000Z | 2022-07-08T17:29:56 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-cluster14
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-cluster14" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide: it contains thousands of tasks, each with only a few examples, whereas most current NLP datasets are deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target representing an individual column of that row. Each task contains several such examples, which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', and 'wdcFile'.
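To make the schema concrete, here is a minimal sketch of how examples of one task could be concatenated into a few-shot prompt. The rows below are invented for illustration; real rows follow the same 'task'/'input'/'options'/'output' structure described above.

```python
# Illustrative sketch: turning UnpredicTable-style examples into a few-shot
# prompt. The example rows are invented; real rows share this schema.

def build_few_shot_prompt(examples, query_input):
    """Concatenate (input, output) pairs of one task into a single prompt."""
    lines = []
    for ex in examples:
        if ex.get("options"):  # only present for multiple-choice tasks
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
    lines.append(f"Input: {query_input}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    {"task": "demo_task", "input": "Country: France", "options": [], "output": "Paris"},
    {"task": "demo_task", "input": "Country: Japan", "options": [], "output": "Tokyo"},
]
prompt = build_few_shot_prompt(examples, "Country: Italy")
print(prompt)
```

The resulting string ends with an open `Output:` slot for a language model to complete, which is the few-shot usage the dataset was designed for.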
### Data Fields
- 'task': task identifier.
- 'input': column elements of a specific row in the table.
- 'options': for multiple-choice classification, the options to choose from.
- 'output': the target column element of the same row as the input.
- 'pageTitle': the title of the page containing the table.
- 'outputColName': the name of the output column.
- 'url': the URL of the website containing the table.
- 'wdcFile': the WDC Web Table Corpus file.
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| [
-0.5877187848091125,
-0.5580090284347534,
0.47174975275993347,
0.32884353399276733,
0.08426705002784729,
0.1515534520149231,
-0.14722956717014313,
-0.59833824634552,
0.5179962515830994,
0.28314709663391113,
-1.0306464433670044,
-0.6910187602043152,
-0.6535444259643555,
0.208755224943161,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hungnm/multilingual-amazon-review-sentiment-processed | hungnm | 2022-07-09T17:41:04Z | 21 | 0 | null | [
"license:mit",
"region:us"
] | 2022-07-09T17:41:04Z | 2022-07-09T17:30:11.000Z | 2022-07-09T17:30:11 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
demelin/wino_x | demelin | 2022-07-15T22:28:18Z | 21 | 1 | null | [
"task_categories:translation",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
... | 2022-07-15T22:28:18Z | 2022-07-14T11:21:23.000Z | 2022-07-14T11:21:23 | ---
annotations_creators:
- no-annotation
language:
- en
- de
- fr
- ru
language_creators:
- machine-generated
- expert-generated
license:
- mit
multilinguality:
- multilingual
- translation
pretty_name: Wino-X
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
- coreference-resolution
- commonsense-reasoning
task_ids:
- multiple-choice-qa
- language-modeling
---
# Dataset Card for Wino-X
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Repository:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Paper:** [Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution](https://aclanthology.org/2021.emnlp-main.670/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](https://demelin.github.io)
### Dataset Summary
Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across
multiple languages.
### Supported Tasks and Leaderboards
- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- commonsense-reasoning: The dataset can also be used to evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An [XLM-based model](https://huggingface.co/xlm-roberta-base) can be used for this purpose.
### Languages
The dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.
## Dataset Structure
### Data Instances
The following represents a typical *MT-Wino-X* instance (for the English-German translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
"translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
"answer": 1,
"pronoun1": "sie",
"pronoun2": "er",
"referent1_en": "vase",
"referent2_en": "bouquet",
"true_translation_referent_of_pronoun1_de": "Vase",
"true_translation_referent_of_pronoun2_de": "Blumenstrauß",
"false_translation_referent_of_pronoun1_de": "Vase",
"false_translation_referent_of_pronoun2_de": "Blumenstrauß"}
The following represents a typical *LM-Wino-X* instance (for the English-French translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"context_en": "The woman looked for a different vase for the bouquet because _ was too small.",
"context_fr": "La femme a cherché un vase différent pour le bouquet car _ était trop petit.",
"option1_en": "the bouquet",
"option2_en": "the vase",
"option1_fr": "le bouquet",
"option2_fr": "le vase",
"answer": 2,
"context_referent_of_option1_fr": "bouquet",
"context_referent_of_option2_fr": "vase"}
### Data Fields
For *MT-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "translation1": First translation candidate.
- "translation2": Second translation candidate.
- "answer": ID of the correct translation.
- "pronoun1": Translation of the ambiguous source pronoun in translation1.
- "pronoun2": Translation of the ambiguous source pronoun in translation2.
- "referent1_en": English referent of the translation of the ambiguous source pronoun in translation1.
- "referent2_en": English referent of the translation of the ambiguous source pronoun in translation2.
- "true_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the correct translation.
- "true_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the correct translation.
- "false_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the incorrect translation.
- "false_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the incorrect translation.
For *LM-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "context_en": Same English sentence, where 'it' is replaced by a gap.
- "context_fr": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.
- "option1_en": First filler option for the English sentence.
- "option2_en": Second filler option for the English sentence.
- "option1_[TGT-LANG]": First filler option for the target language sentence.
- "option2_[TGT-LANG]": Second filler option for the target language sentence.
- "answer": ID of the correct gap filler.
- "context_referent_of_option1_[TGT-LANG]": English translation of option1_[TGT-LANG].
- "context_referent_of_option2_[TGT-LANG]": English translation of option2_[TGT-LANG]
### Data Splits
*Wino-X* was designed as an evaluation-only benchmark and is therefore intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :).
## Dataset Creation
### Curation Rationale
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the source language producers?
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Annotations
#### Annotation process
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the annotators?
Annotations were generated automatically and verified by the dataset author / curator for correctness.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Discussion of Biases
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Other Known Limitations
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](https://demelin.github.io)
### Licensing Information
MIT
### Citation Information
@inproceedings{Emelin2021WinoXMW,
title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
author={Denis Emelin and Rico Sennrich},
booktitle={EMNLP},
year={2021}
} | [
-0.22463896870613098,
-0.5446244478225708,
0.2934102416038513,
0.18022625148296356,
-0.27536875009536743,
-0.27078959345817566,
-0.3936435580253601,
-0.408444344997406,
0.18256960809230804,
0.41708192229270935,
-0.6760407090187073,
-0.7039865255355835,
-0.5220679640769958,
0.55193448066711... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905463 | autoevaluate | 2022-07-14T18:31:17Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T18:31:17Z | 2022-07-14T12:47:33.000Z | 2022-07-14T12:47:33 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/led-large-book-summary
metrics: ['bleu', 'perplexity']
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | [
-0.4842089116573334,
-0.08894293755292892,
0.1928825080394745,
0.09766392409801483,
-0.2318985015153885,
-0.1434304416179657,
-0.003659140784293413,
-0.21883179247379303,
0.0752636194229126,
0.4885731041431427,
-0.9775663614273071,
-0.32461002469062805,
-0.6363404989242554,
-0.059224665164... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695557 | autoevaluate | 2022-07-24T08:25:16Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T08:25:16Z | 2022-07-24T08:21:13.000Z | 2022-07-24T08:21:13 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/tinyroberta-6l-768d
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-6l-768d
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | [
-0.44156935811042786,
-0.43533575534820557,
0.3977418541908264,
0.06785260140895844,
0.002758677816018462,
0.06408793479204178,
0.10239019244909286,
-0.4156307876110077,
0.04647817835211754,
0.375997394323349,
-1.287248969078064,
-0.03274814039468765,
-0.4586186707019806,
-0.03000847063958... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
climatebert/environmental_claims | climatebert | 2023-05-23T08:53:10Z | 21 | 9 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2209.00507",
"region:us"
] | 2023-05-23T08:53:10Z | 2022-09-01T14:19:17.000Z | 2022-09-01T14:19:17 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: EnvironmentalClaims
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 346686
num_examples: 2117
- name: validation
num_bytes: 43018
num_examples: 265
- name: test
num_bytes: 42810
num_examples: 265
download_size: 272422
dataset_size: 432514
---
# Dataset Card for environmental_claims
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507)
- **Leaderboard:**
- **Point of Contact:** [Dominik Stammbach](mailto:dominsta@ethz.ch)
### Dataset Summary
We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given sentence is an environmental claim or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
"text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.",
"label": 0
}
```
### Data Fields
- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts
- label: the label (0 -> no environmental claim, 1 -> environmental claim)
### Data Splits
The dataset is split into:
- train: 2,400
- validation: 300
- test: 300
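As a quick sketch of how the integer labels map onto the class names declared in the metadata above ('0' -> 'no', '1' -> 'yes'), the snippet below decodes labels on a couple of invented stand-in rows; real rows follow the same `text`/`label` schema.

```python
# Sketch: decoding the integer labels of environmental_claims using the
# class names declared in the dataset_info block (0 -> "no", 1 -> "yes").
# The rows below are stand-ins for real dataset examples.

LABEL_NAMES = ["no", "yes"]  # index = integer label

def decode(example):
    """Attach the human-readable class name to an example."""
    return {**example, "label_name": LABEL_NAMES[example["label"]]}

rows = [
    {"text": "We reduced our carbon footprint by 20% this year.", "label": 1},
    {"text": "Revenue grew in the second quarter.", "label": 0},
]
decoded = [decode(r) for r in rows]
claims = sum(r["label"] for r in rows)
print(f"{claims}/{len(rows)} sentences labelled as environmental claims")
```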
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts.
For more information regarding our sample selection, please refer to Appendix B of our paper (see [Citation Information](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper (see [Citation Information](#citation-information)).
#### Who are the annotators?
The authors and students at the University of Zurich with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal or sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Dominik Stammbach
- Nicolas Webersinke
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@misc{stammbach2022environmentalclaims,
title = {A Dataset for Detecting Real-World Environmental Claims},
author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus},
year = {2022},
doi = {10.48550/ARXIV.2209.00507},
url = {https://arxiv.org/abs/2209.00507},
publisher = {arXiv},
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | [
-0.27816593647003174,
-0.4415395259857178,
0.3694171607494354,
0.031433042138814926,
-0.14770832657814026,
-0.16212044656276703,
-0.2009086012840271,
-0.785255491733551,
0.3238080143928528,
0.5383904576301575,
-0.41371744871139526,
-0.7988105416297913,
-0.4688661992549896,
0.09026402980089... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indonesian-nlp/librivox-indonesia | indonesian-nlp | 2022-10-24T09:14:51Z | 21 | 2 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:librivox",
"language:ace",
"language:bal",
"language:bug",
"language:ind",
"language:min",
"language:jav",
"language:sun",
"... | 2022-10-24T09:14:51Z | 2022-09-04T00:13:16.000Z | 2022-09-04T00:13:16 | ---
pretty_name: LibriVox Indonesia 1.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- bal
- bug
- ind
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
ace:
- 1K<n<10K
bal:
- 1K<n<10K
bug:
- 1K<n<10K
ind:
- 1K<n<10K
min:
- 1K<n<10K
jav:
- 1K<n<10K
sun:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and corresponding text files that we generated from the public-domain audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
The original LibriVox audiobooks' durations vary from a few minutes to a few hours, while each audio file in this speech dataset lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks into speech datasets using forced-alignment software we developed. It supports multiple languages, including low-resource ones such as Acehnese, Balinese, and Minangkabau, and it can be applied to other languages without additional work to train the model.
The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages and audio files as we collect them.
### Languages
```
Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
- `path` (`string`): the path to the audio file.
- `language` (`string`): the language of the audio file.
- `reader` (`string`): the reader ID in LibriVox.
- `sentence` (`string`): the sentence the reader read from the book.
- `audio` (`dict`): a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]` automatically decodes and resamples the audio file to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
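The advice about indexing the row before the `"audio"` column can be illustrated with a toy sketch. The class below is not the `datasets` implementation; it merely mimics a lazily decoded audio column by counting simulated decode operations.

```python
# Toy illustration of why dataset[0]["audio"] beats dataset["audio"][0]:
# requesting the whole audio column forces every file to be decoded,
# while row-first access decodes only the one file that is needed.

class LazyAudioDataset:
    """Mimics a dataset whose audio column decodes files on access."""
    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts simulated decode operations

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 44100}

    def __getitem__(self, key):
        if isinstance(key, int):          # dataset[i] -> one row
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":                # dataset["audio"] -> full column
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = LazyAudioDataset([f"clip_{i}.mp3" for i in range(100)])
_ = ds[0]["audio"]         # row first: a single decode
row_first = ds.decodes
_ = ds["audio"][0]         # column first: decodes all 100 files
print(row_first, ds.decodes - row_first)
# -> 1 100
```

With a real `datasets.Dataset` the same asymmetry applies, which is why the field description above recommends row-first access.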
### Data Splits
The speech material has only a train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
``` | [
-0.3888869285583496,
-0.47666969895362854,
0.1080411970615387,
0.17860668897628784,
-0.3699474334716797,
-0.18567369878292084,
-0.23070518672466278,
-0.33346521854400635,
0.28974565863609314,
0.7677683234214783,
-0.6974170804023743,
-0.881267249584198,
-0.37753531336784363,
0.1182633489370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
batterydata/cner | batterydata | 2022-09-05T16:07:43Z | 21 | 0 | null | [
"task_categories:token-classification",
"language:en",
"license:apache-2.0",
"arxiv:2006.03039",
"region:us"
] | 2022-09-05T16:07:43Z | 2022-09-05T15:49:33.000Z | 2022-09-05T15:49:33 | ---
language:
- en
license:
- apache-2.0
task_categories:
- token-classification
pretty_name: 'Chemical Named Entity Recognition (CNER) Dataset for BatteryDataExtractor'
---
# CNER Dataset
## Original Data Source
#### CHEMDNER
M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17.
#### MatScholar
I. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702.
#### SOFC
A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-Exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039.
#### BioNLP
G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models | [
-0.09457122534513474,
-0.2962382733821869,
0.6715171337127686,
-0.11178455501794815,
0.07803378254175186,
0.1820404827594757,
-0.02223924547433853,
-0.20302346348762512,
0.08996961265802383,
0.31042981147766113,
-0.5200490951538086,
-0.8403817415237427,
-0.4362850785255432,
0.2250234633684... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mijavier/donut_custom_dataset | Mijavier | 2022-09-05T17:15:39Z | 21 | 0 | null | [
"region:us"
] | 2022-09-05T17:15:39Z | 2022-09-05T17:07:15.000Z | 2022-09-05T17:07:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mijavier/3_classes_custom_dataset_donut | Mijavier | 2022-09-06T13:02:18Z | 21 | 0 | null | [
"region:us"
] | 2022-09-06T13:02:18Z | 2022-09-06T12:53:02.000Z | 2022-09-06T12:53:02 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mijavier/10_classes_custom_dataset_donut | Mijavier | 2022-09-08T01:27:19Z | 21 | 1 | null | [
"region:us"
] | 2022-09-08T01:27:19Z | 2022-09-08T01:26:10.000Z | 2022-09-08T01:26:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cahya/librivox-indonesia | cahya | 2022-10-25T11:50:39Z | 21 | 0 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:librivox",
"language:ace",
"language:bal",
"language:bug",
"language:id",
"language:min",
"language:jav",
"language:sun",
"l... | 2022-10-25T11:50:39Z | 2022-09-09T15:21:18.000Z | 2022-09-09T15:21:18 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- bal
- bug
- id
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
ace:
- 1K<n<10K
bal:
- 1K<n<10K
bug:
- 1K<n<10K
id:
- 1K<n<10K
min:
- 1K<n<10K
jav:
- 1K<n<10K
sun:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: LibriVox Indonesia 1.0
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and corresponding text files that we generated from the public
domain audiobooks at [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
The original LibriVox audiobooks vary in duration from a few minutes to a few hours; each audio file in this speech
dataset now lasts from a few seconds up to a maximum of 20 seconds.
We converted the audiobooks into a speech dataset using forced-alignment software that we developed. It is multilingual
and supports low-resource languages such as Acehnese, Balinese, and Minangkabau, and it can be applied to other
languages without additional work to train the model.
The dataset currently comprises 8 hours of audio in 7 languages of Indonesia. We will add more languages and audio files
as we collect them.
### Languages
```
Acehnese, Balinese, Bugisnese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
`path` (`string`): The path to the audio file
`language` (`string`): The language of the audio file
`reader` (`string`): The reader ID in LibriVox
`sentence` (`string`): The sentence the user read from the book.
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
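As a minimal illustration of working with the decoded `audio` dictionary, the hypothetical helper below (not part of the dataset tooling) computes a clip's duration from the sample array and sampling rate:

```python
def clip_duration_seconds(audio: dict) -> float:
    """Duration of a decoded clip: number of samples divided by sampling rate."""
    return len(audio["array"]) / audio["sampling_rate"]

# A record shaped like the data instance above: a 44.1 kHz clip
# with 88200 samples lasts exactly 2 seconds.
example_audio = {"array": [0.0] * 88200, "sampling_rate": 44100}
print(clip_duration_seconds(example_audio))  # 2.0
```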
### Data Splits
The speech material has only a train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
``` | [
-0.3888869285583496,
-0.47666969895362854,
0.1080411970615387,
0.17860668897628784,
-0.3699474334716797,
-0.18567369878292084,
-0.23070518672466278,
-0.33346521854400635,
0.28974565863609314,
0.7677683234214783,
-0.6974170804023743,
-0.881267249584198,
-0.37753531336784363,
0.1182633489370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kipol/vs_art | Kipol | 2022-09-15T15:18:08Z | 21 | 0 | null | [
"license:cc",
"region:us"
] | 2022-09-15T15:18:08Z | 2022-09-15T15:17:14.000Z | 2022-09-15T15:17:14 | ---
license: cc
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
spiccolo/gene_expression_omnibus_nlp | spiccolo | 2022-10-13T16:34:55Z | 21 | 0 | null | [
"region:us"
] | 2022-10-13T16:34:55Z | 2022-09-15T15:53:44.000Z | 2022-09-15T15:53:44 | annotations_creators:
- no-annotation
languages:
- English
All data were pulled from the Gene Expression Omnibus website: a tab-separated file with the GSE number followed by the title and abstract text. | [
-0.5235497951507568,
-0.5737056136131287,
0.5976866483688354,
0.22497721016407013,
-0.4042832851409912,
0.13328689336776733,
0.15461944043636322,
-0.49712929129600525,
0.9549782276153564,
0.8349778652191162,
-0.6217902302742004,
-0.453811377286911,
-0.6960216760635376,
0.7465768456459045,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Skelebor/celeba-eyeglasses | Skelebor | 2022-09-24T08:46:50Z | 21 | 0 | null | [
"region:us"
] | 2022-09-24T08:46:50Z | 2022-09-24T08:40:30.000Z | 2022-09-24T08:40:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Inked/Peter | Inked | 2022-09-28T16:16:17Z | 21 | 0 | null | [
"region:us"
] | 2022-09-28T16:16:17Z | 2022-09-28T16:12:43.000Z | 2022-09-28T16:12:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joelniklaus/eurlex_resources | joelniklaus | 2023-05-10T08:04:28Z | 21 | 6 | null | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:e... | 2023-05-10T08:04:28Z | 2022-09-29T07:35:34.000Z | 2022-09-29T07:35:34 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "EurlexResources: A Corpus Covering the Largest EURLEX Resources"
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/eurlex)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
config = "de_caselaw" # {lang}_{resource}
dataset = load_dataset("joelito/eurlex_resources", config, split='train', streaming=True)
```
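The config names follow the `{lang}_{resource}` pattern described below. As a sketch (the helper and the assumption that the `all` aggregates from the statistics table are also valid configs are illustrative, not part of the dataset loader), the valid names can be enumerated and validated up front:

```python
# Languages and resource types listed in this card; "all" mirrors the
# aggregate rows in the statistics table below (an assumption).
LANGUAGES = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga",
             "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk",
             "sl", "sv", "all"]
RESOURCES = ["caselaw", "decision", "directive", "intagr", "proposal",
             "recommendation", "regulation", "all"]

def config_name(lang: str, resource: str) -> str:
    """Build and validate a config name such as 'de_caselaw'."""
    if lang not in LANGUAGES or resource not in RESOURCES:
        raise ValueError(f"unknown config {lang}_{resource}")
    return f"{lang}_{resource}"

print(config_name("de", "caselaw"))  # de_caselaw
```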
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The following resource types are supported: caselaw, decision, directive, intagr, proposal, recommendation, regulation
More information about the resource types can be found here:
- Caselaw: [EU](https://eur-lex.europa.eu/collection/eu-law/eu-case-law.html)
- Decision: [EU](https://eur-lex.europa.eu/EN/legal-content/summary/european-union-decisions.html), [Wikipedia](https://en.wikipedia.org/wiki/Decision_(European_Union))
- Directive: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Directive_(European_Union))
- Recommendation: [EU](https://eur-lex.europa.eu/EN/legal-content/glossary/recommendation.html), [Wikipedia](https://en.wikipedia.org/wiki/Recommendation_(European_Union))
- Regulation: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Regulation_(European_Union))
- Intagr: [EU](https://eur-lex.europa.eu/collection/eu-law/inter-agree.html), [Wikipedia](https://en.wikipedia.org/wiki/Treaties_of_the_European_Union)
- Proposal: No resource found
| Source | Size (MB) | Words | Documents | Words/Document |
|:-------------------|------------:|------------:|------------:|-----------------:|
| all_all | 180668 | 12106556233 | 8306749 | 1457 |
| all_caselaw | 34939 | 3413551598 | 2487794 | 1372 |
| all_decision | 28519 | 1698585620 | 1267402 | 1340 |
| all_directive | 4786 | 368577940 | 104187 | 3537 |
| all_intagr | 11421 | 743271516 | 274485 | 2707 |
| all_proposal | 26526 | 2087989530 | 702392 | 2972 |
| all_recommendation | 1886 | 164979037 | 80277 | 2055 |
| all_regulation | 72590 | 3629600992 | 3390212 | 1070 |
| bg_all | 7819 | 398067053 | 348691 | 1141 |
| bg_caselaw | 1588 | 109749174 | 104434 | 1050 |
| bg_decision | 1248 | 58817972 | 54075 | 1087 |
| bg_directive | 263 | 15731608 | 4388 | 3585 |
| bg_intagr | 603 | 31292848 | 11581 | 2702 |
| bg_proposal | 1083 | 60674956 | 29251 | 2074 |
| bg_recommendation | 89 | 5588991 | 3321 | 1682 |
| bg_regulation | 2943 | 116211504 | 141641 | 820 |
| cs_all | 8360 | 471961631 | 449793 | 1049 |
| cs_caselaw | 1163 | 110005022 | 104519 | 1052 |
| cs_decision | 1102 | 58921128 | 54075 | 1089 |
| cs_directive | 186 | 13951134 | 4388 | 3179 |
| cs_intagr | 449 | 28106332 | 11581 | 2426 |
| cs_proposal | 840 | 61838692 | 29252 | 2113 |
| cs_recommendation | 64 | 5416549 | 3323 | 1630 |
| cs_regulation | 4557 | 193722774 | 242655 | 798 |
| da_all | 8932 | 671484862 | 332500 | 2019 |
| da_caselaw | 1746 | 185589641 | 88234 | 2103 |
| da_decision | 1356 | 89498535 | 54085 | 1654 |
| da_directive | 207 | 17525792 | 4388 | 3994 |
| da_intagr | 506 | 35596169 | 11582 | 3073 |
| da_proposal | 1399 | 119759476 | 29257 | 4093 |
| da_recommendation | 100 | 9463897 | 3352 | 2823 |
| da_regulation | 3618 | 214051352 | 141602 | 1511 |
| de_all | 9607 | 695512401 | 348290 | 1996 |
| de_caselaw | 1930 | 193232441 | 104228 | 1853 |
| de_decision | 1449 | 93688222 | 53980 | 1735 |
| de_directive | 218 | 17337760 | 4385 | 3953 |
| de_intagr | 531 | 36791153 | 11580 | 3177 |
| de_proposal | 1556 | 126987454 | 29219 | 4346 |
| de_recommendation | 109 | 9608034 | 3318 | 2895 |
| de_regulation | 3813 | 217867337 | 141580 | 1538 |
| el_all | 12469 | 696216541 | 349667 | 1991 |
| el_caselaw | 2951 | 202027703 | 105138 | 1921 |
| el_decision | 1823 | 94919886 | 54150 | 1752 |
| el_directive | 321 | 19411959 | 4390 | 4421 |
| el_intagr | 701 | 38965777 | 11584 | 3363 |
| el_proposal | 2085 | 128005737 | 29290 | 4370 |
| el_recommendation | 145 | 9344866 | 3357 | 2783 |
| el_regulation | 4443 | 203540613 | 141758 | 1435 |
| en_all | 9217 | 769465561 | 348641 | 2207 |
| en_caselaw | 1846 | 222891827 | 104422 | 2134 |
| en_decision | 1504 | 114626013 | 54054 | 2120 |
| en_directive | 204 | 18860876 | 4388 | 4298 |
| en_intagr | 499 | 39029843 | 11581 | 3370 |
| en_proposal | 1538 | 140781768 | 29242 | 4814 |
| en_recommendation | 97 | 10091809 | 3320 | 3039 |
| en_regulation | 3530 | 223183425 | 141634 | 1575 |
| es_all | 8588 | 725125274 | 348443 | 2081 |
| es_caselaw | 1870 | 220621730 | 104312 | 2115 |
| es_decision | 1334 | 98163499 | 54001 | 1817 |
| es_directive | 221 | 21484479 | 4385 | 4899 |
| es_intagr | 516 | 41841805 | 11581 | 3612 |
| es_proposal | 1366 | 133674486 | 29224 | 4574 |
| es_recommendation | 82 | 8864018 | 3319 | 2670 |
| es_regulation | 3199 | 200475257 | 141621 | 1415 |
| et_all | 6090 | 328068754 | 349615 | 938 |
| et_caselaw | 1074 | 93096396 | 105111 | 885 |
| et_decision | 1069 | 50752324 | 54159 | 937 |
| et_directive | 177 | 11555930 | 4390 | 2632 |
| et_intagr | 436 | 24018147 | 11584 | 2073 |
| et_proposal | 810 | 51600852 | 29283 | 1762 |
| et_recommendation | 61 | 4451369 | 3355 | 1326 |
| et_regulation | 2464 | 92593736 | 141733 | 653 |
| fi_all | 7346 | 404265224 | 349633 | 1156 |
| fi_caselaw | 1596 | 126525296 | 105119 | 1203 |
| fi_decision | 1227 | 59659475 | 54163 | 1101 |
| fi_directive | 204 | 12766491 | 4389 | 2908 |
| fi_intagr | 463 | 25392311 | 11584 | 2192 |
| fi_proposal | 1075 | 69198401 | 29288 | 2362 |
| fi_recommendation | 73 | 5070392 | 3356 | 1510 |
| fi_regulation | 2707 | 105652858 | 141734 | 745 |
| fr_all | 9937 | 828959218 | 348295 | 2380 |
| fr_caselaw | 2158 | 246262666 | 104228 | 2362 |
| fr_decision | 1473 | 108648744 | 53981 | 2012 |
| fr_directive | 222 | 20308801 | 4385 | 4631 |
| fr_intagr | 536 | 41986012 | 11580 | 3625 |
| fr_proposal | 1592 | 149134298 | 29218 | 5104 |
| fr_recommendation | 112 | 11510415 | 3318 | 3469 |
| fr_regulation | 3845 | 251108282 | 141585 | 1773 |
| ga_all | 1028 | 65030095 | 349778 | 185 |
| ga_caselaw | 11 | 696305 | 105205 | 6 |
| ga_decision | 87 | 4415457 | 54189 | 81 |
| ga_directive | 18 | 1512027 | 4390 | 344 |
| ga_intagr | 19 | 1820723 | 11586 | 157 |
| ga_proposal | 289 | 26106889 | 29298 | 891 |
| ga_recommendation | 10 | 902390 | 3361 | 268 |
| ga_regulation | 594 | 29576304 | 141749 | 208 |
| hr_all | 4594 | 258816068 | 348691 | 742 |
| hr_caselaw | 617 | 62432734 | 104434 | 597 |
| hr_decision | 596 | 31911903 | 54075 | 590 |
| hr_directive | 156 | 10855913 | 4388 | 2474 |
| hr_intagr | 450 | 24962086 | 11581 | 2155 |
| hr_proposal | 552 | 33437815 | 29251 | 1143 |
| hr_recommendation | 40 | 3612247 | 3321 | 1087 |
| hr_regulation | 2183 | 91603370 | 141641 | 646 |
| hu_all | 6653 | 375253894 | 349605 | 1073 |
| hu_caselaw | 1278 | 110179375 | 105144 | 1047 |
| hu_decision | 1147 | 57108172 | 54156 | 1054 |
| hu_directive | 200 | 13568304 | 4389 | 3091 |
| hu_intagr | 470 | 27258501 | 11586 | 2352 |
| hu_proposal | 912 | 60882750 | 29291 | 2078 |
| hu_recommendation | 70 | 5312868 | 3357 | 1582 |
| hu_regulation | 2576 | 100943924 | 141682 | 712 |
| it_all | 9586 | 768605772 | 333631 | 2303 |
| it_caselaw | 1889 | 206117726 | 89560 | 2301 |
| it_decision | 1445 | 102848859 | 53983 | 1905 |
| it_directive | 217 | 19687773 | 4385 | 4489 |
| it_intagr | 528 | 40134330 | 11580 | 3465 |
| it_proposal | 1533 | 140713925 | 29218 | 4816 |
| it_recommendation | 109 | 10923431 | 3318 | 3292 |
| it_regulation | 3865 | 248179728 | 141587 | 1752 |
| lt_all | 6400 | 364361783 | 200565 | 1816 |
| lt_caselaw | 1137 | 101808706 | 105477 | 965 |
| lt_decision | 1096 | 55850308 | 21990 | 2539 |
| lt_directive | 185 | 13078983 | 3239 | 4037 |
| lt_intagr | 452 | 27009631 | 7481 | 3610 |
| lt_proposal | 850 | 58553579 | 29272 | 2000 |
| lt_recommendation | 64 | 5121089 | 3363 | 1522 |
| lt_regulation | 2617 | 102939487 | 29743 | 3460 |
| lv_all | 6349 | 363239195 | 349919 | 1038 |
| lv_caselaw | 1153 | 103456811 | 105242 | 983 |
| lv_decision | 1103 | 55512944 | 54224 | 1023 |
| lv_directive | 186 | 13023024 | 4392 | 2965 |
| lv_intagr | 452 | 26693107 | 11630 | 2295 |
| lv_proposal | 96 | 58176216 | 29298 | 1985 |
| lv_recommendation | 64 | 5074494 | 3361 | 1509 |
| lv_regulation | 2545 | 101302599 | 141772 | 714 |
| mt_all | 6540 | 367834815 | 350292 | 1050 |
| mt_caselaw | 1164 | 100423543 | 105479 | 952 |
| mt_decision | 1109 | 55239141 | 54280 | 1017 |
| mt_directive | 203 | 14355266 | 4392 | 3268 |
| mt_intagr | 470 | 27701991 | 11675 | 2372 |
| mt_proposal | 878 | 59749277 | 29274 | 2041 |
| mt_recommendation | 65 | 5039600 | 3363 | 1498 |
| mt_regulation | 2650 | 105325997 | 141829 | 742 |
| nl_all | 9586 | 770312808 | 349407 | 2204 |
| nl_caselaw | 1847 | 206271837 | 105005 | 1964 |
| nl_decision | 1456 | 104060901 | 54152 | 1921 |
| nl_directive | 217 | 19529361 | 4388 | 4450 |
| nl_intagr | 529 | 40247634 | 11584 | 3474 |
| nl_proposal | 1540 | 141258274 | 29279 | 4824 |
| nl_recommendation | 111 | 11002405 | 3355 | 3279 |
| nl_regulation | 3886 | 247942396 | 141644 | 1750 |
| pl_all | 6677 | 406648795 | 350349 | 1160 |
| pl_caselaw | 1231 | 115824759 | 105479 | 1098 |
| pl_decision | 1125 | 60407576 | 54287 | 1112 |
| pl_directive | 197 | 14672157 | 4392 | 3340 |
| pl_intagr | 466 | 28543668 | 11680 | 2443 |
| pl_proposal | 886 | 64728230 | 29317 | 2207 |
| pl_recommendation | 68 | 5769893 | 3363 | 1715 |
| pl_regulation | 2703 | 116702512 | 141831 | 822 |
| pt_all | 8450 | 675152149 | 348449 | 1937 |
| pt_caselaw | 1763 | 198084937 | 104312 | 1898 |
| pt_decision | 1327 | 93278293 | 54007 | 1727 |
| pt_directive | 217 | 19831549 | 4385 | 4522 |
| pt_intagr | 504 | 37999753 | 11581 | 3281 |
| pt_proposal | 1361 | 127461782 | 29224 | 4361 |
| pt_recommendation | 81 | 8396661 | 3319 | 2529 |
| pt_regulation | 3197 | 190099174 | 141621 | 1342 |
| ro_all | 6315 | 415038571 | 350300 | 1184 |
| ro_caselaw | 1110 | 114780999 | 105516 | 1087 |
| ro_decision | 1047 | 59479553 | 54281 | 1095 |
| ro_directive | 206 | 16101628 | 4392 | 3666 |
| ro_intagr | 481 | 31497000 | 11675 | 2697 |
| ro_proposal | 805 | 62130419 | 29274 | 2122 |
| ro_recommendation | 63 | 5977913 | 3363 | 1777 |
| ro_regulation | 2603 | 125071059 | 141799 | 882 |
| sk_all | 6484 | 392235510 | 350570 | 1118 |
| sk_caselaw | 1160 | 110125141 | 105608 | 1042 |
| sk_decision | 1111 | 59576875 | 54349 | 1096 |
| sk_directive | 188 | 14132755 | 4393 | 3217 |
| sk_intagr | 458 | 28298155 | 11676 | 2423 |
| sk_proposal | 859 | 63726047 | 29290 | 2175 |
| sk_recommendation | 66 | 5654790 | 3364 | 1680 |
| sk_regulation | 2642 | 110721747 | 141890 | 780 |
| sl_all | 6222 | 394814289 | 350574 | 1126 |
| sl_caselaw | 1071 | 111238184 | 105608 | 1053 |
| sl_decision | 1075 | 59454906 | 54349 | 1093 |
| sl_directive | 176 | 13908097 | 4393 | 3165 |
| sl_intagr | 441 | 28239078 | 11676 | 2418 |
| sl_proposal | 812 | 63391970 | 29290 | 2164 |
| sl_recommendation | 62 | 5628775 | 3364 | 1673 |
| sl_regulation | 2585 | 112953279 | 141894 | 796 |
| sv_all | 7419 | 500085970 | 351051 | 1424 |
| sv_caselaw | 1585 | 162108645 | 105980 | 1529 |
| sv_decision | 1213 | 71744934 | 54357 | 1319 |
| sv_directive | 195 | 15386273 | 4393 | 3502 |
| sv_intagr | 463 | 29845462 | 11676 | 2556 |
| sv_proposal | 1059 | 86016237 | 29292 | 2936 |
| sv_recommendation | 79 | 7152141 | 3366 | 2124 |
| sv_regulation | 2825 | 127832278 | 141987 | 900 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data has been downloaded using the R package [eurlex](https://cran.r-project.org/web/packages/eurlex/vignettes/eurlexpkg.html) between June and August 2022.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[see also the legal notice](https://eur-lex.europa.eu/content/legal-notice/legal-notice.html)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| [
-0.8940966129302979,
-0.33942368626594543,
0.3929517865180969,
0.20875374972820282,
-0.14009352028369904,
0.13161788880825043,
-0.1478525847196579,
-0.10267993062734604,
0.7258280515670776,
0.7057456970214844,
-0.45160195231437683,
-0.7763776779174805,
-0.44993409514427185,
0.0699290558695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Marcelpribu/stabledifusion | Marcelpribu | 2023-07-06T16:48:39Z | 21 | 0 | null | [
"license:other",
"region:us"
] | 2023-07-06T16:48:39Z | 2022-09-30T14:05:54.000Z | 2022-09-30T14:05:54 | ---
license: other
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
din0s/msmarco-nlgen | din0s | 2022-10-01T12:30:18Z | 21 | 2 | null | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|ms_marco",
"language:en",
"license:cc-by-4.0",
"msmarco",
"natural lan... | 2022-10-01T12:30:18Z | 2022-09-30T14:06:45.000Z | 2022-09-30T14:06:45 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: MSMARCO NLGEN
size_categories:
- 100K<n<1M
source_datasets:
- extended|ms_marco
tags:
- msmarco
- natural language generation
- question answering
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for MSMARCO - Natural Language Generation Task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://microsoft.github.io/msmarco/
- **Repository:** https://github.com/microsoft/MSMARCO-Question-Answering
- **Paper:** https://arxiv.org/abs/1611.09268
- **Leaderboard:** https://microsoft.github.io/msmarco#qnadataset
### Dataset Summary
The original focus of MSMARCO was to provide a corpus for training and testing systems which, given a real domain user query, would provide the most likely candidate answer and do so in language that is natural and conversational. All questions have been generated from real anonymized Bing user queries, which grounds the dataset in a real-world problem and exposes researchers to the real constraints their models might operate under. The context passages, from which the answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated.
### Supported Tasks and Leaderboards
Question Answering & Natural Language Generation. [Leaderboard](https://microsoft.github.io/msmarco#qnadataset)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"query_id":604568,
"query":"what county is columbus city in",
"passages":[
{
"is_selected":0,
"passage_text":"WELCOME TO COLUMBUS! The City of Columbus includes a mix of residential, rural and commercial property. Columbus boasts large tracts of public land, including Carlos Avery Wildlife Management Area and Lamprey Pass.",
"url":"http://www.ci.columbus.mn.us/"
},
{
"is_selected":0,
"passage_text":"The ratio of number of residents in Columbus to the number of sex offenders is 488 to 1. The number of registered sex offenders compared to the number of residents in this city is near the state average. Nearest city with pop. 50,000+: Bloomington, IN (33.3 miles , pop. 69,291).",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"Phone Number: Columbus-Muscogee, the first consolidated city-county in Georgia, began development in 1826, building on ceded Creek Indian territory. Muscogee is the name of a branch of the Creek Nation. Columbus, of course, is named for Christopher Columbus.",
"url":"https://georgia.gov/cities-counties/columbus-muscogee-county"
},
{
"is_selected":1,
"passage_text":"Sponsored Topics. Columbus ( /kəlʌmbəs/) is a city in and the county seat of Bartholomew County, Indiana, United States. The population was 44,061 at the 2010 census, and the current mayor is Fred Armstrong. Located approximately 40 miles (64 km) south of Indianapolis, on the east fork of the White River, it is the state's 20th largest city.",
"url":"https://www.mapquest.com/us/in/columbus-282032817"
},
{
"is_selected":0,
"passage_text":"Columbus, Ohio. Columbus (/kəˈlʌmbəs/; kə-LUM-bəs) is the capital and largest city of the U.S. state of Ohio. It is the 15th-largest city in the United States, with a population of 850,106 as of 2015 estimates. This makes Columbus the fourth-most populous state capital in the United States, and the third-largest city in the Midwestern United States.",
"url":"https://en.wikipedia.org/wiki/Columbus,_Ohio"
},
{
"is_selected":0,
"passage_text":"Phone Number: Columbus-Muscogee, the first consolidated city-county in Georgia, began development in 1826, building on ceded Creek Indian territory. Muscogee is the name of a branch of the Creek Nation. Columbus, of course, is named for Christopher Columbus.",
"url":"https://georgia.gov/cities-counties/columbus"
},
{
"is_selected":0,
"passage_text":"Latest news from Columbus, IN collected exclusively by city-data.com from local newspapers, TV, and radio stations. Ancestries: American (30.5%), German (13.7%), English (7.7%), Irish (5.3%), European (2.4%), Scottish (1.2%).",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"Columbus, Indiana. 1 Columbus: covered Bridge at Mill Race Park. 2 Columbus: A statue in cloumbus. 3 Columbus. Columbus: Bartholomew County Courthouse. Columbus: Tipton Lakes - A wonderful planned 1 community! Columbus: Barthalomew county memorial for veterans. Columbus: A sculpter called summer storm in 1 columbus. Columbus: Downtown Columbus.",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"The City owns and operates a volunteer fire department through a joint powers agreement with the City of Forest Lake. Police protection is provided through a contract with the Anoka County Sheriff’s Department. Columbus is located within the Forest Lake Area School District (ISD #831).",
"url":"http://www.ci.columbus.mn.us/"
},
{
"is_selected":0,
"passage_text":"Acceptable ID for children: State ID, Birth Certificate, or Health Insurance Card. Effective June 27, 2016, the Franklin County Sheriff's Office will be implementing changes to ensure the safety of inmates, staff, and visitors. Printed materials (magazines, books, pamphlets, leaflets, or catalogues) MUST fit all the below criteria:",
"url":"https://sheriff.franklincountyohio.gov/services/inmate-information.cfm"
}
],
"query_type":"LOCATION",
"answers":[
"Columbus is a city in Bartholomew County."
]
}
```
### Data Fields
- `query_id`: a unique id for each query that is used in evaluation
- `query`: a unique query based on initial Bing usage
- `passages`: a list of 10 passages (`passage_text`), URLs (`url`), and an annotation if they were used to formulate the answer (`is_selected`)
- `query_type`: a basic division of queries based on a trained classifier (`LOCATION`,`NUMERIC`,`PERSON`,`DESCRIPTION`,`ENTITY`)
- `answers`: a list of "well-formed" answers generated by human annotators using natural language
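The fields above can be sketched with a short snippet that pulls the supporting passage and the well-formed answer out of a record shaped like the JSON example shown earlier (the `record` literal below is a trimmed, illustrative stand-in, not an actual dataset row):

```python
# Extract the annotator-selected passage and the human-written answer
# from one MS MARCO-style record.
record = {
    "query_id": 9652,
    "query": "where is columbus indiana",
    "query_type": "LOCATION",
    "passages": [
        {"is_selected": 1,
         "passage_text": "Columbus is a city in Bartholomew County, Indiana.",
         "url": "https://www.mapquest.com/us/in/columbus-282032817"},
        {"is_selected": 0,
         "passage_text": "Columbus is the capital and largest city of Ohio.",
         "url": "https://en.wikipedia.org/wiki/Columbus,_Ohio"},
    ],
    "answers": ["Columbus is a city in Bartholomew County."],
}

# Passages with is_selected == 1 were used to formulate the answer.
selected = [p["passage_text"] for p in record["passages"] if p["is_selected"] == 1]

print(selected[0])
print(record["answers"][0])
```

Note that not every record has a selected passage: for unanswerable queries, all ten passages carry `is_selected: 0`, so code consuming the dataset should handle an empty `selected` list.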
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 153725 |
| Dev | 12467 |
## Dataset Creation
### Curation Rationale
What are the differences between MS MARCO and other MRC datasets?
- Real questions: all questions were sampled from real, anonymized Bing queries.
- Real documents: most of the URLs that the passages were sourced from contain the full web documents.
- Human-generated well-formed answers: every question has an answer written by a human in natural language.
### Annotations
#### Annotation process
The MS MARCO dataset is generated by a well-oiled pipeline optimized for high-quality examples. The general process runs as follows:
1. Bing logs are sampled, filtered, and anonymized to ensure the queries are both useful to the research community and respectful of Bing users.
2. Using the sampled and anonymized queries, Bing generates the 10 most relevant passages for each query.
3. Highly trained judges read the query and its related passages and if there is an answer present, the supporting passages are annotated and a natural language answer is generated.
4. A smaller proportion of queries (~17% of the overall dataset, 182,887 unique queries) is then passed to a second round of judges, who verify that the answer is correct and rewrite it (where possible) into a well-formed answer. These answers are designed to be understood without the surrounding context, with smart speakers/digital assistants in mind.
## Additional Information
### Licensing Information
MS MARCO is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
```
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. | [
-0.5752911567687988,
-0.4535144567489624,
0.7307134866714478,
0.1507747620344162,
-0.3843582570552826,
-0.06557396799325943,
-0.025002889335155487,
-0.5142058730125427,
0.6436241865158081,
0.5554612874984741,
-0.7118414044380188,
-0.7431328892707825,
-0.47488027811050415,
0.333495438098907... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/RES1 | arbml | 2022-11-03T13:45:23Z | 21 | 0 | null | [
"region:us"
] | 2022-11-03T13:45:23Z | 2022-10-05T13:15:12.000Z | 2022-10-05T13:15:12 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/ASTD | arbml | 2022-11-03T13:49:52Z | 21 | 1 | null | [
"region:us"
] | 2022-11-03T13:49:52Z | 2022-10-05T13:23:21.000Z | 2022-10-05T13:23:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/arabic_stop_words | arbml | 2022-11-03T14:14:41Z | 21 | 0 | null | [
"region:us"
] | 2022-11-03T14:14:41Z | 2022-10-05T22:12:11.000Z | 2022-10-05T22:12:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heegyu/kowiki-sentences | heegyu | 2022-10-06T00:54:57Z | 21 | 2 | null | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:ko",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-10-06T00:54:57Z | 2022-10-06T00:46:26.000Z | 2022-10-06T00:46:26 | ---
license: cc-by-sa-3.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- other
---
Sentence-level data produced by splitting the 2022-10-01 (20221001) Korean Wikipedia dump with kss (backend=mecab)
- 549262 articles, 4724064 sentences
- Sentences were excluded if Korean made up 50% or less of the text, or if they contained 10 or fewer Korean characters | [
-0.15488800406455994,
-0.5914735198020935,
0.6631050109863281,
1.053446650505066,
-0.5713176131248474,
0.030452053993940353,
0.5237307548522949,
0.015433848835527897,
0.7912150025367737,
0.6169736385345459,
-0.7811281681060791,
-0.7324689626693726,
-0.37549933791160583,
0.327118456363678,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
julien-c/titanic-survival | julien-c | 2022-10-10T19:20:30Z | 21 | 1 | null | [
"task_categories:tabular-classification",
"license:cc",
"tabular-classification",
"region:us"
] | 2022-10-10T19:20:30Z | 2022-10-10T19:15:48.000Z | 2022-10-10T19:15:48 | ---
license: cc
tags:
- tabular-classification
task_categories:
- tabular-classification
---
## Titanic Survival
from https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html | [
-0.1276736855506897,
-0.6346752643585205,
0.5232409834861755,
0.7958722114562988,
-0.36294007301330566,
0.18341417610645294,
0.5671759843826294,
-0.02997111715376377,
0.2612280249595642,
0.38059717416763306,
-0.5387833714485168,
0.04406515881419182,
-0.4596777558326721,
-0.2320189774036407... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
javlopagu/me | javlopagu | 2022-10-13T08:56:38Z | 21 | 0 | null | [
"region:us"
] | 2022-10-13T08:56:38Z | 2022-10-13T08:31:40.000Z | 2022-10-13T08:31:40 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lazarinho322/autotrain-data-sddb | Lazarinho322 | 2022-10-13T08:50:21Z | 21 | 0 | null | [
"region:us"
] | 2022-10-13T08:50:21Z | 2022-10-13T08:39:10.000Z | 2022-10-13T08:39:10 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harsit/xnli2.0_train_chinese | Harsit | 2022-10-15T09:16:08Z | 21 | 1 | null | [
"region:us"
] | 2022-10-15T09:16:08Z | 2022-10-15T09:15:31.000Z | 2022-10-15T09:15:31 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
oscarmutante/oscar | oscarmutante | 2022-10-15T21:28:32Z | 21 | 0 | null | [
"license:unlicense",
"region:us"
] | 2022-10-15T21:28:32Z | 2022-10-15T21:27:15.000Z | 2022-10-15T21:27:15 | ---
license: unlicense
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eddiefloat/galina | Eddiefloat | 2022-10-23T16:40:02Z | 21 | 0 | null | [
"region:us"
] | 2022-10-23T16:40:02Z | 2022-10-23T16:39:48.000Z | 2022-10-23T16:39:48 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eddiefloat/Karasyova | Eddiefloat | 2022-10-23T16:41:08Z | 21 | 0 | null | [
"region:us"
] | 2022-10-23T16:41:08Z | 2022-10-23T16:40:54.000Z | 2022-10-23T16:40:54 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
niurl/eraser_cose | niurl | 2022-10-25T03:22:37Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T03:22:37Z | 2022-10-25T03:21:49.000Z | 2022-10-25T03:21:49 | ---
dataset_info:
features:
- name: doc_id
dtype: string
- name: question
sequence: string
- name: query
dtype: string
- name: evidence_span
sequence:
sequence: int64
- name: classification
dtype: string
splits:
- name: test
num_bytes: 282071
num_examples: 1079
- name: train
num_bytes: 2316094
num_examples: 8752
- name: val
num_bytes: 288029
num_examples: 1086
download_size: 1212369
dataset_size: 2886194
---
# Dataset Card for "eraser_cose"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7354553937911987,
-0.559902548789978,
0.14064644277095795,
0.0332457609474659,
-0.16527746617794037,
-0.015785593539476395,
0.3801669180393219,
-0.1539064347743988,
0.9312732219696045,
0.5420854091644287,
-0.9358730912208557,
-0.828498899936676,
-0.6508964896202087,
-0.15385833382606506... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmajisp/sv_corpora_parliament_processed | atmajisp | 2022-10-25T06:56:38Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T06:56:38Z | 2022-10-25T06:56:19.000Z | 2022-10-25T06:56:19 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sachinkelenjaguri/Disaster_Tweets | Sachinkelenjaguri | 2022-10-25T07:04:45Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T07:04:45Z | 2022-10-25T07:04:16.000Z | 2022-10-25T07:04:16 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Theund3rdogo/Josemota | Theund3rdogo | 2022-10-26T09:41:50Z | 21 | 0 | null | [
"region:us"
] | 2022-10-26T09:41:50Z | 2022-10-25T07:29:57.000Z | 2022-10-25T07:29:57 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tanay/sentiment-corpus | tanay | 2022-10-25T08:07:06Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T08:07:06Z | 2022-10-25T07:32:20.000Z | 2022-10-25T07:32:20 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
findzebra/corpus | findzebra | 2022-10-25T09:58:33Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T09:58:33Z | 2022-10-25T08:05:58.000Z | 2022-10-25T08:05:58 | # FindZebra corpus
A collection of 30,658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)).
## Preprocessing
The raw HTML content of each article has been processed using the following code (`text` column):
```python
# Preprocessing code
import math
import html2text
parser = html2text.HTML2Text()
parser.ignore_links = True
parser.ignore_images = True
parser.ignore_tables = True
parser.ignore_emphasis = True
parser.body_width = math.inf  # disable line wrapping
article_text = parser.handle(article_html)  # article_html: the raw HTML of one article
``` | [
-0.2890584170818329,
-0.8052107095718384,
0.3506700098514557,
0.3513694107532501,
-0.10600923746824265,
-0.10229241102933884,
-0.14407473802566528,
-0.30570173263549805,
0.3676779866218567,
0.2328784316778183,
-0.3993717432022095,
-0.5727856159210205,
-0.21024943888187408,
0.36851486563682... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cha1/github-issues | Cha1 | 2022-10-25T09:48:54Z | 21 | 0 | null | [
"region:us"
] | 2022-10-25T09:48:54Z | 2022-10-25T09:48:44.000Z | 2022-10-25T09:48:44 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664175 | autoevaluate | 2022-10-25T15:21:54Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T15:21:54Z | 2022-10-25T14:29:26.000Z | 2022-10-25T14:29:26 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.4167015254497528,
-0.31572824716567993,
0.267555832862854,
0.0661846399307251,
-0.03697920963168144,
-0.009511502459645271,
0.04895041882991791,
-0.4376412332057953,
0.20571444928646088,
0.3898604214191437,
-0.9625887870788574,
-0.33130794763565063,
-0.6285218000411987,
-0.1361992657184... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664174 | autoevaluate | 2022-10-25T14:57:27Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T14:57:27Z | 2022-10-25T14:29:27.000Z | 2022-10-25T14:29:27 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.40399494767189026,
-0.3417755961418152,
0.28593510389328003,
0.06571447849273682,
-0.06365922838449478,
-0.027704691514372826,
0.04763714224100113,
-0.4535995125770569,
0.22593694925308228,
0.3814045190811157,
-0.9868784546852112,
-0.320305734872818,
-0.6422122716903687,
-0.099639810621... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664170 | autoevaluate | 2022-10-25T14:30:11Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T14:30:11Z | 2022-10-25T14:29:28.000Z | 2022-10-25T14:29:28 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: ArthurZ/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.43999341130256653,
-0.26475998759269714,
0.3097064197063446,
-0.03254207223653793,
-0.0910295695066452,
-0.06546992063522339,
0.03978892043232918,
-0.477418452501297,
0.1736738234758377,
0.42004725337028503,
-0.8864684700965881,
-0.36988112330436707,
-0.6969241499900818,
-0.024148505181... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664176 | autoevaluate | 2022-10-25T16:42:14Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:42:14Z | 2022-10-25T14:29:31.000Z | 2022-10-25T14:29:31 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev_cot
dataset_config: mathemakitten--winobias_antistereotype_dev_cot
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.38123977184295654,
-0.31307825446128845,
0.2758602797985077,
0.020101701840758324,
-0.06426826119422913,
-0.03629271686077118,
0.06099208816885948,
-0.45578497648239136,
0.2191443294286728,
0.3931007981300354,
-0.9566453695297241,
-0.3356380760669708,
-0.6285384297370911,
-0.10930898040... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
katossky/wine-recognition | katossky | 2022-10-29T10:22:58Z | 21 | 1 | null | [
"task_categories:tabular-classification",
"task_ids:tabular-multi-class-classification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"size_categories:n<1K",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-10-29T10:22:58Z | 2022-10-25T16:15:53.000Z | 2022-10-25T16:15:53 | ---
annotations_creators:
- no-annotation
language: []
language_creators:
- expert-generated
license:
- unknown
pretty_name: Wine Recognition Dataset
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- tabular-classification
task_ids:
- tabular-multi-class-classification
---
# Dataset Card for Wine Recognition dataset
## Dataset Description
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/wine
- **Papers:**
1. S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
2. S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
- **Point of Contact:** stefan'@'coral.cs.jcu.edu.au
### Dataset Summary
These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wine. In a classification context, this is a well-posed problem with "well behaved" class structures: a good dataset for the first tests of a new classifier, but not very challenging.
### Supported Tasks and Leaderboards
Classification (cultivar) from continuous variables (all other variables)
## Dataset Structure
### Data Instances
178 wines
### Data Fields
1. Wine category (cultivar)
2. Alcohol
3. Malic acid
4. Ash
5. Alcalinity of ash
6. Magnesium
7. Total phenols
8. Flavanoids
9. Nonflavanoid phenols
10. Proanthocyanins
11. Color intensity
12. Hue
13. OD280/OD315 of diluted wines
14. Proline
### Data Splits
None
## Dataset Creation
### Source Data
https://archive.ics.uci.edu/ml/datasets/wine
#### Initial Data Collection and Normalization
Original Owners:
Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.
## Additional Information
### Dataset Curators
Stefan Aeberhard
### Licensing Information
No information found on the original website | [
-0.3120420575141907,
-0.49965378642082214,
0.029417777433991432,
0.2144947052001953,
-0.05419369786977768,
-0.20841442048549652,
-0.12377498298883438,
-0.314390629529953,
0.09134034812450409,
0.3912973701953888,
-0.31319722533226013,
-0.6918555498123169,
-0.5273853540420532,
0.242638945579... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eddiefloat/sasha2 | Eddiefloat | 2022-10-26T06:48:07Z | 21 | 0 | null | [
"region:us"
] | 2022-10-26T06:48:07Z | 2022-10-26T06:47:38.000Z | 2022-10-26T06:47:38 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kejian/codeparrot-train-more-filter-3.3b-cleaned | kejian | 2023-02-21T04:40:49Z | 21 | 2 | null | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"size_categories:100K<n<1M",
"source_datasets:codeparrot",
"license:mit",
"pretraining-with-human-feedback",
"pep8",
"python",
"codeparrot",
"region:us"
] | 2023-02-21T04:40:49Z | 2022-10-29T08:00:31.000Z | 2022-10-29T08:00:31 | ---
annotations_creators:
- machine-generated
source_datasets:
- codeparrot
task_categories:
- text-classification
tags:
- pretraining-with-human-feedback
- pep8
- python
- codeparrot
size_categories:
- 100K<n<1M
license: mit
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aps/test-wmt | aps | 2022-11-01T22:10:04Z | 21 | 0 | null | [
"region:us"
] | 2022-11-01T22:10:04Z | 2022-11-01T22:08:34.000Z | 2022-11-01T22:08:34 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andrewburns/hf_flat_icons | andrewburns | 2022-11-02T02:49:46Z | 21 | 1 | null | [
"region:us"
] | 2022-11-02T02:49:46Z | 2022-11-02T00:55:53.000Z | 2022-11-02T00:55:53 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
camenduru/plushies | camenduru | 2022-11-18T03:16:34Z | 21 | 3 | null | [
"region:us"
] | 2022-11-18T03:16:34Z | 2022-11-09T06:54:48.000Z | 2022-11-09T06:54:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 42942055.0
num_examples: 730
download_size: 42653871
dataset_size: 42942055.0
models:
- camenduru/plushies
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jellywibble/dalio-finetune-principles_book | Jellywibble | 2022-11-11T20:32:57Z | 21 | 0 | null | [
"region:us"
] | 2022-11-11T20:32:57Z | 2022-11-11T19:12:46.000Z | 2022-11-11T19:12:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tristan/olm-wikipedia-20221101-kl-language | Tristan | 2022-11-11T19:32:33Z | 21 | 0 | null | [
"region:us"
] | 2022-11-11T19:32:33Z | 2022-11-11T19:32:29.000Z | 2022-11-11T19:32:29 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 311164
num_examples: 297
download_size: 191198
dataset_size: 311164
---
# Dataset Card for "olm-wikipedia-20221101-kl-language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7930101156234741,
-0.38626083731651306,
0.32534468173980713,
0.0899059921503067,
-0.09717456251382828,
-0.0903032198548317,
-0.17116035521030426,
-0.22301311790943146,
0.4131796360015869,
0.5947287082672119,
-0.7463589906692505,
-0.7811427712440491,
-0.46644946932792664,
-0.161756619811... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vidhikatkoria/SGD_RentalCars | vidhikatkoria | 2023-03-21T20:54:19Z | 21 | 0 | null | [
"region:us"
] | 2023-03-21T20:54:19Z | 2022-11-11T21:17:26.000Z | 2022-11-11T21:17:26 | ---
dataset_info:
features:
- name: domain
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: act
dtype: int64
- name: speaker
dtype: int64
splits:
- name: train
num_bytes: 1685534.5292607802
num_examples: 5843
- name: test
num_bytes: 239
num_examples: 1
download_size: 637179
dataset_size: 1685773.5292607802
---
# Dataset Card for "SGD_RentalCars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5324354767799377,
-0.08661982417106628,
0.1726689636707306,
0.23864313960075378,
-0.18256738781929016,
-0.018070239573717117,
0.2535597085952759,
0.025432340800762177,
0.7768822312355042,
0.35150793194770813,
-0.757973313331604,
-0.7581344246864319,
-0.26092153787612915,
-0.540518343448... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_shared_task_2009 | bigbio | 2022-12-22T15:43:48Z | 21 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:48Z | 2022-11-13T22:06:45.000Z | 2022-11-13T22:06:45 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2009
homepage: http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- EVENT_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2009
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based
on the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section
of the GENIA Event corpus annotations, used for evaluation.
## Citation Information
```
@inproceedings{kim-etal-2009-overview,
title = "Overview of {B}io{NLP}{'}09 Shared Task on Event Extraction",
author = "Kim, Jin-Dong and
Ohta, Tomoko and
Pyysalo, Sampo and
Kano, Yoshinobu and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop Companion Volume for Shared Task",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1401",
pages = "1--9",
}
```
| [
-0.11823961138725281,
-0.2898801267147064,
0.3101237714290619,
0.2550509572029114,
-0.4010538160800934,
0.11999046057462692,
-0.2739965319633484,
-0.7851911783218384,
0.5442599654197693,
0.3132665455341339,
-0.6504834890365601,
-0.742508053779602,
-0.47341620922088623,
0.5503581762313843,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/ehr_rel | bigbio | 2022-12-22T15:44:34Z | 21 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-12-22T15:44:34Z | 2022-11-13T22:08:18.000Z | 2022-11-13T22:08:18 |
---
language:
- en
bigbio_language:
- English
license: apache-2.0
multilinguality: monolingual
bigbio_license_shortname: APACHE_2p0
pretty_name: EHR-Rel
homepage: https://github.com/babylonhealth/EHR-Rel
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for EHR-Rel
## Dataset Description
- **Homepage:** https://github.com/babylonhealth/EHR-Rel
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
EHR-Rel is a novel open-source biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more
than the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,
the dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.
A detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets.
## Citation Information
```
@inproceedings{schulz-etal-2020-biomedical,
title = {Biomedical Concept Relatedness {--} A large {EHR}-based benchmark},
author = {Schulz, Claudia and
Levy-Kramer, Josh and
Van Assel, Camille and
Kepes, Miklos and
Hammerla, Nils},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
month = {dec},
year = {2020},
address = {Barcelona, Spain (Online)},
publisher = {International Committee on Computational Linguistics},
url = {https://aclanthology.org/2020.coling-main.577},
doi = {10.18653/v1/2020.coling-main.577},
pages = {6565--6575},
}
```
| [
-0.1433669924736023,
-0.8185283541679382,
0.48986199498176575,
-0.24860787391662598,
-0.4894036650657654,
0.13414894044399261,
-0.11271211504936218,
-0.5493342876434326,
0.6549006700515747,
0.34241437911987305,
-0.5185106992721558,
-0.9417853355407715,
-0.26621565222740173,
0.2240214645862... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nadav/MiniScans | Nadav | 2022-11-15T14:15:58Z | 21 | 0 | null | [
"region:us"
] | 2022-11-15T14:15:58Z | 2022-11-14T09:34:46.000Z | 2022-11-14T09:34:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: evaluation
1: train
splits:
- name: test
num_bytes: 1655444336.229
num_examples: 15159
- name: train
num_bytes: 34770710847.12
num_examples: 300780
download_size: 38233031644
dataset_size: 36426155183.349
---
# Dataset Card for "MiniScans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8039416670799255,
-0.05179569870233536,
0.3355148732662201,
-0.13556009531021118,
-0.3337397873401642,
-0.15595769882202148,
0.44468408823013306,
-0.17975810170173645,
0.8937708735466003,
0.3359910249710083,
-0.8816756010055542,
-0.6042829155921936,
-0.5820124745368958,
-0.1688274741172... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
galman33/gal_yair_83000_100x100 | galman33 | 2022-11-19T14:57:47Z | 21 | 0 | null | [
"region:us"
] | 2022-11-19T14:57:47Z | 2022-11-19T14:56:55.000Z | 2022-11-19T14:56:55 | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 1423239502.0
num_examples: 83000
download_size: 1423108777
dataset_size: 1423239502.0
---
# Dataset Card for "gal_yair_83000_100x100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5971413254737854,
-0.419486939907074,
0.1150117889046669,
0.2829453945159912,
-0.075982965528965,
-0.18899662792682648,
0.4840484857559204,
-0.05570195987820625,
0.7560132741928101,
0.5633373856544495,
-0.8342081904411316,
-0.6744818687438965,
-0.5416781306266785,
-0.26287147402763367,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dxiao/requirements-ner-id | dxiao | 2022-11-21T18:40:22Z | 21 | 0 | null | [
"region:us"
] | 2022-11-21T18:40:22Z | 2022-11-21T18:39:52.000Z | 2022-11-21T18:39:52 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zpn/pcba_686978 | zpn | 2022-12-09T20:30:45Z | 21 | 0 | null | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"license:mit",
"bio",
"bio-chem",
"molnet",
"molecule-net",
"biophysics",
"arxiv:1703.00564",
"region:us"
] | 2022-12-09T20:30:45Z | 2022-11-28T14:25:33.000Z | 2022-11-28T14:25:33 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: pcba_686978
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- bio
- bio-chem
- molnet
- molecule-net
- biophysics
task_categories:
- other
task_ids: []
---
# Dataset Card for pcba_686978
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://moleculenet.org/**
- **Repository: https://github.com/deepchem/deepchem/tree/master**
- **Paper: https://arxiv.org/abs/1703.00564**
### Dataset Summary
`pcba_686978` is a dataset included in [MoleculeNet](https://moleculenet.org/). PubChem BioAssay (PCBA) is a database consisting of biological activities of small molecules generated by high-throughput screening. We have chosen one of the larger tasks (ID 686978) as described in https://par.nsf.gov/servlets/purl/10168888.
## Dataset Structure
### Data Fields
Each split contains
* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: Measured results (Active/Inactive) for bioassays
### Data Splits
The dataset is split into an 80/10/10 train/valid/test split using random split.
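The 80/10/10 random split described above can be sketched as follows (an illustrative sketch, not the exact MoleculeNet code; the seed and shuffling scheme are assumptions):

```python
import random

def random_split(n_examples, frac_train=0.8, frac_valid=0.1, seed=0):
    """Partition example indices into train/valid/test by shuffling with a fixed seed."""
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)
    n_train = int(frac_train * n_examples)
    n_valid = int(frac_valid * n_examples)
    return (indices[:n_train],
            indices[n_train:n_train + n_valid],
            indices[n_train + n_valid:])

train, valid, test = random_split(100)
```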
### Source Data
#### Initial Data Collection and Normalization
The data was originally generated by the Pande Group at Stanford.
### Licensing Information
This dataset was originally released under an MIT license.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
doi = {10.48550/ARXIV.1703.00564},
url = {https://arxiv.org/abs/1703.00564},
author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences},
title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
publisher = {arXiv},
year = {2017},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
| [
-0.4238501489162445,
-0.5066964030265808,
0.257470965385437,
0.05396729335188866,
-0.26852014660835266,
0.04764753207564354,
-0.24888958036899567,
-0.1646169275045395,
0.34650716185569763,
0.48140111565589905,
-0.5693786144256592,
-0.8785144686698914,
-0.5107825994491577,
0.222218632698059... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ManpreetK/NDD_NER | ManpreetK | 2022-12-24T21:58:17Z | 21 | 0 | null | [
"region:us"
] | 2022-12-24T21:58:17Z | 2022-11-28T22:32:42.000Z | 2022-11-28T22:32:42 | ---
viewer: true
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': I-CONDITION
'1': I-TEST
'2': B-CONDITION
'3': I-PATIENT_GROUP
'4': B-ASSOCIATED_PROBLEM
'5': O
'6': I-ASSOCIATED_PROBLEM
'7': B-INTERVENTION
'8': B-PATIENT_GROUP
'9': I-INTERVENTION
'10': B-TEST
splits:
- name: train
num_bytes: 156151
num_examples: 341
- name: validation
num_bytes: 68495
num_examples: 177
- name: test
num_bytes: 67949
num_examples: 160
download_size: 78315
dataset_size: 292595
---
# Dataset Card for "NDD_NER"
## Dataset Summary
This Named Entity Recognition dataset was created for the neurodevelopmental disorders domain, to detect domain-specific entities. Initially, PubMed abstracts were annotated
with the SciSpaCy UMLS entity linker, and specific semantic types were mapped to the required domain-specific labels. These labels were then validated in a manual curation process
using Label Studio (an open-source data labeling tool).
| Label Category | UMLS semantic types |
|-----|-----|
|CONDITION| Mental or Behavioral Dysfunction, Disease or Syndrome, Neoplastic Process, Congenital Abnormality |
|ASSOCIATED_PROBLEM| Sign or Symptom, Mental Process, Injury or Poisoning |
|PATIENT_GROUP| Age Group, Population Group, Patient or Disabled Group |
|INTERVENTION| Therapeutic or Preventive Procedure, Health Care Activity |
|TEST| Diagnostic Procedure, Intellectual Product, Research Activity, Laboratory Procedure |
## Dataset Splits
|split name|number of examples|CONDITION|ASSOCIATED_PROBLEM|PATIENT_GROUP|INTERVENTION|TEST|
|-----|-----|-----|-----|-----|-----|-----|
|train| 341 | 320 | 189 | 240 | 273 | 228 |
|test| 160 | 139 | 68 | 87 | 98 | 82 |
|validation| 177 | 147 | 82 | 104 | 117 | 98 |
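Since `ner_tags` are stored as class-label integers, they can be decoded back to tag strings with the label list from the schema above (a minimal sketch; with the `datasets` library you would normally use the dataset's `ClassLabel` feature for this instead):

```python
# Label list in the order given by the dataset's class_label definition.
NER_LABELS = [
    "I-CONDITION", "I-TEST", "B-CONDITION", "I-PATIENT_GROUP",
    "B-ASSOCIATED_PROBLEM", "O", "I-ASSOCIATED_PROBLEM",
    "B-INTERVENTION", "B-PATIENT_GROUP", "I-INTERVENTION", "B-TEST",
]

def decode_tags(tag_ids):
    """Map integer tag ids back to their string labels."""
    return [NER_LABELS[i] for i in tag_ids]

decoded = decode_tags([5, 2, 0, 5])  # a made-up short tag sequence
```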
## Source Data
PubMed abstracts retrieved for the ("Neurodevelopmental Disorders"[Mesh]) AND "Behavioral Disciplines and Activities"[Mesh] query using the NCBI E-utilities API.
| [
-0.48012879490852356,
-0.3472195565700531,
0.20596541464328766,
0.27169325947761536,
0.009749868884682655,
0.10172830522060394,
0.29102104902267456,
-0.32704219222068787,
0.6450881958007812,
0.48848775029182434,
-0.6756392121315002,
-0.9097263216972351,
-0.7063872218132019,
0.3916667103767... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-4c75f893-5bbd-4360-a0fd-dfda62c6960c-103102 | autoevaluate | 2022-11-29T08:42:03Z | 21 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T08:42:03Z | 2022-11-29T08:41:24.000Z | 2022-11-29T08:41:24 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361600816249847,
-0.33383119106292725,
0.2989136576652527,
0.17618101835250854,
-0.16354264318943024,
0.036154817789793015,
0.02089543640613556,
-0.39217692613601685,
0.12184587866067886,
0.3618120551109314,
-0.9186381101608276,
-0.21669894456863403,
-0.770520806312561,
-0.01348811481... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DBL/test | DBL | 2022-11-29T09:50:20Z | 21 | 0 | null | [
"region:us"
] | 2022-11-29T09:50:20Z | 2022-11-29T08:48:08.000Z | 2022-11-29T08:48:08 | The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies. | [
-0.45280736684799194,
-0.5990869402885437,
0.13732747733592987,
0.22501403093338013,
-0.1992504894733429,
0.06115317344665527,
-0.06115875765681267,
-0.5480256080627441,
0.18547849357128143,
0.6236847043037415,
-0.4588369131088257,
-0.09170911461114883,
-0.4865490198135376,
0.1819006800651... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
louisbetsch/labeling | louisbetsch | 2022-11-29T09:32:08Z | 21 | 0 | null | [
"region:us"
] | 2022-11-29T09:32:08Z | 2022-11-29T09:31:43.000Z | 2022-11-29T09:31:43 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dataScienceGroup8/tweetsTrainSet | dataScienceGroup8 | 2022-11-29T10:11:14Z | 21 | 1 | null | [
"region:us"
] | 2022-11-29T10:11:14Z | 2022-11-29T10:10:43.000Z | 2022-11-29T10:10:43 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SuperNova672/PositiveWithEmbeddings | SuperNova672 | 2022-11-29T10:22:16Z | 21 | 0 | null | [
"region:us"
] | 2022-11-29T10:22:16Z | 2022-11-29T10:20:50.000Z | 2022-11-29T10:20:50 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/news_as2 | lucadiliello | 2022-11-29T11:26:06Z | 21 | 0 | null | [
"region:us"
] | 2022-11-29T11:26:06Z | 2022-11-29T11:19:38.000Z | 2022-11-29T11:19:38 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 316302353
num_examples: 1840533
- name: dev
num_bytes: 8925506
num_examples: 51844
- name: test
num_bytes: 8824280
num_examples: 51472
download_size: 35957517
dataset_size: 334052139
---
# Dataset Card for "news_as2"
Answer Sentence Selection version of the NewsQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection). | [
-0.3582553267478943,
-0.5369856953620911,
0.1429586112499237,
0.11354092508554459,
-0.40749838948249817,
-0.056875668466091156,
0.14655467867851257,
-0.20343494415283203,
0.4752635955810547,
0.6695976257324219,
-0.7149937748908997,
-0.4054143726825714,
-0.37128570675849915,
-0.126276150345... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awacke1/ASRLive.csv | awacke1 | 2022-12-04T20:01:22Z | 21 | 1 | null | [
"region:us"
] | 2022-12-04T20:01:22Z | 2022-11-29T11:25:34.000Z | 2022-11-29T11:25:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cjvt/cc_gigafida | cjvt | 2023-01-17T13:11:14Z | 21 | 0 | null | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100M<n<1B",
"language:sl... | 2023-01-17T13:11:14Z | 2022-11-29T15:03:45.000Z | 2022-11-29T15:03:45 | ---
annotations_creators:
- no-annotation
language:
- sl
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Written corpus ccGigafida 1.0
size_categories:
- 10K<n<100K
- 100M<n<1B
source_datasets: []
tags:
- gigafida
- gigafida2
- kres
- cckres
- reference corpus
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# Dataset Card for ccGigafida
This repository by default loads the publicly available dataset ccGigafida, which contains a small subset of the Gigafida/Gigafida2 corpus.
The full datasets are private due to copyright. **If you happen to have access to the full datasets, the script will also work with those.**
Instead of
```
datasets.load_dataset("cjvt/cc_gigafida")
```
please use
```
datasets.load_dataset("cjvt/cc_gigafida", "private", data_dir="<directory-containing-gigafida(2)-TEI-files>")
```
**IMPORTANT:** The script will process all `.xml` files in the provided directory and its subdirectories - make sure there are no schema or metadata files in there!
### Dataset Summary
ccGigafida is a reference corpus of Slovene texts. It is a publicly available subsample of an even larger reference corpus, Gigafida (and its successor Gigafida 2).
The Gigafida corpus is an extensive collection of Slovene text of various genres, from daily newspapers, magazines, all kinds of books (fiction, non-fiction, textbooks),
web pages, transcriptions of parliamentary debates and similar.
### Supported Tasks and Leaderboards
Language modeling.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
The data is loaded at document-level, i.e. one instance is one document.
```
{
'id_doc': 'F0000123',
'doc_title': 'Novi tednik NT&RC',
'authors': ['neznani novinar'],
'publish_date': '1998-03-27',
'publisher': 'Novi tednik',
'genres': ['tisk/periodično/časopis'],
'doc_tokenized': [
[
['Po', 'nekajletnem', 'počitku', 'pa', 'se', 'vračajo', 'tudi', 'kralji', 'dark', 'rock', 'godbe', 'JESUS', 'AND', 'THE', 'MARY', 'CHAIN', '.'],
['Brata', 'Reid', 'bosta', 'svojo', 'najnovejšo', 'kreacijo', '»', 'Cracking', 'Up', '«', 'objavila', 'v', 'ponedeljek', 'pri', 'trenutno', 'najuspešnejši', 'neodvisni', 'založbi', 'Creation', '(', 'vodi', 'jo', 'njun', 'nekdanji', 'menager', 'Alan', 'McGee', ',', 'zanjo', 'pa', 'poleg', 'Oasis', 'snema', 'še', 'cel', 'kup', 'popularnih', 'brit', '-', 'popovcev', ')', ',', 'tej', 'pa', 'bo', 'kmalu', 'sledil', 'tudi', 'album', '»', 'Munki', '«', '.']
],
[
['Kultni', 'ameriški', 'tehno', 'freak', 'PLASTIKMAN', 'že', 'vrsto', 'let', 'velja', 'za', 'enega', 'izmed', 'najbolj', 'inovativnih', 'in', 'produktivnih', 'ustvarjalcev', 'sodobne', 'elektronske', 'glasbe', '.'],
['Za', 'založbo', 'Nova', 'Mute', 'je', 'v', 'preteklih', 'nekaj', 'letih', 'posnel', 'cel', 'kup', 'izvrstnih', 'underground', 'dance', 'glasbenih', 'izdelkov', ',', 'pred', 'nedavnim', 'pa', 'je', 'ljubitelje', 'tovrstne', 'godbe', 'presenetil', 'z', 'ambientalnimi', 'odisejadami', ',', 'zbranimi', 'na', 'LP-ju', '»', 'Refused', '«', ',', 'ki', 'ga', 'lahko', 'od', 'prejšnjega', 'ponedeljka', 'kupite', 'tudi', 'v', 'bolje', 'založenih', 'trgovinah', 'z', 'nosilci', 'zvoka', 'na', 'sončni', 'strani', 'Alp', '.']
],
[
['STANE', 'ŠPEGEL']
]
],
'doc_lemmas': [...],
'doc_msds': [...],
'doc_string': [
[
'Po nekajletnem počitku pa se vračajo tudi kralji dark rock godbe JESUS AND THE MARY CHAIN. ',
'Brata Reid bosta svojo najnovejšo kreacijo »Cracking Up« objavila v ponedeljek pri trenutno najuspešnejši neodvisni založbi Creation (vodi jo njun nekdanji menager Alan McGee, zanjo pa poleg Oasis snema še cel kup popularnih brit-popovcev), tej pa bo kmalu sledil tudi album »Munki«.'
],
[
'Kultni ameriški tehno freak PLASTIKMAN že vrsto let velja za enega izmed najbolj inovativnih in produktivnih ustvarjalcev sodobne elektronske glasbe. ',
'Za založbo Nova Mute je v preteklih nekaj letih posnel cel kup izvrstnih underground dance glasbenih izdelkov, pred nedavnim pa je ljubitelje tovrstne godbe presenetil z ambientalnimi odisejadami, zbranimi na LP-ju »Refused«, ki ga lahko od prejšnjega ponedeljka kupite tudi v bolje založenih trgovinah z nosilci zvoka na sončni strani Alp.'
],
[
'STANE ŠPEGEL'
]
],
'id_sents': [['F0000123.000005.0', 'F0000123.000005.1'], ['F0000123.000013.0', 'F0000123.000013.1'], ['F0000123.000020.0']]
}
```
### Data Fields
- `id_doc`: the document ID (string);
- `doc_title`: the document title (string);
- `authors`: author(s) of the document (list of string), e.g. "neznani novinar" (sl) = "unknown/unspecified journalist";
- `publish_date`: publish date (string);
- `publisher`: publisher, e.g., the name of a news agency (string);
- `genres`: genre(s) of the document (list of string) - possible genres: `['tisk', 'tisk/knjižno', 'tisk/knjižno/leposlovno', 'tisk/knjižno/strokovno', 'tisk/periodično', 'tisk/periodično/časopis', 'tisk/periodično/revija', 'tisk/drugo', 'internet']`;
- `doc_tokenized`: tokenized document - the top level lists represent paragraphs, the lists in the level deeper represent sentences, and each sentence contains tokens;
- `doc_lemmas`: lemmatized document - same structure as `doc_tokenized`;
- `doc_msds`: MSD tags of the document - same structure as `doc_tokenized` ([tagset](http://nl.ijs.si/ME/V6/msd/html/msd-sl.html));
- `doc_string`: same as `doc_tokenized` but with properly placed spaces in sentences;
- `id_sents`: IDs of sentences contained inside paragraphs of the document.
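The nested paragraph → sentence → token layout of `doc_tokenized` can be traversed with plain comprehensions (a minimal sketch over a toy instance shaped like the example above):

```python
def count_tokens(doc_tokenized):
    """Count tokens in a document stored as paragraphs of sentences of tokens."""
    return sum(len(sentence)
               for paragraph in doc_tokenized
               for sentence in paragraph)

# Toy document with the same nesting as a real ccGigafida instance.
toy_doc = [
    [["Kultni", "ameriški", "tehno", "freak", "."]],
    [["STANE", "ŠPEGEL"]],
]
n_tokens = count_tokens(toy_doc)
```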
## Dataset Creation
Gigafida consists of texts which were published between 1990 and 2011. The texts come from printed sources and from the web.
Printed part contains fiction, non-fiction and textbooks, and periodicals such as daily newspapers and magazines.
Texts originating from the web were published on news portals, pages of big Slovene companies and more important governmental,
educational, research, cultural and similar institutions.
For more information, please check http://eng.slovenscina.eu/korpusi/gigafida.
## Additional Information
### Dataset Curators
Nataša Logar; et al. (please see http://hdl.handle.net/11356/1035 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{ccGigafida,
title = {Written corpus {ccGigafida} 1.0},
author = {Logar, Nata{\v s}a and Erjavec, Toma{\v z} and Krek, Simon and Gr{\v c}ar, Miha and Holozan, Peter},
url = {http://hdl.handle.net/11356/1035},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
issn = {2820-4042},
year = {2013}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| [
-0.5021858215332031,
-0.5424695611000061,
0.30337727069854736,
0.22739169001579285,
-0.43144339323043823,
-0.003220688086003065,
-0.1857953518629074,
-0.12230560928583145,
0.629310131072998,
0.5447376370429993,
-0.6612545251846313,
-0.9274387955665588,
-0.6387829780578613,
0.28296852111816... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nicoclemens/trainimagenes | nicoclemens | 2022-11-29T17:48:20Z | 21 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-29T17:48:20Z | 2022-11-29T17:44:04.000Z | 2022-11-29T17:44:04 | ---
license: creativeml-openrail-m
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vucinatim/spectrogram-captions | vucinatim | 2023-01-03T00:24:32Z | 21 | 1 | null | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"stable diffusion sound generation text-to-sound text-to-image-to-sound spectrogram",
"region:us"... | 2023-01-03T00:24:32Z | 2022-11-29T17:44:33.000Z | 2022-11-29T17:44:33 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Captioned generic audio clips with spectrogram images
size_categories:
- n<1K
source_datasets: []
tags:
- 'stable diffusion sound generation
text-to-sound
text-to-image-to-sound
spectrogram'
task_categories:
- text-to-image
task_ids: []
---
Dataset of captioned spectrograms (text describing the sound). | [
-0.2184762954711914,
0.07170958071947098,
0.07847946137189865,
0.47692129015922546,
-0.4393340051174164,
0.24656057357788086,
-0.5970582365989685,
-0.24956782162189484,
0.8859320878982544,
1.0159938335418701,
-0.5508180856704712,
-0.7138012647628784,
-0.0672549158334732,
0.3471838235855102... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/medical-keywords | argilla | 2022-12-07T12:00:34Z | 21 | 4 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-12-07T12:00:34Z | 2022-12-07T11:49:17.000Z | 2022-12-07T11:49:17 | ---
language:
- en
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- keyphrase-extraction
- named-entity-recognition
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: prediction
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: score
dtype: float64
- name: start
dtype: int64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: id
dtype: 'null'
- name: metadata
struct:
- name: medical_specialty
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 58986555
num_examples: 148699
download_size: 17498377
dataset_size: 58986555
---
# Dataset Card for "medical-keywords"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Medical transcription data scraped from mtsamples.com
Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples.
This dataset contains sample medical transcriptions for various medical specialties.
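Each `prediction` entry carries character offsets into `text` (see the schema above), so predicted keyword spans can be recovered by slicing. A minimal sketch, where the example record and its label are made up for illustration:

```python
def extract_keywords(record, min_score=0.5):
    """Slice predicted keyword spans out of the text, keeping confident ones."""
    return [
        (record["text"][p["start"]:p["end"]], p["label"])
        for p in record["prediction"]
        if p["score"] >= min_score
    ]

record = {
    "text": "Patient presents with chronic migraine.",
    "prediction": [
        {"start": 22, "end": 38, "label": "MISC", "score": 0.9},
    ],
}
keywords = extract_keywords(record)
```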
### Languages
english
### Citation Information
Acknowledgements
Medical transcription data scraped from mtsamples.com
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | [
0.17993399500846863,
-0.39697763323783875,
0.4617731273174286,
0.013181762769818306,
-0.5461891889572144,
0.20735053718090057,
-0.04371865466237068,
-0.15255586802959442,
0.6815820336341858,
0.7373807430267334,
-0.6783593893051147,
-0.983235239982605,
-0.7348924875259399,
0.223673880100250... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Whispering-GPT/lex-fridman-podcast-transcript-audio | Whispering-GPT | 2022-12-18T17:32:28Z | 21 | 0 | null | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
] | 2022-12-18T17:32:28Z | 2022-12-12T22:36:18.000Z | 2022-12-12T22:36:18 | ---
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: id
dtype: string
- name: channel
dtype: string
- name: channel_id
dtype: string
- name: title
dtype: string
- name: categories
sequence: string
- name: tags
sequence: string
- name: description
dtype: string
- name: text
dtype: string
- name: segments
list:
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 65356108140.0
num_examples: 333
download_size: 64386861854
dataset_size: 65356108140.0
tags:
- whisper
- whispering
- medium
---
# Dataset Card for "lexFridmanPodcast-transcript-audio"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel [Lex Fridman Podcast](https://www.youtube.com/watch?v=FhfmGM6hswI&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4&ab_channel=LexFridman). The transcripts were generated with a medium-size Whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset contains all the transcripts plus the audio of the different videos of Lex Fridman Podcast.
### Data Fields
The dataset is composed of:
- **id**: Id of the YouTube video.
- **channel**: Name of the channel.
- **channel\_id**: Id of the YouTube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the timing and transcription of the video.
  - **start**: Time at which the transcribed segment starts.
  - **end**: Time at which the segment ends.
  - **text**: The text of the segment.
- **audio**: The extracted audio of the video in Ogg format.
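As a quick sketch of how the `segments` field relates to the whole-video `text` field (not from the dataset repo; the sample values are invented):

```python
# Invented example mirroring the segments/text structure described above.
example = {
    "text": "hello world again",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": "hello world"},
        {"start": 1.2, "end": 2.0, "text": "again"},
    ],
}

def join_segments(segments):
    """Concatenate segment texts in time order to recover the transcript."""
    ordered = sorted(segments, key=lambda s: s["start"])
    return " ".join(s["text"].strip() for s in ordered)

assert join_segments(example["segments"]) == example["text"]
```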
### Data Splits
- Train split.
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Lex Fridman Podcast](https://www.youtube.com/watch?v=FhfmGM6hswI&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4&ab_channel=LexFridman)
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. | [
-0.4734615683555603,
-0.4175272583961487,
0.33125415444374084,
0.20606321096420288,
-0.26905182003974915,
0.18238332867622375,
-0.3622020184993744,
-0.10604846477508545,
0.38578763604164124,
0.2584192454814911,
-1.1124727725982666,
-1.0260975360870361,
-0.7634797096252441,
-0.2828189134597... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mwz/ur_para | mwz | 2023-06-24T13:06:04Z | 21 | 0 | null | [
"task_categories:text2text-generation",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ur",
"license:mit",
"region:us"
] | 2023-06-24T13:06:04Z | 2023-01-20T07:11:27.000Z | 2023-01-20T07:11:27 | ---
license: mit
task_categories:
- text2text-generation
- summarization
- text-generation
language:
- ur
pretty_name: ur_para
size_categories:
- 100K<n<1M
---
# Paraphrase Dataset (Urdu)
This dataset contains paraphrases in Urdu. It is provided in Parquet format and consists of a single training split with 393,000 rows.
## Dataset Details
- Columns:
- `sentence1`: The first sentence in a pair of paraphrases (string).
- `sentence2`: The second sentence in a pair of paraphrases (string).
## Usage
You can use this dataset for various natural language processing tasks such as text similarity, paraphrase identification, and language generation.
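Purely illustrative: the column names below match the card, but the rows are placeholder strings standing in for Urdu sentences, and the mismatching scheme is one conventional way to build paraphrase-identification examples, not part of the dataset:

```python
import random

# Placeholder rows with the dataset's column names (real rows are Urdu text).
pairs = [
    {"sentence1": "urdu sentence A", "sentence2": "paraphrase of A"},
    {"sentence1": "urdu sentence B", "sentence2": "paraphrase of B"},
]

def make_identification_examples(rows, seed=0):
    """Turn paraphrase pairs into labeled examples: positives from the
    pairs themselves, negatives by mismatching second sentences."""
    rng = random.Random(seed)
    examples = [(r["sentence1"], r["sentence2"], 1) for r in rows]
    for r in rows:
        other = rng.choice([x for x in rows if x is not r])
        examples.append((r["sentence1"], other["sentence2"], 0))
    return examples

examples = make_identification_examples(pairs)
```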
| [
-0.07434561103582382,
-0.39937594532966614,
0.17786478996276855,
0.6355083584785461,
-0.5213455557823181,
-0.13510215282440186,
0.13690784573554993,
0.282731294631958,
0.06688309460878372,
1.0303447246551514,
-0.3278694152832031,
-0.540427029132843,
-0.4256037473678589,
0.7044786810874939,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yhavinga/imdb_dutch | yhavinga | 2023-01-21T10:57:39Z | 21 | 0 | imdb-movie-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:nl",
"language:en",
"license:other",
"reg... | 2023-01-21T10:57:39Z | 2023-01-21T09:37:16.000Z | 2023-01-21T09:37:16 | ---
pretty_name: IMDB
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- nl
- en
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: imdb-movie-reviews
train-eval-index:
- config: plain_text
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
  name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
dataset_info:
features:
- name: text
dtype: string
- name: text_en
dtype: string
- name: label
dtype:
class_label:
names:
0: neg
1: pos
config_name: plain_text
splits:
- name: train
num_bytes: 69589646
num_examples: 24992
- name: test
num_bytes: 67958995
num_examples: 24992
- name: unsupervised
num_bytes: 139649169
num_examples: 49984
download_size: 108170940
dataset_size: 277197810
---
# Dataset Card for "imdb_dutch"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Large Movie Review Dataset translated to Dutch.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets.
We provide a set of 24,992 highly polar movie reviews for training, and 24,992 for testing. There is additional unlabeled data for use as well.
### Translation to Dutch
The dataset was translated with [yhavinga/ul2-large-en-nl](https://huggingface.co/yhavinga/ul2-large-en-nl).
The translation code is available in the src directory.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
This dataset contains Dutch and English data.
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 108 MiB
- **Size of the generated dataset:** 277 MiB
An example of 'train' looks as follows.
```
{
"label": 0,
"text": "Holy shit. Dit was de slechtste film die ik in lange tijd heb gezien."
"text_en": "Holy crap. This was the worst film I have seen in a long time."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
- `text_en`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
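A minimal sketch of the label encoding above (the review strings are invented examples, not dataset rows):

```python
# Index position corresponds to the integer class label: 0 = neg, 1 = pos.
label_names = ["neg", "pos"]

def decode(example):
    """Attach the human-readable label name to an example."""
    return {**example, "label_name": label_names[example["label"]]}

row = {"text": "Wat een geweldige film!",
       "text_en": "What a great movie!",
       "label": 1}
assert decode(row)["label_name"] == "pos"
```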
### Data Splits
| name |train|unsupervised|test |
|----------|----:|-----------:|----:|
|plain_text|24992| 49984|24992|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding
the English `imdb` dataset.
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
| [
-0.7615659236907959,
-0.490326851606369,
0.018948741257190704,
0.15372857451438904,
-0.42599329352378845,
0.01962283067405224,
-0.3391980230808258,
-0.3882008194923401,
0.7903887629508972,
0.3558122217655182,
-0.7533095479011536,
-0.9403298497200012,
-0.7033836841583252,
0.1203736364841461... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jbarat/plant_species | jbarat | 2023-01-22T14:03:45Z | 21 | 1 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"region:us"
] | 2023-01-22T14:03:45Z | 2023-01-21T17:50:33.000Z | 2023-01-21T17:50:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aechmea_fasciata
'1': agave_americana
'2': agave_attenuata
'3': agave_tequilana
'4': aglaonema_commutatum
'5': albuca_spiralis
'6': allium_cepa
'7': allium_sativum
splits:
- name: train
num_bytes: 82083349.0
num_examples: 800
download_size: 82004194
dataset_size: 82083349.0
license: unknown
task_categories:
- image-classification
language:
- en
pretty_name: Plant Species
size_categories:
- 10K<n<100K
---
# Dataset Card for "plant_species"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5847690105438232,
-0.17921380698680878,
0.19406484067440033,
0.3816590905189514,
-0.205146923661232,
0.07803487032651901,
0.1439816802740097,
-0.3440079092979431,
0.9728972911834717,
0.2595140039920807,
-0.7481226325035095,
-0.7759957909584045,
-0.6036621928215027,
-0.002794341184198856... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlpservicebots/operations | nlpservicebots | 2023-01-23T18:02:46Z | 21 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"region:us"
] | 2023-01-23T18:02:46Z | 2023-01-23T17:52:12.000Z | 2023-01-23T17:52:12 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- es
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/semi-homo | matchbench | 2023-02-20T14:44:52Z | 21 | 0 | null | [
"region:us"
] | 2023-02-20T14:44:52Z | 2023-01-24T08:30:00.000Z | 2023-01-24T08:30:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pile-detoxify | tomekkorbak | 2023-02-07T15:31:11Z | 21 | 1 | null | [
"task_categories:text-classification",
"task_categories:other",
"task_ids:acceptability-classification",
"task_ids:hate-speech-detection",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"sourc... | 2023-02-07T15:31:11Z | 2023-01-25T17:32:30.000Z | 2023-01-25T17:32:30 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: pile-detoxify
size_categories:
- 1M<n<10M
source_datasets:
- extended|the_pile
tags:
- toxicity
- pretraining-with-human-feedback
task_categories:
- text-classification
- other
task_ids:
- acceptability-classification
- hate-speech-detection
- text-scoring
---
# Dataset Card for pile-pii-scrubadub
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the toxicity of each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
1,949,977 instances (one per document), in the training split.
### Data Fields
- texts (sequence): a list of the sentences in the document, segmented using SpaCy
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
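The relationship between the per-document fields can be sketched with invented values: `num_sents` is the count of `scores` and `avg_score` is their mean.

```python
# Invented document mirroring the field layout described above.
doc = {
    "texts": ["A harmless sentence.", "Another sentence."],
    "scores": [0.01, 0.03],
}
doc["num_sents"] = len(doc["scores"])
doc["avg_score"] = sum(doc["scores"]) / doc["num_sents"]

assert doc["num_sents"] == len(doc["texts"])
assert abs(doc["avg_score"] - 0.02) < 1e-9
```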
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), which is a toxic comment classifier.
We used the `unbiased` model which is based on the 124M parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
#### Who are the annotators?
[Detoxify](https://github.com/unitaryai/detoxify)
### Personal and Sensitive Information
This dataset contains all personal identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile).
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of toxic text and personal identifiable information.
(A version of this datatset with personal identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).)
Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) | [
-0.1466348022222519,
-0.4515819251537323,
0.2979830801486969,
0.21162737905979156,
-0.3308597803115845,
-0.24141250550746918,
0.0525093711912632,
-0.2885943353176117,
0.2967604100704193,
0.6478675603866577,
-0.4329594075679779,
-0.8903802037239075,
-0.6766363382339478,
0.31485414505004883,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/lonli | metaeval | 2023-05-31T08:41:36Z | 21 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] | 2023-05-31T08:41:36Z | 2023-02-04T14:48:11.000Z | 2023-02-04T14:48:11 | ---
license: mit
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/microsoft/LoNLI
```bibtex
@article{Tarunesh2021TrustingRO,
title={Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task},
author={Ishan Tarunesh and Somak Aditya and Monojit Choudhury},
journal={ArXiv},
year={2021},
volume={abs/2107.07229}
}
``` | [
-0.2190648764371872,
-0.42718788981437683,
0.6371781229972839,
0.18129797279834747,
-0.07572164386510849,
-0.08841215819120407,
-0.22687847912311554,
-1.0960404872894287,
0.3183082938194275,
0.5343510508537292,
-0.5816202759742737,
-0.33755895495414734,
-0.7102458477020264,
-0.004453473258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sartajekram/BanglaRQA | sartajekram | 2023-05-06T19:04:32Z | 21 | 0 | null | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:human",
"size_categories:10K<n<100K",
"language:bn",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-05-06T19:04:32Z | 2023-02-07T14:56:06.000Z | 2023-02-07T14:56:06 | ---
annotations_creators:
- human
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
language:
- bn
size_categories:
- 10K<n<100K
---
# Dataset Card for `BanglaRQA`
## Table of Contents
- [Dataset Card for `BanglaRQA`](#dataset-card-for-BanglaRQA)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [https://github.com/sartajekram419/BanglaRQA](https://github.com/sartajekram419/BanglaRQA)
- **Paper:** [BanglaRQA: A Benchmark Dataset for Under-resourced Bangla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types](https://aclanthology.org/2022.findings-emnlp.186)
### Dataset Summary
This is a human-annotated Bangla Question Answering (QA) dataset with diverse question-answer types.
### Languages
* `Bangla`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("sartajekram/BanglaRQA")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'passage_id': 'bn_wiki_2977',
'title': 'ফাজিল পরীক্ষা',
'context': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।\n\n১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসাসমূহে জাতীয় শিক্ষাক্রম ও বহুমুখী পাঠ্যসূচি প্রবর্তিত করা হয়। ১৯৮০ সালে অনুষ্ঠিত ফাজিল পরীক্ষায় এই পাঠ্যসুচী কার্যকর হয়। এই শিক্ষা কমিশন অনুসারে ফাজিল শ্রেণীতে ইসলামি শিক্ষার পাশাপাশি সাধারণ পাঠ্যসূচী অন্তর্ভুক্ত করে ফাজিল পরীক্ষাকে সাধারণ উচ্চ মাধ্যমিক এইচ এস সির সমমান ঘোষণা করা হয়।\n\n১৯৭৮ সালে অধ্যাপক মুস্তফা বিন কাসিমের নেতৃত্বে সিনিয়র মাদ্রাসা শিক্ষা ব্যবস্থা কমিটি গঠিত হয়। এই কমিটির নির্দেশনায় ১৯৮৪ সালে সাধারণ শিক্ষার স্তরের সঙ্গে বাংলাদেশ মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসা শিক্ষা স্তরের সামঞ্জস্য করা হয়। ফাজিল স্তরকে ২ বছর মেয়াদী কোর্সে উন্নিত করে, মোট ১৬ বছর ব্যাপী আলিয়া মাদ্রাসার পূর্ণাঙ্গ আধুনিক শিক্ষা ব্যবস্থা প্রবর্তন করা হয়। এই কমিশনের মাধ্যমেই সরকার ফাজিল পরীক্ষাকে সাধারণ ডিগ্রি মান ঘোষণা করে।',
'question_id': 'bn_wiki_2977_01',
'question_text': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা ?',
'is_answerable': '1',
'question_type': 'confirmation',
'answers':
{
'answer_text': ['হ্যাঁ', 'হ্যাঁ '],
'answer_type': ['yes/no', 'yes/no']
},
}
```
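A small helper sketch (not part of the dataset repo) for walking the parallel `answer_text`/`answer_type` lists shown in the instance above:

```python
# Parallel answer lists as in the example instance from the card.
answers = {
    "answer_text": ["হ্যাঁ", "হ্যাঁ "],
    "answer_type": ["yes/no", "yes/no"],
}

def paired_answers(ans):
    """Zip the parallel answer lists into (text, type) tuples,
    stripping stray whitespace left by annotators."""
    return [(t.strip(), k) for t, k in zip(ans["answer_text"], ans["answer_type"])]

assert paired_answers(answers) == [("হ্যাঁ", "yes/no"), ("হ্যাঁ", "yes/no")]
```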
### Data Splits
| split |count |
|----------|--------|
|`train`| 11,912 |
|`validation`| 1,484 |
|`test`| 1,493 |
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{ekram-etal-2022-banglarqa,
title = "{B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types",
author = "Ekram, Syed Mohammed Sartaj and
Rahman, Adham Arik and
Altaf, Md. Sajid and
Islam, Mohammed Saidul and
Rahman, Mehrab Mustafy and
Rahman, Md Mezbaur and
Hossain, Md Azam and
Kamal, Abu Raihan Mostofa",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.186",
pages = "2518--2532",
abstract = "High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA",
}
```
| [
-0.6297765374183655,
-0.725426971912384,
0.12717697024345398,
0.0934290811419487,
-0.4694769084453583,
-0.07088314741849899,
0.04502612352371216,
-0.1803007870912552,
0.4930931031703949,
0.44459405541419983,
-0.628779411315918,
-0.3586840331554413,
-0.32492369413375854,
0.41267988085746765... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tiagoseca/raw_true_labels | tiagoseca | 2023-02-27T11:36:55Z | 21 | 0 | null | [
"region:us"
] | 2023-02-27T11:36:55Z | 2023-02-08T14:08:00.000Z | 2023-02-08T14:08:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KocLab-Bilkent/turkish-constitutional-court | KocLab-Bilkent | 2023-02-20T19:53:46Z | 21 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:tr",
"license:cc-by-4.0",
"region:us"
] | 2023-02-20T19:53:46Z | 2023-02-18T15:03:21.000Z | 2023-02-18T15:03:21 | ---
license: cc-by-4.0
task_categories:
- text-classification
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
language:
- tr
size_categories:
- 10M<n<100M
pretty_name: predicting-turkish-constitutional-court-decisions
source_datasets:
- original
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- **Homepage:**
- **Repository:** https://github.com/koc-lab/law-turk
- **Paper:** https://doi.org/10.1016/j.ipm.2021.102684
- **Point of Contact:** [Ceyhun Emre Öztürk](mailto:ceyhun.ozturk@bilkent.edu.tr)
### Dataset Summary
This dataset is extracted from the GitHub repository below, which was created for the journal paper at https://www.sciencedirect.com/science/article/abs/pii/S0306457321001692.
https://github.com/koc-lab/law-turk
The dataset includes 1290 court case decision texts from the Turkish Constitutional Court. Each sample has one label, the ruling of the court: either "Violation" or "No violation". Of the 1290 samples, 1141 are labeled "Violation".
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Turkish
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `Text`: Legal case decision texts
- `Label`: The ruling of the court.
- 'Violation': The court decides for the legal case that there is a violation of the constitution.
- 'No violation': The court decides for the legal case that there is no violation of the constitution.
### Data Splits
The data has been split randomly into 70% train (903), 15% validation (195), 15% test (195).
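Given the class imbalance noted above (1141 "Violation" vs. 149 "No violation" decisions), the random 70/15/15 split can be sketched as follows. The sample construction is illustrative, not the actual data, and the exact split sizes published with the dataset may differ slightly due to rounding:

```python
import random

# Illustrative stand-in for the real corpus: 1141 "Violation" and
# 149 "No violation" decisions (1290 samples in total).
samples = [{"Label": "Violation"}] * 1141 + [{"Label": "No violation"}] * 149

random.seed(0)
random.shuffle(samples)

# Integer arithmetic for a 70% / 15% / 15% split.
n = len(samples)
n_train = n * 70 // 100  # 903
n_val = n * 15 // 100    # 193

train = samples[:n_train]
val = samples[n_train:n_train + n_val]
test = samples[n_train + n_val:]

print(len(train), len(val), len(test))
```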
## Dataset Creation
### Curation Rationale
This dataset was created to further research on predicting the decisions of Turkish high courts, in particular the Turkish Constitutional Court.
### Source Data
The data were collected from *Türkiye Cumhuriyeti Anayasa Mahkemesi* (T.C. AYM, Turkish Constitutional Court).
#### Initial Data Collection and Normalization
The data were collected from the official website of the Turkish Constitutional Court: https://www.anayasa.gov.tr/tr/kararlar-bilgi-bankasi/.
#### Who are the source language producers?
The source language producers are judges.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
### Dataset Curators
The data collection was done by Emre Mumcuoğlu ([Email](mailto:mumcuoglu@ee.bilkent.edu.tr)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Turkish law.
### Citation Information
```
@article{mumcuoglu21natural,
title = {{Natural language processing in law: Prediction of outcomes in the higher courts of Turkey}},
journal = {Information Processing \& Management},
volume = {58},
number = {5},
year = {2021},
author = {Mumcuoğlu, Emre and Öztürk, Ceyhun E. and Ozaktas, Haldun M. and Koç, Aykut}
}
``` | [
-0.42535653710365295,
-0.6274503469467163,
0.30654165148735046,
0.23690223693847656,
-0.35021138191223145,
-0.3682691752910614,
-0.2139812409877777,
-0.2872348129749298,
0.15830090641975403,
0.6714896559715271,
-0.4794784188270569,
-0.9025505185127258,
-0.7829406261444092,
0.06287471204996... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
loubnabnl/large-text-issues | loubnabnl | 2023-03-01T19:20:20Z | 21 | 0 | null | [
"region:us"
] | 2023-03-01T19:20:20Z | 2023-02-24T16:14:49.000Z | 2023-02-24T16:14:49 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: org
dtype: string
- name: issue_id
dtype: int64
- name: issue_number
dtype: int64
- name: pull_request
struct:
- name: number
dtype: int64
- name: repo
dtype: string
- name: user_login
dtype: string
- name: events
list:
- name: action
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: float64
- name: datetime
dtype: int64
- name: large_text
dtype: bool
- name: masked_author
dtype: string
- name: nb_lines
dtype: int64
- name: size
dtype: int64
- name: text
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: user_count
dtype: int64
- name: event_count
dtype: int64
- name: text_size
dtype: int64
- name: bot_issue
dtype: bool
- name: modified_by_bot
dtype: bool
- name: text_size_no_bots
dtype: int64
- name: modified_usernames
dtype: bool
- name: contains_large
dtype: bool
splits:
- name: train
num_bytes: 3807857
num_examples: 163
download_size: 1040266
dataset_size: 3807857
---
# Dataset Card for "large-text-issues"
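The schema above describes GitHub issue threads with per-event metadata and size/bot flags; a hypothetical record and a simple filter over the `contains_large` flag might look like this (all field values are invented):

```python
# Hypothetical issue records shaped like the schema above; values are invented.
issues = [
    {
        "repo": "datasets", "org": "huggingface",
        "issue_id": 1, "issue_number": 101,
        "events": [
            {"action": "opened", "author": "user_a", "comment_id": None,
             "datetime": 1677000000, "large_text": True,
             "masked_author": "user_0", "nb_lines": 1200, "size": 48000,
             "text": "...", "title": "Big traceback", "type": "issue",
             "user_count": 2},
        ],
        "event_count": 1, "text_size": 48000,
        "bot_issue": False, "modified_by_bot": False,
        "text_size_no_bots": 48000, "modified_usernames": True,
        "contains_large": True,
    },
    {
        "repo": "transformers", "org": "huggingface",
        "issue_id": 2, "issue_number": 202,
        "events": [], "event_count": 0, "text_size": 300,
        "bot_issue": False, "modified_by_bot": False,
        "text_size_no_bots": 300, "modified_usernames": False,
        "contains_large": False,
    },
]

# Keep only threads that contain at least one large text event.
large = [i for i in issues if i["contains_large"]]
print([i["issue_number"] for i in large])  # → [101]
```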
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5376545786857605,
-0.5178585052490234,
0.454026997089386,
0.5146267414093018,
-0.06205705180764198,
-0.04308119788765907,
-0.24088509380817413,
-0.43824803829193115,
0.804068922996521,
0.5973553657531738,
-0.6261856555938721,
-0.6663607954978943,
-0.7666052579879761,
-0.0977267846465110... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
artem9k/ai-text-detection-pile | artem9k | 2023-02-27T03:37:54Z | 21 | 2 | null | [
"license:mit",
"region:us"
] | 2023-02-27T03:37:54Z | 2023-02-27T02:52:29.000Z | 2023-02-27T02:52:29 | ---
license: mit
---
# Dataset Card for AI Text Dectection Pile
## Dataset Description
- **Point of Contact:** artem9k@gmail.com
### Dataset Summary
This is a large-scale dataset intended for AI text detection tasks, geared toward long-form text and essays. It contains samples of both human-written text and AI-generated text from GPT2, GPT3, ChatGPT, and GPTJ.
Here is the (tentative) breakdown:
#### Human Text
| Dataset | Num Samples | Link |
| ----------- | ----------- | ----------- |
| Reddit WritingPromps | 570k | [Link](https://www.kaggle.com/datasets/ratthachat/writing-prompts) |
| OpenAI Webtext | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) |
| HC3 (Human Responses) | 58k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) |
| ivypanda-essays | TODO | TODO |
| **Total** | **990k** | **-** |
#### AI-Generated Text
| Model | Dataset | Num Samples | Link |
| ----------- | ----------- | ----------- | ----------- |
| GPT2 | OpenAI gpt2-output-dataset | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) |
| GPT3 | pairwise-davinci | 44k | TODO |
| GPT3 | synthetic-instruct-davinci-pairwise | 30k | [Link](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses) |
| GPTJ | synthetic-instruct-gptj-pairwise | 44k | [Link](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
| ChatGPT | Scraped from twitter | 5k | **-** |
| ChatGPT | HC3 (ChatGPT Responses) | 27k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) |
| ChatGPT | ChatGPT Prompts/emergentmind | 500 | [Link](https://huggingface.co/datasets/MohamedRashad/ChatGPT-prompts/tree/main) |
| **Total** | **340k** | **-** | **-** |
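A detection corpus of this shape pairs each text with its origin. A minimal sketch of how the human and AI sources above might be merged into labeled `{TEXT, SOURCE}` records (the sample texts are invented):

```python
# Invented sample texts standing in for the real sources listed above.
human_texts = [
    "Once upon a time, a prompt inspired a story...",
    "The essay argues that access to education shapes outcomes.",
]
ai_texts = [
    "As an AI language model, I can summarize the key points...",
    "In conclusion, the evidence suggests a clear trend.",
]

# Merge into the TEXT / SOURCE layout described in the Data Fields section.
dataset = (
    [{"TEXT": t, "SOURCE": "human"} for t in human_texts]
    + [{"TEXT": t, "SOURCE": "ai"} for t in ai_texts]
)

print(len(dataset), {d["SOURCE"] for d in dataset})
```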
### Supported Tasks and Leaderboards
Text Classification, AI Text Detection.
### Languages
English.
### Data Fields
- `TEXT`: the text of the sample.
- `SOURCE`: either `"human"` or `"ai"`. | [
-0.42881476879119873,
-0.684938907623291,
0.2458297163248062,
-0.005394488573074341,
-0.1941610425710678,
0.1046028584241867,
-0.08587339520454407,
-0.6394516229629517,
0.1471058577299118,
0.5387808084487915,
-0.6048146486282349,
-0.8574953675270081,
-0.6963401436805725,
0.2987668216228485... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hamid-reza/small-persian-QA | Hamid-reza | 2023-03-02T20:06:30Z | 21 | 0 | null | [
"region:us"
] | 2023-03-02T20:06:30Z | 2023-03-02T20:02:45.000Z | 2023-03-02T20:02:45 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 2525696.3482460035
num_examples: 1261
- name: validation
num_bytes: 260467.5806451613
num_examples: 130
download_size: 1158438
dataset_size: 2786163.9288911647
---
# Dataset Card for "small-persian-QA"
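The `answers` field in the schema above follows the SQuAD convention, where `answer_start` is a character offset into `context`. A minimal sketch (the sample is invented; the real records are in Persian):

```python
# Invented SQuAD-style sample; the actual dataset is in Persian.
sample = {
    "id": 1,
    "title": "Example",
    "context": "Tehran is the capital of Iran.",
    "question": "What is the capital of Iran?",
    "answers": {"text": ["Tehran"], "answer_start": [0]},
}

# `answer_start` indexes into `context`, so the answer span can be recovered:
start = sample["answers"]["answer_start"][0]
text = sample["answers"]["text"][0]
assert sample["context"][start:start + len(text)] == text
print(text)  # → Tehran
```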
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6440785527229309,
-0.31296786665916443,
0.5218084454536438,
0.08629899471998215,
-0.346719354391098,
0.04002085328102112,
-0.01565459556877613,
-0.05427468940615654,
0.9551475644111633,
0.23199667036533356,
-0.7514501810073853,
-0.6881675124168396,
-0.20675091445446014,
-0.1017467379570... | null | null | null | null | null | null | null | null | null | null | null | null | null |