id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mesolitica/stemming | mesolitica | 2022-09-01T12:25:00Z | 16 | 0 | null | [
"region:us"
] | 2022-09-01T12:25:00Z | 2022-08-31T12:52:44.000Z | 2022-08-31T12:52:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clips/VaccinChatNL | clips | 2023-03-21T15:22:36Z | 16 | 0 | null | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nl",
"license:cc-by-4.0",
"covid-19",
"FAQ",
"question-a... | 2023-03-21T15:22:36Z | 2022-09-02T11:52:00.000Z | 2022-09-02T11:52:00 | ---
annotations_creators:
- expert-generated
language:
- nl
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: VaccinChatNL
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- covid-19
- FAQ
- question-answer pairs
task_categories:
- text-classification
task_ids:
- intent-classification
---
# Dataset Card for VaccinChatNL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale) -->
<!-- - [Source Data](#source-data) -->
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
<!-- - [Social Impact of Dataset](#social-impact-of-dataset) -->
- [Discussion of Biases](#discussion-of-biases)
<!-- - [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
<!-- - [Dataset Curators](#dataset-curators) -->
<!-- - [Licensing Information](#licensing-information) -->
- [Citation Information](#citation-information)
<!-- - [Contributions](#contributions) -->
## Dataset Description
<!-- - **Homepage:**
- **Repository:**
- **Paper:** [To be added]
- **Leaderboard:** -->
- **Point of Contact:** [Jeska Buhmann](mailto:jeska.buhmann@uantwerpen.be)
### Dataset Summary
VaccinChatNL is a Flemish Dutch FAQ dataset on the topic of COVID-19 vaccinations in Flanders. It consists of 12,883 user questions divided over 181 answer labels, thus providing large groups of semantically equivalent paraphrases (a many-to-one mapping of user questions to answer labels). VaccinChatNL is the first Dutch many-to-one FAQ dataset of this size.
### Supported Tasks and Leaderboards
- `text-classification`: the dataset can be used to train a classification model for Dutch frequently asked questions on the topic of COVID-19 vaccination in Flanders.
### Languages
Dutch (Flemish): the BCP-47 code for Dutch as generally spoken in Flanders (Belgium) is nl-BE.
## Dataset Structure
### Data Instances
For each instance, there is a string for the user question and a string for the label of the annotated answer. See the [CLiPS / VaccinChatNL dataset viewer](https://huggingface.co/datasets/clips/VaccinChatNL/viewer/clips--VaccinChatNL/train).
```
{"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"}
```
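Each instance is a JSON object in this shape; a minimal sketch of reading one record with the standard `json` module (field names exactly as shown above):

```python
import json

# One labeled instance in the format shown above.
line = '{"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"}'

record = json.loads(line)
print(record["sentence1"])  # the user question
print(record["label"])      # the annotated answer class (intent)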
### Data Fields
- `sentence1`: a string containing the user question
- `label`: a string containing the name of the intent (the answer class)
### Data Splits
The VaccinChatNL dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Labeled User Questions in Split |
| ------------- | ------------------------------------------ |
| Train | 10,542 |
| Validation | 1,171 |
| Test | 1,170 |
## Dataset Creation
<!-- ### Curation Rationale
[More Information Needed] -->
<!-- ### Source Data
[Perhaps a link to vaccinchat.be and some of the website that were used for information] -->
<!-- #### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed] -->
### Annotations
#### Annotation process
Annotation was an iterative, semi-automatic process. Starting from a very limited dataset of approximately 50 question-answer pairs (_sentence1-label_ pairs), a text classification model was trained and deployed in a publicly available chatbot. As the chatbot was used, the predicted labels for new user questions were checked and corrected where necessary, and new answers were added to the dataset. After each round of corrections, the model was retrained on the updated dataset. This iterative approach led to the final dataset of 12,883 user questions divided over 181 answer labels.
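The iterative loop described above can be sketched as follows. This is a hypothetical illustration with stub functions (`train`, `collect`, `correct` are stand-ins); the actual pipeline involved a live chatbot and manual verification:

```python
def iterative_annotation(dataset, rounds, train, collect, correct):
    """Sketch of the semi-automatic annotation loop.

    train:   fits a classifier on the current (question, label) pairs
    collect: gathers new user questions with predicted labels
    correct: returns manually verified/corrected pairs
    """
    for _ in range(rounds):
        model = train(dataset)            # retrain on the updated dataset
        predictions = collect(model)      # new questions from chatbot usage
        dataset = dataset + correct(predictions)  # human-checked labels
    return dataset

# Toy run: each round adds two corrected question-label pairs.
final = iterative_annotation(
    dataset=[("q0", "l0")],
    rounds=3,
    train=lambda data: len(data),                     # stand-in "model"
    collect=lambda model: [("q", "l?"), ("q", "l?")],
    correct=lambda preds: [(q, "l") for q, _ in preds],
)
print(len(final))  # 1 + 3 * 2 = 7
```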
#### Who are the annotators?
The VaccinChatNL data were annotated by members and students of [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/). All annotators have a background in Computational Linguistics.
### Personal and Sensitive Information
The data are anonymized in the sense that a user question can never be traced back to a specific individual.
## Considerations for Using the Data
<!-- ### Social Impact of Dataset
[More Information Needed] -->
### Discussion of Biases
This dataset contains real user questions, including a fairly large portion (7%) of out-of-domain questions or remarks (_label: nlu_fallback_). This class consists of incomprehensible questions as well as jokes and insulting remarks.
<!-- ### Other Known Limitations
[Perhaps some information of % of exact overlap between train and test set] -->
## Additional Information
<!-- ### Dataset Curators
[More Information Needed] -->
<!-- ### Licensing Information
[More Information Needed] -->
### Citation Information
```
@inproceedings{buhmann-etal-2022-domain,
title = "Domain- and Task-Adaptation for {V}accin{C}hat{NL}, a {D}utch {COVID}-19 {FAQ} Answering Corpus and Classification Model",
author = "Buhmann, Jeska and De Bruyn, Maxime and Lotfi, Ehsan and Daelemans, Walter",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.312",
pages = "3539--3549"
}
```
<!-- ### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
| [
-0.3129641115665436,
-0.7880530953407288,
-0.18563491106033325,
0.03305713087320328,
-0.07979747653007507,
-0.16381067037582397,
-0.20742318034172058,
-0.2597908675670624,
0.3347683548927307,
0.4223705232143402,
-0.43738794326782227,
-0.7317219376564026,
-0.4810759425163269,
0.187862023711... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cahya/test01 | cahya | 2022-09-07T15:39:58Z | 16 | 0 | null | [
"region:us"
] | 2022-09-07T15:39:58Z | 2022-09-04T08:22:19.000Z | 2022-09-04T08:22:19 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pedramyamini/ku_radaw_news | pedramyamini | 2023-10-05T04:05:49Z | 16 | 0 | null | [
"license:afl-3.0",
"region:us"
] | 2023-10-05T04:05:49Z | 2022-09-04T10:34:39.000Z | 2022-09-04T10:34:39 | ---
license: afl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anton-l/earnings22_baseline_5_gram | anton-l | 2022-10-17T18:35:04Z | 16 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2022-10-17T18:35:04Z | 2022-09-17T15:31:55.000Z | 2022-09-17T15:31:55 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pitagorak/Yo | Pitagorak | 2022-10-01T04:21:10Z | 16 | 0 | null | [
"license:other",
"region:us"
] | 2022-10-01T04:21:10Z | 2022-09-29T01:03:00.000Z | 2022-09-29T01:03:00 | ---
license: other
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Allanduu/fotosduu | Allanduu | 2022-09-29T10:41:51Z | 16 | 0 | null | [
"region:us"
] | 2022-09-29T10:41:51Z | 2022-09-29T09:45:36.000Z | 2022-09-29T09:45:36 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raydelrosario/Segundo | raydelrosario | 2022-09-29T17:29:21Z | 16 | 0 | null | [
"region:us"
] | 2022-09-29T17:29:21Z | 2022-09-29T15:47:08.000Z | 2022-09-29T15:47:08 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mvazquez/LSE_eSaude_UVIGO_OSLWL | mvazquez | 2022-10-02T19:35:04Z | 16 | 0 | null | [
"region:us"
] | 2022-10-02T19:35:04Z | 2022-10-02T19:30:38.000Z | 2022-10-02T19:30:38 |
---
annotations_creators:
- expert-generated
language:
- lse
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: LSE_eSaude_UVIGO_OSLWL
size_categories:
- n<1K
source_datasets:
- original
tags:
- sign spotting
- sign language recognition
- lse
task_categories:
- other
task_ids: []
---
# Dataset Card for LSE_eSaude_UVIGO_OSLWL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| [
-0.3923821449279785,
-0.3562878668308258,
0.26143723726272583,
0.22453458607196808,
-0.2730810046195984,
0.1712443083524704,
-0.3178485035896301,
-0.5851607918739319,
0.5927407741546631,
0.6309691071510315,
-0.8004866242408752,
-1.1188701391220093,
-0.7980042695999146,
0.21837574243545532,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pmc_5 | ywchoi | 2022-10-05T17:19:39Z | 16 | 0 | null | [
"region:us"
] | 2022-10-05T17:19:39Z | 2022-10-05T16:21:57.000Z | 2022-10-05T16:21:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/ara_emotion | arbml | 2022-11-03T14:41:29Z | 16 | 0 | null | [
"region:us"
] | 2022-11-03T14:41:29Z | 2022-10-05T22:34:17.000Z | 2022-10-05T22:34:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-3750000-3800000 | tomekkorbak | 2022-10-06T02:06:41Z | 16 | 0 | null | [
"region:us"
] | 2022-10-06T02:06:41Z | 2022-10-06T02:06:34.000Z | 2022-10-06T02:06:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pmc_0_cleaned | ywchoi | 2022-10-07T17:13:03Z | 16 | 0 | null | [
"region:us"
] | 2022-10-07T17:13:03Z | 2022-10-07T17:12:28.000Z | 2022-10-07T17:12:28 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RishiGupta/amazon-shoe-reviews | RishiGupta | 2022-10-08T14:26:23Z | 16 | 0 | null | [
"region:us"
] | 2022-10-08T14:26:23Z | 2022-10-08T11:30:22.000Z | 2022-10-08T11:30:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BrownFox3/JoBro | BrownFox3 | 2022-10-08T17:32:20Z | 16 | 0 | null | [
"region:us"
] | 2022-10-08T17:32:20Z | 2022-10-08T16:14:07.000Z | 2022-10-08T16:14:07 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pmc_5_cleaned | ywchoi | 2022-10-08T21:29:58Z | 16 | 0 | null | [
"region:us"
] | 2022-10-08T21:29:58Z | 2022-10-08T20:31:56.000Z | 2022-10-08T20:31:56 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pmc_8_cleaned | ywchoi | 2022-10-09T03:57:16Z | 16 | 0 | null | [
"region:us"
] | 2022-10-09T03:57:16Z | 2022-10-09T02:36:41.000Z | 2022-10-09T02:36:41 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pmc_9_cleaned | ywchoi | 2022-10-09T04:40:59Z | 16 | 0 | null | [
"region:us"
] | 2022-10-09T04:40:59Z | 2022-10-09T04:08:21.000Z | 2022-10-09T04:08:21 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Chinchis/imagenes | Chinchis | 2022-10-13T05:44:07Z | 16 | 0 | null | [
"license:gpl",
"region:us"
] | 2022-10-13T05:44:07Z | 2022-10-09T22:56:58.000Z | 2022-10-09T22:56:58 | ---
license: gpl
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bioskop/BeccaCP | Bioskop | 2022-10-10T01:52:28Z | 16 | 0 | null | [
"license:unknown",
"region:us"
] | 2022-10-10T01:52:28Z | 2022-10-10T01:52:00.000Z | 2022-10-10T01:52:00 | ---
license: unknown
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-KETI-AIR__korquad-v1.0-acb0d1-1711659840 | autoevaluate | 2022-10-10T12:25:13Z | 16 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-10T12:25:13Z | 2022-10-10T11:38:00.000Z | 2022-10-10T11:38:00 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- KETI-AIR/korquad
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: ['angelina-wang/directional_bias_amplification']
dataset_name: KETI-AIR/korquad
dataset_config: v1.0
dataset_split: train
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: KETI-AIR/korquad
* Config: v1.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HANSOLYOO](https://huggingface.co/HANSOLYOO) for evaluating this model. | [
-0.5331105589866638,
-0.5122230052947998,
0.24116116762161255,
0.14361919462680817,
-0.01155191008001566,
0.031018367037177086,
0.11470239609479904,
-0.5011204481124878,
0.08112852275371552,
0.4031035900115967,
-1.1754541397094727,
0.027387645095586777,
-0.5538758039474487,
-0.054448634386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/cochrane_dense_oracle | allenai | 2022-11-18T19:46:14Z | 16 | 0 | multi-document-summarization | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | 2022-11-18T19:46:14Z | 2022-10-12T13:43:35.000Z | 2022-10-12T13:43:35 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by documents retrieved with a __dense__ retriever.
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
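The `"oracle"` top-k strategy described above can be sketched as follows. This is a hypothetical illustration (the `oracle_top_k` helper and its inputs are assumptions; actual scores would come from the Contriever retriever via PyTerrier):

```python
def oracle_top_k(scored_docs, original_inputs):
    """Select as many retrieved documents as the example originally had inputs.

    scored_docs:     list of (doc_id, score) pairs from the dense retriever.
    original_inputs: the example's original input documents.
    """
    k = len(original_inputs)  # "oracle" k: the original number of inputs
    ranked = sorted(scored_docs, key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Example: an instance that originally had 2 input documents.
retrieved = oracle_top_k([("d1", 0.2), ("d2", 0.9), ("d3", 0.5)], ["a", "b"])
print(retrieved)  # ["d2", "d3"]
```

Note that with this strategy Precision@k, Recall@k, and Rprec coincide, which is why the three columns in the tables below share one value per split.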
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.4487 | 0.4487 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.4424 | 0.4424 |
Retrieval results on the `test` set:
N/A. The test set is blind, so we do not have any queries.
-0.07613812386989594,
-0.2748023569583893,
0.27583444118499756,
0.30847421288490295,
-0.21059706807136536,
-0.2845284640789032,
-0.11271736770868301,
-0.09662897139787674,
0.545505166053772,
0.6129191517829895,
-0.5751116871833801,
-0.6779780983924866,
-0.734003484249115,
0.220183476805686... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nikitam/ACES | nikitam | 2022-10-28T07:53:15Z | 16 | 5 | null | [
"task_categories:translation",
"multilinguality:multilingual",
"source_datasets:FLORES-101, FLORES-200, PAWS-X, XNLI, XTREME, WinoMT, Wino-X, MuCOW, EuroParl ConDisco, ParcorFull",
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"arxiv:2210.15615",
"region:us"
] | 2022-10-28T07:53:15Z | 2022-10-13T07:37:39.000Z | 2022-10-13T07:37:39 | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- FLORES-101, FLORES-200, PAWS-X, XNLI, XTREME, WinoMT, Wino-X, MuCOW, EuroParl ConDisco, ParcorFull
task_categories:
- translation
pretty_name: ACES
---
# Dataset Card for ACES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Usage](#usage)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Repository:** [ACES dataset repository](https://github.com/EdinburghNLP/ACES)
- **Paper:** [arXiv](https://arxiv.org/abs/2210.15615)
### Dataset Summary
ACES consists of 36,476 examples covering 146 language pairs and representing challenges from 68 phenomena for evaluating machine translation metrics. We focus on translation accuracy errors and base the phenomena covered in our challenge set on the Multidimensional Quality Metrics (MQM) ontology. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge.
### Supported Tasks and Leaderboards
- Evaluation of machine translation metrics
- Potentially useful for contrastive machine translation evaluation
### Languages
The dataset covers 146 language pairs as follows:
af-en, af-fa, ar-en, ar-fr, ar-hi, be-en, bg-en, bg-lt, ca-en, ca-es, cs-en, da-en, de-en, de-es, de-fr, de-ja, de-ko, de-ru, de-zh, el-en, en-af, en-ar, en-be, en-bg, en-ca, en-cs, en-da, en-de, en-el, en-es, en-et, en-fa, en-fi, en-fr, en-gl, en-he, en-hi, en-hr, en-hu, en-hy, en-id, en-it, en-ja, en-ko, en-lt, en-lv, en-mr, en-nl, en-no, en-pl, en-pt, en-ro, en-ru, en-sk, en-sl, en-sr, en-sv, en-ta, en-tr, en-uk, en-ur, en-vi, en-zh, es-ca, es-de, es-en, es-fr, es-ja, es-ko, es-zh, et-en, fa-af, fa-en, fi-en, fr-de, fr-en, fr-es, fr-ja, fr-ko, fr-mr, fr-ru, fr-zh, ga-en, gl-en, he-en, he-sv, hi-ar, hi-en, hr-en, hr-lv, hu-en, hy-en, hy-vi, id-en, it-en, ja-de, ja-en, ja-es, ja-fr, ja-ko, ja-zh, ko-de, ko-en, ko-es, ko-fr, ko-ja, ko-zh, lt-bg, lt-en, lv-en, lv-hr, mr-en, nl-en, no-en, pl-en, pl-mr, pl-sk, pt-en, pt-sr, ro-en, ru-de, ru-en, ru-es, ru-fr, sk-en, sk-pl, sl-en, sr-en, sr-pt, sv-en, sv-he, sw-en, ta-en, th-en, tr-en, uk-en, ur-en, vi-en, vi-hy, wo-en, zh-de, zh-en, zh-es, zh-fr, zh-ja, zh-ko
## Dataset Structure
### Data Instances
Each data instance contains the following features: _source_, _good-translation_, _incorrect-translation_, _reference_, _phenomena_, _langpair_
See the [ACES corpus viewer](https://huggingface.co/datasets/nikitam/ACES/viewer/nikitam--ACES/train) to explore more examples.
An example from the ACES challenge set looks like the following:
```
{'source': "Proper nutritional practices alone cannot generate elite performances, but they can significantly affect athletes' overall wellness.", 'good-translation': 'Las prácticas nutricionales adecuadas por sí solas no pueden generar rendimiento de élite, pero pueden afectar significativamente el bienestar general de los atletas.', 'incorrect-translation': 'Las prácticas nutricionales adecuadas por sí solas no pueden generar rendimiento de élite, pero pueden afectar significativamente el bienestar general de los jóvenes atletas.', 'reference': 'No es posible que las prácticas nutricionales adecuadas, por sí solas, generen un rendimiento de elite, pero puede influir en gran medida el bienestar general de los atletas .', 'phenomena': 'addition', 'langpair': 'en-es'}
```
### Data Fields
- 'source': a string containing the text that needs to be translated
- 'good-translation': possible translation of the source sentence
- 'incorrect-translation': translation of the source sentence that contains an error or phenomenon of interest
- 'reference': the gold standard translation
- 'phenomena': the type of error or phenomena being studied in the example
- 'langpair': the source language and the target language pair of the example
Note that the _good-translation_ may not be free of errors, but it is a better translation than the _incorrect-translation_.
### Data Splits
The ACES dataset has 1 split: _train_ which contains the challenge set. There are 36476 examples.
## Dataset Creation
### Curation Rationale
With the advent of neural networks and especially Transformer-based architectures, machine translation outputs have become more and more fluent. Fluency errors are also judged less severely than accuracy errors by human evaluators (Freitag et al., 2021), which reflects the fact that accuracy errors can have dangerous consequences in certain contexts, for example in the medical and legal domains. For these reasons, we decided to build a challenge set focused on accuracy errors.
Another aspect we focus on is including a broad range of language pairs in ACES. Whenever possible we create examples for all language pairs covered in a source dataset when we use automatic approaches. For phenomena where we create examples manually, we also aim to cover at least two language pairs per phenomenon but are of course limited to the languages spoken by the authors.
We aim to offer a collection of challenge sets covering both easy and hard phenomena. While it may be of interest to the community to continuously test on harder examples to check where machine translation evaluation metrics still break, we believe that easy challenge sets are just as important to ensure that metrics do not suddenly become worse at identifying error types that were previously considered "solved". Therefore, we take a holistic view when creating ACES and do not filter out individual examples or exclude challenge sets based on baseline metric performance or other factors.
### Source Data
#### Initial Data Collection and Normalization
Please see Sections 4 and 5 of the paper.
#### Who are the source language producers?
The dataset contains sentences found in FLORES-101, FLORES-200, PAWS-X, XNLI, XTREME, WinoMT, Wino-X, MuCOW, EuroParl ConDisco, ParcorFull datasets. Please refer to the respective papers for further details.
### Personal and Sensitive Information
The external datasets may contain sensitive information. Refer to the respective datasets for further details.
## Considerations for Using the Data
### Usage
ACES has been primarily designed to evaluate machine translation metrics on accuracy errors. We expect a metric to score the _good-translation_ consistently higher than the _incorrect-translation_. We report metric performance using a Kendall-tau-like correlation. It measures the number of times a metric scores the good translation above the incorrect translation (concordant) against the number of times it scores the good translation equal to or lower than the incorrect translation (discordant).
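As a rough illustration (a sketch of the statistic described above, not the exact evaluation code used in the paper), the Kendall-tau-like correlation can be computed from paired metric scores like this:

```python
def kendall_tau_like(good_scores, incorrect_scores):
    """Kendall-tau-like correlation over paired metric scores.

    A pair is concordant when the metric scores the good translation
    strictly above the incorrect one; equal or lower scores count as
    discordant (i.e., against the metric).
    """
    concordant = sum(g > i for g, i in zip(good_scores, incorrect_scores))
    discordant = len(good_scores) - concordant
    return (concordant - discordant) / (concordant + discordant)

# A metric that ranks the good translation higher in 3 of 4 pairs
# (one tie counts as discordant): tau = (3 - 1) / 4 = 0.5
tau = kendall_tau_like([0.9, 0.8, 0.7, 0.5], [0.4, 0.6, 0.7, 0.2])
```

A perfect metric scores 1.0; a metric that never ranks the good translation higher scores -1.0.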
### Discussion of Biases
Some examples within the challenge set exhibit biases, however, this is necessary in order to expose the limitations of existing metrics.
### Other Known Limitations
The ACES challenge set exhibits a number of biases. Firstly, there is greater coverage in terms of phenomena and the number of examples for the en-de and en-fr language pairs. This is in part due to the manual effort required to construct examples for some phenomena, in particular, those belonging to the discourse-level and real-world knowledge categories. Further, our choice of language pairs is also limited to the ones available in XLM-R. Secondly, ACES contains more examples for those phenomena for which examples could be generated automatically, compared to those that required manual construction/filtering. Thirdly, some of the automatically generated examples require external libraries which are only available for a few languages (e.g. Multilingual Wordnet). Fourthly, the focus of the challenge set is on accuracy errors. We leave the development of challenge sets for fluency errors to future work.
As a result of using existing datasets as the basis for many of the examples, errors present in these datasets may be propagated through into ACES. Whilst we acknowledge that this is undesirable, in our methods for constructing the incorrect translation we aim to ensure that the quality of the incorrect translation is always worse than the corresponding good translation.
The results and analyses presented in the paper exclude those metrics submitted to the WMT 2022 metrics shared task that provide only system-level outputs. We focus on metrics that provide segment-level outputs, as this enables us to provide a broad overview of metric performance on different phenomenon categories and to conduct fine-grained analyses of performance on individual phenomena. For some of the fine-grained analyses, we apply additional constraints based on the language pairs covered by the metrics, or whether the metrics take the source as input, to address specific questions of interest. As a result of applying some of these additional constraints, our investigations tend to focus more on high- and medium-resource languages than on low-resource languages. We hope to address this shortcoming in future work.
## Additional Information
### Licensing Information
The ACES dataset is licensed under Creative Commons Attribution Non-Commercial Share Alike 4.0 (cc-by-nc-sa-4.0).
### Citation Information
```
@inproceedings{amrhein-aces-2022,
    title = "{ACES}: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics",
    author = {Amrhein, Chantal and
      Moghe, Nikita and
      Guillou, Liane},
    booktitle = "Seventh Conference on Machine Translation (WMT22)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    eprint = {2210.15615}
}
```
### Contact
[Chantal Amrhein](mailto:amrhein@cl.uzh.ch) and [Nikita Moghe](mailto:nikita.moghe@ed.ac.uk) and [Liane Guillou](mailto:lguillou@ed.ac.uk)
Dataset card based on [Allociné](https://huggingface.co/datasets/allocine) | [
-0.10984554886817932,
-0.6331301331520081,
0.3773006200790405,
0.18315859138965607,
-0.029584214091300964,
0.2194923609495163,
-0.17392832040786743,
-0.366132527589798,
0.44195136427879333,
0.4210565388202667,
-0.4592595100402832,
-0.7900373339653015,
-0.6906252503395081,
0.615149497985839... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nickmuchi/crowdsourced-movie-poster-diffusion-demo | nickmuchi | 2022-10-14T11:44:26Z | 16 | 0 | null | [
"region:us"
] | 2022-10-14T11:44:26Z | 2022-10-14T05:24:09.000Z | 2022-10-14T05:24:09 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pandaman2020/SDTraining | pandaman2020 | 2023-06-14T06:49:31Z | 16 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-06-14T06:49:31Z | 2022-10-14T09:14:41.000Z | 2022-10-14T09:14:41 | ---
license: cc-by-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AndyChiang/cloth | AndyChiang | 2022-10-14T14:10:37Z | 16 | 2 | null | [
"task_categories:fill-mask",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"cloze",
"mid-school",
"high-school",
"exams",
"region:us"
] | 2022-10-14T14:10:37Z | 2022-10-14T12:28:41.000Z | 2022-10-14T12:28:41 | ---
pretty_name: cloth
multilinguality:
- monolingual
language:
- en
license:
- mit
size_categories:
- 10K<n<100K
tags:
- cloze
- mid-school
- high-school
- exams
task_categories:
- fill-mask
---
# cloth
**CLOTH** is a dataset which is a collection of nearly 100,000 cloze questions from middle school and high school English exams. The detail of CLOTH dataset is shown below.
| Number of questions | Train | Valid | Test |
| ------------------- | ----- | ----- | ----- |
| **Middle school** | 22056 | 3273 | 3198 |
| **High school** | 54794 | 7794 | 8318 |
| **Total** | 76850 | 11067 | 11516 |
Source: https://www.cs.cmu.edu/~glai1/data/cloth/ | [
-0.8214318156242371,
-0.8657706379890442,
0.12847353518009186,
-0.06331002712249756,
-0.0010010297410190105,
-0.11522848904132843,
0.07353154569864273,
-0.007208327762782574,
0.16110217571258545,
0.5722755789756775,
-0.7796676158905029,
-0.7755882740020752,
-0.6228315234184265,
0.209264129... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
facebook/content_rephrasing | facebook | 2022-10-14T17:41:05Z | 16 | 10 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-14T17:41:05Z | 2022-10-14T17:25:22.000Z | 2022-10-14T17:25:22 | ---
license: cc-by-sa-4.0
---
## Message Content Rephrasing Dataset
Introduced by Einolghozati et al. in Sound Natural: Content Rephrasing in Dialog Systems https://aclanthology.org/2020.emnlp-main.414/
We introduce a new task of rephrasing for a more natural virtual assistant. Currently, virtual assistants work in the paradigm of intent-slot tagging and the slot values are directly passed as-is to the execution engine. However, this setup fails in some scenarios such as messaging, when the query given by the user needs to be changed before repeating it or sending it to another user. For example, for queries like 'ask my wife if she can pick up the kids' or 'remind me to take my pills', we need to rephrase the content to 'can you pick up the kids' and 'take your pills'. In this paper, we study the problem of rephrasing with messaging as a use case and release a dataset of 3000 pairs of original query and rephrased query. We show that BART, a pre-trained transformers-based masked language model with auto-regressive decoding, is a strong baseline for the task, and show improvements by adding a copy-pointer and copy loss to it. We analyze different trade-offs of BART-based and LSTM-based seq2seq models, and propose a distilled LSTM-based seq2seq as the best practical model.
| [
-0.3052766025066376,
-0.9426947832107544,
0.16419267654418945,
0.17672036588191986,
-0.6096519827842712,
-0.04909803345799446,
-0.06669089943170547,
-0.45813310146331787,
0.44791677594184875,
1.034664511680603,
-0.8474729657173157,
-0.21701662242412567,
-0.47549968957901,
0.365643918514251... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
omerist/arab-ds-mini | omerist | 2022-10-15T01:12:49Z | 16 | 0 | null | [
"region:us"
] | 2022-10-15T01:12:49Z | 2022-10-15T01:12:24.000Z | 2022-10-15T01:12:24 | ---
dataset_info:
features:
- name: title
dtype: string
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 87011869.13722204
num_examples: 27116
- name: validation
num_bytes: 9668342.001417983
num_examples: 3013
download_size: 49392988
dataset_size: 96680211.13864002
---
# Dataset Card for "arab-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9297677874565125,
-0.3482329547405243,
0.24657997488975525,
-0.047859083861112595,
-0.3203868269920349,
0.14210723340511322,
0.382888525724411,
-0.1373361051082611,
1.0059983730316162,
0.29322999715805054,
-0.9717768430709839,
-0.9089987874031067,
-0.7839136719703674,
-0.232788369059562... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
une/uneune_image1 | une | 2022-10-15T09:07:58Z | 16 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-15T09:07:58Z | 2022-10-15T08:41:22.000Z | 2022-10-15T08:41:22 | ---
license: cc-by-4.0
---
# Dataset Card for uneune_image1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
今まで私が描いたイラスト100枚のデータセットです。
512×512にトリミングしてあります。
さっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。
This is a data set of 100 illustrations I have drawn so far.
Cropped to 512x512.
I wanted a dataset that can be used for learning with stableDiffusion, so I made it. | [
-0.5258246660232544,
-0.3337080478668213,
-0.07538007944822311,
0.3074888288974762,
-0.7726276516914368,
-0.08494862914085388,
0.06953748315572739,
-0.06524297595024109,
0.8712189197540283,
0.5743721723556519,
-0.49751248955726624,
-0.8962045311927795,
-0.40755364298820496,
-0.139463603496... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhaluza/flagged-movie-poster-images | zhaluza | 2022-10-16T15:15:01Z | 16 | 0 | null | [
"region:us"
] | 2022-10-16T15:15:01Z | 2022-10-16T15:04:54.000Z | 2022-10-16T15:04:54 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SayagoDev/yo | SayagoDev | 2022-10-17T03:39:58Z | 16 | 0 | null | [
"region:us"
] | 2022-10-17T03:39:58Z | 2022-10-17T03:39:01.000Z | 2022-10-17T03:39:01 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
julianmoraes/doodles-captions-BLIP | julianmoraes | 2022-10-18T16:03:19Z | 16 | 2 | null | [
"region:us"
] | 2022-10-18T16:03:19Z | 2022-10-18T16:02:39.000Z | 2022-10-18T16:02:39 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SG4YK/yoneyama_mai_normal | SG4YK | 2022-10-18T16:29:42Z | 16 | 0 | null | [
"region:us"
] | 2022-10-18T16:29:42Z | 2022-10-18T16:29:13.000Z | 2022-10-18T16:29:13 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GrainsPolito/BBBicycles | GrainsPolito | 2022-10-20T11:14:59Z | 16 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-20T11:14:59Z | 2022-10-18T19:05:32.000Z | 2022-10-18T19:05:32 | ---
license: cc-by-nc-4.0
---
# Dataset Card for BBBicycles
## Dataset Summary
Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of **damaged object re-identification**, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview [here](https://huggingface.co/spaces/GrainsPolito/BBBicyclesPreview).
## Dataset Structure
The final dataset contains:
- Total of 39,200 image
- 2,800 unique IDs
- 20 models
- 140 IDs for each model
<table border-collapse="collapse">
<tr>
<td><b style="font-size:25px">Information for each ID:</b></td>
<td><b style="font-size:25px">Information for each render:</b></td>
</tr>
<tr>
<td>
<ul>
<li>Model</li>
<li>Type</li>
<li>Texture type</li>
<li>Stickers</li>
</ul>
</td>
<td>
<ul>
<li>Background</li>
<li>Viewing Side</li>
<li>Focal Length</li>
<li>Presence of dirt</li>
</ul>
</td>
</tr>
</table>
### Citation Information
```
@inproceedings{bbb_2022,
  title={Bent \& Broken Bicycles: Leveraging synthetic data for damaged object re-identification},
  author={Luca Piano and Filippo Gabriele Pratticò and Alessandro Sebastian Russo and Lorenzo Lanari and Lia Morra and Fabrizio Lamberti},
booktitle={2022 IEEE Winter Conference on Applications of Computer Vision (WACV)},
year={2022},
organization={IEEE}
}
```
### Credits
The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni. | [
-0.599943995475769,
-0.4216257631778717,
0.34386342763900757,
-0.13526052236557007,
-0.5755453109741211,
0.23211263120174408,
0.18162888288497925,
-0.8486849665641785,
0.32321733236312866,
0.27869856357574463,
-1.0488474369049072,
-0.6896496415138245,
-0.13727429509162903,
0.00272591691464... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awacke1/LOINC-CodeSet-Value-Description.csv | awacke1 | 2022-10-29T12:43:25Z | 16 | 1 | null | [
"license:mit",
"region:us"
] | 2022-10-29T12:43:25Z | 2022-10-18T19:08:21.000Z | 2022-10-18T19:08:21 | ---
license: mit
---
LOINC-CodeSet-Value-Description.csv | [
-0.02688765898346901,
0.1048889011144638,
-0.20654217898845673,
0.32136592268943787,
-0.5197839140892029,
0.13807706534862518,
-0.29052552580833435,
-0.2844024896621704,
0.973384439945221,
0.9673690795898438,
-0.3622044622898102,
-0.6944156885147095,
-0.5357659459114075,
-0.192650616168975... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
drt/kqa_pro | drt | 2022-10-20T19:35:20Z | 16 | 2 | null | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"k... | 2022-10-20T19:35:20Z | 2022-10-20T18:12:48.000Z | 2022-10-20T18:12:48 | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: KQA-Pro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- knowledge graph
- freebase
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for KQA Pro
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Configs](#data-configs)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs)
- [Knowledge Graph File](#knowledge-graph-file)
- [How to Submit to Leaderboard](#how-to-submit-results-of-test-set)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://thukeg.gitee.io/kqa-pro/
- **Repository:** https://github.com/shijx12/KQAPro_Baselines
- **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/)
- **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html
- **Point of Contact:** shijx12 at gmail dot com
### Dataset Summary
KQA Pro is a large-scale dataset of complex question answering over a knowledge base. The questions are very diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, set operations, etc. Strong supervision in the form of a SPARQL query and a program is provided for each question.
### Supported Tasks and Leaderboards
It supports knowledge-graph-based question answering. Specifically, it provides a SPARQL query and a *program* for each question.
### Languages
English
## Dataset Structure
**train.json/val.json**
```
[
{
'question': str,
'sparql': str, # executable in our virtuoso engine
'program':
[
{
'function': str, # function name
'dependencies': [int], # functional inputs, representing indices of the preceding functions
'inputs': [str], # textual inputs
}
],
'choices': [str], # 10 answer choices
'answer': str, # golden answer
}
]
```
**test.json**
```
[
{
'question': str,
'choices': [str], # 10 answer choices
}
]
```
### Data Configs
This dataset has two configs: `train_val` and `test` because they have different available fields. Please specify this like `load_dataset('drt/kqa_pro', 'train_val')`.
### Data Splits
train, val, test
## Additional Information
### Knowledge Graph File
You can find the knowledge graph file `kb.json` in the original github repository. It comes with the format:
```json
{
'concepts':
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
}
},
'entities': # excluding concepts
{
'<id>':
{
'name': str,
'instanceOf': ['<id>', '<id>'], # ids of parent concept
'attributes':
[
{
'key': str, # attribute key
'value': # attribute value
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date
'unit': str, # for quantity
},
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
'relations':
[
{
'predicate': str,
'object': '<id>', # NOTE: it may be a concept id
'direction': 'forward'/'backward',
'qualifiers':
{
'<qk>': # qualifier key, one key may have multiple corresponding qualifier values
[
{
'type': 'string'/'quantity'/'date'/'year',
'value': float/int/str,
'unit': str,
}, # the format of qualifier value is similar to attribute value
]
}
},
]
}
}
}
```
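As an illustration of this format, the following sketch (using a hypothetical two-concept toy KB, not the real `kb.json`) follows `instanceOf` links to collect all ancestor concepts of an entity:

```python
# Toy KB in the kb.json format above (attributes/relations omitted for brevity).
toy_kb = {
    "concepts": {
        "c1": {"name": "human", "instanceOf": ["c2"]},
        "c2": {"name": "animal", "instanceOf": []},
    },
    "entities": {
        "e1": {"name": "Ada Lovelace", "instanceOf": ["c1"]},
    },
}

def ancestor_concepts(kb, entity_id):
    """Return the names of all concepts an entity transitively belongs to."""
    seen, stack = [], list(kb["entities"][entity_id]["instanceOf"])
    while stack:
        cid = stack.pop()
        if cid in seen:
            continue
        seen.append(cid)
        # Parent concepts may themselves have parents.
        stack.extend(kb["concepts"][cid]["instanceOf"])
    return [kb["concepts"][cid]["name"] for cid in seen]

print(ancestor_concepts(toy_kb, "e1"))  # ['human', 'animal']
```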
### How to run SPARQLs and programs
We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), which includes a supervised SPARQL parser and program parser.
In the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git).
You can install the engine based on our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md), and then feed your predicted SPARQL to get the answer.
In the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer.
Detailed introductions of our functions can be found in our [paper](https://arxiv.org/abs/2007.03875).
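To make the program format concrete, here is a minimal toy executor, not the official one from the baselines repository, that evaluates a program as a sequence of functions wired together through their `dependencies` indices. The two functions used here (`Find`, `Count`) are illustrative stand-ins, not the actual KQA Pro function inventory:

```python
def execute_program(program, registry):
    """Evaluate functions in order; each receives the results of its
    dependencies (indices of preceding functions) followed by its
    textual inputs, and the last result is the answer."""
    results = []
    for step in program:
        dep_values = [results[i] for i in step["dependencies"]]
        results.append(registry[step["function"]](*dep_values, *step["inputs"]))
    return results[-1]

# Illustrative function registry over a tiny toy "KB".
toy_entities = {"river": ["Nile", "Amazon"], "city": ["Cairo"]}
registry = {
    "Find": lambda name: toy_entities[name],
    "Count": lambda entities: len(entities),
}

program = [
    {"function": "Find", "dependencies": [], "inputs": ["river"]},
    {"function": "Count", "dependencies": [0], "inputs": []},
]
print(execute_program(program, registry))  # 2
```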
### How to submit results of test set
You need to predict answers for all questions of the test set and write them to a text file **in order**, one per line.
Here is an example:
```
Tron: Legacy
Palm Beach County
1937-03-01
The Queen
...
```
Then you need to send the prediction file to us by email <caosl19@mails.tsinghua.edu.cn>, we will reply to you with the performance as soon as possible.
To appear on the leaderboard, you also need to provide the following information:
- model name
- affiliation
- open-ended or multiple-choice
- whether use the supervision of SPARQL in your model or not
- whether use the supervision of program in your model or not
- single model or ensemble model
- (optional) paper link
- (optional) code link
### Licensing Information
MIT License
### Citation Information
If you find our dataset is helpful in your work, please cite us by
```
@inproceedings{KQAPro,
title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
booktitle={ACL'22},
year={2022}
}
```
### Contributions
Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
| [
-0.5601772665977478,
-0.7715224027633667,
0.28838562965393066,
-0.0048738219775259495,
0.06143983080983162,
0.01806468330323696,
0.006296860985457897,
-0.21196267008781433,
0.09223771840333939,
0.6258013844490051,
-0.8231831192970276,
-0.7065662741661072,
-0.38322070240974426,
-0.088434107... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963398 | autoevaluate | 2022-10-24T08:46:56Z | 16 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-24T08:46:56Z | 2022-10-23T21:00:00.000Z | 2022-10-23T21:00:00 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/neqa0_8shot
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-66b_eval
metrics: []
dataset_name: jeffdshen/neqa0_8shot
dataset_config: jeffdshen--neqa0_8shot
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | [
-0.369832307100296,
-0.31198933720588684,
0.3456975221633911,
-0.07381800562143326,
-0.05361565947532654,
-0.18964353203773499,
0.006521324627101421,
-0.37203478813171387,
0.052444517612457275,
0.44498053193092346,
-0.9589676856994629,
-0.24167856574058533,
-0.6663673520088196,
0.029928218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saldera/lahore_booth | saldera | 2022-10-24T10:36:59Z | 16 | 0 | null | [
"region:us"
] | 2022-10-24T10:36:59Z | 2022-10-24T10:35:51.000Z | 2022-10-24T10:35:51 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/Named_Entities_Lexicon | arbml | 2022-11-02T15:01:29Z | 16 | 1 | null | [
"region:us"
] | 2022-11-02T15:01:29Z | 2022-10-25T17:00:26.000Z | 2022-10-25T17:00:26 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BrainArtLabs/LiminalSourceDiffusionV1 | BrainArtLabs | 2022-10-25T18:08:28Z | 16 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-25T18:08:28Z | 2022-10-25T17:57:08.000Z | 2022-10-25T17:57:08 | ---
license: cc-by-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lipaoMai/drug_one_1dataset | lipaoMai | 2022-10-25T20:27:56Z | 16 | 0 | null | [
"region:us"
] | 2022-10-25T20:27:56Z | 2022-10-25T20:27:48.000Z | 2022-10-25T20:27:48 | ---
dataset_info:
features:
- name: patient_id
dtype: int64
- name: drugName
dtype: string
- name: condition
dtype: string
- name: review
dtype: string
- name: rating
dtype: float64
- name: date
dtype: string
- name: usefulCount
dtype: int64
splits:
- name: test
num_bytes: 28367208
num_examples: 53471
- name: train
num_bytes: 85172055
num_examples: 160398
download_size: 63481104
dataset_size: 113539263
---
# Dataset Card for "drug_one_1dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.26572296023368835,
-0.36788856983184814,
0.19770857691764832,
0.12678301334381104,
-0.19693703949451447,
-0.06126369908452034,
0.49204495549201965,
0.20785798132419586,
1.180346965789795,
0.6685841679573059,
-1.0450732707977295,
-0.9920985102653503,
-0.7788481712341309,
-0.1080392599105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/Arabic_Stories_Corpus | arbml | 2022-10-25T23:28:10Z | 16 | 0 | null | [
"region:us"
] | 2022-10-25T23:28:10Z | 2022-10-25T23:27:57.000Z | 2022-10-25T23:27:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ghomasHudson/muld_NarrativeQA | ghomasHudson | 2022-11-02T12:24:41Z | 16 | 0 | null | [
"region:us"
] | 2022-11-02T12:24:41Z | 2022-11-02T12:17:00.000Z | 2022-11-02T12:17:00 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
sequence: string
splits:
- name: test
num_bytes: 3435452065
num_examples: 10143
- name: train
num_bytes: 11253796383
num_examples: 32747
- name: validation
num_bytes: 1176625993
num_examples: 3373
download_size: 8819172017
dataset_size: 15865874441
---
# Dataset Card for "muld_NarrativeQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5395896434783936,
-0.3445785939693451,
0.44645747542381287,
0.08234001696109772,
-0.032180942595005035,
0.2698554992675781,
0.31396225094795227,
-0.06799497455358505,
0.8615339994430542,
0.6368306279182434,
-0.9904688000679016,
-0.8062136769294739,
-0.6095128059387207,
-0.44632339477539... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jpwahle/autoencoder-paraphrase-dataset | jpwahle | 2022-11-18T17:26:00Z | 16 | 2 | are-neural-language-models-good-plagiarists-a | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"bert",
"roberta"... | 2022-11-18T17:26:00Z | 2022-11-06T08:28:10.000Z | 2022-11-06T08:28:10 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Autoencoder Paraphrase Dataset (BERT, RoBERTa, Longformer)
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- bert
- roberta
- longformer
- plagiarism
- paraphrase
- academic integrity
- arxiv
- wikipedia
- theses
task_categories:
- text-classification
- text-generation
task_ids: []
paperswithcode_id: are-neural-language-models-good-plagiarists-a
dataset_info:
- split: train
download_size: 2980464
dataset_size: 2980464
- split: test
download_size: 1690032
dataset_size: 1690032
---
# Dataset Card for Machine Paraphrase Dataset (MPC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** https://ieeexplore.ieee.org/document/9651895
- **Total size:** 2.23 GB
- **Train size:** 1.52 GB
- **Test size:** 861 MB
### Dataset Summary
The Autoencoder Paraphrase Corpus (APC) consists of ~200k examples of original and paraphrased paragraphs, with paraphrases generated by three neural language models.
It uses three models (BERT, RoBERTa, Longformer) on three source texts (Wikipedia, arXiv, student theses).
The examples are aligned, i.e., we sample the same paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the `load_dataset` function:
```python
from datasets import load_dataset
ds = load_dataset("jpwahle/autoencoder-paraphrase-dataset")
print(ds["train"][0])
# Output:
{
'text': 'War memorial formally unveiled on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in attendance At the unveiling ceremony Captain Fortescue gave a speech during wherein he announced that 11 600 men and women from Devon had been inval while serving in imperialist war He later stated that some 63 700 8 000 regulars 36 700 volunteers 19 000 conscripts had served in the armed forces The heroism of the dead are recorded on a roll of honour of which three copies were made one for Exeter Cathedral one To be held by Tasman county council and another honoring the Prince of Wales placed in a hollow in bedrock base of the war memorial The princes visit generated considerable excitement in the area Thousands of spectators lined the street to greet his motorcade and shops on Market High Street hung out banners with welcoming messages After the unveiling Edward spent ten days touring the local area',
'label': 1,
'dataset': 'wikipedia',
'method': 'longformer'
}
```
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
'text': 'War memorial formally unveiled on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in attendance At the unveiling ceremony Captain Fortescue gave a speech during wherein he announced that 11 600 men and women from Devon had been inval while serving in imperialist war He later stated that some 63 700 8 000 regulars 36 700 volunteers 19 000 conscripts had served in the armed forces The heroism of the dead are recorded on a roll of honour of which three copies were made one for Exeter Cathedral one To be held by Tasman county council and another honoring the Prince of Wales placed in a hollow in bedrock base of the war memorial The princes visit generated considerable excitement in the area Thousands of spectators lined the street to greet his motorcade and shops on Market High Street hung out banners with welcoming messages After the unveiling Edward spent ten days touring the local area',
'label': 1,
'dataset': 'wikipedia',
'method': 'longformer'
}
```
### Data Fields
| Feature | Description |
| --- | --- |
| `text` | The paragraph of text (original or paraphrased). |
| `label` | Whether it is a paraphrase (1) or the original (0). |
| `dataset` | The source dataset (Wikipedia, arXiv, or theses). |
| `method` | The method used (bert, roberta, longformer). |
### Data Splits
- train (Wikipedia x [bert, roberta, longformer])
- test ([Wikipedia, arXiv, theses] x [bert, roberta, longformer])
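Because the `dataset` and `method` fields identify the source corpus and the paraphrasing model, per-source and per-model evaluation subsets can be built with a simple filter. A minimal sketch with toy records that follow the card's schema (these are illustrative rows, not actual corpus content):

```python
# Toy records mirroring the schema: text, label, dataset, method.
records = [
    {"text": "Original paragraph.", "label": 0, "dataset": "wikipedia", "method": "bert"},
    {"text": "Paraphrased paragraph.", "label": 1, "dataset": "wikipedia", "method": "bert"},
    {"text": "Paraphrased paragraph.", "label": 1, "dataset": "arxiv", "method": "longformer"},
    {"text": "Paraphrased paragraph.", "label": 1, "dataset": "theses", "method": "roberta"},
]

def subset(records, dataset=None, method=None):
    """Select examples by source corpus and/or paraphrasing model."""
    return [
        r for r in records
        if (dataset is None or r["dataset"] == dataset)
        and (method is None or r["method"] == method)
    ]

wikipedia_bert = subset(records, dataset="wikipedia", method="bert")
print(len(wikipedia_bert))  # 2
```

The same filter applies unchanged to the loaded splits, since each row carries both fields.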
## Dataset Creation
### Curation Rationale
Providing a resource for testing against autoencoder-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
- Paragraphs from `featured articles` from the English Wikipedia dump
- Paragraphs from full-text PDFs of arXMLiv
- Paragraphs from full-text PDFs of Czech student theses (bachelor's, master's, PhD).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The Autoencoder Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Citation Information
```bib
@inproceedings{9651895,
title = {Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection},
author = {Wahle, Jan Philip and Ruas, Terry and Meuschke, Norman and Gipp, Bela},
year = 2021,
booktitle = {2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL)},
volume = {},
number = {},
pages = {226--229},
doi = {10.1109/JCDL52503.2021.00065}
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | [
-0.5163741707801819,
-0.63018399477005,
0.5667062401771545,
0.22786091268062592,
-0.3646954298019409,
-0.2621479332447052,
0.07608691602945328,
-0.0036604374181479216,
0.3453373610973358,
0.6651423573493958,
-0.33485114574432373,
-0.5686833262443542,
-0.6264593005180359,
0.4236187338829040... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jpwahle/autoregressive-paraphrase-dataset | jpwahle | 2022-11-19T12:14:43Z | 16 | 1 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"plagiarism",
"pa... | 2022-11-19T12:14:43Z | 2022-11-06T08:28:27.000Z | 2022-11-06T08:28:27 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Machine Paraphrase Dataset (T5, GPT-3)
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- plagiarism
- paraphrase
- academic integrity
- arxiv
- wikipedia
- theses
- bert
- roberta
- t5
- gpt-3
task_categories:
- text-classification
- text-generation
task_ids: []
---
# Dataset Card for the Machine Paraphrase Dataset (T5, GPT-3)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size:** 163MB
- **Repository:** https://github.com/jpwahle/emnlp22-transforming
- **Paper:** https://arxiv.org/abs/2210.03568
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.5333082675933838,
-0.5757511258125305,
0.11405634135007858,
0.2551368176937103,
-0.17742060124874115,
0.18479833006858826,
-0.42138564586639404,
-0.3384806215763092,
0.7586318850517273,
0.7304631471633911,
-0.9892981648445129,
-1.1250981092453003,
-0.6845178008079529,
0.1834729909896850... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_squad | lmqg | 2022-12-18T07:39:03Z | 16 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_squad",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T07:39:03Z | 2022-11-11T14:12:30.000Z | 2022-11-11T14:12:30 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: en
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_squad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_squad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on SQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"4 Minutes\" was released as the album's lead single and peaked at number three on the Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist with the most top-ten hits. In the UK she retained her record for the most number-one singles for a female artist; \"4 Minutes\" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from Recording Industry Association of Japan, the most for any artist. To further promote the album, Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million, it became the highest-grossing tour by a solo artist then, surpassing the previous record Madonna set with the Confessions Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European dates, and after it ended, the total gross was $408 million.",
"questions": [
"Which single was released as the album's lead single?",
"Madonna surpassed which artist with the most top-ten hits?",
"4 minutes became Madonna's which number one single in the UK?",
"What is the name of the first tour with Live Nation?",
"How much did Stick and Sweet Tour grossed?"
],
"answers": [
"4 Minutes",
"Elvis Presley",
"thirteenth",
"Sticky & Sweet Tour",
"$280 million,"
],
"questions_answers": "question: Which single was released as the album's lead single?, answer: 4 Minutes | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley | question: 4 minutes became Madonna's which number one single in the UK?, answer: thirteenth | question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour | question: How much did Stick and Sweet Tour grossed?, answer: $280 million,"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
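The `questions_answers` field serializes all question–answer pairs of a paragraph into a single string, with pairs joined by ` | `. A small sketch of recovering the pairs (this parser is an assumption based on the format shown above, not an official utility of the dataset):

```python
def parse_questions_answers(serialized: str):
    """Split the serialized questions_answers field back into (question, answer) pairs."""
    pairs = []
    for chunk in serialized.split(" | "):
        # Split on the last ", answer: " so commas inside the question survive.
        question, answer = chunk.rsplit(", answer: ", 1)
        pairs.append((question.removeprefix("question: "), answer))
    return pairs

example = (
    "question: Which single was released as the album's lead single?, answer: 4 Minutes"
    " | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley"
)
print(parse_questions_answers(example))
```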
## Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 16462 |       2067 | 2429 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.5799385905265808,
-1.0390466451644897,
0.1237441822886467,
-0.02740384079515934,
-0.16028046607971191,
0.083236925303936,
-0.09154445677995682,
-0.2865259647369385,
0.30437374114990234,
0.5557749271392822,
-0.8849124312400818,
-0.3772180378437042,
-0.143138125538826,
0.2094072699546814,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BridgeQZH/xi_diversity | BridgeQZH | 2022-11-11T22:55:28Z | 16 | 0 | null | [
"license:openrail",
"region:us"
] | 2022-11-11T22:55:28Z | 2022-11-11T22:49:52.000Z | 2022-11-11T22:49:52 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ClemenKok/digimon-blip-captions | ClemenKok | 2022-11-13T02:08:54Z | 16 | 0 | null | [
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"digimon",
"region:us"
] | 2022-11-13T02:08:54Z | 2022-11-13T00:27:54.000Z | 2022-11-13T00:27:54 | ---
annotations_creators:
- machine-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: '1,071 BLIP captioned images of Digimon. '
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- digimon
task_categories: []
task_ids: []
---
# Dataset Card for Digimon BLIP captions
This project was inspired by the [labelled Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
The captions were generated using the BLIP Model found in the [LAVIS Library for Language-Vision Intelligence](https://github.com/salesforce/LAVIS).
Like the Pokemon equivalent, each row in the dataset contains the `image` and `text` keys. `image` is a JPEG of varying dimensions, and `text` is the corresponding text caption.
## Citation
If you use this dataset, please cite it as:
```
@misc{clemen2022digimon,
author = {Kok, Clemen},
title = {Digimon BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/ClemenKok/digimon-lavis-captions/}}
}
``` | [
-0.18872208893299103,
-0.16902591288089752,
0.30174535512924194,
0.2949482202529907,
-0.6474601030349731,
0.24391412734985352,
-0.026868196204304695,
-0.501562237739563,
0.8535083532333374,
0.6507309675216675,
-0.7368024587631226,
-0.6134599447250366,
-0.23116512596607208,
0.22922478616237... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bio_simlex | bigbio | 2022-12-22T15:43:27Z | 16 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:43:27Z | 2022-11-13T22:06:24.000Z | 2022-11-13T22:06:24 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Bio-SimLex
homepage: https://github.com/cambridgeltl/bio-simverb
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for Bio-SimLex
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
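Intrinsic evaluation with a resource like Bio-SimLex typically means correlating a model's cosine similarities with the human similarity ratings for the same word pairs. A minimal sketch with toy two-dimensional vectors and made-up ratings (not actual Bio-SimLex content):

```python
import math

# Toy "embeddings" and invented human ratings -- illustrative only.
embeddings = {
    "cell": (1.0, 0.0),
    "nucleus": (0.9, 0.1),
    "car": (0.0, 1.0),
    "gene": (0.8, 0.2),
    "protein": (0.7, 0.3),
}
human_pairs = [("cell", "nucleus", 8.0), ("cell", "car", 1.0), ("gene", "protein", 6.0)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; fine for this toy case)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in human_pairs]
human_scores = [score for _, _, score in human_pairs]
rho = spearman(model_scores, human_scores)
print(rho)
```

A higher correlation indicates that the representation model ranks word-pair similarity closer to the human judgments.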
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
| [
-0.08314336836338043,
-0.35609692335128784,
0.6196287274360657,
0.17321930825710297,
-0.20564773678779602,
0.02978319115936756,
-0.13531823456287384,
-0.2526159882545471,
0.2473318874835968,
-0.02368207648396492,
-0.505046546459198,
-0.7167311906814575,
-0.6960605382919312,
0.4655160605907... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bioasq_task_c_2017 | bigbio | 2022-12-22T15:43:32Z | 16 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:32Z | 2022-11-13T22:06:31.000Z | 2022-11-13T22:06:31 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NLM_LICENSE
pretty_name: BioASQ Task C 2017
homepage: http://participants-area.bioasq.org/general_information/Task5c/
bigbio_pubmed: True
bigbio_public: False
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for BioASQ Task C 2017
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/general_information/Task5c/
- **Pubmed:** True
- **Public:** False
- **Tasks:** TXTCLASS
The training data set for this task contains annotated biomedical articles
published in PubMed and the corresponding full text from PMC. "Annotated" means
that grant IDs and the corresponding grant agencies have been identified in the
full text of the articles.
## Citation Information
```
@article{nentidis-etal-2017-results,
title = {Results of the fifth edition of the {B}io{ASQ} Challenge},
author = {
Nentidis, Anastasios and Bougiatiotis, Konstantinos and Krithara,
Anastasia and Paliouras, Georgios and Kakadiaris, Ioannis
},
    year = 2017,
journal = {},
volume = {BioNLP 2017},
doi = {10.18653/v1/W17-2306},
url = {https://aclanthology.org/W17-2306},
biburl = {},
bibsource = {https://aclanthology.org/W17-2306}
}
```
| [
0.00855413731187582,
-0.16777245700359344,
0.39833399653434753,
0.07780109345912933,
-0.36046820878982544,
0.1779107302427292,
0.10159121453762054,
-0.5332242250442505,
0.32831236720085144,
0.283935010433197,
-0.6026720404624939,
-0.834822416305542,
-0.5089280605316162,
0.5287265181541443,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/medical_data | bigbio | 2022-12-22T15:45:28Z | 16 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:45:28Z | 2022-11-13T22:09:35.000Z | 2022-11-13T22:09:35 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Medical Data
homepage:
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXTUAL_ENTAILMENT
---
# Dataset Card for Medical Data
## Dataset Description
- **Homepage:**
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
This dataset is designed for multiclass sentiment classification of medical drug reviews.
## Citation Information
```
@misc{ask9medicaldata,
author = {Khan, Arbaaz},
title = {Sentiment Analysis for Medical Drugs},
year = {2019},
url = {https://www.kaggle.com/datasets/arbazkhan971/analyticvidhyadatasetsentiment},
}
```
| [
-0.13148072361946106,
-0.4640921950340271,
0.29649490118026733,
0.1370021551847458,
-0.3234930634498596,
0.22538088262081146,
-0.1755409836769104,
-0.2892901599407196,
0.4546467363834381,
0.40756523609161377,
-0.5165191292762756,
-1.1059151887893677,
-0.7189273834228516,
-0.012509153224527... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/psytar | bigbio | 2022-12-22T15:46:20Z | 16 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:46:20Z | 2022-11-13T22:11:38.000Z | 2022-11-13T22:11:38 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: PsyTAR
homepage: https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- TEXT_CLASSIFICATION
---
# Dataset Card for PsyTAR
## Dataset Description
- **Homepage:** https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,TXTCLASS
The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains 891 drugs
reviews posted by patients on "askapatient.com", about the effectiveness and adverse
drug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR.
This dataset can be used for (multi-label) sentence classification of Adverse Drug
Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug
Indications (DIs), Drug Effectiveness (EF), Drug Ineffectiveness (INF) and Others, as well
as for recognition of 5 different types of named entity (in the categories ADRs, WDs,
SSIs and DIs)
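For the multi-label sentence classification setting, the sentence-level categories are usually encoded as a multi-hot vector. A small illustration (the label order here is an assumption for the sketch, not prescribed by the corpus):

```python
# Assumed category order for illustration only.
CATEGORIES = ["ADR", "WD", "SSI", "DI", "EF", "INF", "Other"]

def multi_hot(sentence_labels):
    """Encode the set of categories assigned to one sentence as a 0/1 vector."""
    present = set(sentence_labels)
    return [1 if c in present else 0 for c in CATEGORIES]

print(multi_hot(["ADR", "WD"]))  # [1, 1, 0, 0, 0, 0, 0]
```

A sentence can then carry several positive labels at once, which is what distinguishes this task from plain multi-class classification.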
## Citation Information
```
@article{Zolnoori2019,
author = {Maryam Zolnoori and
Kin Wah Fung and
Timothy B. Patrick and
Paul Fontelo and
Hadi Kharrazi and
Anthony Faiola and
Yi Shuan Shirley Wu and
Christina E. Eldredge and
Jake Luo and
Mike Conway and
Jiaxi Zhu and
Soo Kyung Park and
Kelly Xu and
Hamideh Moayyed and
Somaieh Goudarzvand},
title = {A systematic approach for developing a corpus of patient reported adverse drug events: A case study for {SSRI} and {SNRI} medications},
journal = {Journal of Biomedical Informatics},
volume = {90},
year = {2019},
url = {https://doi.org/10.1016/j.jbi.2018.12.005},
doi = {10.1016/j.jbi.2018.12.005},
}
```
| [
-0.19269736111164093,
-0.3706826865673065,
0.5996590256690979,
0.1402580887079239,
-0.19436272978782654,
-0.2118706852197647,
-0.14841127395629883,
-0.3113315999507904,
0.47397229075431824,
0.4466753900051117,
-0.2918415069580078,
-0.8791342377662659,
-0.6527733206748962,
0.167502745985984... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
salmonhumorous/logo-blip-caption | salmonhumorous | 2022-11-16T19:35:54Z | 16 | 4 | null | [
"region:us"
] | 2022-11-16T19:35:54Z | 2022-11-16T19:35:45.000Z | 2022-11-16T19:35:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 24808769.89
num_examples: 1435
download_size: 24242906
dataset_size: 24808769.89
---
# Dataset Card for "logo-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5561331510543823,
-0.25183793902397156,
0.006064141169190407,
0.4693876802921295,
-0.2745356261730194,
0.2838570773601532,
0.2154090702533722,
-0.506100058555603,
0.9843161702156067,
0.4230389893054962,
-0.8067166805267334,
-0.7482252717018127,
-0.7605422139167786,
-0.18470405042171478,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PlanTL-GOB-ES/WikiCAT_esv2 | PlanTL-GOB-ES | 2023-07-27T09:13:16Z | 16 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:automatically-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-07-27T09:13:16Z | 2022-11-18T10:18:53.000Z | 2022-11-18T10:18:53 | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_esv2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_es: Spanish Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
**Repository**
### Dataset Summary
WikiCAT_es is a Spanish corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 8401 articles from the Spanish Wikipedia classified under 12 different categories.
This dataset was developed by BSC TeMU as part of the PlanTL project, and intended as an evaluation of LT capabilities to generate useful synthetic corpus.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
ES- Spanish
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple model with the article text and associated labels, without further metadata.
#### Example:
<pre>
{'sentence': 'La economía de Reunión se ha basado tradicionalmente en la agricultura. La caña de azúcar ha sido el cultivo principal durante más de un siglo, y en algunos años representa el 85% de las exportaciones. El gobierno ha estado impulsando el desarrollo de una industria turística para aliviar el alto desempleo, que representa más del 40% de la fuerza laboral.(...) El PIB total de la isla fue de 18.800 millones de dólares EE.UU. en 2007., 'label': 'Economía'}
</pre>
#### Labels
'Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía'
### Data Splits
* hfeval_esv5.json: 1681 label-document pairs
* hftrain_esv5.json: 6716 label-document pairs
## Dataset Creation
### Methodology
The "Category" pages represent the topics.
For each topic, we extract the pages associated with that first level of the hierarchy, and use the summary as the representative text.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are thematic categories in the different Wikipedias
#### Who are the source language producers?
### Annotations
#### Annotation process
Automatic annotation
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Spanish.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| [
-0.27928411960601807,
-0.5200899839401245,
0.18167729675769806,
0.39228686690330505,
-0.13623230159282684,
0.11135414987802505,
-0.36725130677223206,
-0.3818866014480591,
0.5398956537246704,
0.4615843892097473,
-0.5974758267402649,
-0.9854397773742676,
-0.6938670873641968,
0.38680553436279... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Robo0890/RosiePosey | Robo0890 | 2022-11-19T20:53:39Z | 16 | 0 | null | [
"region:us"
] | 2022-11-19T20:53:39Z | 2022-11-19T20:36:08.000Z | 2022-11-19T20:36:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdithyaSNair/disease | AdithyaSNair | 2022-11-20T13:52:13Z | 16 | 0 | null | [
"region:us"
] | 2022-11-20T13:52:13Z | 2022-11-20T11:04:03.000Z | 2022-11-20T11:04:03 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bugjuhjugjyy/t | Bugjuhjugjyy | 2022-11-21T00:10:24Z | 16 | 0 | null | [
"region:us"
] | 2022-11-21T00:10:24Z | 2022-11-21T00:09:37.000Z | 2022-11-21T00:09:37 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_vi-b5257d-2174969944 | autoevaluate | 2022-11-21T05:09:21Z | 16 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T05:09:21Z | 2022-11-21T04:36:14.000Z | 2022-11-21T04:36:14 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: futin/feed
dataset_config: top_vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/feed
* Config: top_vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.20452076196670532,
-0.3816588521003723,
0.4341752827167511,
0.07783662527799606,
0.08465936779975891,
-0.165145605802536,
-0.00278214062564075,
-0.3964948058128357,
0.06752713024616241,
0.318744033575058,
-0.9895274639129639,
-0.24432796239852905,
-0.7298749685287476,
-0.056784879416227... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dlproject/msp_train_hubert_large | dlproject | 2022-11-21T07:13:29Z | 16 | 0 | null | [
"region:us"
] | 2022-11-21T07:13:29Z | 2022-11-21T07:08:21.000Z | 2022-11-21T07:08:21 | ---
dataset_info:
features:
- name: input_values
sequence:
sequence:
sequence: float32
- name: attention_mask
sequence:
sequence: int32
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 10873164220
num_examples: 29939
download_size: 9851610543
dataset_size: 10873164220
---
# Dataset Card for "msp_train_hubert_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5216963291168213,
0.044254057109355927,
0.31042802333831787,
0.39633840322494507,
-0.07684127986431122,
-0.07652904093265533,
0.011730668134987354,
-0.0055874669924378395,
0.9585094451904297,
0.41705554723739624,
-0.7383705973625183,
-0.3986664414405823,
-0.6634520888328552,
-0.26052045... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-e2e28e52-014f-41d6-a473-008f1f5e4d3d-6058 | autoevaluate | 2022-11-21T14:15:39Z | 16 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T14:15:39Z | 2022-11-21T14:15:02.000Z | 2022-11-21T14:15:02 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/conll2003-sample
eval_info:
task: entity_extraction
model: autoevaluate/entity-extraction
metrics: []
dataset_name: autoevaluate/conll2003-sample
dataset_config: autoevaluate--conll2003-sample
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: autoevaluate/entity-extraction
* Dataset: autoevaluate/conll2003-sample
* Config: autoevaluate--conll2003-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.4288422763347626,
-0.26621875166893005,
0.14720284938812256,
0.1712748259305954,
-0.10964198410511017,
-0.041024357080459595,
0.03740628436207771,
-0.5576199889183044,
0.16876348853111267,
0.339834064245224,
-0.8970293402671814,
-0.2609933316707611,
-0.7502707242965698,
0.04434997588396... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lm4pt/bpsad | lm4pt | 2022-11-23T19:20:11Z | 16 | 3 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:sentiment-analysis",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:pt",
"license:unknown",
... | 2022-11-23T19:20:11Z | 2022-11-21T15:37:12.000Z | 2022-11-21T15:37:12 | ---
annotations_creators: []
language:
- pt
language_creators:
- other
license:
- unknown
multilinguality:
- monolingual
pretty_name: bpsad
size_categories:
- 1M<n<10M
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
- sentiment-scoring
- sentiment-analysis
---
# Dataset Card for Brazilian Portuguese Sentiment Analysis Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle Dataset](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets)
- **Paper:** [Sentiment Analysis on Brazilian Portuguese User Reviews](https://ieeexplore.ieee.org/abstract/document/9769838)
- **Point of Contact:** [Frederico Dias Souza](mailto:fredericods@poli.ufrj.br)
### Dataset Summary
**Disclaimer:** *The team releasing the dataset did not write a dataset card
for this dataset so this dataset card has been written by the contributors.*
The Brazilian Portuguese Sentiment Analysis Dataset (BPSAD) is composed of
the concatenation of five different sources (Olist, B2W Digital, Buscapé,
UTLC-Apps and UTLC-Movies). Each source consists of review sentences
classified according to polarity (0: negative; 1: positive) and rating
(1, 2, 3, 4 and 5 stars).
This dataset requires manual download:
1. Download the `concatenated` file from the dataset homepage.
2. Extract the file inside `<path/to/manual/data>`.
3. Load the dataset using the command:
```python
datasets.load_dataset(
path="lm4pt/bpsad",
name='<polarity|rating>',
data_dir='<path/to/manual/data>')
```
A detailed description of the dataset and the processing steps can be
found at the [dataset homepage](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets).
### Supported Tasks and Leaderboards
The dataset contains two configurations that represent the possible tasks
related to sentiment analysis. Polarity classification is a binary
problem in which sentences must be classified as positive (1) or negative (0)
reviews. Rating prediction is a multiclass problem with values ranging
from 1 to 5 stars.
### Languages
The texts are in Brazilian Portuguese, as written by users of different e-commerce platforms and the Filmow social network.
## Dataset Structure
### Data Instances
#### polarity
```
{
"review_text": "Bem macio e felpudo...recomendo. Preço imbatível e entrega rápida. Compraria outro quando precisar",
"polarity": 1
}
```
#### rating
```
{
"review_text": "Bem macio e felpudo...recomendo. Preço imbatível e entrega rápida. Compraria outro quando precisar",
"rating": 4
}
```
### Data Fields
#### polarity
- `review_text`: a `string` feature with product or movie review.
- `polarity`: an `integer` value that represents positive (1) or negative (0) reviews.
#### rating
- `review_text`: a `string` feature with product or movie review.
- `rating`: an `integer` value that represents the number of stars given by the reviewer. Possible values are 1, 2, 3, 4 and 5.
### Data Splits
Data splits are created from the original `kfold` column of each configuration, following the original authors' recommendations:
- train: folds 1 to 8
- validation: fold 9
- test: fold 10
| | train | validation | test |
|----------|--------:|-----------:|-------:|
| polarity | 1908937 | 238614 | 238613 |
| rating | 2228877 | 278608 | 278607 |
More information about sentence size and label distribution can be found in the [dataset homepage](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets).
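The fold-to-split assignment above can be sketched as a small helper. This is an illustrative sketch rather than the loader's actual code; only the `kfold` column name and the fold assignment come from the description above.

```python
def split_from_kfold(kfold: int) -> str:
    """Map the original `kfold` column (1-10) to a split name,
    following the authors' recommendation: folds 1-8 train,
    fold 9 validation, fold 10 test."""
    if not 1 <= kfold <= 10:
        raise ValueError(f"kfold must be in 1..10, got {kfold}")
    if kfold <= 8:
        return "train"
    return "validation" if kfold == 9 else "test"

# Assignment for every fold, in order:
splits = [split_from_kfold(k) for k in range(1, 11)]
```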
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{souza2021sentiment,
author={
Souza, Frederico Dias and
Baptista de Oliveira e Souza Filho, João},
booktitle={
2021 IEEE Latin American Conference on
Computational Intelligence (LA-CCI)},
title={
Sentiment Analysis on Brazilian Portuguese User Reviews},
year={2021},
pages={1-6},
doi={10.1109/LA-CCI48322.2021.9769838}
}
```
### Contributions
Thanks to [@guilhermelmello](https://huggingface.co/guilhermelmello) and [@DominguesPH](https://huggingface.co/DominguesPH) for adding this dataset. | [
-0.7094226479530334,
-0.5785952806472778,
0.02492707222700119,
0.595948338508606,
-0.6649030447006226,
0.053490158170461655,
-0.35006389021873474,
-0.24445055425167084,
0.43527743220329285,
0.5201903581619263,
-0.5384611487388611,
-1.0128190517425537,
-0.7302791476249695,
0.244308874011039... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GabrielVidal/dead-by-daylight-perks | GabrielVidal | 2022-11-27T16:06:46Z | 16 | 1 | null | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:openrail",
"de... | 2022-11-27T16:06:46Z | 2022-11-21T20:42:24.000Z | 2022-11-21T20:42:24 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: type
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 22392351.0
num_examples: 219
download_size: 22365600
dataset_size: 22392351.0
annotations_creators:
- found
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: Dead by daylight video game perks
size_categories:
- n<1K
source_datasets:
- original
tags:
- dead by daylight
task_categories:
- image-classification
- text-to-image
task_ids:
- multi-class-image-classification
---
# Dataset Card for Dead by Daylight perks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
### Dataset Summary
This dataset contains all perk images from the video game [Dead by Daylight](https://deadbydaylight.com/) (on a black background, upscaled to 512x512), each with its type, name and description (the first sentence) in English.
## Dataset Creation
### Source Data
All images and text have been found online, mainly on the [Dead by Daylight wiki](https://deadbydaylight.fandom.com/wiki/Dead_by_Daylight_Wiki).
## Additional Information
### Licensing Information
All images belong to [Dead by Daylight](https://deadbydaylight.com/).
### Contributions
Thanks to [@GabrielVidal1](https://github.com/GabrielVidal1) for adding this dataset. | [
0.03001708723604679,
-0.009678215719759464,
0.3941190540790558,
0.3303085267543793,
-0.8813632726669312,
0.14849863946437836,
0.02989904209971428,
-0.32309490442276,
0.6660747528076172,
0.9448564648628235,
-1.05951726436615,
-1.2104829549789429,
-0.2946438491344452,
0.09639295935630798,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DTU54DL/demo-common-whisper | DTU54DL | 2022-11-22T08:43:39Z | 16 | 0 | acronym-identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-22T08:43:39Z | 2022-11-22T08:40:07.000Z | 2022-11-22T08:40:07 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.2170247584581375,
0.24832050502300262,
-0.3366999328136444,
-0.3758932054042816,
0.6720380187034607,
0.6457639932632446,
-0.9167346358299255,
-1.2200126647949219,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VIMA/VIMA-Data | VIMA | 2023-06-17T04:52:09Z | 16 | 15 | null | [
"license:cc-by-4.0",
"arxiv:2210.03094",
"region:us"
] | 2023-06-17T04:52:09Z | 2022-11-24T19:59:13.000Z | 2022-11-24T19:59:13 | ---
license: cc-by-4.0
---
# Dataset Card for VIMA-Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://vimalabs.github.io/
- **Repository:** https://github.com/vimalabs/VimaBench
- **Paper:** https://arxiv.org/abs/2210.03094
### Dataset Summary
This is the official dataset used to train general robot manipulation agents with multimodal prompts, as presented in the [paper](https://arxiv.org/abs/2210.03094). It contains 650K trajectories for 13 tasks in [VIMA-Bench](https://github.com/vimalabs/VimaBench). All demonstrations are generated by oracles.
## Dataset Structure
Data are grouped by task. Within each trajectory's folder there are two folders, `rgb_front` and `rgb_top`, and three files, `obs.pkl`, `action.pkl`, and `trajectory.pkl`. RGB frames from each perspective are stored in the corresponding folder. `obs.pkl` includes segmentation and the state of the end effector. `action.pkl` contains oracle actions. `trajectory.pkl` contains meta information such as elapsed steps, task information, and object information. Users can build their custom data pipeline starting from here. More details and examples can be found [here](https://github.com/vimalabs/VimaBench#training-data).
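A minimal sketch of how a custom pipeline might read one trajectory's `trajectory.pkl` with the standard library. The folder layout follows the description above, but the dictionary keys are assumptions for illustration (see the VimaBench examples for the real schema). The sketch first writes a mock trajectory so it is self-contained.

```python
import pickle
import tempfile
from pathlib import Path

# Mock one trajectory folder: rgb_front/, rgb_top/, and trajectory.pkl
# ("rearrange" and "000000" are illustrative names, not the real layout).
root = Path(tempfile.mkdtemp()) / "rearrange" / "000000"
(root / "rgb_front").mkdir(parents=True)
(root / "rgb_top").mkdir()
meta = {"elapsed_steps": 4, "task": "rearrange"}  # illustrative keys only
with open(root / "trajectory.pkl", "wb") as f:
    pickle.dump(meta, f)

# Reading the metadata back, as a custom pipeline would:
with open(root / "trajectory.pkl", "rb") as f:
    loaded = pickle.load(f)
```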
## Dataset Creation
All demonstrations are generated by scripted oracles.
## Additional Information
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
If you find our work useful, please consider citing us!
```bibtex
@inproceedings{jiang2023vima,
title = {VIMA: General Robot Manipulation with Multimodal Prompts},
author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan},
booktitle = {Fortieth International Conference on Machine Learning},
year = {2023}
}
``` | [
-0.28347936272621155,
-0.6898390650749207,
0.45253923535346985,
0.09759016335010529,
-0.24100777506828308,
-0.18016360700130463,
-0.1341741383075714,
-0.09133057296276093,
0.3924780786037445,
0.3684581220149994,
-1.0414092540740967,
-0.8015879988670349,
-0.38716018199920654,
-0.07337094843... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
piuba-bigdata/contextualized_hate_speech | piuba-bigdata | 2023-04-29T14:19:58Z | 16 | 5 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:es",
"hate_speech",
"arxiv:2210.00465",
"region:us"
] | 2023-04-29T14:19:58Z | 2022-11-28T22:12:44.000Z | 2022-11-28T22:12:44 | ---
language:
- es
pretty_name: contextualized_hate_speech
task_categories:
- text-classification
tags:
- hate_speech
size_categories:
- 10K<n<100K
---
# Contextualized Hate Speech: A dataset of comments in news outlets on Twitter
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper**: ["Assessing the impact of contextual information in hate speech detection"](https://arxiv.org/abs/2210.00465), Juan Manuel Pérez, Franco Luque, Demian Zayat, Martín Kondratzky, Agustín Moro, Pablo Serrati, Joaquín Zajac, Paula Miguel, Natalia Debandi, Agustín Gravano, Viviana Cotik
- **Point of Contact**: jmperez (at) dc uba ar
### Dataset Summary

This dataset is a collection of tweets that were posted in response to news articles from five specific Argentinean news outlets: Clarín, Infobae, La Nación, Perfil and Crónica, during the COVID-19 pandemic. The comments were analyzed for hate speech across eight different characteristics: against women, racist content, class hatred, against LGBTQ+ individuals, against physical appearance, against people with disabilities, against criminals, and for political reasons. All the data is in Spanish.
Each comment is labeled with the following variables:
| Label | Description |
| :--------- | :---------------------------------------------------------------------- |
| HATEFUL | Contains hate speech (HS)? |
| CALLS | If it is hateful, is this message calling to (possibly violent) action? |
| WOMEN | Is this against women? |
| LGBTI | Is this against LGBTI people? |
| RACISM | Is this a racist message? |
| CLASS | Is this a classist message? |
| POLITICS | Is this HS due to political ideology? |
| DISABLED | Is this HS against disabled people? |
| APPEARANCE | Is this HS against people due to their appearance? (e.g. fatshaming) |
| CRIMINAL | Is this HS against criminals or people in conflict with law? |
There is an extra label `CALLS`, which represents whether a comment is a call to violent action or not.
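The dependency noted above, that `CALLS` only applies to hateful comments, can be sketched as a simple consistency check. The record below is a mock built from the label names in the table, not real data.

```python
LABELS = ["HATEFUL", "CALLS", "WOMEN", "LGBTI", "RACISM", "CLASS",
          "POLITICS", "DISABLED", "APPEARANCE", "CRIMINAL"]

def is_consistent(record: dict) -> bool:
    """A comment can only be a call to action (CALLS) if it is hateful."""
    return not (record["CALLS"] and not record["HATEFUL"])

# Mock annotation: a hateful, politically motivated call to action.
mock = {label: 0 for label in LABELS}
mock.update(HATEFUL=1, CALLS=1, POLITICS=1)
consistent = is_consistent(mock)
```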
### Citation Information
```bibtex
@article{perez2022contextual,
author = {Pérez, Juan Manuel and Luque, Franco M. and Zayat, Demian and Kondratzky, Martín and Moro, Agustín and Serrati, Pablo Santiago and Zajac, Joaquín and Miguel, Paula and Debandi, Natalia and Gravano, Agustín and Cotik, Viviana},
journal = {IEEE Access},
title = {Assessing the Impact of Contextual Information in Hate Speech Detection},
year = {2023},
volume = {11},
number = {},
pages = {30575-30590},
doi = {10.1109/ACCESS.2023.3258973}
}
```
### Contributions
[More Information Needed] | [
-0.5364510416984558,
-0.8280168175697327,
0.15513816475868225,
0.2665287256240845,
-0.38637614250183105,
0.10118788480758667,
-0.23102132976055145,
-0.49694034457206726,
0.4578245282173157,
0.3322845697402954,
-0.4594537019729614,
-0.6804143786430359,
-0.9014158844947815,
0.000910793140064... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
airaspberry/hoodie-cad | airaspberry | 2022-12-01T20:47:53Z | 16 | 0 | null | [
"license:openrail",
"region:us"
] | 2022-12-01T20:47:53Z | 2022-11-29T05:50:40.000Z | 2022-11-29T05:50:40 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dferndz/cSQuAD1 | dferndz | 2022-12-09T23:17:57Z | 16 | 0 | null | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-12-09T23:17:57Z | 2022-11-30T00:03:13.000Z | 2022-11-30T00:03:13 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: cSQuAD1
size_categories: []
source_datasets: []
tags: []
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for cSQuAD1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A contrast set generated from the evaluation set of SQuAD. Questions and answers were
modified to help detect dataset artifacts. This dataset contains only a validation
set, which should be used solely to evaluate a model.
### Supported Tasks
Question Answering (SQuAD).
### Languages
English
## Dataset Structure
### Data Instances
The dataset contains 100 instances.
### Data Fields
| Field | Description |
|--------------|------------------------------------------------|
| id | Id of document containing context |
| title | Title of the document |
| context | The context of the question |
| question | The question to answer |
| answers | A list of possible answers from the context |
| answer_start | The index in context where the answer starts |
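Since the fields follow the SQuAD format, each `answer_start` offset should index the answer text inside `context`. A quick consistency check, using a hypothetical instance (the context and offsets below are invented for illustration):

```python
instance = {  # hypothetical example in the SQuAD format described above
    "context": "The Eiffel Tower is located in Paris.",
    "question": "Where is the Eiffel Tower located?",
    "answers": {"text": ["Paris"], "answer_start": [31]},
}

def answers_align(example: dict) -> bool:
    """Check that every answer span occurs at its answer_start offset."""
    ctx = example["context"]
    return all(
        ctx[start:start + len(text)] == text
        for text, start in zip(example["answers"]["text"],
                               example["answers"]["answer_start"])
    )

ok = answers_align(instance)
```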
### Data Splits
A single `eval` split is provided
## Dataset Creation
The dataset was created by modifying a sample of 100 examples from the SQuAD test split.
## Additional Information
### Licensing Information
Apache 2.0 license
### Citation Information
TODO: add citations | [
-0.5728867053985596,
-0.5501077175140381,
0.09308747947216034,
0.4715999662876129,
-0.03956054896116257,
0.3902271091938019,
0.07899774610996246,
-0.1730806827545166,
0.11037001013755798,
0.38249877095222473,
-1.0690497159957886,
-0.798209547996521,
-0.19977430999279022,
0.0956067815423011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TUKE-DeutscheTelekom/skquad | TUKE-DeutscheTelekom | 2022-12-05T14:10:32Z | 16 | 3 | squad | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"task_ids:document-retrieval",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categor... | 2022-12-05T14:10:32Z | 2022-12-02T11:28:37.000Z | 2022-12-02T11:28:37 | ---
annotations_creators:
- crowdsourced
language:
- sk
language_creators:
- crowdsourced
- found
license:
- cc-by-sa-4.0
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: skquad
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- wikipedia
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
- extractive-qa
- document-retrieval
train-eval-index:
- col_mapping:
answers:
answer_start: answer_start
text: text
context: context
question: question
config: squad_v2
metrics:
- name: SQuAD v2
type: squad_v2
splits:
eval_split: validation
train_split: train
task: question-answering
task_id: extractive_question_answering
---
# Dataset Card for SK-QuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SK-QuAD is the first QA dataset for the Slovak language.
It is manually annotated, so it has no distortion caused by
machine translation. The dataset is thematically diverse and does not
overlap with SQuAD, so it brings new knowledge.
It passed a second round of annotation: each question
and answer was reviewed by at least two annotators.
### Supported Tasks and Leaderboards
- Question answering
- Document retrieval
### Languages
- Slovak
## Dataset Structure
#### squad_v2
- **Size of downloaded dataset files:** 44.34 MB
- **Size of the generated dataset:** 122.57 MB
- **Total amount of disk used:** 166.91 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [94, 87, 94, 94],
"text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"]
},
"context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...",
"id": "56ddde6b9a695914005b9629",
"question": "When were the Normans in Normandy?",
"title": "Normans"
}
```
### Data Fields
The data fields are the same among all splits.
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| | Train | Dev | Translated |
| ------------- | -----: | -----: | -------: |
| Documents | 8,377 | 940 | 442 |
| Paragraphs | 22,062 | 2,568 | 18,931 |
| Questions | 81,582 | 9,583 | 120,239 |
| Answers | 65,839 | 7,822 | 79,978 |
| Unanswerable | 15,877 | 1,784 | 40,261 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Deutsche Telekom Systems Solutions Slovakia
- Technical University of Košice
### Licensing Information
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| [
-0.5830299258232117,
-0.6634318828582764,
0.23764270544052124,
0.2454279214143753,
-0.12082500755786896,
0.3349124491214752,
-0.2825468182563782,
-0.3625536859035492,
0.6335184574127197,
0.49126020073890686,
-1.0238585472106934,
-1.0606286525726318,
-0.5105468034744263,
0.4347386956214905,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chizhikchi/CARES | chizhikchi | 2022-12-09T12:22:08Z | 16 | 3 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:afl-3.0",
"radiology",
"biomedicine",
"ICD-10",
"region:us"
] | 2022-12-09T12:22:08Z | 2022-12-09T12:13:39.000Z | 2022-12-09T12:13:39 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- expert-generated
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: CARES
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- radiology
- biomedicine
- ICD-10
task_categories:
- text-classification
dataset_info:
features:
- name: iddoc
dtype: float64
- name: id
dtype: int64
- name: full_text
dtype: string
- name: icd10
sequence: string
- name: general
sequence: string
- name: chapters
sequence: int64
- name: area
sequence: string
splits:
- name: train
num_bytes: 3377631
num_examples: 2253
- name: test
num_bytes: 1426962
num_examples: 966
download_size: 2291080
dataset_size: 4804593
---
# CARES - A Corpus of Anonymised Radiological Evidences in Spanish 📑🏥
CARES is a high-quality text resource manually labeled with ICD-10 codes and reviewed by radiologists 📑🏥. Resources of this kind are essential for developing automatic text classification tools, since they are needed to train and fine-tune computational systems.
The CARES corpus has been manually annotated using the ICD-10 ontology, which stands for the 10th revision of the International Classification of Diseases. For each radiological report, a minimum of one code and a maximum of 9 codes were assigned; the average number of codes per text is 2.15, with a standard deviation of 1.12.
The corpus was additionally preprocessed to make its format coherent with the automatic text classification task. Given the hierarchical structure of the ICD-10 ontology, each sub-code was mapped to its respective code and chapter, yielding two new sets of labels for each report. The entire CARES collection contains 6,907 sub-code annotations across the 3,219 radiology reports. There are 223 unique ICD-10 sub-codes within the annotations, which were mapped to 156 unique ICD-10 codes and 16 unique chapters of the cited ontology.
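The sub-code-to-code mapping described above can be sketched by truncating an ICD-10 sub-code at its decimal point. The chapter lookup below is illustrative and covers a single chapter only, not the full ontology used to annotate CARES.

```python
from typing import Optional

# Illustrative chapter table: real ICD-10 has many chapters; only one is shown.
CHAPTER_RANGES = {10: ("J00", "J99")}  # Chapter X: diseases of the respiratory system

def subcode_to_code(subcode: str) -> str:
    """Map an ICD-10 sub-code (e.g. 'J18.9') to its three-character code ('J18')."""
    return subcode.split(".")[0]

def code_to_chapter(code: str) -> Optional[int]:
    """Look up the chapter whose code range contains `code`."""
    for chapter, (lo, hi) in CHAPTER_RANGES.items():
        if lo <= code <= hi:
            return chapter
    return None

code = subcode_to_code("J18.9")   # -> 'J18'
chapter = code_to_chapter(code)   # -> 10
```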
As for the train and test subsets, a stratified split was performed to guarantee that the label distribution in the test data is representative.
-0.33495497703552246,
-0.0991424173116684,
0.6985869407653809,
0.37711867690086365,
-0.45670443773269653,
0.05927750840783119,
0.15376223623752594,
-0.6310255527496338,
0.462810754776001,
0.5329519510269165,
-0.39949870109558105,
-0.8908340334892273,
-0.9299721121788025,
0.4540829360485077... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
parambharat/malayalam_asr_corpus | parambharat | 2022-12-11T13:05:27Z | 16 | 3 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|common_voice",
"source_datasets:extended|openslr",
"language:ml",
"license:cc-by-4.0",
"region:us"
] | 2022-12-11T13:05:27Z | 2022-12-11T12:46:03.000Z | 2022-12-11T12:46:03 | ---
annotations_creators:
- found
language:
- ml
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Malayalam ASR Corpus
size_categories:
- 1K<n<10K
source_datasets:
- extended|common_voice
- extended|openslr
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Malayalam Asr Corpus]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset. | [
-0.33923420310020447,
-0.41418373584747314,
-0.04267089068889618,
0.44727984070777893,
-0.4109989106655121,
0.25556743144989014,
-0.36453378200531006,
-0.16169597208499908,
0.6703384518623352,
0.6784070730209351,
-0.6758557558059692,
-0.9232767224311829,
-0.8167703747749329,
0.215947240591... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images | bhargavsdesai | 2022-12-14T21:04:51Z | 16 | 3 | null | [
"region:us"
] | 2022-12-14T21:04:51Z | 2022-12-14T19:24:25.000Z | 2022-12-14T19:24:25 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LLukas22/NLQuAD | LLukas22 | 2022-12-23T13:04:58Z | 16 | 1 | null | [
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-12-23T13:04:58Z | 2022-12-15T15:05:57.000Z | 2022-12-15T15:05:57 | ---
pretty_name: NLQuAD
language:
- en
license:
- cc-by-3.0
size_categories:
- 10K<n<100K
multilinguality:
- monolingual
task_ids:
- extractive-qa
dataset_info:
features:
- name: title
dtype: string
- name: date
dtype: string
- name: paragraphs
list:
- name: context
dtype: string
- name: qas
list:
- name: answers
list:
- name: answer_end
dtype: int64
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 72036724
num_examples: 10259
- name: test
num_bytes: 9045482
num_examples: 1280
- name: validation
num_bytes: 8876137
num_examples: 1280
download_size: 0
dataset_size: 89958343
---
# Dataset Card for "NLQuAD"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/ASoleimaniB/NLQuAD](https://github.com/ASoleimaniB/NLQuAD)
- **Paper: https://aclanthology.org/2021.eacl-main.106/**
- **Size of the generated dataset:** 89.95 MB
### Dataset Summary
This is a copy of the original NLQuAD dataset distributed via [Github](https://github.com/ASoleimaniB/NLQuAD).
NLQuAD is a non-factoid long question answering dataset from BBC news articles.
NLQuAD’s question types and the long length of its context documents, as well as of its answers, make it a challenging real-world task.
NLQuAD consists of news articles as context documents, interrogative sub-headings in the articles as questions, and body paragraphs corresponding to the sub-headings as contiguous answers to the questions.
NLQuAD contains 31k non-factoid questions and long answers collected from 13k BBC news articles.
See example articles in BBC [1](https://www.bbc.com/news/world-asia-china-51230011), [2](https://www.bbc.com/news/world-55709428).
We automatically extract target answers because annotating for non-factoid long QA is extremely challenging and costly.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"title": "Khashoggi murder: Body 'dissolved in acid'",
"date": "2 November 2018",
"paragraphs":[
{
"context": "A top Turkish official, presidential adviser Yasin Aktay, has said ....",
"qas":[
{
"question":"What was said in the crown prince's alleged phone call?",
"id":"0_0",
"answers":[
{
"text":"During the call with President Donald Trump\'s son-in-law Jared Kushner and national ....",
"answer_start":1352,
"answer_end": 2108,
}
]
},
{
"question":"What has the investigation found so far?",
"id":"0_1",
"answers":[
{
"text":"There is still no consensus on how Khashoggi died. He entered ....",
"answer_start":2109,
"answer_end": 3128,
}
]
},
]
}
]
}
```
### Data Fields
The data fields are the same among all splits.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `paragraphs`: a list feature containing dictionaries:
- `context`: a `string` feature.
- `qas`: a list feature containing dictionaries:
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a list feature containing dictionaries:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `answer_end`: a `int32` feature
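Since each article nests paragraphs, questions, and answers, a common first step is to flatten the structure into one example per answer. A minimal sketch under that assumption (the `flatten` helper is not part of the dataset release):

```python
def flatten(article: dict) -> list:
    """Turn one nested NLQuAD article into flat QA examples."""
    examples = []
    for para in article["paragraphs"]:
        for qa in para["qas"]:
            for ans in qa["answers"]:
                examples.append({
                    "id": qa["id"],
                    "question": qa["question"],
                    "context": para["context"],
                    "answer_text": ans["text"],
                    "answer_start": ans["answer_start"],
                    "answer_end": ans["answer_end"],
                })
    return examples

# Tiny toy article mirroring the schema shown in the data instance above.
article = {
    "title": "t", "date": "d",
    "paragraphs": [{
        "context": "Some long context.",
        "qas": [{"id": "0_0", "question": "q?",
                 "answers": [{"text": "long",
                              "answer_start": 5, "answer_end": 9}]}],
    }],
}
flat = flatten(article)
```

The offsets are character positions into `context`, so `context[answer_start:answer_end]` recovers the answer text.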
### Data Splits
| name |train|test|validation|
|----------|----:|----:|---------:|
| |10259| 1280| 1280|
## Additional Information
### Licensing Information
This dataset is distributed under the [CC BY-NC](https://creativecommons.org/licenses/by-nc/3.0/) licence providing free access for non-commercial and academic usage.
### Citation Information
BibTeX:
```json
@inproceedings{soleimani-etal-2021-nlquad,
title = "{NLQ}u{AD}: A Non-Factoid Long Question Answering Data Set",
author = "Soleimani, Amir and
Monz, Christof and
Worring, Marcel",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.106",
doi = "10.18653/v1/2021.eacl-main.106",
pages = "1245--1255",
abstract = "We introduce NLQuAD, the first data set with baseline methods for non-factoid long question answering, a task requiring document-level language understanding. In contrast to existing span detection question answering data sets, NLQuAD has non-factoid questions that are not answerable by a short span of text and demanding multiple-sentence descriptive answers and opinions. We show the limitation of the F1 score for evaluation of long answers and introduce Intersection over Union (IoU), which measures position-sensitive overlap between the predicted and the target answer spans. To establish baseline performances, we compare BERT, RoBERTa, and Longformer models. Experimental results and human evaluations show that Longformer outperforms the other architectures, but results are still far behind a human upper bound, leaving substantial room for improvements. NLQuAD{'}s samples exceed the input limitation of most pre-trained Transformer-based models, encouraging future research on long sequence language models.",
}
``` | [
-0.689561128616333,
-0.9657943844795227,
0.2878628373146057,
0.0818670243024826,
-0.26202142238616943,
0.054324012249708176,
-0.1845659464597702,
-0.47297072410583496,
0.429053395986557,
0.38849934935569763,
-0.5221943855285645,
-0.597296953201294,
-0.25910839438438416,
0.4358046352863312,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mystgg/ru-wikipedia | mystgg | 2022-12-23T10:20:31Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2022-12-23T10:20:31Z | 2022-12-23T10:19:40.000Z | 2022-12-23T10:19:40 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bond005/rulibrispeech | bond005 | 2023-01-18T19:38:48Z | 16 | 1 | null | [
"region:us"
] | 2023-01-18T19:38:48Z | 2022-12-26T10:39:04.000Z | 2022-12-26T10:39:04 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 11165185580.744
num_examples: 54472
- name: test
num_bytes: 306649969.0
num_examples: 1352
- name: validation
num_bytes: 321842480.0
num_examples: 1400
download_size: 10689335725
dataset_size: 11793678029.744
---
# Dataset Card for "rulibrispeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4963977038860321,
-0.20669837296009064,
0.006557229906320572,
0.3830987513065338,
-0.19801385700702667,
-0.06671300530433655,
0.3417005240917206,
-0.16644960641860962,
0.9117162823677063,
0.4458749294281006,
-1.0267988443374634,
-0.6541064977645874,
-0.5364266633987427,
-0.2989737689495... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sdadas/sick_pl | sdadas | 2022-12-29T11:01:28Z | 16 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:sick",
"language:pl",
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2022-12-29T11:01:28Z | 2022-12-29T10:04:41.000Z | 2022-12-29T10:04:41 | ---
language:
- pl
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- sick
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
pretty_name: Sentences Involving Compositional Knowledge (Polish)
dataset_info:
features:
- name: pair_ID
dtype: string
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype: string
splits:
- name: train
- name: validation
- name: test
---
# SICK_PL - Sentences Involving Compositional Knowledge (Polish)
### Dataset Summary
This dataset is a manually translated version of a popular English natural language inference (NLI) corpus consisting of 10,000 sentence pairs. NLI is the task of determining whether one statement (the premise) semantically entails another statement (the hypothesis). Such a relation can be classified as entailment (the first sentence entails the second), neutral (the first statement does not determine the truth value of the second statement), or contradiction (if the first sentence is true, the second is false). Additionally, the original SICK dataset contains semantic relatedness scores for the sentence pairs as real numbers ranging from 1 to 5. When translating the corpus into Polish, we tried to stay as close as possible to the original meaning. In some cases, however, two different English sentences had an identical translation in Polish. Such instances were slightly modified in order to preserve both the meaning and the syntactic differences of the sentence pair.
### Data Instances
Example instance:
```
{
"pair_ID": "122",
"sentence_A": "Pięcioro dzieci stoi blisko siebie , a jedno dziecko ma pistolet",
"sentence_B": "Pięcioro dzieci stoi blisko siebie i żadne z nich nie ma pistoletu",
"relatedness_score": 3.7,
"entailment_judgment": "CONTRADICTION"
}
```
### Data Fields
- pair_ID: sentence pair ID
- sentence_A: sentence A
- sentence_B: sentence B
- entailment_judgment: textual entailment gold label: entailment (0), neutral (1) or contradiction (2)
- relatedness_score: semantic relatedness gold score (on a 1-5 continuous scale)
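For classification, the string entailment label is typically mapped to its integer class id. A minimal sketch (the helper is an assumption; the uppercase label strings and the 0/1/2 mapping follow the example instance and field description above):

```python
# Map the textual entailment label to the integer ids listed in Data Fields.
LABEL2ID = {"ENTAILMENT": 0, "NEUTRAL": 1, "CONTRADICTION": 2}

def encode_example(example: dict) -> dict:
    example["label"] = LABEL2ID[example["entailment_judgment"]]
    return example

ex = {"pair_ID": "122",
      "sentence_A": "...",
      "sentence_B": "...",
      "relatedness_score": 3.7,
      "entailment_judgment": "CONTRADICTION"}
encoded = encode_example(ex)  # encoded["label"] == 2
```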
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
``` | [
-0.11296030133962631,
-0.7608293294906616,
0.6851730346679688,
0.42909619212150574,
-0.31181252002716064,
-0.3669779598712921,
-0.18153779208660126,
-0.4857032895088196,
0.34528934955596924,
0.3788595497608185,
-0.7018145322799683,
-0.7496436238288879,
-0.39252716302871704,
0.5905690789222... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
archanatikayatray/aeroBERT-NER | archanatikayatray | 2023-05-20T22:40:58Z | 16 | 2 | null | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"NER",
"Aerospace",
"ORG",
"SYS",
"DATETIME",
"RESOURCE",
"VALUE",
"doi:10.57967/hf/0470",
"region:us"
] | 2023-05-20T22:40:58Z | 2023-01-05T15:43:58.000Z | 2023-01-05T15:43:58 | ---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- NER
- Aerospace
- ORG
- SYS
- DATETIME
- RESOURCE
- VALUE
pretty_name: all_text_annotation_NER.txt
size_categories:
- n<1K
---
# Dataset Card for aeroBERT-NER
## Dataset Description
- **Paper:** aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT
- **Point of Contact:** archanatikayatray@gmail.com
### Dataset Summary
This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.
There are a total of 1432 sentences. The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **token identification** (NER) specific to the aerospace domain <br>
This dataset can be used for training or fine-tuning language models for the identification of the above-mentioned named entities in aerospace texts.
## Dataset Structure
The dataset is of the format: ``Sentence-Number * WordPiece-Token * NER-tag`` <br>
"*" is used as a delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for Sentence #1431. <br>
1431\*the\*O <br>
1431\*airplane\*B-SYS <br>
1431\*takeoff\*O <br>
1431\*performance\*O <br>
1431\*must\*O <br>
1431\*be\*O <br>
1431\*determined\*O <br>
1431\*for\*O <br>
1431\*climb\*O <br>
1431\*gradients\*O <br>
1431\*.\*O <br>
## Dataset Creation
### Source Data
Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: <br>
(1) general aerospace texts such as publications by the National Academy of Space Studies Board, and <br>
(2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus. <br>
### Importing dataset into Python environment
Use the following code chunk to import the dataset into Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-NER")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset.
**B** - Beginning of entity <br>
**I** - Inside an entity <br>
**O** - Outside an entity <br>
| Category | NER Tags | Example |
| :----: | :----: | :----: |
| System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit |
| Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers |
| Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11,1991 |
| Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA |
| Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B |
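To recover entity mentions from the BIO tags, consecutive `B-`/`I-` tokens of the same category are grouped into spans. A minimal sketch (the helper name is an assumption, not part of the dataset release):

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, token_span) entities."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open entity first
                spans.append((etype, tokens[start:i]))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                       # entity continues
        else:                              # "O" or inconsistent I- tag
            if start is not None:
                spans.append((etype, tokens[start:i]))
            start, etype = None, None
    if start is not None:                  # entity running to the end
        spans.append((etype, tokens[start:]))
    return spans

# Example from the dataset structure shown above:
tokens = ["the", "airplane", "takeoff", "performance"]
tags = ["O", "B-SYS", "O", "O"]
entities = bio_to_spans(tokens, tags)  # [("SYS", ["airplane"])]
```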
The distribution of the various entities in the corpus is shown below - <br>
|NER Tag|Description|Count|
| :----: | :----: | :----: |
O | Tokens that are not identified as any NE | 37686 |
B-SYS | Beginning of a system NE | 1915 |
I-SYS | Inside a system NE | 1104 |
B-VAL | Beginning of a value NE | 659 |
I-VAL | Inside a value NE | 507 |
B-DATETIME| Beginning of a date time NE | 147 |
I-DATETIME | Inside a date time NE | 63 |
B-ORG | Beginning of an organization NE | 302 |
I-ORG | Inside a organization NE | 227 |
B-RES | Beginning of a resource NE |390 |
I-RES | Inside a resource NE | 1033 |
### Limitations
(1) The dataset is imbalanced, as natural language itself is (not every word is a named entity). Hence, using ``Accuracy`` as a metric for model performance is
NOT a good idea. The use of Precision, Recall, and F1 scores is suggested for model performance evaluation.
(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
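One way to create such a split is at the sentence level, so that all tokens of a sentence stay in the same subset. A minimal sketch (the split fractions, seed, and helper name are assumptions):

```python
import random

def split_sentence_ids(ids, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle sentence IDs and split them into train/validation/test lists."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    test = ids[:n_test]
    val = ids[n_test:n_test + n_val]
    train = ids[n_test + n_val:]
    return train, val, test

# The corpus has 1432 sentences, numbered 1..1432.
train_ids, val_ids, test_ids = split_sentence_ids(range(1, 1433))
```

Rows of the token-level DataFrame can then be assigned to subsets by their `Sentence-Number` column, keeping every sentence intact.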
### Citation Information
```
@Article{aeroBERT-NER,
AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.},
TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT},
JOURNAL = {AIAA SCITECH 2023 Forum},
YEAR = {2023},
URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583},
DOI = {10.2514/6.2023-2583}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
```
| [
-0.5236718654632568,
-0.6026595830917358,
0.22412322461605072,
0.2745873034000397,
0.0011916154762730002,
-0.0629592016339302,
-0.21967776119709015,
-0.42668431997299194,
0.421906054019928,
0.43112656474113464,
-0.47862958908081055,
-0.6097276210784912,
-0.35775598883628845,
0.381900668144... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/utilitarianism | metaeval | 2023-01-06T13:41:50Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-01-06T13:41:50Z | 2023-01-06T13:23:13.000Z | 2023-01-06T13:23:13 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Celal11/denemeler1 | Celal11 | 2023-01-07T18:21:32Z | 16 | 0 | null | [
"region:us"
] | 2023-01-07T18:21:32Z | 2023-01-07T18:20:48.000Z | 2023-01-07T18:20:48 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dahoas/hh_prompt_format | Dahoas | 2023-01-09T04:36:06Z | 16 | 1 | null | [
"region:us"
] | 2023-01-09T04:36:06Z | 2023-01-09T04:31:16.000Z | 2023-01-09T04:31:16 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/semi-heter | matchbench | 2023-02-21T11:08:00Z | 16 | 0 | null | [
"region:us"
] | 2023-02-21T11:08:00Z | 2023-01-24T08:24:32.000Z | 2023-01-24T08:24:32 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/semi-rel | matchbench | 2023-02-20T14:43:37Z | 16 | 0 | null | [
"region:us"
] | 2023-02-20T14:43:37Z | 2023-01-24T08:32:11.000Z | 2023-01-24T08:32:11 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/semi-text-c | matchbench | 2023-02-20T14:22:26Z | 16 | 0 | null | [
"region:us"
] | 2023-02-20T14:22:26Z | 2023-01-24T08:39:35.000Z | 2023-01-24T08:39:35 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/StanfordCars_test_embeddings | Multimodal-Fatima | 2023-01-29T01:37:22Z | 16 | 0 | null | [
"region:us"
] | 2023-01-29T01:37:22Z | 2023-01-29T01:36:11.000Z | 2023-01-29T01:36:11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: vision_embeddings
sequence: float32
splits:
- name: openai_clip_vit_large_patch14
num_bytes: 1019088043.0
num_examples: 8041
download_size: 1017679318
dataset_size: 1019088043.0
---
# Dataset Card for "StanfordCars_test_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5910021662712097,
-0.3960208296775818,
0.22082418203353882,
0.3916023373603821,
-0.138330340385437,
-0.021220792084932327,
0.06638801842927933,
-0.0592191182076931,
0.544921338558197,
0.2788107693195343,
-0.6760170459747314,
-0.8379735350608826,
-0.3309871256351471,
-0.41532596945762634... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/StanfordCars_train_embeddings | Multimodal-Fatima | 2023-01-29T01:45:29Z | 16 | 0 | null | [
"region:us"
] | 2023-01-29T01:45:29Z | 2023-01-29T01:44:51.000Z | 2023-01-29T01:44:51 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: vision_embeddings
sequence: float32
splits:
- name: openai_clip_vit_large_patch14
num_bytes: 1020546140.0
num_examples: 8144
download_size: 1019546810
dataset_size: 1020546140.0
---
# Dataset Card for "StanfordCars_train_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5403304696083069,
-0.11860799789428711,
0.2971115708351135,
0.5288865566253662,
-0.15407851338386536,
-0.0737873986363411,
0.08460118621587753,
-0.0039612180553376675,
0.6382138729095459,
0.2835693061351776,
-0.6576212644577026,
-0.7345126271247864,
-0.40850013494491577,
-0.520418524742... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_vtab-diabetic_retinopathy | clip-benchmark | 2023-01-31T01:51:28Z | 16 | 0 | null | [
"region:us"
] | 2023-01-31T01:51:28Z | 2023-01-31T01:46:02.000Z | 2023-01-31T01:46:02 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shields/catalan_commonvoice_first15hr_processed_with_noise | shields | 2023-02-06T12:23:28Z | 16 | 0 | null | [
"region:us"
] | 2023-02-06T12:23:28Z | 2023-02-06T12:18:04.000Z | 2023-02-06T12:18:04 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6723710888
num_examples: 7000
- name: val
num_bytes: 2881592776
num_examples: 3000
download_size: 1776942256
dataset_size: 9605303664
---
# Dataset Card for "catalan_commonvoice_first15hr_processed_with_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.35202479362487793,
-0.35071861743927,
0.05955245718359947,
0.7639815807342529,
-0.5838567018508911,
-0.2094368189573288,
0.0868242084980011,
-0.39612337946891785,
0.8158854842185974,
0.5430917739868164,
-1.0592641830444336,
-0.9737387895584106,
-0.47506290674209595,
-0.28701701760292053... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codkiller0911/kotlin_code | codkiller0911 | 2023-02-11T16:42:21Z | 16 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"kotlin",
"android",
"region:us"
] | 2023-02-11T16:42:21Z | 2023-02-11T14:39:47.000Z | 2023-02-11T14:39:47 | ---
language:
- en
tags:
- kotlin
- android
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset kotlin_code
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains Kotlin functions together with their documentation. It can be useful for fine-tuning existing models, or for training new ones, that generate code documentation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.23676443099975586,
-0.2930159270763397,
0.02196205034852028,
0.25350987911224365,
-0.2592480778694153,
-0.02606137841939926,
-0.05688120424747467,
0.04915137216448784,
0.225434809923172,
0.9185916185379028,
-0.7260968089103699,
-1.0419256687164307,
-0.6342151165008545,
-0.12066562473773... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LFBMS/class_dataset_real | LFBMS | 2023-02-16T22:03:16Z | 16 | 0 | null | [
"region:us"
] | 2023-02-16T22:03:16Z | 2023-02-16T22:01:29.000Z | 2023-02-16T22:01:29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
'6': text
splits:
- name: train
num_bytes: 330330968.875
num_examples: 1117
- name: test
num_bytes: 99656474.0
num_examples: 280
download_size: 400425817
dataset_size: 429987442.875
---
# Dataset Card for "class_dataset_real"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LFBMS/class_dataset_real2 | Author: LFBMS | Last modified: 2023-02-16T22:06:16Z | Created: 2023-02-16T22:04:23Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
splits:
- name: train
num_bytes: 345218235.409
num_examples: 1117
- name: test
num_bytes: 87105530.0
num_examples: 280
download_size: 400622867
dataset_size: 432323765.409
---
# Dataset Card for "class_dataset_real2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LFBMS/class_dataset_real_donut | Author: LFBMS | Last modified: 2023-02-17T09:51:49Z | Created: 2023-02-17T09:51:22Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
'6': text
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 327762478.0
num_examples: 1117
- name: test
num_bytes: 99667843.0
num_examples: 280
download_size: 400428133
dataset_size: 427430321.0
---
# Dataset Card for "class_dataset_real_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LFBMS/class_dataset_real2_donut | Author: LFBMS | Last modified: 2023-02-17T10:00:38Z | Created: 2023-02-17T10:00:13Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 340313532.0
num_examples: 1117
- name: test
num_bytes: 87116926.0
num_examples: 280
download_size: 400625159
dataset_size: 427430458.0
---
# Dataset Card for "class_dataset_real2_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LFBMS/class_dataset_real3_donut | Author: LFBMS | Last modified: 2023-02-17T10:14:49Z | Created: 2023-02-17T10:14:22Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz
'1': guv
'2': kontennachweis_bilanz
'3': kontennachweis_guv
'4': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 327835672.0
num_examples: 1117
- name: test
num_bytes: 99594248.0
num_examples: 280
download_size: 400602803
dataset_size: 427429920.0
---
# Dataset Card for "class_dataset_real3_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathan-roberts1/Ships-In-Satellite-Imagery | Author: jonathan-roberts1 | Last modified: 2023-03-31T14:38:12Z | Created: 2023-02-17T16:48:59Z | Downloads: 16 | Likes: 2 | Tags: license:cc-by-sa-4.0, region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': an entire ship
'1': no ship or part of a ship
splits:
- name: train
num_bytes: 41806886
num_examples: 4000
download_size: 0
dataset_size: 41806886
license: cc-by-sa-4.0
---
# Dataset Card for "Ships-In-Satellite-Imagery"
## Dataset Description
- **Paper:** [Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
### Licensing Information
CC BY-SA 4.0
## Citation Information
[Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
```
@misc{kaggle_sisi,
author = {Hammell, Robert},
title = {Ships in Satellite Imagery},
howpublished = {\url{https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery}},
year = {2018},
version = {9.0}
}
```
LFBMS/class_dataset_real3_donut_train_val | Author: LFBMS | Last modified: 2023-02-17T18:35:35Z | Created: 2023-02-17T18:35:24Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz
'1': guv
'2': kontennachweis_bilanz
'3': kontennachweis_guv
'4': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 311399863.9140555
num_examples: 1061
- name: test
num_bytes: 16435808.085944494
num_examples: 56
download_size: 307807682
dataset_size: 327835672.0
---
# Dataset Card for "class_dataset_real3_donut_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
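The split sizes in this card (1061 train, 56 test out of 1117 total) correspond to a 95/5 train/validation split. A minimal sketch of how such sizes can be derived, assuming a simple rounded fractional split rather than the curators' exact procedure:

```python
def split_sizes(total: int, test_fraction: float = 0.05) -> tuple[int, int]:
    """Return (train, test) example counts for a fractional split.

    The test set gets the rounded share of the total; the remainder
    goes to train, so the two counts always sum to `total`.
    """
    test = round(total * test_fraction)
    train = total - test
    return train, test

train, test = split_sizes(1117, 0.05)
print(train, test)  # 1061 56
```

With `total=1117` and a 5% test fraction this reproduces the card's 1061/56 counts exactly.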
LFBMS/class_dataset_real2_donut_train_val | Author: LFBMS | Last modified: 2023-02-18T08:00:13Z | Created: 2023-02-18T08:00:00Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 323252155.2837959
num_examples: 1061
- name: test
num_bytes: 17061376.716204118
num_examples: 56
download_size: 320030509
dataset_size: 340313532.0
---
# Dataset Card for "class_dataset_real2_donut_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sagnikrayc/snli-bt | Author: sagnikrayc | Last modified: 2023-02-20T23:11:17Z | Created: 2023-02-20T23:03:02Z | Downloads: 16 | Likes: 0 | Tags: license:afl-3.0, region:us

---
license: afl-3.0
---
### Dataset Card for SNLI Back Translation
A back-translated version of the SNLI dataset; only the test split is provided.
SaylorTwift/Gutenberg | Author: SaylorTwift | Last modified: 2023-03-02T14:33:50Z | Created: 2023-03-02T13:59:30Z | Downloads: 16 | Likes: 3 | Tags: region:us

---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: authoryearofbirth
dtype: int32
- name: authoryearofdeath
dtype: int32
- name: downloads
dtype: int32
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 20279073235
num_examples: 54810
download_size: 12344747182
dataset_size: 20279073235
---
# Dataset Card for "Gutenberg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
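From the split statistics in this card (20,279,073,235 bytes across 54,810 examples) one can estimate the average size of a single book record; a quick stdlib-only sketch using only the numbers above:

```python
# Figures taken directly from the card's train split.
num_bytes = 20_279_073_235
num_examples = 54_810

avg_bytes = num_bytes / num_examples          # mean size of one book's record
avg_mib = avg_bytes / (1024 * 1024)           # same figure in MiB

print(f"{avg_bytes:,.0f} bytes per example (~{avg_mib:.2f} MiB)")
```

So each Project Gutenberg text in this dump averages roughly 370 KB, i.e. about a third of a MiB per book.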
MarkJeong/food_poc | Author: MarkJeong | Last modified: 2023-03-08T08:54:21Z | Created: 2023-03-08T07:54:05Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 김치찌개
'1': 된장찌개
splits:
- name: train
num_bytes: 2821852.0
num_examples: 54
- name: test
num_bytes: 2498883.0
num_examples: 21
- name: validation
num_bytes: 2498883.0
num_examples: 21
download_size: 7820748
dataset_size: 7819618.0
---
# Dataset Card for "food_poc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
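The `class_label` block in this card maps integer labels to Korean dish names. A tiny sketch of how those ids decode, using only the mapping declared in the card (the English glosses in the comment are our own addition):

```python
# Label names exactly as declared in the card's class_label block.
# 김치찌개 = kimchi stew, 된장찌개 = soybean-paste stew.
LABEL_NAMES = {0: "김치찌개", 1: "된장찌개"}

def decode_label(label_id: int) -> str:
    """Translate an integer class label into its dish name."""
    return LABEL_NAMES[label_id]

print(decode_label(0))  # 김치찌개
```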
pcuenq/face_synthetics_smol | Author: pcuenq | Last modified: 2023-03-12T15:24:35Z | Created: 2023-03-12T15:22:11Z | Downloads: 16 | Likes: 0 | Tags: region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: landmarks
dtype: string
splits:
- name: train
num_bytes: 35303202.0
num_examples: 100
download_size: 35283640
dataset_size: 35303202.0
---
# Dataset Card for "face_synthetics_smol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SKyu/my-image-captioning-dataset | Author: SKyu | Last modified: 2023-03-20T06:24:06Z | Created: 2023-03-20T05:45:04Z | Downloads: 16 | Likes: 0 | Tags: size_categories:1K<n<10K, region:us

---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 417257082.9
num_examples: 3100
download_size: 480865927
dataset_size: 417257082.9
pretty_name: jl_pics
size_categories:
- 1K<n<10K
---
# Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)