author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-big_patent-y-b4cccf-1519855005 | 2022-09-22T06:24:35.000Z | null | false | 8fcbf087a8ba256d1d8ad78d5474126481b43e73 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:big_patent"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-big_patent-y-b4cccf-1519855005/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- big_patent
eval_info:
task: summarization
model: pszemraj/pegasus-x-large-book-summary
metrics: []
dataset_name: big_patent
dataset_config: y
dataset_split: test
col_mapping:
text: description
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-a5c306-1520055006 | 2022-09-21T02:23:40.000Z | null | false | 94ff6a5935f6cd3ff8a915f76e6852c4a3667a7f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-a5c306-1520055006/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-bf100b-1520255007 | 2022-09-21T02:23:16.000Z | null | false | 169d0612fccaa4dd7bff2fa33ab533b40aeef69e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-bf100b-1520255007/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: ['rouge']
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-Tristan__zero_shot_classification_test-Tristan__zero_sh-c10c5c-1520355008 | 2022-09-21T03:16:17.000Z | null | false | 523d566065cd18bc42172c82f9ffa933eaf29b05 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Tristan/zero_shot_classification_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-Tristan__zero_shot_classification_test-Tristan__zero_sh-c10c5c-1520355008/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Tristan/zero_shot_classification_test
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-66b-copy
metrics: []
dataset_name: Tristan/zero_shot_classification_test
dataset_config: Tristan--zero_shot_classification_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: Tristan/zero_shot_classification_test
* Config: Tristan--zero_shot_classification_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-e4ddf6-1520555010 | 2022-09-21T04:32:36.000Z | null | false | 5d3309b8aa10d7cf28752a9589c8a8a99325e069 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-e4ddf6-1520555010/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ColdYoungGuy](https://huggingface.co/ColdYoungGuy) for evaluating this model. |
OccultMC | null | null | null | false | null | false | OccultMC/AndrewTate | 2022-09-21T07:40:52.000Z | null | false | a9f9a231732cac33471e8a2efbeae114859ef1d3 | [] | [
"license:cc"
] | https://huggingface.co/datasets/OccultMC/AndrewTate/resolve/main/README.md | ---
license: cc
---
|
HighSodium | null | null | null | false | 1 | false | HighSodium/inflation | 2022-09-21T08:07:12.000Z | null | false | 6a940d4970bd3b248c1d6e3f35bd59c7befdfade | [] | [
"license:odbl"
] | https://huggingface.co/datasets/HighSodium/inflation/resolve/main/README.md | ---
license: odbl
---
|
Harrietofthesea | null | null | null | false | null | false | Harrietofthesea/public_test | 2022-09-21T08:31:29.000Z | null | false | a8f7d8754929868c25e7139e643b59a41dc19964 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Harrietofthesea/public_test/resolve/main/README.md | ---
license: cc
---
|
sdhj | null | null | null | false | null | false | sdhj/wwww | 2022-09-21T09:47:48.000Z | null | false | af9881620d1112fee620f0b76a93233233d0e017 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/sdhj/wwww/resolve/main/README.md | ---
license: apache-2.0
---
|
sanchit-gandhi | null | null | null | false | 33 | false | sanchit-gandhi/earnings22_split | 2022-09-23T09:44:26.000Z | null | false | f9fb35f4134e32b9c8100199d949398fd6d08a5f | [] | [] | https://huggingface.co/datasets/sanchit-gandhi/earnings22_split/resolve/main/README.md | We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by `source_id`:
Validation: 4420696 4448760 4461799 4469836 4473238 4482110
Test: 4432298 4450488 4470290 4479741 4483338 4485244
Train: remainder
An official script for processing these splits will be released shortly.
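Until that script is available, the partition logic can be sketched in a few lines. This is an illustrative sketch only (the function name is ours; the id lists are copied from this card):

```python
# Illustrative sketch: assign each earnings22 example to a split by its
# source_id, using the split assignments listed above.
VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

def assign_split(source_id: str) -> str:
    """Return 'validation', 'test', or 'train' for a given source_id."""
    if source_id in VALIDATION_IDS:
        return "validation"
    if source_id in TEST_IDS:
        return "test"
    return "train"
```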
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-e42237-1523455078 | 2022-09-21T18:28:50.000Z | null | false | 16c96aacfd2f858c7577cd1944a8e67992036e8c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-e42237-1523455078/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/pegasus-x-large-book-summary
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
AIRI-Institute | null | null | null | false | null | false | AIRI-Institute/I4TALK_DATA | 2022-09-21T11:51:05.000Z | null | false | b87e432d0decd12b0de10ce6c92a3c75536f2b3f | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/AIRI-Institute/I4TALK_DATA/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
Adapting | null | null | null | false | 1 | false | Adapting/chinese_biomedical_NER_dataset | 2022-09-21T18:21:15.000Z | null | false | 7c1cc64b8570c0d0882b285941fd625c4bbb886c | [] | [
"license:mit"
] | https://huggingface.co/datasets/Adapting/chinese_biomedical_NER_dataset/resolve/main/README.md | ---
license: mit
---
# 1 Source
Source: https://github.com/alibaba-research/ChineseBLUE
# 2 Definition of the tagset
```python
tag_set = [
'B_手术',
'I_疾病和诊断',
'B_症状',
'I_解剖部位',
'I_药物',
'B_影像检查',
'B_药物',
'B_疾病和诊断',
'I_影像检查',
'I_手术',
'B_解剖部位',
'O',
'B_实验室检验',
'I_症状',
'I_实验室检验'
]
tag2id = lambda tag: tag_set.index(tag)
id2tag = lambda id: tag_set[id]
```
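For a large corpus, the `list.index` lookup above is O(n) per tag; a hypothetical alternative (not part of the original dataset code) is to precompute dict mappings, which behave identically but run in constant time:

```python
# Sketch: constant-time tag/id mappings built from the same tag_set as above.
tag_set = [
    'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物',
    'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术',
    'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验',
]
tag2id = {tag: i for i, tag in enumerate(tag_set)}  # tag -> integer id
id2tag = {i: tag for i, tag in enumerate(tag_set)}  # integer id -> tag
```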
# 3 Citation
To use this dataset in your work, please cite:
Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining
```
@article{zhang2020conceptualized,
title={Conceptualized Representation Learning for Chinese Biomedical Text Mining},
author={Zhang, Ningyu and Jia, Qianghuai and Yin, Kangping and Dong, Liang and Gao, Feng and Hua, Nengwei},
journal={arXiv preprint arXiv:2008.10813},
year={2020}
}
```
|
myt517 | null | null | null | false | 1 | false | myt517/GID_benchmark | 2022-09-21T14:06:09.000Z | null | false | 9377b07c09c9e734468cb85f7a58b16c46aa264c | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/myt517/GID_benchmark/resolve/main/README.md | ---
license: apache-2.0
---
|
ArneBinder | null | null | null | false | 18 | false | ArneBinder/xfund | 2022-09-21T15:12:34.000Z | null | false | b52c6bf1f753da7c473f7954708a160b26fcaa6e | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/ArneBinder/xfund/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-bf74a8-1524255094 | 2022-09-21T18:43:44.000Z | null | false | 51d9269a2818c7fe39b9380efc9a62f40a8e5b2e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-bf74a8-1524255094/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
StonyBrookNLP | null | null | null | false | 1 | false | StonyBrookNLP/tellmewhy | 2022-09-29T13:05:59.000Z | null | false | 94c5862e240eb8778c22d9badd50c5a1e14a5225 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text2text-generation"
] | https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: null
pretty_name: TellMeWhy
---
# Dataset Card for TellMeWhy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://stonybrooknlp.github.io/tellmewhy/
- **Repository:** https://github.com/StonyBrookNLP/tellmewhy
- **Paper:** https://aclanthology.org/2021.findings-acl.53/
- **Leaderboard:** None
- **Point of Contact:** [Yash Kumar Lal](mailto:ylal@cs.stonybrook.edu)
### Dataset Summary
TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described.
### Supported Tasks and Leaderboards
The dataset is designed to test models' ability to answer why-questions when bound by local context.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a story, a question, and a crowdsourced answer to that question. Each instance also indicates whether the question's answer is implicit or explicitly stated in the text. If applicable, it also contains Likert scores (-2 to 2) for the answer's grammaticality and validity in the given context.
```
{
"narrative":"Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.",
"question":"Why did Cam order a pizza?",
"original_sentence_for_question":"Cam ordered a pizza and took it home.",
"narrative_lexical_overlap":0.3333333333,
"is_ques_answerable":"Not Answerable",
"answer":"Cam was hungry.",
"is_ques_answerable_annotator":"Not Answerable",
"original_narrative_form":[
"Cam ordered a pizza and took it home.",
"He opened the box to take out a slice.",
"Cam discovered that the store did not cut the pizza for him.",
"He looked for his pizza cutter but did not find it.",
"He had to use his chef knife to cut a slice."
],
"question_meta":"rocstories_narrative_41270_sentence_0_question_0",
"helpful_sentences":[
],
"human_eval":false,
"val_ann":[
],
"gram_ann":[
]
}
```
### Data Fields
- `question_meta` - Unique meta for each question in the corpus
- `narrative` - Full narrative from ROCStories. Used as the context with which the question and answer are associated
- `question` - Why question about an action or event in the narrative
- `answer` - Crowdsourced answer to the question
- `original_sentence_for_question` - Sentence in narrative from which question was generated
- `narrative_lexical_overlap` - Unigram overlap of answer with the narrative
- `is_ques_answerable` - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models.
- `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative.
- `original_narrative_form` - ROCStories narrative as an array of its sentences
- `human_eval` - Indicates whether a question belongs to the designated human-evaluation subset of the test set. Models should be evaluated on their answers to these questions using the human evaluation suite released by the authors, who advocate for this human evaluation as the correct way to track progress on this dataset.
- `val_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human_eval flag is False.
- `gram_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human_eval flag is False.
### Data Splits
The data is split into training, validation, and test sets.
| Train | Valid | Test |
| ------ | ----- | ----- |
| 23964 | 2992 | 3563 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
ROCStories corpus (Mostafazadeh et al, 2016)
#### Initial Data Collection and Normalization
ROCStories was used to create why-questions related to actions and events in the stories.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were shown a story and an associated why-question and asked to answer it. Three answers were collected for each question. For a small subset of questions, answer quality was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset.
#### Who are the annotators?
Amazon Mechanical Turk workers
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Evaluation
To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set has been completed, select the answers on which human evaluation needs to be performed by keeping the questions (one answer per question; deduplication may be needed) in the test set where the `human_eval` flag is set to `True`. This subset can then be used to complete the requisite evaluation on TellMeWhy.
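That selection step can be illustrated over plain records. This is a minimal sketch on toy data (the field names match the card; the records themselves are invented):

```python
# Toy records mimicking the test-set fields described above.
records = [
    {"question_meta": "q1", "answer": "a", "human_eval": True},
    {"question_meta": "q1", "answer": "b", "human_eval": True},   # duplicate question
    {"question_meta": "q2", "answer": "c", "human_eval": False},  # not in human-eval subset
]

# Keep only questions flagged for human evaluation, one answer per question.
seen = set()
human_eval_subset = []
for r in records:
    if r["human_eval"] and r["question_meta"] not in seen:
        seen.add(r["question_meta"])
        human_eval_subset.append(r)
```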
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{lal-etal-2021-tellmewhy,
title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives",
author = "Lal, Yash Kumar and
Chambers, Nathanael and
Mooney, Raymond and
Balasubramanian, Niranjan",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.53",
doi = "10.18653/v1/2021.findings-acl.53",
pages = "596--610",
}
```
### Contributions
Thanks to [@yklal95](https://github.com/ykl7) for adding this dataset. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-169e67-1524755111 | 2022-09-21T17:48:48.000Z | null | false | 0af0ec66aa94b834cd671169833768ef6063285e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-169e67-1524755111/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: mathemakitten/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: mathemakitten/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
MvsSrs | null | null | null | false | 1 | false | MvsSrs/quistest | 2022-09-26T21:09:58.000Z | null | false | c4d0527ce23b301ba6b56bcf1c32d302d75c9bfb | [] | [
"license:unknown"
] | https://huggingface.co/datasets/MvsSrs/quistest/resolve/main/README.md | ---
license: unknown
---
|
PotatoGod | null | null | null | false | 1 | false | PotatoGod/testing | 2022-09-22T09:19:25.000Z | null | false | 71fce68bfcbd42b9ac56f691818a957ef3c8f4fa | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/PotatoGod/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
zpn | null | @ARTICLE{Kim2016-sz,
title = "{PubChem} Substance and Compound databases",
author = "Kim, Sunghwan and Thiessen, Paul A and Bolton, Evan E and Chen,
Jie and Fu, Gang and Gindulyte, Asta and Han, Lianyi and He,
Jane and He, Siqian and Shoemaker, Benjamin A and Wang, Jiyao
and Yu, Bo and Zhang, Jian and Bryant, Stephen H",
abstract = "PubChem (https://pubchem.ncbi.nlm.nih.gov) is a public
repository for information on chemical substances and their
biological activities, launched in 2004 as a component of the
Molecular Libraries Roadmap Initiatives of the US National
Institutes of Health (NIH). For the past 11 years, PubChem has
grown to a sizable system, serving as a chemical information
resource for the scientific research community. PubChem consists
of three inter-linked databases, Substance, Compound and
BioAssay. The Substance database contains chemical information
deposited by individual data contributors to PubChem, and the
Compound database stores unique chemical structures extracted
from the Substance database. Biological activity data of
chemical substances tested in assay experiments are contained in
the BioAssay database. This paper provides an overview of the
PubChem Substance and Compound databases, including data sources
and contents, data organization, data submission using PubChem
Upload, chemical structure standardization, web-based interfaces
for textual and non-textual searches, and programmatic access.
It also gives a brief description of PubChem3D, a resource
derived from theoretical three-dimensional structures of
compounds in PubChem, as well as PubChemRDF, Resource
Description Framework (RDF)-formatted PubChem data for data
sharing, analysis and integration with information contained in
other databases.",
journal = "Nucleic Acids Res.",
publisher = "Oxford University Press (OUP)",
volume = 44,
number = "D1",
pages = "D1202--13",
month = jan,
year = 2016,
language = "en"
} | This dataset contains ~100M molecules from PubChem, with their SMILES and SELFIES representations. | false | 10,715 | false | zpn/pubchem_selfies | 2022-10-04T16:15:19.000Z | null | false | d27fa3d9aea71a1de1cfc280bb534887b05f510d | [] | [
"license:openrail"
] | https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/README.md | ---
license: openrail
---
This dataset consists of Pubchem molecules downloaded from: https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/
There are in total ~85M compounds for training, with an additional ~10M held out for validation and testing. |
mehr4n-m | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian translation dataset (English -> Persian). | false | 6 | false | mehr4n-m/parsinlu-en-fa-structrual-edit | 2022-11-10T22:59:16.000Z | null | false | 42a28644fe76522463f587f3719cab6a920f86a5 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/mehr4n-m/parsinlu-en-fa-structrual-edit/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-f407ed-1527355152 | 2022-09-21T22:50:42.000Z | null | false | 8852346e4b76d1f815e1b272c840d45d7dc08ea8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-f407ed-1527355152/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
phaticusthiccy | null | null | null | false | null | false | phaticusthiccy/avatar | 2022-09-21T22:40:14.000Z | null | false | 3af942a32b98c8e16043ec591f92f5c368ed2953 | [] | [] | https://huggingface.co/datasets/phaticusthiccy/avatar/resolve/main/README.md |
# Avatar Dataset
A raw collection of 18,000 sample images created for [Avatar AI](https://t.me/AvatarAIBot).
## Features
- 256X256 Medium Quality
- Micro Bloom
|
NathanGavenski | null | null | null | false | 1 | false | NathanGavenski/How-Resilient-are-Imitation-Learning-Methods-to-Sub-Optimal-Experts | 2022-10-25T14:48:38.000Z | null | false | dc30b042b8caa6fc0cdbe7511e1867919f10fd80 | [] | [
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"license:mit",
"size_categories:100B<n<1T",
"source_datasets:original",
"task_categories:other",
"tags:Imitation Learning",
"tags:Expert Trajectories",
"tags:Classic Control"
] | https://huggingface.co/datasets/NathanGavenski/How-Resilient-are-Imitation-Learning-Methods-to-Sub-Optimal-Experts/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language: []
license:
- mit
multilinguality: []
size_categories:
- 100B<n<1T
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: How Resilient are Imitation Learning Methods to Sub-Optimal Experts?
tags:
- Imitation Learning
- Expert Trajectories
- Classic Control
---
# How Resilient are Imitation Learning Methods to Sub-Optimal Experts?
## Related Work
Trajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]()
The code that uses this data is on GitHub: https://github.com/NathanGavenski/How-resilient-IL-methods-are
# Structure
These trajectories were generated using [Stable Baselines](https://stable-baselines.readthedocs.io/en/master/).
Each file is a dictionary of a set of trajectories with the following keys:
* actions: the action taken at timestep `t`
* obs: the current state at timestep `t`
* rewards: the reward received after taking the action at timestep `t`
* episode_returns: the aggregated reward of each episode (each file consists of 5000 runs)
* episode_Starts: whether that `obs` is the first state of an episode (boolean list)
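Given these flat per-timestep arrays, the episode-start flags are enough to recover individual episodes. A minimal sketch on toy data (variable names are ours, not the exact keys of the released files):

```python
# Illustrative only: split flat trajectory arrays into per-episode lists
# using boolean episode-start flags, as described above.
actions        = [0, 1, 0, 1, 1]
episode_starts = [True, False, False, True, False]

episodes = []
current = []
for action, start in zip(actions, episode_starts):
    if start and current:        # a new episode begins; flush the previous one
        episodes.append(current)
        current = []
    current.append(action)
if current:                      # flush the final episode
    episodes.append(current)
```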
## Citation Information
```
@inproceedings{gavenski2022how,
title={How Resilient are Imitation Learning Methods to Sub-Optimal Experts?},
author={Nathan Gavenski and Juarez Monteiro and Adilson Medronha and Rodrigo Barros},
booktitle={2022 Brazilian Conference on Intelligent Systems (BRACIS)},
year={2022},
organization={IEEE}
}
```
## Contact:
- [Nathan Schneider Gavenski](nathan.gavenski@edu.pucrs.br)
- [Juarez Monteiro](juarez.santos@edu.pucrs.br)
- [Adilson Medronha](adilson.medronha@edu.pucrs.br)
- [Rodrigo C. Barros](rodrigo.barros@pucrs.br)
|
mafzal | null | null | null | false | 1 | false | mafzal/SOAP-notes | 2022-09-22T01:39:39.000Z | null | false | fc13ca9b1583fd4f16359a22cc7053eeb6d75f76 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/mafzal/SOAP-notes/resolve/main/README.md | ---
license: apache-2.0
---
|
dataDRVN | null | null | null | false | 1 | false | dataDRVN/dog-wesley | 2022-09-22T03:52:54.000Z | null | false | cee49c3f84bb914fbde672730c614a1cb2bff03f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/dataDRVN/dog-wesley/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-scan-simple-0b9bd3-1528755178 | 2022-09-22T04:29:45.000Z | null | false | aba349e6b3a4d06820576289db881e37f2d5c5e3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scan"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-scan-simple-0b9bd3-1528755178/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scan
eval_info:
task: summarization
model: ARTeLab/it5-summarization-fanpage
metrics: []
dataset_name: scan
dataset_config: simple
dataset_split: train
col_mapping:
text: commands
target: actions
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: scan
* Config: simple
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@test_yoon_0921](https://huggingface.co/test_yoon_0921) for evaluating this model. |
mehr4n-m | null | null | null | false | null | false | mehr4n-m/autotrain-data-nllb_600_ft | 2022-09-22T05:54:15.000Z | null | false | 8381f2d7cd133cc20378a943ae802a21e0dd1a11 | [] | [] | https://huggingface.co/datasets/mehr4n-m/autotrain-data-nllb_600_ft/resolve/main/README.md | ---
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: nllb_600_ft
## Dataset Description
This dataset has been automatically processed by AutoTrain for project nllb_600_ft.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "772",
"feat_URL": "https://en.wikivoyage.org/wiki/Apia",
"feat_domain": "wikivoyage",
"feat_topic": "Travel",
"feat_has_image": "0",
"feat_has_hyperlink": "0",
"text": "All the ships were sunk, except for one British cruiser. Nearly 200 American and German lives were lost.",
"target": "\u0628\u0647\u200c\u062c\u0632 \u06cc\u06a9 \u06a9\u0634\u062a\u06cc \u062c\u0646\u06af\u06cc \u0627\u0646\u06af\u0644\u06cc\u0633\u06cc \u0647\u0645\u0647 \u06a9\u0634\u062a\u06cc\u200c\u0647\u0627 \u063a\u0631\u0642 \u0634\u062f\u0646\u062f\u060c \u0648 \u0646\u0632\u062f\u06cc\u06a9 \u0628\u0647 200 \u0646\u0641\u0631 \u0622\u0645\u0631\u06cc\u06a9\u0627\u06cc\u06cc \u0648 \u0622\u0644\u0645\u0627\u0646\u06cc \u062c\u0627\u0646 \u062e\u0648\u062f \u0631\u0627 \u0627\u0632 \u062f\u0633\u062a \u062f\u0627\u062f\u0646\u062f."
},
{
"feat_id": "195",
"feat_URL": "https://en.wikinews.org/wiki/Mitt_Romney_wins_Iowa_Caucus_by_eight_votes_over_surging_Rick_Santorum",
"feat_domain": "wikinews",
"feat_topic": "Politics",
"feat_has_image": "0",
"feat_has_hyperlink": "0",
"text": "Bachmann, who won the Ames Straw Poll in August, decided to end her campaign.",
"target": "\u0628\u0627\u062e\u0645\u0646\u060c \u06a9\u0647 \u062f\u0631 \u0645\u0627\u0647 \u0622\u06af\u0648\u0633\u062a \u0628\u0631\u0646\u062f\u0647 \u0646\u0638\u0631\u0633\u0646\u062c\u06cc \u0622\u0645\u0633 \u0627\u0633\u062a\u0631\u0627\u0648 \u0634\u062f\u060c \u062a\u0635\u0645\u06cc\u0645 \u06af\u0631\u0641\u062a \u06a9\u0645\u067e\u06cc\u0646 \u062e\u0648\u062f \u0631\u0627 \u062e\u0627\u062a\u0645\u0647 \u062f\u0647\u062f."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"feat_URL": "Value(dtype='string', id=None)",
"feat_domain": "Value(dtype='string', id=None)",
"feat_topic": "Value(dtype='string', id=None)",
"feat_has_image": "Value(dtype='string', id=None)",
"feat_has_hyperlink": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1608 |
| valid | 402 |
|
cjvt | null | @InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
} | The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenisation,
sentence segmentation, morphosyntactic tagging, and lemmatisation. About half of the corpus is also manually annotated
with syntactic dependencies, named entities, and verbal multiword expressions. About a quarter of the corpus is also
annotated with semantic role labels. The morphosyntactic tags and syntactic dependencies are included both in the
JOS/MULTEXT-East framework, as well as in the framework of Universal Dependencies. | false | 9 | false | cjvt/ssj500k | 2022-10-21T07:34:07.000Z | null | false | a5fc1ade9a63d6125d8150190c216858ed008034 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"language:sl",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"task_categories:token-classification",
"task_ids:named-entit... | https://huggingface.co/datasets/cjvt/ssj500k/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
pretty_name: ssj500k
tags:
- semantic-role-labeling
- multiword-expression-detection
---
# Dataset Card for ssj500k
**Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.
### Dataset Summary
The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:
- named entity recognition (config `named_entity_recognition`)
- dependency parsing (*), Universal Dependencies style (config `dependency_parsing_ud`)
- dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`)
- semantic role labeling (config `semantic_role_labeling`)
- multi-word expressions (config `multiword_expressions`)
If you want to load all the data along with their partial annotations, please use the config `all_data`.
\* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._
### Supported Tasks and Leaderboards
Sentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset (using the config `all_data`):
```
{
'id_doc': 'ssj1',
'idx_par': 0,
'idx_sent': 0,
'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'],
'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'],
'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'],
'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'],
'has_ne_ann': True,
'has_ud_dep_ann': True,
'has_jos_dep_ann': True,
'has_srl_ann': True,
'has_mwe_ann': True,
'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5],
'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'],
'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1],
'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'],
'srl_info': [
{'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'},
{'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'},
{'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'},
{'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'},
{'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'},
{'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'}
],
'mwe_info': [
{'type': 'IRV', 'word_indices': [7, 8]}
]
}
```
### Data Fields
The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs.
- `id_doc`: a string containing the identifier of the document;
- `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of;
- `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph;
- `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149;
- `words`: a list of strings containing the words in the current sentence;
- `lemmas`: a list of strings containing the lemmas in the current sentence;
- `msds`: a list of strings containing the morphosyntactic description of words in the current sentence;
- `has_ne_ann`: a bool indicating whether the current example has named entities annotated;
- `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated;
- `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated;
- `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated;
- `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated;
- `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`;
- `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`;
- `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`;
- `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with `-2`;
- `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`;
- `srl_info`: a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty;
- `mwe_info`: a list of dicts, each containing word indices and the type of a multi-word expression;
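As a sketch of how the index-based annotations tie together (illustrative only, not part of the dataset tooling), the word indices in `srl_info` can be resolved against the `words` list to obtain readable (head, argument, role) triples:

```python
def srl_triples(words, srl_info):
    """Resolve the word indices in srl_info into
    (head word, argument word, role) triples."""
    return [(words[entry["idx_head"]], words[entry["idx_arg"]], entry["role"])
            for entry in srl_info]

# Abridged from the sample instance above.
words = ["Tistega", "večera", "sem", "preveč", "popil"]
srl_info = [{"idx_arg": 1, "idx_head": 4, "role": "TIME"},
            {"idx_arg": 3, "idx_head": 4, "role": "QUANT"}]
print(srl_triples(words, srl_info))
# → [('popil', 'večera', 'TIME'), ('popil', 'preveč', 'QUANT')]
```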
#### Data fields in 'named_entity_recognition'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags']
```
#### Data fields in 'dependency_parsing_ud'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel']
```
#### Data fields in 'dependency_parsing_jos'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel']
```
#### Data fields in 'semantic_role_labeling'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info']
```
#### Data fields in 'multiword_expressions'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info']
```
## Additional Information
### Dataset Curators
Simon Krek; et al. (please see http://hdl.handle.net/11356/1434 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
The paper describing the dataset:
```
@InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
}
```
The resource itself:
```
@misc{krek2021clarinssj500k,
title = {Training corpus ssj500k 2.3},
author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
url = {http://hdl.handle.net/11356/1434},
year = {2021} }
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. |
christianwbsn | null | null | null | false | 2 | false | christianwbsn/indotacos | 2022-09-22T06:47:12.000Z | null | false | 9f0ee7856c82c2e53f74187e8e6f62bf5f401806 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/christianwbsn/indotacos/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
biomegix | null | null | null | false | 49 | false | biomegix/soap-notes | 2022-09-22T08:20:42.000Z | null | false | 69c6690b6b195935df66f1942f221dd459f561cb | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/biomegix/soap-notes/resolve/main/README.md | ---
license: apache-2.0
---
|
Nadav | null | null | null | false | 34 | false | Nadav/runaway_scans | 2022-09-22T08:57:09.000Z | null | false | 39256ba0c7edbf7fa945f2fcf44ee1a42c5a89d1 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Nadav/runaway_scans/resolve/main/README.md | ---
license: afl-3.0
---
|
detection-datasets | null | null | null | false | 328 | false | detection-datasets/fashionpedia | 2022-09-22T13:22:02.000Z | fashionpedia | false | 80845435ce686b8a9dbf70a05452fbfb8e09cdd7 | [] | [
"arxiv:2004.12276",
"task_categories:object-detection",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:object-detection",
"tags:fashion",
"tags:computer-vision"
] | https://huggingface.co/datasets/detection-datasets/fashionpedia/resolve/main/README.md | ---
pretty_name: Fashionpedia
task_categories:
- object-detection
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- object-detection
- fashion
- computer-vision
paperswithcode_id: fashionpedia
---
# Dataset Card for Fashionpedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fashionpedia.github.io/home/index.html
- **Repository:** https://github.com/cvdfoundation/fashionpedia
- **Paper:** https://arxiv.org/abs/2004.12276
### Dataset Summary
Fashionpedia is a dataset mapping out the visual aspects of the fashion world.
From the paper:
> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.
Fashionpedia has:
- 46781 images
- 342182 bounding-boxes
### Supported Tasks
- Object detection
- Image classification
### Languages
All annotations use English as the primary language.
## Dataset Structure
The dataset is structured as follows:
```py
DatasetDict({
train: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 45623
})
val: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 1158
})
})
```
### Data Instances
An example of the data for one image is:
```py
{'image_id': 23,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=682x1024>,
'width': 682,
'height': 1024,
'objects': {'bbox_id': [150311, 150312, 150313, 150314],
'category': [23, 23, 33, 10],
'bbox': [[445.0, 910.0, 505.0, 983.0],
[239.0, 940.0, 284.0, 994.0],
[298.0, 282.0, 386.0, 352.0],
[210.0, 282.0, 448.0, 665.0]],
'area': [1422, 843, 373, 56375]}}
```
With the type of each field being defined as:
```py
{'image_id': Value(dtype='int64'),
'image': Image(decode=True),
'width': Value(dtype='int64'),
'height': Value(dtype='int64'),
'objects': Sequence(feature={
'bbox_id': Value(dtype='int64'),
'category': ClassLabel(num_classes=46, names=['shirt, blouse', 'top, t-shirt, sweatshirt', 'sweater', 'cardigan', 'jacket', 'vest', 'pants', 'shorts', 'skirt', 'coat', 'dress', 'jumpsuit', 'cape', 'glasses', 'hat', 'headband, head covering, hair accessory', 'tie', 'glove', 'watch', 'belt', 'leg warmer', 'tights, stockings', 'sock', 'shoe', 'bag, wallet', 'scarf', 'umbrella', 'hood', 'collar', 'lapel', 'epaulette', 'sleeve', 'pocket', 'neckline', 'buckle', 'zipper', 'applique', 'bead', 'bow', 'flower', 'fringe', 'ribbon', 'rivet', 'ruffle', 'sequin', 'tassel']),
'bbox': Sequence(feature=Value(dtype='float64'), length=4),
'area': Value(dtype='int64')},
length=-1)}
```
### Data Fields
The dataset has the following fields:
- `image_id`: Unique numeric ID of the image.
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `width`: Image width.
- `height`: Image height.
- `objects`: A dictionary containing bounding box metadata for the objects in the image:
- `bbox_id`: Unique numeric ID of the bounding box annotation.
- `category`: The object’s category.
- `area`: The area of the bounding box.
- `bbox`: The object’s bounding box (in the Pascal VOC format)
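As an illustrative sketch (not part of the official tooling), a Pascal VOC box `[xmin, ymin, xmax, ymax]` can be converted to COCO's `[x, y, width, height]` layout with a small arithmetic step:

```python
def voc_to_coco(bbox):
    """Convert a Pascal VOC box [xmin, ymin, xmax, ymax]
    into a COCO-style box [x, y, width, height]."""
    xmin, ymin, xmax, ymax = bbox
    return [xmin, ymin, xmax - xmin, ymax - ymin]

# Bounding box taken from the sample instance above.
print(voc_to_coco([445.0, 910.0, 505.0, 983.0]))  # → [445.0, 910.0, 60.0, 73.0]
```

Note that the `area` field stores the annotated object area, which need not equal the bounding-box width × height.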
### Data Splits
| | Train | Validation | Test |
|----------------|--------|------------|------|
| Images | 45623 | 1158 | 0 |
| Bounding boxes | 333401 | 8781 | 0 |
## Additional Information
### Licensing Information
Fashionpedia is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{jia2020fashionpedia,
title={Fashionpedia: Ontology, Segmentation, and an Attribute Localization Dataset},
  author={Jia, Menglin and Shi, Mengyun and Sirotenko, Mikhail and Cui, Yin and Cardie, Claire and Hariharan, Bharath and Adam, Hartwig and Belongie, Serge},
booktitle={European Conference on Computer Vision (ECCV)},
year={2020}
}
```
### Contributions
Thanks to [@blinjrm](https://github.com/blinjrm) for adding this dataset.
|
shreya2524 | null | null | null | false | 1 | false | shreya2524/housePrice | 2022-09-22T11:13:35.000Z | null | false | d2ece80b8a94b9c86ef694ffd5682e196bc98991 | [] | [
"license:mit"
] | https://huggingface.co/datasets/shreya2524/housePrice/resolve/main/README.md | ---
license: mit
---
|
jchenyu | null | null | null | false | null | false | jchenyu/t5_large_supervised_proportional_1M | 2022-09-22T11:35:08.000Z | null | false | 871826e171a2cf997849318707f1a6970bc53be6 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/jchenyu/t5_large_supervised_proportional_1M/resolve/main/README.md | ---
license: apache-2.0
---
This dataset was created by randomly sampling 1M documents from [the large supervised proportional mixture](https://github.com/google-research/text-to-text-transfer-transformer/blob/733428af1c961e09ea0b7292ad9ac9e0e001f8a5/t5/data/mixtures.py#L193) from the [T5](https://github.com/google-research/text-to-text-transfer-transformer) repository.
The code to produce this sampled dataset can be found [here](https://github.com/chenyu-jiang/text-to-text-transfer-transformer/blob/main/prepare_dataset.py). |
thesofakillers | null | null | null | false | 1 | false | thesofakillers/SemCor | 2022-10-12T08:46:28.000Z | null | false | 2db8cc29752777441ed3bed7ca97352171059550 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:expert-generated",
"license:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:word sense disambiguation",
"tags:semcor",
"tags:wordnet",
"task_categories:text-classifica... | https://huggingface.co/datasets/thesofakillers/SemCor/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: SemCor
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- word sense disambiguation
- semcor
- wordnet
task_categories:
- text-classification
task_ids:
- topic-classification
---
# Dataset Card for SemCor
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://web.eecs.umich.edu/~mihalcea/downloads.html#semcor
- **Repository:**
- **Paper:** https://aclanthology.org/H93-1061/
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to
WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton
University.
Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot
be retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to
symbolize a missing sense in WordNet 3.0.
The automatic mapping was performed within the Language and Information
Technologies lab at UNT, by Rada Mihalcea (rada@cs.unt.edu).
THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES,
EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO
REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
PURPOSE.
In agreement with the license from Princeton University, permission to use,
copy, modify and distribute this database for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with the Princeton
copyright notice and statements, including the disclaimer, and that the same
appear on ALL copies of the database, including modifications that you make for
internal use or for distribution.
Both LICENSE and README files distributed with the SemCor 1.6 package are
included in the current distribution of SemCor 3.0.
### Languages
English
## Additional Information
### Licensing Information
WordNet Release 1.6 Semantic Concordance Release 1.6
This software and database is being provided to you, the LICENSEE, by
Princeton University under the following license. By obtaining, using
and/or copying this software and database, you agree that you have
read, understood, and will comply with these terms and conditions.:
Permission to use, copy, modify and distribute this software and
database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with
the following copyright notice and statements, including the disclaimer,
and that the same appear on ALL copies of the software, database and
documentation, including modifications that you make for internal
use or for distribution.
WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY
OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR
OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in
advertising or publicity pertaining to distribution of the software
and/or database. Title to copyright in this software, database and
any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.
### Citation Information
```bibtex
@inproceedings{miller-etal-1993-semantic,
title = "A Semantic Concordance",
author = "Miller, George A. and
Leacock, Claudia and
Tengi, Randee and
Bunker, Ross T.",
booktitle = "{H}uman {L}anguage {T}echnology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993",
year = "1993",
url = "https://aclanthology.org/H93-1061",
}
```
### Contributions
Thanks to [@thesofakillers](https://github.com/thesofakillers) for adding this
dataset and converting it from XML to CSV.
|
EMBO | null | @Unpublished{
huggingface: dataset,
title = {SourceData NLP},
authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
year={2021}
} | This dataset is based on the SourceData database and is intented to facilitate training of NLP tasks in the cell and molecualr biology domain. | false | 1 | false | EMBO/sd-character-level-ner | 2022-10-23T06:41:24.000Z | null | false | 63aac2cc0638acf1d69b9e1fb0a1b615da567550 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_i... | https://huggingface.co/datasets/EMBO/sd-character-level-ner/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
- structure-prediction
task_ids:
- multi-class-classification
- named-entity-recognition
- parsing
---
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for sd-nlp](#dataset-card-for-sd-nlp)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not tokenized but only split into words. Users can therefore use it to fine-tune other models.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (B-PANEL_START) of these segments and allows training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically, the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines.
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the measured variables and are the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' geneproducts, entities used as common baseline across samples, or specify the context of the experiment (cellular system, species, etc...).
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ',
'labels': [0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
13,
14,
14,
14,
14,
14,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
3,
4,
4,
4,
4,
4,
4,
4,
4,
0,
0,
0,
0,
5,
6,
6,
6,
6,
6,
6,
6,
6,
6,
6,
0,
0,
3,
4,
4,
4,
4,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
7,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
3,
4,
4,
4,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
2,
2,
2,
2,
2,
0,
0,
0,
0,
0,
0,
0,
0,
0,
7,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
8,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0]}
```
### Data Fields
- `text`: `str` of the text
- `label_ids`: a dictionary composed of character-level lists of tag strings:
- `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible value in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
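For illustration, the integer ids in the `labels` example above can be decoded back into IOB2 tag strings. The id-to-tag correspondence used here (position in the list of entity-type tags given above) is an assumption made for this sketch, not a documented guarantee of the dataset:

```python
# Hypothetical helper: map integer label ids back to the IOB2 entity-type tags
# listed in this card. The id -> tag correspondence (list position) is assumed
# from the order of the tag list above and may differ from the actual encoding.
ENTITY_TAGS = [
    "O",
    "I-SMALL_MOLECULE", "B-SMALL_MOLECULE",
    "I-GENEPROD", "B-GENEPROD",
    "I-SUBCELLULAR", "B-SUBCELLULAR",
    "I-CELL", "B-CELL",
    "I-TISSUE", "B-TISSUE",
    "I-ORGANISM", "B-ORGANISM",
    "I-EXP_ASSAY", "B-EXP_ASSAY",
]

def decode_labels(label_ids):
    """Translate a list of integer label ids into IOB2 tag strings."""
    return [ENTITY_TAGS[i] for i in label_ids]

print(decode_labels([0, 13, 14, 14]))
```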
### Data Splits
```python
DatasetDict({
train: Dataset({
features: ['text', 'labels'],
num_rows: 66085
})
test: Dataset({
features: ['text', 'labels'],
num_rows: 8225
})
validation: Dataset({
features: ['text', 'labels'],
num_rows: 7948
})
})
```
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. It can be used to train character-based models for text segmentation and named entity recognition.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize entities with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
### Licensing Information
CC BY 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset. |
detection-datasets | null | null | null | false | 67 | false | detection-datasets/fashionpedia_4_categories | 2022-09-22T14:45:18.000Z | fashionpedia | false | 4a706ce4d084ae644acb17bac7fd0919e493dbeb | [] | [
"task_categories:object-detection",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:fashionpedia",
"tags:object-detection",
"tags:fashion",
"tags:computer-vision"
] | https://huggingface.co/datasets/detection-datasets/fashionpedia_4_categories/resolve/main/README.md | ---
pretty_name: Fashionpedia_4_categories
task_categories:
- object-detection
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- fashionpedia
tags:
- object-detection
- fashion
- computer-vision
paperswithcode_id: fashionpedia
---
# Dataset Card for Fashionpedia_4_categories
This dataset is a variation of the fashionpedia dataset available [here](https://huggingface.co/datasets/detection-datasets/fashionpedia), with 2 key differences:
- It contains only 4 categories:
- Clothing
- Shoes
- Bags
- Accessories
- New splits were created:
  - Train: 90% of the images
  - Val: 5%
  - Test: 5%
The goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset.
This dataset was created using the `detection_datasets` library ([GitHub](https://github.com/blinjrm/detection-datasets), [PyPI](https://pypi.org/project/detection-datasets/)), you can check here the full creation [notebook](https://blinjrm.github.io/detection-datasets/tutorials/2_Transform/).
In a nutshell, the following mapping was applied:
```Python
mapping = {
'shirt, blouse': 'clothing',
'top, t-shirt, sweatshirt': 'clothing',
'sweater': 'clothing',
'cardigan': 'clothing',
'jacket': 'clothing',
'vest': 'clothing',
'pants': 'clothing',
'shorts': 'clothing',
'skirt': 'clothing',
'coat': 'clothing',
'dress': 'clothing',
'jumpsuit': 'clothing',
'cape': 'clothing',
'glasses': 'accessories',
'hat': 'accessories',
'headband, head covering, hair accessory': 'accessories',
'tie': 'accessories',
'glove': 'accessories',
'belt': 'accessories',
'tights, stockings': 'accessories',
'sock': 'accessories',
'shoe': 'shoes',
'bag, wallet': 'bags',
'scarf': 'accessories',
}
```
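Since the 4 target categories are exactly the distinct values of this dictionary, a quick check (derived only from the mapping as given) confirms the target set and how many source categories collapse into each:

```python
from collections import Counter

# The mapping reproduced verbatim from the card above.
mapping = {
    'shirt, blouse': 'clothing',
    'top, t-shirt, sweatshirt': 'clothing',
    'sweater': 'clothing',
    'cardigan': 'clothing',
    'jacket': 'clothing',
    'vest': 'clothing',
    'pants': 'clothing',
    'shorts': 'clothing',
    'skirt': 'clothing',
    'coat': 'clothing',
    'dress': 'clothing',
    'jumpsuit': 'clothing',
    'cape': 'clothing',
    'glasses': 'accessories',
    'hat': 'accessories',
    'headband, head covering, hair accessory': 'accessories',
    'tie': 'accessories',
    'glove': 'accessories',
    'belt': 'accessories',
    'tights, stockings': 'accessories',
    'sock': 'accessories',
    'shoe': 'shoes',
    'bag, wallet': 'bags',
    'scarf': 'accessories',
}

# Distinct target categories and how many source categories map to each.
print(sorted(set(mapping.values())))  # ['accessories', 'bags', 'clothing', 'shoes']
print(Counter(mapping.values()))
```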
As a result, annotations with no category equivalent in the mapping have been dropped. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-6f9c29-1531855204 | 2022-09-22T15:17:52.000Z | null | false | 2e7fdae1b8a959fa70bdadea392312869a02c744 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-6f9c29-1531855204/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: ['accuracy']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
benlipkin | null | null | null | false | 1 | false | benlipkin/braincode-neurips2022 | 2022-09-22T17:24:45.000Z | null | false | ad46e5b6677b9bd3aa6368c688dac0fc30d5e4ca | [] | [
"license:mit"
] | https://huggingface.co/datasets/benlipkin/braincode-neurips2022/resolve/main/README.md | ---
license: mit
---
Large file storage for the paper `Convergent Representations of Computer Programs in Human and Artificial Neural Networks` by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, and Una-May O'Reilly. The code repository is hosted on [GitHub](https://github.com/ALFA-group/code-representations-ml-brain). Check it out!
If you use this work, please cite:
```bibtex
@inproceedings{SrikantLipkin2022,
author = {Srikant, Shashank and Lipkin, Benjamin and Ivanova, Anna and Fedorenko, Evelina and O'Reilly, Una-May},
title = {Convergent Representations of Computer Programs in Human and Artificial Neural Networks},
year = {2022},
journal = {Advances in Neural Information Processing Systems},
}
``` |
MadhuLokanath | null | null | null | false | 1 | false | MadhuLokanath/New_Data | 2022-09-22T14:32:22.000Z | null | false | caba75ded0756e6f559f383b667112a74578f55e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/MadhuLokanath/New_Data/resolve/main/README.md | ---
license: apache-2.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-61187c-1532155205 | 2022-09-22T16:40:56.000Z | null | false | 9623e24bcc3da5ec8a7ab5ed6b194294d6a18358 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-61187c-1532155205/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: train
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
GGWON | null | null | null | false | null | false | GGWON/jnstyle | 2022-09-22T15:29:18.000Z | null | false | e7367bb69fc0a14d622f29f74d51efddea95b46a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/GGWON/jnstyle/resolve/main/README.md | ---
license: afl-3.0
---
|
nlp-guild | null | null | null | false | 1 | false | nlp-guild/intent-recognition-biomedical | 2022-09-22T16:13:44.000Z | null | false | 8178d8c493897dc0cf759dd21413c118c0423718 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nlp-guild/intent-recognition-biomedical/resolve/main/README.md | ---
license: mit
---
[source](https://github.com/wangle1218/KBQA-for-Diagnosis/tree/main/nlu/bert_intent_recognition/data) |
Azarthehulk | null | null | null | false | 1 | false | Azarthehulk/hand_written_dataset | 2022-09-22T16:57:28.000Z | null | false | 7eecec7624c6677ce4d20471785ab36a068da321 | [] | [
"license:other"
] | https://huggingface.co/datasets/Azarthehulk/hand_written_dataset/resolve/main/README.md | ---
license: other
---
|
aseem007 | null | null | null | false | 1 | false | aseem007/sd | 2022-11-06T13:10:58.000Z | null | false | b1ff4f0b5abaadff2684a551d01334e4b2133d59 | [] | [] | https://huggingface.co/datasets/aseem007/sd/resolve/main/README.md | |
Theo89 | null | null | null | false | 1 | false | Theo89/teracotta | 2022-09-22T18:55:36.000Z | null | false | 6ec16181a1c4b5ed412c979adc8a4c05d6321ce9 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Theo89/teracotta/resolve/main/README.md | ---
license: artistic-2.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ded028-2312 | 2022-09-22T21:03:51.000Z | null | false | aec7dd1b87ea54c67b2823ba5fc09c2b9ede8f6e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ded028-2312/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ab10d5-2413 | 2022-09-22T21:12:01.000Z | null | false | 6abfd356ba7ac593c607c0fee3f8666e39db69a6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ab10d5-2413/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-914f2c-2514 | 2022-09-22T22:03:52.000Z | null | false | 62eddd2262a1357f9574f59f54a6eac7794e6d07 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Tristan/zero-shot-classification-large-test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-914f2c-2514/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Tristan/zero-shot-classification-large-test
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: Tristan/zero-shot-classification-large-test
dataset_config: Tristan--zero-shot-classification-large-test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
neeva | null | null | null | false | 25 | false | neeva/query2query_evaluation | 2022-09-22T22:58:34.000Z | null | false | 6c3ed433023c6b7830a9f1f957ee511c31bb4ce9 | [] | [
"task_categories:sentence-similarity"
] | https://huggingface.co/datasets/neeva/query2query_evaluation/resolve/main/README.md | ---
task_categories:
- sentence-similarity
---
## Description
This dataset contains triples of the form "query1", "query2", "label", where labels are mapped as follows:
- similar: 1
- not similar: 0
- ambiguous: -1 |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-emotion-default-98e72c-1536755281 | 2022-09-22T21:51:27.000Z | null | false | 69cb9d1035e5bbc34516d9dc016b50aa03e279c7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-emotion-default-98e72c-1536755281/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: Jorgeutd/sagemaker-roberta-base-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Jorgeutd/sagemaker-roberta-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@neehau](https://huggingface.co/neehau) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-eb4ad9-22 | 2022-09-23T00:38:10.000Z | null | false | 70ade0819ad2c1f3b42f83e859a489b457f667e8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Tristan/zero-shot-classification-large-test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-eb4ad9-22/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Tristan/zero-shot-classification-large-test
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: Tristan/zero-shot-classification-large-test
dataset_config: Tristan--zero-shot-classification-large-test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. |
Oragani | null | null | null | false | 1 | false | Oragani/BoneworksFord | 2022-09-23T00:49:07.000Z | null | false | 27fd6aba198ae571c71b11aefb2335f04cd151de | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Oragani/BoneworksFord/resolve/main/README.md | ---
license: afl-3.0
---
|
cjsojulz01 | null | null | null | false | 1 | false | cjsojulz01/cjsojulz | 2022-09-23T04:06:43.000Z | null | false | 52d9dd11f3f31e920f3b86b3fecb2655ecb94be1 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/cjsojulz01/cjsojulz/resolve/main/README.md | ---
license: afl-3.0
---
|
ourjames | null | null | null | false | 2 | false | ourjames/Linda-Chase-Head-20170720 | 2022-09-23T05:17:44.000Z | null | false | 37b92f99bbd820c24fc60cad5984a242bda86b4e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ourjames/Linda-Chase-Head-20170720/resolve/main/README.md | ---
license: apache-2.0
---
|
taskmasterpeace | null | null | null | false | 1 | false | taskmasterpeace/d | 2022-09-23T05:31:04.000Z | null | false | f3e42bc8df06ce710946a8a14ef5ebacf1a4e19b | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/taskmasterpeace/d/resolve/main/README.md | ---
license: bigscience-openrail-m
---
|
Kris5 | null | null | null | false | 1 | false | Kris5/test | 2022-09-23T05:32:15.000Z | null | false | eda21347985c2b59d4a050809ebc5ea8b322ae2f | [] | [
"license:other"
] | https://huggingface.co/datasets/Kris5/test/resolve/main/README.md | ---
license: other
---
|
SQexplorer | null | null | null | false | null | false | SQexplorer/SQ | 2022-09-23T08:19:24.000Z | null | false | 3398e8f029cb199893c036ee39f32ae1d3392ffb | [] | [
"license:openrail"
] | https://huggingface.co/datasets/SQexplorer/SQ/resolve/main/README.md | ---
license: openrail
---
|
varun-d | null | null | null | false | 1 | false | varun-d/asdfasdfa | 2022-09-23T08:36:49.000Z | null | false | 8075a09728578927f1984022df33907bcadba41c | [] | [
"license:openrail"
] | https://huggingface.co/datasets/varun-d/asdfasdfa/resolve/main/README.md | ---
license: openrail
---
|
j0hngou | null | null | null | false | 1 | false | j0hngou/ccmatrix_en-it_subsampled | 2022-09-26T16:34:43.000Z | null | false | 7772b4c915269a59f75a85f9875e82e3e33889c4 | [] | [
"language:en",
"language:it"
] | https://huggingface.co/datasets/j0hngou/ccmatrix_en-it_subsampled/resolve/main/README.md | ---
language:
- en
- it
--- |
jinyan438 | null | null | null | false | 1 | false | jinyan438/hh | 2022-09-23T12:29:09.000Z | null | false | 43b223a8643cbb2f5347d82f83a3c1770af49573 | [] | [] | https://huggingface.co/datasets/jinyan438/hh/resolve/main/README.md | |
freddyaboulton | null | null | null | false | 1 | false | freddyaboulton/gradio-subapp | 2022-09-23T16:17:40.000Z | null | false | 7047858126a84448d9d1c5b5a16abcb233f22243 | [] | [
"license:mit"
] | https://huggingface.co/datasets/freddyaboulton/gradio-subapp/resolve/main/README.md | ---
license: mit
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-d81307-16956302 | 2022-09-23T21:43:03.000Z | null | false | bc0e6e13bd30db81e45194b7e95ba06ea15c40f4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Tristan/zero-shot-classification-large-test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-d81307-16956302/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Tristan/zero-shot-classification-large-test
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-66b-copy
metrics: []
dataset_name: Tristan/zero-shot-classification-large-test
dataset_config: Tristan--zero-shot-classification-large-test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
nlphuji | null | @article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
} | WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. | false | 1 | false | nlphuji/winogavil | 2022-09-27T14:33:33.000Z | winogavil | false | 4936d9558fc05d3b4568487eddbea261a5401242 | [] | [
"arxiv:2207.12576",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:commonsense-reasoning",
"tags:visual-reasoning",
"extra_gated_prompt:By clicking ... | https://huggingface.co/datasets/nlphuji/winogavil/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: winogavil
pretty_name: WinoGAViL
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- commonsense-reasoning
- visual-reasoning
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
---
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the cue. This dataset was collected via the WinoGAViL online game, which gathers vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that the associations are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Colab**
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
winogavil@gmail.com; yonatanbitton1@gmail.com
### Supported Tasks and Leaderboards
https://winogavil.github.io/leaderboard.
https://paperswithcode.com/dataset/winogavil.
## Colab notebook code for Winogavil evaluation with CLIP
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
### Languages
English.
## Dataset Structure
### Data Fields
candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates.
cue (string): pogonophile - the generated cue.
associations (string): ["bison", "beard", "shave"] - the images associated with the cue selected by the user.
score_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with CLIP RN50 model.
num_associations (int64): 3 - The number of images selected as associative with the cue.
num_candidates (int64): 6 - the number of total candidates.
solvers_jaccard_mean (float64): 1.0 - the average score of three solvers on the generated association instance.
solvers_jaccard_std (float64): 1.0 - the standard deviation of the three solvers' scores on the generated association instance.
ID (int64): 367 - association ID.
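The `solvers_jaccard_*` fields (and the leaderboard scores) are Jaccard indices between a selected image set and the gold associations. A minimal sketch of that metric — an illustration, not the official evaluation script — applied to the example instance above:

```python
def jaccard_index(predicted, gold):
    """Jaccard similarity between a model's selected images and the gold associations."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

# Gold associations for the cue "pogonophile" from the example instance.
gold = {"bison", "beard", "shave"}

# A hypothetical model prediction that gets 2 of 3 right and adds 1 wrong image:
# intersection = 2, union = 4, so the score is 0.5.
print(jaccard_index({"bison", "beard", "cattle"}, gold))
```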
### Data Splits
There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use is to use WinoGAViL as a test set.
There are different numbers of candidates, which create different difficulty levels:
- With 5 candidates, the expected score of a random model is 38%.
- With 6 candidates, the expected score of a random model is 34%.
- With 10 candidates, the expected score of a random model is 24%.
- With 12 candidates, the expected score of a random model is 19%.
<details>
<summary>Why is the random chance of success with 5 candidates 38%?</summary>
It is a straightforward probability calculation over guesses drawn without replacement (a hypergeometric distribution).
Assuming N=5 candidates and K=2 associations, there are three possible events:
(1) The probability that a random guess is correct in 0 associations is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the guesses). Therefore the contribution to the expected random score is 0.
(2) The probability that a random guess is correct in 1 association is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one correct guess and one wrong guess against the two true labels). Therefore the contribution is 0.6*0.33 = 0.198.
(3) The probability that a random guess is correct in 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the contribution is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0 + 0.198 + 0.1 = 0.298.
To calculate (1), the first guess must be wrong. There are 3 "wrong" candidates out of 5, so the probability is 3/5. The second guess must also be wrong; there are now 2 "wrong" candidates left out of 4, so the probability is 2/4. Multiplying: 3/5 * 2/4 = 0.3.
The same reasoning gives the probabilities in (2) and (3).
The same calculation can now be performed with K=3 associations.
Assuming N=5 candidates and K=3 associations, there are four possible events:
(4) The probability that a random guess is correct in 0 associations is 0, and the Jaccard index is 0. Therefore the contribution is 0.
(5) The probability that a random guess is correct in 1 association is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the contribution is 0.3*0.2 = 0.06.
(6) The probability that a random guess is correct in 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the contribution is 0.6*0.5 = 0.3.
(7) The probability that a random guess is correct in 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the contribution is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0 + 0.06 + 0.3 + 0.1 = 0.46.
Taking the average of 0.298 and 0.46, we reach 0.379, i.e. roughly 38%.
The same process can be repeated with 6 candidates (K=2,3,4), 10 candidates (K=2,3,4,5), and 12 candidates (K=2,3,4,5,6).
</details>
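The expected random scores listed under Data Splits can be reproduced programmatically. Below is a minimal sketch; it assumes the random guesser selects exactly K images when K candidates are truly associative, as in the calculation above:

```python
from math import comb

def expected_random_jaccard(n, k):
    """Expected Jaccard index of k images guessed uniformly at random
    out of n candidates, when exactly k candidates are truly associative."""
    total = comb(n, k)
    score = 0.0
    for i in range(max(0, 2 * k - n), k + 1):
        # probability that exactly i of the k guesses are correct (hypergeometric)
        p = comb(k, i) * comb(n - k, k - i) / total
        # intersection = i, union = 2k - i (i == 0 gives Jaccard 0)
        score += p * (i / (2 * k - i))
    return score

# With 5 candidates and K in {2, 3}, the average is roughly 0.38
scores = [expected_random_jaccard(5, k) for k in (2, 3)]
print(round(sum(scores) / len(scores), 3))  # 0.38
```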
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating
associations that are challenging for a rival AI model but still solvable by other
human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
@article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
|
claudio4525 | null | null | null | false | 1 | false | claudio4525/testt | 2022-09-23T19:46:08.000Z | null | false | 24f850ea98b0582135f7ed9fdcf076ef5a85176a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/claudio4525/testt/resolve/main/README.md | ---
license: afl-3.0
---
|
tednc | null | null | null | false | 1 | false | tednc/images | 2022-09-23T22:04:38.000Z | null | false | 746385044ca49b021086113b88027e9563645c1e | [] | [
"license:cc"
] | https://huggingface.co/datasets/tednc/images/resolve/main/README.md | ---
license: cc
---
|
HuggingFaceM4 | null | @InProceedings{huggingface:dataset,
title = {Multimodal synthetic dataset for testing},
author={HuggingFace, Inc.},
year={2022}
} | This dataset is designed to be used in testing. It's derived from cm4-10k dataset | false | 112,805 | false | HuggingFaceM4/cm4-synthetic-testing | 2022-10-04T17:39:58.000Z | null | false | a18e6f28722c93869223393f60da772ee8809876 | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/resolve/main/README.md | ---
license: bigscience-openrail-m
---
This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset.
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across text entries.
The `repeat` ones repeat the same 10 unique records; these are useful for debugging memory leaks, since the records are always the same and thus remove record variation from the equation.
The default split is `100.unique`
The full process of this dataset creation, including which records were used to build it, is documented inside [cm4-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/blob/main/cm4-synthetic-testing.py)
|
n1ghtf4l1 | null | null | null | false | 1 | false | n1ghtf4l1/Ariel-Data-Challenge-NeurIPS-2022 | 2022-09-24T05:55:23.000Z | null | false | 3ea47d49efd28082366bf993f3d2cac18e3c153d | [] | [
"license:mit"
] | https://huggingface.co/datasets/n1ghtf4l1/Ariel-Data-Challenge-NeurIPS-2022/resolve/main/README.md | ---
license: mit
---
# **Ariel Data Challenge NeurIPS 2022**
This dataset is part of the [**Ariel Machine Learning Data Challenge**](https://www.ariel-datachallenge.space/). The Ariel space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve, and how to put our own Solar System into a galactic context.
### **Understanding worlds in our Milky Way**
Today we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990s, this is an impressive achievement. Yet simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What are the chemistry, temperatures, cloud coverage, etc.? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't take in-situ measurements (even the closest exoplanet is light-years away), we rely on remote sensing and on interpreting the stellar light that shines through the atmospheres of these planets. Model-fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!
### **Speed up model fitting!**
Today, our atmospheric models are fit to the data using MCMC-type approaches. This is sufficient if your atmospheric forward models are fast to run, but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background on the about page and in the documentation.
### **Many thanks to...**
[NeurIPS 2022](https://nips.cc/) for hosting the data challenge, and to the [UK Space Agency](https://www.gov.uk/government/organisations/uk-space-agency) and the [European Research Council](https://erc.europa.eu/) for supporting this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the [Ariel](https://arielmission.space/) team for technical support and for building the space mission in the first place!
For more information, contact us at: exoai.ucl [at] gmail.com
|
BumblingOrange | null | null | null | false | 2 | false | BumblingOrange/Hanks_Embeddings | 2022-09-24T20:32:38.000Z | null | false | 618847c234ccbaafd4238ac3113da2c20b0ef758 | [] | [
"license:bigscience-bloom-rail-1.0"
] | https://huggingface.co/datasets/BumblingOrange/Hanks_Embeddings/resolve/main/README.md | ---
license: bigscience-bloom-rail-1.0
---
This is a collection of embeddings that I decided to make public. Additionally, it will be where I host any future embeddings I decide to train. |
TKKG | null | null | null | false | 1 | false | TKKG/inferno | 2022-09-24T09:41:53.000Z | null | false | e0f1e2e8e3a85ca342d113fb4281eab0a23b237f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/TKKG/inferno/resolve/main/README.md | ---
license: afl-3.0
---
|
MHCK | null | null | null | false | 1 | false | MHCK/AI | 2022-10-01T08:27:42.000Z | null | false | ca494fba0970456f98f12e4db4241a737fa1db0c | [] | [
"license:cc-by-nc-nd-4.0"
] | https://huggingface.co/datasets/MHCK/AI/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
---
|
zishuod | null | null | null | false | 6 | false | zishuod/pokemon-icons | 2022-09-24T15:35:39.000Z | null | false | 75b8d3472af2587f51d9f635e078372d308b344a | [] | [
"license:mit",
"tags:pokemon",
"task_categories:image-classification"
] | https://huggingface.co/datasets/zishuod/pokemon-icons/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license:
- mit
multilinguality: []
pretty_name: pokemon-icons
size_categories: []
source_datasets: []
tags:
- pokemon
task_categories:
- image-classification
task_ids: []
---
# Dataset Card for pokemon-icons
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Pokemon icons. Most of them were collected and cropped from screenshots captured in Pokémon Sword and Shield.
### Supported Tasks and Leaderboards
Image classification |
amir7d0 | null | null | null | false | 16 | false | amir7d0/laion20M-fa | 2022-11-04T15:51:21.000Z | null | false | aa5a640053c19908b9a988c3c3f45cc9de300700 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/amir7d0/laion20M-fa/resolve/main/README.md | ---
license: cc-by-4.0
---
|
dbal0503 | null | null | null | false | 1 | false | dbal0503/Bundesliga | 2022-09-26T17:48:50.000Z | null | false | 8f854e3e4f7007134410f2040827bba7bf4c3dd8 | [] | [] | https://huggingface.co/datasets/dbal0503/Bundesliga/resolve/main/README.md | Bundesliga Videos dataset from Kaggle competition: https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout |
Sonrin | null | null | null | false | 1 | false | Sonrin/Thorneworks | 2022-09-24T18:33:12.000Z | null | false | 337bdbce29ebc97dadf443f34689e3e43d051fb4 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Sonrin/Thorneworks/resolve/main/README.md | ---
license: artistic-2.0
---
|
Naimul | null | null | null | false | null | false | Naimul/testingmyown | 2022-09-24T19:07:55.000Z | null | false | fdc79ccc1674743e851455079f09cb935cf82c1d | [] | [
"license:mit"
] | https://huggingface.co/datasets/Naimul/testingmyown/resolve/main/README.md | ---
license: mit
---
|
quecopiones | null | null | null | false | 1 | false | quecopiones/twitter_extract_suicide_keywords | 2022-09-24T19:42:50.000Z | null | false | 934a79d988c4507958e62c5c89b0057f5e1ce38f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/quecopiones/twitter_extract_suicide_keywords/resolve/main/README.md | ---
license: afl-3.0
---
|
Lubub | null | null | null | false | null | false | Lubub/teste_sharp | 2022-09-24T20:19:18.000Z | null | false | 7fd72a8472a14c6903b8e7b0fc80aac84f7b8a79 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Lubub/teste_sharp/resolve/main/README.md | ---
license: apache-2.0
---
|
sanchit-gandhi | null | null | null | false | 1 | false | sanchit-gandhi/earnings22_split_resampled | 2022-09-30T15:24:09.000Z | null | false | afd9400721e19e44f4d28598cb73902558f02bbb | [] | [] | https://huggingface.co/datasets/sanchit-gandhi/earnings22_split_resampled/resolve/main/README.md | We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by source_id:
Validation: 4420696 4448760 4461799 4469836 4473238 4482110
Test: 4432298 4450488 4470290 4479741 4483338 4485244
Train: remainder
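The partition above can be sketched as a simple helper (hypothetical code, not the official processing script):

```python
# source_id values listed above for the held-out splits
VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

def split_for(source_id):
    """Assign an earnings22 record to a split based on its source_id."""
    sid = str(source_id)
    if sid in VALIDATION_IDS:
        return "validation"
    if sid in TEST_IDS:
        return "test"
    return "train"

print(split_for(4420696))  # validation
print(split_for(4432298))  # test
```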
Official script for processing these splits will be released shortly. |
pkhtjim | null | null | null | false | 1 | false | pkhtjim/berdly | 2022-09-24T23:28:54.000Z | null | false | e27300b405c50cdd1db1d4ceaf20008977aa9af3 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/pkhtjim/berdly/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
GeneralAwareness | null | null | null | false | 1 | false | GeneralAwareness/Various | 2022-09-25T02:13:14.000Z | null | false | 15be22438d1edfc194476d3ffb593d32b98858d1 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/GeneralAwareness/Various/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
gabrielaltay | null | null | null | false | 6 | false | gabrielaltay/hacdc-wikipedia | 2022-10-02T23:05:37.000Z | null | false | 378947b09975046c1b92f73b0e6cc3f5c21f12ef | [] | [
"license:cc-by-sa-3.0"
] | https://huggingface.co/datasets/gabrielaltay/hacdc-wikipedia/resolve/main/README.md | ---
license: cc-by-sa-3.0
---
|
m1guelpf | null | null | null | false | 7 | false | m1guelpf/nouns | 2022-09-25T06:18:40.000Z | null | false | 505bb434cc751d0b5158ae82f368a7c63e7a94c6 | [] | [
"license:cc0-1.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/m1guelpf/nouns/resolve/main/README.md | ---
license: cc0-1.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Nouns auto-captioned'
size_categories:
- 10K<n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Nouns auto-captioned
_Dataset used to train Nouns text to image model_
Captions are automatically generated for Nouns from their attributes, colors, and items. Help with the captioning script is appreciated!
For each row the dataset contains `image` and `text` keys. `image` is a PIL JPEG of varying size, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@misc{piedrafita2022nouns,
author = {Piedrafita, Miguel},
title = {Nouns auto-captioned},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/m1guelpf/nouns/}}
}
```
|
waifu-research-department | null | null | null | false | 2 | false | waifu-research-department/embeddings | 2022-09-29T02:50:05.000Z | null | false | bbfa20fac8083c90012bca77e55acd8aa4d5c824 | [] | [
"license:mit"
] | https://huggingface.co/datasets/waifu-research-department/embeddings/resolve/main/README.md | ---
license: mit
---
# Info
>Try to include embedding info in the commit description (model, author, artist, images, etc)
>Naming: name-object/style |
huynguyen208 | null | null | null | false | 1 | false | huynguyen208/assignment2 | 2022-09-27T11:57:00.000Z | null | false | 431ee067cc8976e255572f9d4f8c4434b24f99a0 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/huynguyen208/assignment2/resolve/main/README.md | ---
license: unknown
---
|
Miron | null | null | null | false | 2 | false | Miron/Text | 2022-11-10T08:00:19.000Z | null | false | 9c9b738f010f33843d0bc076f1024d3ca7191fb4 | [] | [] | https://huggingface.co/datasets/Miron/Text/resolve/main/README.md | ---
dataset_info:
features:
- name: Science artilce's texts
dtype: string
- name: text_length
dtype: int64
- name: TEXT
dtype: string
splits:
- name: train
num_bytes: 54709956.09102402
num_examples: 711
- name: validation
num_bytes: 6155831.908975979
num_examples: 80
download_size: 26356400
dataset_size: 60865788.0
---
# Dataset Card for "Text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wertyworld | null | null | null | false | 4 | false | wertyworld/taser_1_00 | 2022-09-27T16:16:22.000Z | null | false | 268eb429954ebbfc5cd6ce7257bb867b14c85351 | [] | [
"license:cc-by-nc-nd-4.0"
] | https://huggingface.co/datasets/wertyworld/taser_1_00/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
---
|
SIGMitch | null | null | null | false | 1 | false | SIGMitch/KDroid | 2022-09-28T02:19:25.000Z | null | false | 2694806d783538ab49f362f6f4431f600a9d65d2 | [] | [] | https://huggingface.co/datasets/SIGMitch/KDroid/resolve/main/README.md | |
pane2k | null | null | null | false | 1 | false | pane2k/pan | 2022-09-26T00:58:24.000Z | null | false | ea53e978a3de1a239248dec0d089a4949ccc3093 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/pane2k/pan/resolve/main/README.md | ---
license: afl-3.0
---
|
pane2k | null | null | null | false | 1 | false | pane2k/paneModel | 2022-09-26T01:26:51.000Z | null | false | 265821a55b2a6a358ce3585e4f4964c964b20669 | [] | [
"license:mit"
] | https://huggingface.co/datasets/pane2k/paneModel/resolve/main/README.md | ---
license: mit
---
|
NebulaEnt | null | null | null | false | 1 | false | NebulaEnt/kain-swanton | 2022-09-26T02:28:36.000Z | null | false | 977db3e0916bfcd3ce4dd81e7a83b294b74632b4 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/NebulaEnt/kain-swanton/resolve/main/README.md | ---
license: unknown
---
|
bigscience-biomedical | null | @misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
} | State of the art models using deep neural networks have become very good in learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
and knowledge intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),
grounded in the medical history of patients. As the source of premise sentences, we used the
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical
History to be the most informative section of a clinical note, from which useful inferences can be
drawn about the patient. | false | 10 | false | bigscience-biomedical/mednli | 2022-10-16T19:22:04.000Z | mednli | false | a9c79cfa8203733f8633f7e8c15e000eb46a7038 | [] | [
"language:en",
"license:other",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/bigscience-biomedical/mednli/resolve/main/README.md | ---
language: en
license: other
multilinguality: monolingual
pretty_name: MedNLI
paperswithcode_id: mednli
---
# Dataset Card for MedNLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** Textual Entailment
State of the art models using deep neural networks have become very good in learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
and knowledge intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),
grounded in the medical history of patients. As the source of premise sentences, we used the
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical
History to be the most informative section of a clinical note, from which useful inferences can be
drawn about the patient.
## Citation Information
```
@misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
}
```
|
bigscience-biomedical | null | @article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
} | A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database | false | 248 | false | bigscience-biomedical/gad | 2022-10-16T19:22:05.000Z | null | false | 983da2be4b07d66558a3730f3328f8e8fa5ab52a | [] | [
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/bigscience-biomedical/gad/resolve/main/README.md | ---
language: en
license: cc-by-4.0
multilinguality: monolingual
pretty_name: GAD
---
# Dataset Card for GAD
## Dataset Description
- **Homepage:** https://geneticassociationdb.nih.gov/
- **Pubmed:** True
- **Public:** True
- **Tasks:** Text Classification
A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database.
## Note about homepage
The homepage for this dataset is no longer reachable, but the URL is recorded here.
Data for this dataset was originally downloaded from a Google Drive
folder (the link used in the [BLURB benchmark data download script](https://microsoft.github.io/BLURB/submit.html)).
However, we host the data on the Hugging Face Hub for more reliable downloads and access.
## Citation Information
```
@article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
}
```
|
Greg3d | null | null | null | false | 1 | false | Greg3d/test | 2022-09-26T03:55:47.000Z | null | false | 12352e0e32ac93fa9edc8ea202f5383cc79b9991 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Greg3d/test/resolve/main/README.md | ---
license: afl-3.0
---
|
bigscience-biomedical | null | @article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
} | The data are intended to be used as training and development data for BioASQ
10, which will take place during 2022. There is one file containing the data:
- training10b.json
The file contains the data of the first nine editions of the challenge: 4234
questions [1] with their relevant documents, snippets, concepts and RDF
triples, exact and ideal answers.
Differences with BioASQ-training9b.json
- 492 new questions added from BioASQ9
- The question with id 56c1f01eef6e394741000046 had identical body with
602498cb1cb411341a00009e. All relevant elements from both questions
are available in the merged question with id 602498cb1cb411341a00009e.
- The question with id 5c7039207c78d69471000065 had identical body with
601c317a1cb411341a000014. All relevant elements from both questions
are available in the merged question with id 601c317a1cb411341a000014.
- The question with id 5e4b540b6d0a27794100001c had identical body with
602828b11cb411341a0000fc. All relevant elements from both questions
are available in the merged question with id 602828b11cb411341a0000fc.
- The question with id 5fdb42fba43ad31278000027 had identical body with
5d35eb01b3a638076300000f. All relevant elements from both questions
are available in the merged question with id 5d35eb01b3a638076300000f.
- The question with id 601d76311cb411341a000045 had identical body with
6060732b94d57fd87900003d. All relevant elements from both questions
are available in the merged question with id 6060732b94d57fd87900003d.
[1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list | false | 24 | false | bigscience-biomedical/bioasq_task_b | 2022-11-13T16:17:04.000Z | null | false | ba0efe8ba8289c01df37a7eb5ddd74352939075c | [] | [
"language:en",
"license:other",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/bigscience-biomedical/bioasq_task_b/resolve/main/README.md | ---
language: en
license: other
multilinguality: monolingual
pretty_name: BioASQ Task B
---
# Dataset Card for BioASQ Task B
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/datasets/
- **Pubmed:** True
- **Public:** False
- **Tasks:** Question Answering
The BioASQ corpus contains multiple question
answering tasks annotated by biomedical experts, including yes/no, factoid, list,
and summary questions. Pertaining to our objective of comparing neural language
models, we focus on the yes/no questions (Task 7b), and leave the inclusion
of other tasks to future work. Each question is paired with a reference text
containing multiple sentences from a PubMed abstract and a yes/no answer. We use
the official train/dev/test split of 670/75/140 questions.
See 'Domain-Specific Language Model Pretraining for Biomedical
Natural Language Processing'
## Citation Information
```
@article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
}
```
|
samuelchan | null | null | null | false | null | false | samuelchan/art | 2022-09-26T06:38:45.000Z | null | false | 5f481a733e7cfb4fec7507aca1720db7b28fbe9e | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/samuelchan/art/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-8a4c42-1554855493 | 2022-09-26T07:02:52.000Z | null | false | 2be31cb9f5880cbce04b5b68299121992587ace7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-8a4c42-1554855493/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2
metrics: ['mse']
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuel-fipps](https://huggingface.co/samuel-fipps) for evaluating this model. |
BraimComplexe | null | null | null | false | 2 | false | BraimComplexe/train_1 | 2022-09-26T09:13:22.000Z | null | false | a35672081af08bf55b7cdcdd8f2864edcb50a2ff | [] | [] | https://huggingface.co/datasets/BraimComplexe/train_1/resolve/main/README.md | train data |