| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ShohamWeiss | null | null | null | false | 2 | false | ShohamWeiss/Dragon_Ball_Colorization | 2022-06-27T20:25:56.000Z | null | false | 1deb256b25446474684c28662d748709b552aa14 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ShohamWeiss/Dragon_Ball_Colorization/resolve/main/README.md | ---
license: apache-2.0
---
Side-by-side images of Dragon Ball scenes. On the left: a grayscale outline of the scene. On the right: a colored version of the same scene.
The data was taken from downloaded Dragon Ball episodes and preprocessed with OpenCV to remove color and extract the outlines of the drawings. The outline images and the original color frames were then concatenated side by side. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ba18bf28-7804997 | 2022-06-27T20:33:58.000Z | null | false | 1d40de094ece5650f7ce90d55b1711742f8c5c0b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ba18bf28-7804997/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.en
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ba18bf28-7804998 | 2022-06-27T20:33:57.000Z | null | false | f639f0697aab5aa14a4179902b2ee22b971a1b7b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ba18bf28-7804998/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: transformersbook/xlm-roberta-base-finetuned-panx-en
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.en
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: transformersbook/xlm-roberta-base-finetuned-panx-en
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ba18bf28-7805002 | 2022-06-27T20:34:59.000Z | null | false | 466512ee121e61cc619c7fa5db35465b8433c181 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ba18bf28-7805002/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: moghis/xlm-roberta-base-finetuned-panx-en
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.en
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: moghis/xlm-roberta-base-finetuned-panx-en
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815006 | 2022-06-27T20:36:00.000Z | null | false | 17a9449b6ce9f1e492d169d2497c7c265d2aa3db | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815006/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: jg/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: jg/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815007 | 2022-06-27T20:36:08.000Z | null | false | 9f42265eb1a9f20772ffe03ff4a7e7a55b8c0204 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815007/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: evs/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: evs/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815008 | 2022-06-27T20:36:10.000Z | null | false | efeb597d3146c15f1fc6281eef33d7a605122d50 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815008/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: PdF/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: PdF/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815009 | 2022-06-27T20:36:24.000Z | null | false | dd4f14b735b072bd0a0b82aff2ab0b99e0bb17ab | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815009/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: olpa/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: olpa/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815010 | 2022-06-27T20:36:22.000Z | null | false | 3b23a7d9e53e518709f2eb96ff2caf1bb72bb6c9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815010/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: naam/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: naam/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815011 | 2022-06-27T20:37:54.000Z | null | false | b03beed2084b9467b559617e96d41f9fd6e20837 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815011/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: dfsj/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: dfsj/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815012 | 2022-06-27T20:39:31.000Z | null | false | 02715875baf1d4ef6f3143713f99c9c2ebf93351 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815012/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: edwardjross/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: edwardjross/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-d42d3c12-7815013 | 2022-06-27T20:38:33.000Z | null | false | 4db862ceaeb292c34d3bde74b46eeddbef45f02e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xtreme"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-d42d3c12-7815013/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: Ninh/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Ninh/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845032 | 2022-06-28T15:48:11.000Z | null | false | 5ce8e6b412ae525c1c05ab8e23674023034480d6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845032/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: d0r1h/LEDBill
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: d0r1h/LEDBill
* Dataset: billsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845033 | 2022-06-27T20:39:35.000Z | null | false | 80f3122d1f09cb1b052a07d5fdfddbc498b806ae | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845033/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: stevhliu/t5-small-finetuned-billsum-ca_test
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: stevhliu/t5-small-finetuned-billsum-ca_test
* Dataset: billsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855034 | 2022-06-27T20:57:19.000Z | null | false | 4c4af35020183a5bba67d6830073722ade33ec73 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855034/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: henryu-lin/t5-3b-samsum-deepspeed
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: henryu-lin/t5-3b-samsum-deepspeed
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855035 | 2022-06-27T20:50:39.000Z | null | false | 46d64fdb5afbd979e3d802606112e8826c097d10 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855035/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: henryu-lin/t5-large-samsum-deepspeed
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: henryu-lin/t5-large-samsum-deepspeed
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855036 | 2022-06-27T20:49:07.000Z | null | false | b109aa5c1a272e1451a6c77f14ef71bd311eba99 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855036/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jpcorb20/pegasus-large-reddit_tifu-samsum-256
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855037 | 2022-06-27T20:49:12.000Z | null | false | ffc3f251c54bf090c63d3d021cb1878022a1545b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855037/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jpcorb20/pegasus-large-reddit_tifu-samsum-512
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855038 | 2022-06-27T20:44:31.000Z | null | false | 4e1f56d62f31be7265784afbb0eb45c31ff2f527 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855038/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: santiviquez/t5-small-finetuned-samsum-en
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: santiviquez/t5-small-finetuned-samsum-en
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855039 | 2022-06-27T20:44:54.000Z | null | false | a311f4573edeb5e54e8f7e6d6e15542ec6b62694 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855039/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: santiviquez/bart-base-finetuned-samsum-en
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: santiviquez/bart-base-finetuned-samsum-en
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855040 | 2022-06-27T20:47:12.000Z | null | false | 9224499116760e2d4eaf0b4eb0933b14f7934bb2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855040/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jackieliu930/bart-large-cnn-samsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jackieliu930/bart-large-cnn-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855041 | 2022-06-27T20:46:43.000Z | null | false | 703e18a9c8981908d342e8f00e24bead5cbd7bde | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855041/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: knkarthick/bart-large-xsum-samsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: knkarthick/bart-large-xsum-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855042 | 2022-06-27T20:45:12.000Z | null | false | 302ce85e5245d37f731bb017660ec1a90cc9e578 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855042/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: lidiya/bart-base-samsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: lidiya/bart-base-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6fbfec76-7855043 | 2022-06-27T20:46:11.000Z | null | false | 6efd9d5965f2b22f3db3345fb3a1630d8452d7f8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6fbfec76-7855043/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: santiviquez/ssr-base-finetuned-samsum-en
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: santiviquez/ssr-base-finetuned-samsum-en
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-de1c01d5-7885055 | 2022-06-27T21:04:51.000Z | null | false | aacbd1bc47af69a4bb8e17d90b0b0b0d185f495d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wmt19"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-de1c01d5-7885055/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt19
eval_info:
task: translation
model: Tanhim/translation-En2De
metrics: []
dataset_name: wmt19
dataset_config: de-en
dataset_split: validation
col_mapping:
source: translation.en
target: translation.de
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: Tanhim/translation-En2De
* Dataset: wmt19
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-208688aa-7955063 | 2022-06-27T21:40:11.000Z | null | false | 4c2734905dc51f98a0f5ed33ab67e0b610e240b0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Matthijs/snacks"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-208688aa-7955063/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Matthijs/snacks
eval_info:
task: image_multi_class_classification
model: matteopilotto/vit-base-patch16-224-in21k-snacks
metrics: []
dataset_name: Matthijs/snacks
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: matteopilotto/vit-base-patch16-224-in21k-snacks
* Dataset: Matthijs/snacks
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
imvladikon | null | null | null | false | 1 | false | imvladikon/paranames | 2022-06-27T22:38:37.000Z | null | false | 13934e6e5c8382e2e22544304453d864fa6cb596 | [] | [
"arxiv:2202.14035"
] | https://huggingface.co/datasets/imvladikon/paranames/resolve/main/README.md | <img src="data/paranames_banner.png"></img>
# ParaNames: A multilingual resource for parallel names
This repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages.
ParaNames was introduced in [Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035](https://arxiv.org/abs/2202.14035).
Please cite as:
```
@article{saleva2022paranames,
title={ParaNames: A Massively Multilingual Entity Name Corpus},
author={S{\"a}lev{\"a}, Jonne and Lignos, Constantine},
journal={arXiv preprint arXiv:2202.14035},
year={2022}
}
```
See the [Releases page](https://github.com/bltlab/paranames/releases) for the downloadable release.
# Using the data release
## Release format
The corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository.
## Release notes
### Repeated entities
In current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data.
If you want a unique set of entities, you should deduplicate the data using the `wikidata_id` field.
If you only want to use entities that are associated with a single named entity type, you should remove any `wikidata_id` that appears in multiple rows.
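Both filters can be sketched in plain Python over the TSV rows (a minimal sketch for illustration — the column names here other than `wikidata_id` are assumptions, not the exact release schema):

```python
from collections import Counter

# Hypothetical miniature of the ParaNames TSV (illustrative rows).
rows = [
    {"wikidata_id": "Q1", "label": "Alice", "type": "PER"},
    {"wikidata_id": "Q1", "label": "Alice", "type": "ORG"},  # same entity, second type
    {"wikidata_id": "Q2", "label": "Berlin", "type": "LOC"},
]

# Deduplicate: keep one row per wikidata_id (a unique set of entities).
seen = set()
unique_entities = []
for row in rows:
    if row["wikidata_id"] not in seen:
        seen.add(row["wikidata_id"])
        unique_entities.append(row)

# Restrict to entities with a single type: drop any id appearing in >1 row.
counts = Counter(row["wikidata_id"] for row in rows)
single_type = [row for row in rows if counts[row["wikidata_id"]] == 1]

print([r["wikidata_id"] for r in unique_entities])  # ['Q1', 'Q2']
print([r["wikidata_id"] for r in single_type])      # ['Q2']
```

The same logic applies unchanged when the rows are read from the released TSV with `csv.DictReader`.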
# Using the code
First, install the following non-Python dependencies:
- MongoDB
- [xsv](https://github.com/BurntSushi/xsv)
- ICU support for your computer (e.g. `libicu-dev`)
Next, install ParaNames and its Python dependencies by running `pip install -e .`.
It is recommended that you use a Conda environment for package management.
## Creating the ParaNames corpus
To create a corpus following our approach, follow the steps below:
1. Download the latest Wikidata dump from the [Wikimedia page](https://dumps.wikimedia.org/wikidatawiki/entities/) and extract it. Note that this may take up several TB of disk space.
2. Use `recipes/paranames_pipeline.sh` which ingests the Wikidata JSON to MongoDB and then dumps and postprocesses it to our final TSV resource.
The call to `recipes/paranames_pipeline.sh` works as follows:
```
recipes/paranames_pipeline.sh <path_to_extracted_json_dump> <output_folder> <n_workers>
```
Set the number of workers based on the number of CPUs your machine has.
By default, only 1 CPU is used.
The output folder will contain one subfolder per language, inside of which `paranames_<language_code>.tsv` can be found.
The entire resource is located in `<output_folder>/combined/paranames.tsv`.
### Notes
ParaNames offers several options for customization:
- If your MongoDB instance uses a non-standard port, you should change the value of [`mongodb_port`](https://github.com/bltlab/paranames/blob/main/recipes/paranames_pipeline.sh#L13) accordingly inside `paranames_pipeline.sh`.
- Setting [`should_collapse_languages=yes`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh#L17) will cause Wikimedia language codes to be "collapsed" to the top-level Wikimedia language code, e.g. `kk-cyrl` will be converted to `kk`, `en-ca` to `en`, etc.
- Setting [`should_keep_intermediate_files=yes`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh#L18) will cause intermediate files to be kept rather than deleted. These include the raw per-type TSV dumps (`{PER,LOC,ORG}.tsv`) from MongoDB, as well as the outputs of `postprocess.py`.
- Within [`recipes/dump.sh`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh), it is also possible to define languages to be excluded and whether entity types should be disambiguated. By default, no languages are excluded and no disambiguation is done.
- After the pipeline completes, `<output_folder>` will contain one folder per language, inside of which is a TSV file containing the subset of names in that language. Combined TSVs with names in all languages are available in the `combined` folder. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-57377e87-7975067 | 2022-06-28T01:17:36.000Z | null | false | 48a5b799dc8383799683c5c2b7ae466a103ac896 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:food101"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-57377e87-7975067/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- food101
eval_info:
task: image_multi_class_classification
model: aspis/swin-finetuned-food101
metrics: []
dataset_name: food101
dataset_config: default
dataset_split: validation
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: aspis/swin-finetuned-food101
* Dataset: food101
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-57377e87-7975068 | 2022-06-28T01:17:37.000Z | null | false | 5f49b13677db6758bfebaa528e8840904682b79b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:food101"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-57377e87-7975068/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- food101
eval_info:
task: image_multi_class_classification
model: eslamxm/vit-base-food101
metrics: []
dataset_name: food101
dataset_config: default
dataset_split: validation
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: eslamxm/vit-base-food101
* Dataset: food101
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-57377e87-7975069 | 2022-06-28T01:17:06.000Z | null | false | 96d5cc4fbeae4c051a01e0735e644be386327f60 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:food101"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-57377e87-7975069/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- food101
eval_info:
task: image_multi_class_classification
model: nateraw/food
metrics: []
dataset_name: food101
dataset_config: default
dataset_split: validation
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/food
* Dataset: food101
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-57377e87-7975070 | 2022-06-28T01:17:38.000Z | null | false | f48b247feff57496a65d5158f4ed6996d1588300 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:food101"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-57377e87-7975070/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- food101
eval_info:
task: image_multi_class_classification
model: skylord/swin-finetuned-food101
metrics: []
dataset_name: food101
dataset_config: default
dataset_split: validation
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: skylord/swin-finetuned-food101
* Dataset: food101
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985071 | 2022-06-28T01:11:48.000Z | null | false | fc64cba2a6607951c08b67a7b744e552e5c654c8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985071/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: eugenecamus/resnet-50-base-beans-demo
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: eugenecamus/resnet-50-base-beans-demo
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985072 | 2022-06-28T01:11:59.000Z | null | false | 12c2c692f03c2fb5c5be9034663ad1363cc7f37f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985072/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: johnnydevriese/vit_beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: johnnydevriese/vit_beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985073 | 2022-06-28T01:12:06.000Z | null | false | 2eeab49dc8b7db70b1ee4b0b9294e1e2652a703e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985073/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: karthiksv/vit-base-beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: karthiksv/vit-base-beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985074 | 2022-06-28T01:12:04.000Z | null | false | 8e31af387a31ba9cdcf2b804b8d5ca2e550887b7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985074/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: mrm8488/convnext-tiny-finetuned-beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: mrm8488/convnext-tiny-finetuned-beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985075 | 2022-06-28T01:12:11.000Z | null | false | a9ed2ae2efa9eb7dddbdaeff5f2a5db735d64eee | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985075/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: nateraw/vit-base-beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/vit-base-beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985076 | 2022-06-28T01:12:17.000Z | null | false | cf9d2e4eb271e58f2e02a34c04385072e52842dd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985076/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: nateraw/vit-base-beans-demo
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/vit-base-beans-demo
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985077 | 2022-06-28T01:12:20.000Z | null | false | db742cfe5ee4f817a0bdbcb52f5fcfc6370bd9b5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985077/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: nateraw/vit-base-beans-demo-v2
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/vit-base-beans-demo-v2
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985078 | 2022-06-28T01:12:28.000Z | null | false | da9bcb8e227d0a2855c127640c55a03f20d6e114 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985078/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: nateraw/vit-base-beans-demo-v3
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/vit-base-beans-demo-v3
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985080 | 2022-06-28T01:12:52.000Z | null | false | f6998feed23737040025f847ea8e2644da8e09ce | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985080/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: saiharsha/vit-base-beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: saiharsha/vit-base-beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ac4402f5-7985079 | 2022-06-28T01:13:10.000Z | null | false | 27054856d50749cc3b9090ca90b28a58bd383ac2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:beans"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ac4402f5-7985079/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: nickmuchi/vit-base-beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nickmuchi/vit-base-beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995081 | 2022-06-28T01:17:58.000Z | null | false | c45f20dd3fd0a6828525afffe050f6cee9739286 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995081/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995082 | 2022-06-28T01:17:59.000Z | null | false | dfd6372de9860d27f25f4f57c685787d5688364e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995082/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: abhishek/autotrain_cifar10_vit_base
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: abhishek/autotrain_cifar10_vit_base
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995084 | 2022-06-28T01:18:12.000Z | null | false | 44aa08f5bf0ef2af1fd0faf041c815bfed67248e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995084/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: jadohu/BEiT-finetuned
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: jadohu/BEiT-finetuned
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995085 | 2022-06-28T01:18:16.000Z | null | false | 64a6340f2f15d378dc42a50b312714a713cb2f6a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995085/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: karthiksv/vit-base-patch16-224-cifar10
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: karthiksv/vit-base-patch16-224-cifar10
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995086 | 2022-06-28T01:18:31.000Z | null | false | 57848a71566164eb2818851a09ffe5373cfbbd87 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995086/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995087 | 2022-06-28T01:18:39.000Z | null | false | 6ddee40ebb1ba898a18f2613d0e9669babd3aee1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995087/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: michaelbenayoun/vit-base-beans
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: michaelbenayoun/vit-base-beans
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-5480d71b-7995089 | 2022-06-28T01:18:50.000Z | null | false | deead74447b2beae48f22d348f9c7ebd2865b661 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cifar10"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-5480d71b-7995089/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cifar10
eval_info:
task: image_multi_class_classification
model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10
metrics: []
dataset_name: cifar10
dataset_config: plain_text
dataset_split: test
col_mapping:
image: img
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10
* Dataset: cifar10
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
psyche | null | null | null | false | 14 | false | psyche/common_crawl | 2022-07-16T13:38:35.000Z | null | false | 354ee13418687f585aa97e475f17dfb352262044 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/psyche/common_crawl/resolve/main/README.md | ---
license:
- apache-2.0
---
This dataset is designed to be used in streaming mode.
The original dataset link: https://data.commoncrawl.org/crawl-data/CC-MAIN-2022-27/warc.paths.gz
_Requirements: selectolax, warcio_
```
from datasets import load_dataset
# The config name is a number given as a string, e.g. "1", "2", ... (the available configs depend on the dataset)
dataset = load_dataset("psyche/common_crawl", "1", streaming=True)
```
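Because the crawl is large, streaming mode yields examples lazily instead of downloading everything up front, so a bounded sample can be taken with `itertools.islice`. Iterating the real dataset requires network access, so the sketch below shows the same pattern on a stand-in iterable (with the real dataset, the stream would be `dataset["train"]`):

```python
from itertools import islice

# Stand-in for a streaming dataset split: any lazy iterable of examples
# behaves the same way under islice.
def fake_stream():
    for i in range(1_000_000):
        yield {"text": f"document {i}"}

# Take the first 3 examples without materializing the rest of the stream.
sample = list(islice(fake_stream(), 3))
print([ex["text"] for ex in sample])  # ['document 0', 'document 1', 'document 2']
```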
|
nakkhatra | null | null | null | false | 1 | false | nakkhatra/trial_bn | 2022-06-28T05:02:24.000Z | null | false | 4b029d8ecde3f4bf04efa359dcefdb4241b802c4 | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/nakkhatra/trial_bn/resolve/main/README.md | ---
license: cc0-1.0
---
|
drjayfeldman | null | null | null | false | 1 | false | drjayfeldman/drjayfeldman | 2022-06-28T05:09:06.000Z | null | false | cec7ce08806b599eddd9c0116022d58529ab5ebb | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/drjayfeldman/drjayfeldman/resolve/main/README.md | ---
license: apache-2.0
---
|
IDEA-CCNL | null | \ | Download from https://www.cluebenchmarks.com/introduce.html | false | 55 | false | IDEA-CCNL/AFQMC | 2022-09-28T18:12:27.000Z | null | false | f108b15f58d6eeb978cfc76cb4f31d3fdc4dce50 | [] | [
"arxiv:2209.02970",
"license:apache-2.0"
] | https://huggingface.co/datasets/IDEA-CCNL/AFQMC/resolve/main/README.md | ---
license: apache-2.0
---
# AFQMC
Download from https://www.cluebenchmarks.com/introduce.html
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
 |
jimbozhang | null | null | null | false | 1 | false | jimbozhang/speechocean762 | 2022-06-28T07:16:27.000Z | null | false | 71d88f09d3d69cc696eadd76b386ebe004ef6a70 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/jimbozhang/speechocean762/resolve/main/README.md | ---
license: cc-by-4.0
---
|
neuralchen | null | null | null | false | 1 | false | neuralchen/VGGFace2-HQ | 2022-06-28T08:59:32.000Z | null | false | d198fda56cad37354711e0a959cc73d250374e7b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/neuralchen/VGGFace2-HQ/resolve/main/README.md | ---
license: apache-2.0
---
|
huggingface-legal | null | null | null | false | 1 | false | huggingface-legal/takedown-notices | 2022-10-20T17:19:50.000Z | null | false | 7ed1268fc3319ef0bf885d6e8f6de810136d4a3c | [] | [
"license:cc-by-nc-nd-4.0",
"tags:legal"
] | https://huggingface.co/datasets/huggingface-legal/takedown-notices/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
tags:
- legal
---
### Takedown notices received by the Hugging Face team
Please click on "Files and versions" to browse them.
Also check out our:
- [Terms of Service](https://huggingface.co/terms-of-service)
- [Community Code of Conduct](https://huggingface.co/code-of-conduct)
- [Content Guidelines](https://huggingface.co/content-guidelines)
|
victor | null | null | null | false | 3 | false | victor/titanic | 2022-06-28T10:13:31.000Z | null | false | 199f924917909dabbdf62ed3fdaad781f56e547f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/victor/titanic/resolve/main/README.md | ---
license: afl-3.0
---
|
knkarthick | null | null | null | false | 869 | false | knkarthick/dialogsum | 2022-10-23T06:19:19.000Z | null | false | 14ce740d8ba877d658e5dcf8e757364e2d163664 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization"
] | https://huggingface.co/datasets/knkarthick/dialogsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- topic modeling
- one liner summary
- email subject
- meeting title
task_ids: []
pretty_name: DIALOGSum Corpus
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 1500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
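The fields above map naturally onto a (source, target) pair for summarization. A minimal sketch follows; a live load would use the Hugging Face `datasets` library (`load_dataset("knkarthick/dialogsum")`), which requires network access, so the documented example instance is used here instead. The helper names `to_summarization_pair` and `speaker_turns` are illustrative, not part of the dataset:

```python
# Illustrative record following the documented DialogSum schema
# (id, dialogue, summary, topic); a live load would use:
#     from datasets import load_dataset
#     ds = load_dataset("knkarthick/dialogsum")
record = {
    "id": "train_0",
    "dialogue": "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. "
                "Why are you here today?\n#Person2#: I found it would "
                "be a good idea to get a check-up.",
    "summary": "Mr. Smith's getting a check-up, and Doctor Hawkins "
               "advises him to have one every year.",
    "topic": "get a check-up",
}

def to_summarization_pair(example: dict) -> tuple:
    """Map one record to a (source, target) pair for a summarization model."""
    return example["dialogue"], example["summary"]

def speaker_turns(dialogue: str) -> list:
    """Split the dialogue field into individual #PersonN# turns."""
    return [turn for turn in dialogue.split("\n") if turn]

source, target = to_summarization_pair(record)
print(len(speaker_turns(record["dialogue"])))  # one entry per speaker turn
```

The same access pattern applies to the holdout split, except that only `id`, `dialogue`, and `topic` are present there.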
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which suits the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
non-commercial licence: MIT
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
knkarthick | null | null | null | false | 32 | false | knkarthick/AMI | 2022-10-24T09:16:01.000Z | null | false | 51ee8e22888b3aafb4a2601796c76c8fd750ebfd | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10<n<1000",
"source_datasets:original",
"task_categories:summarization"
] | https://huggingface.co/datasets/knkarthick/AMI/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10<n<1000
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: AMI Corpus
---
# Dataset Card for AMI Corpus
## Dataset Description
### Links
- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://groups.inf.ed.ac.uk/ami/download/
- **Paper:** https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.
#### Synchronised recording devices:
close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens.
#### Annotation:
orthographic transcription, annotations for many different phenomena (dialog acts, head movement, etc.).
Although the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing.
All of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
### Languages
English
## Dataset Structure
### Data Instances
AMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation.
The first instance in the training set:
{'id': '30', 'summary': "The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.", 'dialogue': "Speaker A: Cool. Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. Alright, yeah. It's working. Cool, okay. So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. 
Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. 
Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. Um okay, but that shows how people whether they would pay more for voice recognition software. 
So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. 
So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? 
Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\r\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. 
Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. 
Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? 
A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. 
Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\r\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. 
Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. 
Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. 
I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. 
If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\r\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. 
Um then we need to briefly discuss the new project requirements that were sent to us. I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. 
Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. 
Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. 
But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. 
Okay, so you can I guess we'll see you for lunch in a sec?"}
### Data Fields
- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 209
- val: 42
- test: 28
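A minimal sketch of how these fields fit together (the record below is fabricated for illustration; only the field names match the list above — real `dialogue` values are full meeting transcripts like the one quoted earlier):

```python
# A fabricated example mirroring the three fields described above.
example = {
    "id": "ES2008b",  # hypothetical file id, for illustration only
    "dialogue": "Speaker D: Okay, so now we are here at the functional design meeting. ...",
    "summary": "The team reviewed individual presentations and agreed on a target group.",
}

# Summarization pipelines typically consume (source, target) pairs built from these fields.
source, target = example["dialogue"], example["summary"]
```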
## Dataset Creation
### Curation Rationale
Refer to the dataset description above.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
licence: cc-by-4.0
## Citation Information
```
Carletta, J. (2006) Announcing the AMI Meeting Corpus. The ELRA Newsletter 11(1), January-March, p. 3-5
```
## Contributions
Thanks to Carletta for adding this dataset. |
mmdjiji | null | null | null | false | 1 | false | mmdjiji/bert-chinese-idioms | 2022-06-28T11:41:58.000Z | null | false | d139909f9f053a68e2ef99acabe4f1d0d78c2ee1 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/mmdjiji/bert-chinese-idioms/resolve/main/README.md | ---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms).
[preprocess.js](preprocess.js) is a Node.js script that generates the data for training the language model. |
AIKey | null | null | null | false | 1 | false | AIKey/testName | 2022-06-28T11:44:05.000Z | null | false | c2549e2dafa22752f41d64315b8ed34c141c571a | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/AIKey/testName/resolve/main/README.md | ---
license: apache-2.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-2e072638-8015092 | 2022-06-28T13:09:33.000Z | null | false | f069243d3560f018e22799d15d67c64e393b7977 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:catalonia_independence"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-2e072638-8015092/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- catalonia_independence
eval_info:
task: multi_class_classification
model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector
metrics: []
dataset_name: catalonia_independence
dataset_config: catalan
dataset_split: test
col_mapping:
text: TWEET
target: LABEL
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector
* Dataset: catalonia_independence
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-2e072638-8015093 | 2022-06-28T13:09:36.000Z | null | false | 2525155fd1f996230d5b5776ccef80397b640d3f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:catalonia_independence"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-2e072638-8015093/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- catalonia_independence
eval_info:
task: multi_class_classification
model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector
metrics: []
dataset_name: catalonia_independence
dataset_config: catalan
dataset_split: test
col_mapping:
text: TWEET
target: LABEL
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector
* Dataset: catalonia_independence
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
victor | null | null | null | false | 7 | false | victor/real-or-fake-fake-jobposting-prediction | 2022-06-28T16:05:26.000Z | null | false | a751f55136a26da3c36e26aa207a9e187ca24b45 | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/victor/real-or-fake-fake-jobposting-prediction/resolve/main/README.md | ---
license: cc0-1.0
---
|
imvladikon | null | @article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
} | \ | false | 23 | false | imvladikon/nemo_corpus | 2022-07-01T19:21:19.000Z | null | false | 4eda5340eec90804fb50f64e383598fe23f325d3 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:he",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/imvladikon/nemo_corpus/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: bmc
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# NEMO-Corpus - The Hebrew Named Entities and Morphology Corpus
**Disclaimer**: This is just a convenient Hugging Face `datasets` interface, provided for research purposes, that fetches the original data from [github](https://github.com/OnlpLab/NEMO-Corpus). I am not an author of this work.
```python
from datasets import load_dataset
# the main corpus
ds = load_dataset('imvladikon/nemo_corpus')
for sample in ds["train"]:
print(sample)
# the nested corpus
ds = load_dataset('imvladikon/nemo_corpus', "nested")
```
Label names, and conversions between string and integer tags, are available through:
```python
idx2label = ds["train"].features["ner_tags"].feature.int2str
label2idx = ds["train"].features["ner_tags"].feature.str2int
```
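As a toy illustration of those two mappings, here is a stand-in label list used in place of the real `ClassLabel` feature, so it runs without downloading the corpus (the BIOSE scheme and the `PER` category follow the description elsewhere in this card):

```python
# Stand-in for the dataset's ClassLabel feature: BIOSE tags over a single PER class.
names = ["O", "B-PER", "I-PER", "E-PER", "S-PER"]

def int2str(i):       # mirrors features["ner_tags"].feature.int2str
    return names[i]

def str2int(label):   # mirrors features["ner_tags"].feature.str2int
    return names.index(label)

tags = [1, 2, 3, 0, 4]
decoded = [int2str(t) for t in tags]   # ['B-PER', 'I-PER', 'E-PER', 'O', 'S-PER']
round_trip = [str2int(x) for x in decoded]
```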
## Dataset Description
it's README.md of the [original repository](https://github.com/OnlpLab/NEMO-Corpus)
Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, and more.
We publish the NEMO corpus in the TACL paper [*"Neural Modeling for Named Entities and Morphology (NEMO<sup>2</sup>)"*](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00404/107206/Neural-Modeling-for-Named-Entities-and-Morphology) [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the [NEMO code repo](https://github.com/OnlpLab/NEMO).
## Main features:
1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries, token-multi provide partial sub-word morphological but no exact boundaries, token-single provides only token-level information.
1. All annotations are in `BIOSE` format (`B`=Begin, `I`=Inside, `O`=Outside, `S`=Singleton, `E`=End).
1. Widely-used OntoNotes entity category set: `GPE` (geo-political entity), `PER` (person), `LOC` (location), `ORG` (organization), `FAC` (facility), `EVE` (event), `WOA` (work-of-art), `ANG` (language), `DUC` (product).
1. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependency) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using [bclm](https://github.com/OnlpLab/bclm)
1. We provide nested mentions. Only the first, widest, layer is used in the NEMO<sup>2</sup> paper. We invite you to take on this challenge!
1. Guidelines used for annotation are provided [here](./guidelines/).
1. Corpus was annotated by two native Hebrew speakers of academic education, and curated by the project manager. We provide the original annotations made by the annotators as well to promote work on [learning with disagreements](https://sites.google.com/view/semeval2021-task12/home).
1. Annotation was performed using [WebAnno](https://webanno.github.io/webanno/) (version 3.4.5)
## Legend for Files and Folder Structure
1. The two main [data](./data/) folders are [ud](./data/ud/) and [spmrl](./data/spmrl/), corresponding to the relevant Hebrew Treebank corpus version.
1. Both contain a `gold` folder ([spmrl/gold](./data/spmrl/gold/), [ud/gold](./data/ud/gold/)) of gold curated annotations.
1. Each `gold` folder contains files of the three input-output variants (morph, token-multi, token-single), for each of the treebank splits (train,dev,test).
1. Each `gold` folder also contains a `nested` subfolder ([spmrl/nested](./data/spmrl/gold/nested/), [ud/nested](./data/ud/gold/nested/)), which contains all layers of nested mentions (the first layer is the layer used in the non-nested files, and in the NEMO<sup>2</sup> paper [1])
1. The `ud` folder also contains an [ab_annotators](./data/ud/ab_annotators/) folder. This folder contains the original annotations made by each annotator (named `a`, `b`), including first-layer and nested annotations.
1. *\*UPDATE 2021-09-06\** `ud` folder now contains a [pilot_annotations](./data/ud/pilot_annotations/) folder. This folder contains the original annotations made by each annotator in our two phase pilot (phase I - sentences 1-200 of dev; phase II - sentences 201-400 of dev).
## Basic Corpus Statistics
| | train | dev | test |
|------------------------------| --:| --:| --:|
| Sentences | 4,937 | 500 | 706 |
| Tokens | 93,504 | 8,531 | 12,619 |
| Morphemes | 127,031 | 11,301 | 16,828 |
| All mentions | 6,282 | 499 | 932 |
| Type: Person (PER) | 2,128 | 193 | 267 |
| Type: Organization (ORG) | 2,043 | 119 | 408 |
| Type: Geo-Political (GPE) | 1,377 | 121 | 195 |
| Type: Location (LOC) | 331 | 28 | 41 |
| Type: Facility (FAC) | 163 | 12 | 11 |
| Type: Work-of-Art (WOA) | 114 | 9 | 6 |
| Type: Event (EVE) | 57 | 12 | 0 |
| Type: Product (DUC) | 36 | 2 | 3 |
| Type: Language (ANG) | 33 | 3 | 1 |
## Aligned Treebank Versions
The NEMO corpus matches the treebank version of [bclm v.1.0.0](https://github.com/OnlpLab/bclm/releases/tag/v1.0.0-alpha).
This version is based on the [HTB UD v2.2](https://github.com/UniversalDependencies/UD_Hebrew-HTB/releases/tag/r2.2) and the [latest SPMRL HTB version](https://github.com/OnlpLab/HebrewResources/tree/102674bb030f5836e1ab827feb63954ad7a6f8fe/HebrewTreebank/hebtb).
The changes include (but might not be limited to) the following:
1. Flagged and dropped duplicate and leaking sentences (between train and test). In addition to the sentences already removed in the bclm v1.0.0 HTB version, the following duplicate sentences were dropped as well (SPMRL sentence IDs): 5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459 (in the bclm dataframes, these are marked in the `duplicate_sent_id` column).
To read the treebank (UD/SPMRL) in a way that matches the NEMO corpus, you can use the following:
```python
import bclm
dropped = [5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459]
spdf = bclm.read_dataframe('spmrl') # load SPMRL treebank dataframe
global_dropped = [spdf[spdf.sent_id==d].global_sent_id.iat[0] for d in dropped]
uddf = bclm.read_dataframe('ud') # load UD treebank dataframe
uddf = uddf[(~uddf.global_sent_id.isin(global_dropped))] # remove extra duplicates
spdf = spdf[(~spdf.sent_id.isin(dropped))] # remove extra duplicates
# The resulting dataframes contain gold morph NER labels in the `biose_layer0`, `biose_layer1`... columns.
```
2. The UD treebank contains many more duplicates. In this version: all sentences exist in both UD and SPMRL versions, and all sentences and tokens are aligned between UD and SPMRL.
2. Fixed numbers that were originally reversed.
2. Fixed mismatches between tokens and morphemes.
2. Added Binyan feature.
2. No individual morphemes or tokens were added or removed, only complete sentences.
## Evaluation
An evaluation script is provided in the [NEMO code repo](https://github.com/OnlpLab/NEMO#evaluation) along with evaluation instructions.
## Citations
##### [1]
If you use the NEMO corpus in your research, please cite the NEMO<sup>2</sup> paper:
```bibtex
@article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
}
```
##### [2]
Please cite the Hebrew Treebank as well, described in the following paper:
```bibtex
@article{sima2001building,
title={Building a tree-bank of modern Hebrew text},
author={Sima’an, Khalil and Itai, Alon and Winter, Yoad and Altman, Alon and Nativ, Noa},
journal={Traitement Automatique des Langues},
volume={42},
number={2},
pages={247--380},
year={2001},
publisher={Citeseer}
}
```
##### [3]
The UD version of the Hebrew Treebank is described in:
```bibtex
@inproceedings{sade-etal-2018-hebrew,
title = "The {H}ebrew {U}niversal {D}ependency Treebank: Past Present and Future",
author = "Sade, Shoval and
Seker, Amit and
Tsarfaty, Reut",
booktitle = "Proceedings of the Second Workshop on Universal Dependencies ({UDW} 2018)",
month = nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W18-6016",
doi = "10.18653/v1/W18-6016",
pages = "133--143",
abstract = "The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.",
}
``` |
SpeedOfMagic | null | null | null | false | 185 | false | SpeedOfMagic/ontonotes_english | 2022-07-01T16:06:06.000Z | null | false | 8b4000d7a1e7779bd0c7291f785a2160f95c03fb | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/SpeedOfMagic/ontonotes_english/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: ontonotes_english
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for ontonotes_english
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:**
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:** [Papers With Code](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- **Point of Contact:**
### Dataset Summary
This is a preprocessed version of what I assume is OntoNotes v5.0.
Instead of sentences being stored in per-document files, the files are unpacked and each sentence is now a row. Fields were also renamed to match [conll2003](https://huggingface.co/datasets/conll2003).
The data comes from a private repository, which in turn got it from another public repository whose location is unknown :)
Since the data in all repositories carried no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind: I give no guarantee that this is the real OntoNotes, and it may differ as a result.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tokens': ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was', 'divided', 'into', 'three', 'phases', '.'],
'ner_tags': [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
}
```
### Data Fields
- **`tokens`** (*`List[str]`*) : **`words`** in original dataset
- **`ner_tags`** (*`List[ClassLabel]`*) : **`named_entities`** in original dataset. The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
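As a sketch of how those integer tags decode, here is the Data Instances example run through a small BIO-span extractor (the helper function is illustrative, not part of the dataset; the label list copies the tag set above):

```python
# Decode the integer tags from the Data Instances example into BIO labels,
# then group contiguous B-/I- tags into (entity_type, text) spans.
NAMES = ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC",
         "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC",
         "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME",
         "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY",
         "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL",
         "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT",
         "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW",
         "B-LANGUAGE", "I-LANGUAGE"]

def extract_entities(tokens, ner_tags):
    spans, current = [], None
    for token, tag_id in zip(tokens, ner_tags):
        label = NAMES[tag_id]
        if label.startswith("B-"):
            current = (label[2:], [token])
            spans.append(current)
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)
        else:
            current = None
    return [(etype, " ".join(words)) for etype, words in spans]

tokens = ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was',
          'divided', 'into', 'three', 'phases', '.']
ner_tags = [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
entities = extract_entities(tokens, ner_tags)
# → [('EVENT', 'the Hundred Regiments Offensive'), ('CARDINAL', 'three')]
```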
### Data Splits
_train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
No license
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to the author of the private repository who uploaded this dataset. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-25118781-8365116 | 2022-06-28T21:19:34.000Z | null | false | 29322bc987e6481fca61f75d3414f6b977807b04 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-25118781-8365116/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-25118781-8365117 | 2022-06-28T21:17:57.000Z | null | false | 2b38a740c66da19f7030b20580e05a709969d3c5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-25118781-8365117/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-c967fc98-8385124 | 2022-06-28T21:22:31.000Z | null | false | 991a3bbd972f620321ef1ba66609fa052aab761f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-c967fc98-8385124/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-c76b0e96-8395128 | 2022-06-28T21:41:56.000Z | null | false | 18a2dffc39dd01bd15cd950a95981915e0772efe | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-c76b0e96-8395128/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-36bd0b51-8375120 | 2022-06-28T22:01:55.000Z | null | false | 23906970078ab096c46b7cddcb6ba20ac530675a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-36bd0b51-8375120/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-36bd0b51-8375121 | 2022-06-28T22:06:32.000Z | null | false | 19e1cd1bb14a0bdc47a90f0b7fc82577da378131 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-36bd0b51-8375121/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-c76b0e96-8395129 | 2022-06-28T22:24:00.000Z | null | false | a5da1dfaf0e9e1e459e57c058a07d6d74389513f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-c76b0e96-8395129/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-9d6be317-8445136 | 2022-06-28T20:25:52.000Z | null | false | f6edac68ac34018903e004a6627bc6e7ae01a24f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-9d6be317-8445136/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: flax-community/t5-base-cnn-dm
metrics: ['bertscore', 'comet']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: flax-community/t5-base-cnn-dm
* Dataset: cnn_dailymail
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@gneubig](https://huggingface.co/gneubig) for evaluating this model. |
cakiki | null | null | null | false | 1 | false | cakiki/rosetta-code | 2022-10-10T11:29:31.000Z | null | false | 346daeff45bdb05574fb56537a281fd295749a27 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/cakiki/rosetta-code/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
# Dataset Card for the Rosetta Code Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
launch | null | @inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
} | Open-ended question type annotated dataset. | false | 1 | false | launch/open_question_type | 2022-11-09T01:58:10.000Z | null | false | 1cf33ab60b1855c636eed32ca381dbac55116571 | [] | [
"annotations_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"task_categories:text-classification"
] | https://huggingface.co/datasets/launch/open_question_type/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
pretty_name: OpenQuestionType
---
# Dataset Card for OpenQuestionType
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Question types annotated on open-ended questions.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"id": "123",
"question": "A test question?",
"annotator1": ["verification", None],
"annotator2": ["concept", None],
"resolve_type": "verification"
}
```
### Data Fields
- `id`: a `string` feature.
- `question`: a `string` feature.
- `annotator1`: a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.
- `annotator2`: a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.
- `resolve_type`: a `string` feature which is the final label after resolving disagreement.
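The `resolve_type` field is the adjudicated final label. The card does not document the exact resolution procedure, but a simple illustrative resolver (hypothetical, not the authors' actual process) might accept the top label when the two annotators' rankings can be reconciled and flag the instance for manual adjudication otherwise:

```python
def resolve(annotator1, annotator2):
    """Return a reconciled label, or None if the instance needs
    manual adjudication. A hypothetical illustration; the dataset's
    actual resolution process is not documented in this card."""
    top1, second1 = annotator1
    top2, second2 = annotator2
    if top1 == top2:
        return top1   # both annotators agree on the top label
    if top1 == second2:
        return top1   # annotator 2 listed it as a close second
    if top2 == second1:
        return top2   # annotator 1 listed it as a close second
    return None       # genuine disagreement: adjudicate manually

# The instance from the "Data Instances" section disagrees outright,
# so it would go to manual adjudication under this sketch:
print(resolve(["verification", None], ["concept", None]))
```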
### Data Splits
- train: 3716
- valid: 580
- test: 660
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Yahoo Answers and Reddit users.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
}
```
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-c967fc98-8385125 | 2022-06-29T01:09:37.000Z | null | false | 7b3565ba7321585678cbd4f057163c2a202ec4ee | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:scientific_papers"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-c967fc98-8385125/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- scientific_papers
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore', 'meteor']
dataset_name: scientific_papers
dataset_config: pubmed
dataset_split: test
col_mapping:
text: article
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model. |
rungalileo | null | null | null | false | 1 | false | rungalileo/mltakehome | 2022-06-28T22:58:48.000Z | null | false | 4fbd8efbb2c158e502928720f40d88d00c5fe315 | [] | [] | https://huggingface.co/datasets/rungalileo/mltakehome/resolve/main/README.md | Trec6 with 10% noise |
slorge | null | null | null | false | 1 | false | slorge/sorge | 2022-06-29T00:01:36.000Z | null | false | e54daa5e48b14103d9034afe9a04b1985b8ed534 | [] | [
"license:cc"
] | https://huggingface.co/datasets/slorge/sorge/resolve/main/README.md | ---
license: cc
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-34433c04-8625146 | 2022-06-29T00:39:30.000Z | null | false | a053764e08bbcd9d2af53f3c40738f797020e1f3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lewtun/dog_food"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-34433c04-8625146/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lewtun/dog_food
eval_info:
task: image_multi_class_classification
model: abhishek/convnext-tiny-finetuned-dogfood
metrics: []
dataset_name: lewtun/dog_food
dataset_config: lewtun--dog_food
dataset_split: test
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: abhishek/convnext-tiny-finetuned-dogfood
* Dataset: lewtun/dog_food
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@haesun](https://huggingface.co/haesun) for evaluating this model. |
Bluepelt | null | null | null | false | 1 | false | Bluepelt/Idkwhattodomate | 2022-06-29T03:43:46.000Z | null | false | 218eeeb3c018487d35ca6252ee0cbfee30df9751 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Bluepelt/Idkwhattodomate/resolve/main/README.md | ---
license: mit
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-72edae24-8665151 | 2022-06-30T05:04:02.000Z | null | false | f7b1cf0b7808c73459132d36db9bcb63c7293d87 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-72edae24-8665151/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: eslamxm/mbart-finetune-en-cnn
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: eslamxm/mbart-finetune-en-cnn
* Dataset: cnn_dailymail
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@](https://huggingface.co/) for evaluating this model. |
ThierryZhou | null | @article{wang2019Test,
title={Test: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each Test dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. | Test (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard. | false | 1 | false | ThierryZhou/test | 2022-08-22T02:13:10.000Z | test | false | 948743fa83750fa766c62a131ffa015d6cf990c9 | [] | [
"arxiv:2111.11431",
"annotations_creators:found",
"language_creators:found",
"language:en",
"source_datasets:original",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/ThierryZhou/test/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: test
pretty_name: Test
---
# Dataset Card for "test"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [RedCaps homepage](https://redcaps.xyz/)
- **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader)
- **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431)
- **Leaderboard:**
- **Point of Contact:** [Karan Desai](mailto:kdexd@umich.edu)
### Dataset Summary
### Dataset Preprocessing
|
shinexia | null | null | null | false | 1 | false | shinexia/dataset1 | 2022-06-29T02:35:28.000Z | null | false | 0b6a93c1a6c6d572aa7f1683e196a7667464a7f4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/shinexia/dataset1/resolve/main/README.md | ---
license: mit
---
|
nvm472001 | null | @article{LayoutLmv3 for CV extractions,
title={LayoutLmv3for Key Information Extraction},
author={Misa R&D Team},
year={2022},
} | CV is a collection of CV (résumé) images. For each image, it contains a list of OCR annotations, each with a bounding box, text, and class. The goal is to benchmark "key information extraction" - extracting key information from documents
https://arxiv.org/abs/2103.14470 | false | 1 | false | nvm472001/cvdataset-layoutlmv3 | 2022-07-12T02:29:25.000Z | null | false | 3a821ac185914cad1cc7203954a107c7784c3f6b | [] | [
"license:mit"
] | https://huggingface.co/datasets/nvm472001/cvdataset-layoutlmv3/resolve/main/README.md | ---
license: mit
---
|
davanstrien | null | null | null | false | 1 | false | davanstrien/hmd-erwt-training | 2022-11-16T16:41:45.000Z | null | false | 8c1c9f2543be4c87e488391b8fbff66cd0aa944a | [] | [] | https://huggingface.co/datasets/davanstrien/hmd-erwt-training/resolve/main/README.md | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-b20351ec-8855170 | 2022-06-29T07:27:43.000Z | null | false | de6d2333628b3fa6893b658f77e0a4d72412be6c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-b20351ec-8855170/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: huggingface-course/bert-finetuned-ner
metrics: []
dataset_name: conll2003
dataset_config: conll2003
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: huggingface-course/bert-finetuned-ner
* Dataset: conll2003
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@](https://huggingface.co/) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-81757492-8865171 | 2022-06-29T07:38:09.000Z | null | false | 2a5b793520882599e415e356621c97093eb7520c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:acronym_identification"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-81757492-8865171/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- acronym_identification
eval_info:
task: entity_extraction
model: lewtun/autotrain-acronym-identification-7324788
metrics: []
dataset_name: acronym_identification
dataset_config: default
dataset_split: train
col_mapping:
tokens: tokens
tags: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: lewtun/autotrain-acronym-identification-7324788
* Dataset: acronym_identification
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bonbon](https://huggingface.co/bonbon) for evaluating this model. |
knkarthick | null | null | null | false | 66 | false | knkarthick/samsum | 2022-10-21T03:03:27.000Z | samsum-corpus | false | 2a6b79b3e3c939aebb149d9109d7cdb78a9c2d3b | [] | [
"arxiv:1911.12237",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"tags:conversations-summarization"
... | https://huggingface.co/datasets/knkarthick/samsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
---
# Dataset Card for SAMSum Corpus
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Languages
English
## Dataset Structure
### Data Instances
SAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
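As a quick illustration of the record format, the `dialogue` field can be split into (speaker, utterance) turns. The sketch below assumes every line of a dialogue follows the `Speaker: text` pattern shown in the instance above:

```python
# Split a SAMSum-style 'dialogue' string into (speaker, utterance) pairs.
# Assumes each line is formatted "Speaker: text", as in the first training instance.
def split_dialogue(dialogue):
    turns = []
    for line in dialogue.splitlines():  # handles the \r\n separators in the raw data
        speaker, sep, utterance = line.partition(": ")
        if sep:  # skip lines that do not look like "Speaker: text"
            turns.append((speaker, utterance.strip()))
    return turns

example = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

print(split_dialogue(example["dialogue"]))
# [('Amanda', 'I baked cookies. Do you want some?'), ('Jerry', 'Sure!'), ('Amanda', "I'll bring you tomorrow :-)")]
```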
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
In paper:
Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
## Contributions |
knkarthick | null | null | null | false | 1 | false | knkarthick/xsum | 2022-10-21T03:03:03.000Z | samsum-corpus | false | d8dac05da680f98087feee034597ba9582a8780e | [] | [
"arxiv:1808.08745",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:summarization",
"tags:conversations-summarization"
] | https://huggingface.co/datasets/knkarthick/xsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- topic modeling
- one liner summary
- email subject
- meeting title
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: XSum Corpus
tags:
- conversations-summarization
---
# Dataset Card for XSum Corpus
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1808.08745
- **Repository:** https://arxiv.org/abs/1808.08745
- **Paper:** https://arxiv.org/abs/1808.08745
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
This repository contains data and code for our EMNLP 2018 paper "[Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)".
### Languages
English
## Dataset Structure
### Data Instances
XSum dataset is made of 226711 BBC news articles split into train, test and validation.
The first instance in the training set:
{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe 
said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.',
'id': '35232142'}
### Data Fields
- dialogue: text of dialogue.
- summary: one line human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 204045
- val: 11332
- test: 11334
## Dataset Creation
### Curation Rationale
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
## Licensing Information
Licence: MIT
## Citation Information
```
@InProceedings{xsum-emnlp,
author = "Shashi Narayan and Shay B. Cohen and Mirella Lapata",
title = "Don't Give Me the Details, Just the Summary! {T}opic-Aware Convolutional Neural Networks for Extreme Summarization",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing ",
year = "2018",
address = "Brussels, Belgium",
}
```
## Contributions
Thanks to [@Edinburgh NLP](https://github.com/EdinburghNLP) for adding this dataset. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-b756be98-8935185 | 2022-06-29T09:30:21.000Z | null | false | 4fe6b74529ba552ef552afb7bafc54a980f45628 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-b756be98-8935185/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: uygarkurt/distilbert-base-uncased-finetuned-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: uygarkurt/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
bigscience | null | null | null | false | 11 | false | bigscience/bloom-generations | 2022-07-12T00:10:02.000Z | null | true | d60a6613f39c3de92ba86f59aff1a96b9bb22ff5 | [] | [] | https://huggingface.co/datasets/bigscience/bloom-generations/resolve/main/README.md | |
traversaro | null | null | null | false | 1 | false | traversaro/testdataset | 2022-06-29T10:22:02.000Z | null | false | 3ffa5a2e4c2e4938fae1ed63cb55f88a39cce8d6 | [] | [
"license:bsd-3-clause"
] | https://huggingface.co/datasets/traversaro/testdataset/resolve/main/README.md | ---
license: bsd-3-clause
---
|
knkarthick | null | null | null | false | 22 | false | knkarthick/highlightsum | 2022-10-24T09:17:00.000Z | null | false | 9a66b367250f32cea9b78aeac1ec2a719a8dd59f | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization"
] | https://huggingface.co/datasets/knkarthick/highlightsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: HighlightSum Corpus
---
# Dataset Card for HighlightSum Corpus [Single Dataset Comprising AMI, SamSUM & DialogSUM for Brief Summarization of Text]
## Dataset Description
### Links
- **AMI:** https://huggingface.co/datasets/knkarthick/AMI
- **DialogSUM:** https://github.com/cylnlp/dialogsum
- **SamSUM:** https://huggingface.co/datasets/knkarthick/samsum
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
HighlightSUM is a collection of large-scale dialogue summarization data from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries.
### Languages
English
## Dataset Structure
### Data Instances
HighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation.
The first instance in the training set:
{'id': 'train_0',
'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.",
'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor."}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 27401
- val: 1360
- test: 2347
## Dataset Creation
### Curation Rationale
Collection of AMI, SamSUM & DialogSUM Datasets.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
Licence: MIT
## Citation Information
Refer to the above links for Credits & Citations.
knkarthick | null | null | null | false | 2 | false | knkarthick/topicsum | 2022-10-23T06:20:37.000Z | null | false | dfe28223f2b3bb45a7b521b09bdf5aed3745c5ae | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:summarization"
] | https://huggingface.co/datasets/knkarthick/topicsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- topic modeling
- one liner summary
- email subject
- meeting title
task_ids: []
pretty_name: TopicSum Corpus
---
# Dataset Card for TopicSum Corpus [Single Dataset Comprising XSUM & DialogSUM for One-Liner Summarization / Topic Generation of Text]
## Dataset Description
### Links
- **DialogSUM:** https://github.com/cylnlp/dialogsum
- **XSUM:** https://huggingface.co/datasets/knkarthick/xsum
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
TopicSUM is a collection of large-scale dialogue summarization data from XSUM & DialogSUM, consisting of 241,171 dialogues with corresponding manually labeled one-liner summaries/topics.
### Languages
English
## Dataset Structure
### Data Instances
TopicSum is a large-scale dialogue summarization dataset collection [XSUM & DialogSUM], consisting of 241,171 dialogues split into train, test and validation.
The first instance in the training set:
{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe 
said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.',
'id': '35232142'}
### Data Fields
- dialogue: text of dialogue.
- summary: human written one-liner summary/ topic of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 216,505
- val: 11,832
- test: 12,834
## Dataset Creation
### Curation Rationale
Collection of XSUM & DialogSUM Datasets.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
Licence: MIT
## Citation Information
Refer to the above links for Credits & Citations.
projecte-aina | null | None | CatalanQA: an extractive QA dataset from original Catalan Sources: Wikipedia and VilaWeb newswire.
It is an aggregation and balancing of 2 previous datasets: VilaQUAD and ViquiQUAD, which were described in
This dataset can be used to build extractive-QA and Language Models.
Splits have been balanced by kind of question, and unlike other datasets like SQuAD, it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
- test.json contains 2135 question/answer pairs
- train.json contains 17135 question/answer pairs
- dev.json contains 2157 question/answer pairs
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). | false | 1 | false | projecte-aina/catalanqa | 2022-11-16T15:25:57.000Z | null | false | a35fb666df666d92cc09fc4fb9000232d33d48e4 | [] | [
"arxiv:1606.05250",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/projecte-aina/catalanqa/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanqa
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# Dataset Card for CatalanQA
## Dataset Description
- **Homepage:** https://github.com/projecte-aina
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
This dataset can be used to build extractive-QA and Language Models. It is an aggregation and balancing of 2 previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad).
Splits have been balanced by kind of question, and unlike other datasets like [SQuAD](http://arxiv.org/abs/1606.05250), it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
"title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
"paragraphs": [
{
"context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
"qas": [
{
"question": "Quants policies enviaran a Catalunya?",
"id": "0.5961700408283691",
"answers": [
{
"text": "521",
"answer_start": 57
}
]
}
]
}
]
},
```
### Data Fields
Follows [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets:
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the article.
- `context` (str): Article text.
- `question` (str): Question.
- `answers` (list): Answer to the question, containing:
- `text` (str): Span text answering the question.
- `answer_start` (int): Starting offset of the span text answering the question.
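A useful sanity check on records in this SQuAD-style format is that each `answer_start` offset really points at the answer span inside its context. The sketch below uses a shortened version of the instance above:

```python
# Verify that `answer_start` offsets match the answer text, SQuAD-v1 style.
record = {
    "context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions.",
    "qas": [
        {
            "question": "Quants policies enviaran a Catalunya?",
            "answers": [{"text": "521", "answer_start": 57}],
        }
    ],
}

def check_offsets(record):
    for qa in record["qas"]:
        for ans in qa["answers"]:
            start = ans["answer_start"]
            span = record["context"][start:start + len(ans["text"])]
            assert span == ans["text"], (span, ans["text"])

check_offsets(record)  # passes silently; raises AssertionError on a bad offset
```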
### Data Splits
- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org).
#### Initial Data Collection and Normalization
This dataset is a balanced aggregation from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
#### Who are the source language producers?
Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We did an aggregation and balancing from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250).
For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.
#### Who are the annotators?
Annotation was commissioned by a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Contributions
[N/A] |
chrix390 | null | null | null | false | 1 | false | chrix390/ashnikko | 2022-06-29T14:24:42.000Z | null | false | ef14f11a4a24838222202cb06ee58b2225d954c8 | [] | [
"license:other"
] | https://huggingface.co/datasets/chrix390/ashnikko/resolve/main/README.md | ---
license: other
---
|
hongdijk | null | null | null | false | 1 | false | hongdijk/klue_final | 2022-06-30T08:53:07.000Z | null | false | 1979ec05253d16743e4ceee416a1bd9d94510a1c | [] | [
"license:other"
] | https://huggingface.co/datasets/hongdijk/klue_final/resolve/main/README.md | ---
license: other
---
|
wkrl | null | @article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
} | CORD (Consolidated Receipt Dataset) with normalized bounding boxes. | false | 1 | false | wkrl/cord | 2022-07-09T09:28:36.000Z | null | false | 609880c2f80d9f7e1e64e8b2ae85ec474b772eb3 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"multilinguality:monolingual",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:parsing"
] | https://huggingface.co/datasets/wkrl/cord/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
multilinguality:
- monolingual
license:
- cc-by-4.0
pretty_name: CORD
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
---
# Dataset Card for CORD (Consolidated Receipt Dataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/clovaai/cord
- **Paper:** https://openreview.net/pdf?id=SJl3z659UH
- **Leaderboard:** https://paperswithcode.com/dataset/cord
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```python
{
    "id": datasets.Value("string"),                                           # unique sample id
    "words": datasets.Sequence(datasets.Value("string")),                     # OCR tokens for one receipt
    "bboxes": datasets.Sequence(datasets.Sequence(datasets.Value("int64"))),  # one normalized box per word
    "labels": datasets.Sequence(datasets.features.ClassLabel(names=_LABELS)), # token-level class labels
    "images": datasets.features.Image(),                                      # the receipt image
}
```
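The dataset description notes that the bounding boxes are normalized. A common convention for receipt and document datasets (e.g. LayoutLM-style preprocessing) scales pixel coordinates to an integer 0–1000 range; the helper below is an illustrative sketch of that convention, not necessarily the exact normalization applied to this dataset.

```python
def normalize_bbox(bbox, width, height):
    """Scale a pixel-space box [x0, y0, x1, y1] to an integer 0-1000 range."""
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# Example: a word box on a 200x400 pixel receipt image.
print(normalize_bbox([10, 20, 110, 220], width=200, height=400))  # [50, 50, 550, 550]
```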
### Data Splits
- train (800 rows)
- validation (100 rows)
- test (100 rows)
## Additional Information
### Licensing Information
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{park2019cord,
  title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
  author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
```
### Contributions
Thanks to [@clovaai](https://github.com/clovaai) for adding this dataset. |
hongdijk | null | null | null | false | 1 | false | hongdijk/kluefinal | 2022-06-30T09:22:02.000Z | null | false | ce7db86ac7bdd7a4ad6dd95f031fc7f383e2cff7 | [] | [
"license:other"
] | https://huggingface.co/datasets/hongdijk/kluefinal/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-f89b1257-9045192 | 2022-06-29T17:17:15.000Z | null | false | 5ba3c4934f4b20a4d9cf13e1b877524267ef5f70 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lewtun/dog_food"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f89b1257-9045192/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lewtun/dog_food
eval_info:
task: image_multi_class_classification
model: abhishek/convnext-tiny-finetuned-dogfood
metrics: []
dataset_name: lewtun/dog_food
dataset_config: lewtun--dog_food
dataset_split: train
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: abhishek/convnext-tiny-finetuned-dogfood
* Dataset: lewtun/dog_food
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@aciborowska](https://huggingface.co/aciborowska) for evaluating this model. |