author stringlengths 2 29 | cardData null | citation stringlengths 0 9.58k | description stringlengths 0 5.93k | disabled bool 1 class | downloads float64 1 1M | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mansiksohn | null | null | null | false | 1 | false | mansiksohn/opendict-korean-proverb | 2022-07-31T03:23:30.000Z | null | false | 5aa6d7d0c90976162beb9e98f11df3bdae500118 | [] | [
"annotations_creators:no-annotation",
"language:ko",
"language_creators:found",
"license:cc-by-2.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:korean",
"tags:proverb",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/mansiksohn/opendict-korean-proverb/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- ko
language_creators:
- found
license:
- cc-by-2.0
multilinguality:
- monolingual
pretty_name: "한국어 속담 모음 v1.0"
size_categories:
- n<1K
source_datasets:
- original
tags:
- korean
- proverb
task_categories:
- text-generation
task_ids:
- language-modeling
---
# 한국어 속담 모음 v1.0 (Korean Proverb Collection v1.0)
This dataset was built by cleaning up the proverbs in Urimalsaem (우리말샘), the open dictionary of the National Institute of Korean Language:
- Removed proverbs containing words no longer in modern use
- Removed variant forms given in parentheses
- Merged duplicate entries
## Getting the original data
The original data, including an explanation of each proverb, can be downloaded from Urimalsaem:
> Proverbs in the National Institute of Korean Language's online dictionaries can be browsed with the 'detailed search' (자세히 찾기) feature. Go to 'detailed search' in Urimalsaem, the dictionary carrying the most proverbs, and select 'proverb' (속담) to get a list of every proverb in the dictionary.
https://opendict.korean.go.kr/
According to the Urimalsaem terms of service:
- The "Creative Commons Attribution-ShareAlike 2.0 Korea License" applies.
- Anyone may use the material freely, including for commercial purposes, with no special permission required from the copyright holders.
- To use the works, the following conditions must be observed:
1. Attribution: credit the authors whenever the material is used.
2. ShareAlike: if you modify the material to create a new work, distribute that work under the same license. |
beiergo | null | null | null | false | 1 | false | beiergo/test | 2022-07-31T05:12:55.000Z | null | false | f3bbca4f1441cbc73a14973fb769302713d1a298 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/beiergo/test/resolve/main/README.md | ---
license: apache-2.0
---
|
okg | null | null | null | false | 1 | false | okg/turkish-poems | 2022-07-31T10:22:53.000Z | null | false | 9ad3dd427c226e588642000394eae8a394c4c845 | [] | [
"annotations_creators:found",
"language:tr",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:text-scoring"
] | https://huggingface.co/datasets/okg/turkish-poems/resolve/main/README.md | ---
annotations_creators:
- found
language:
- tr
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: turkish-poems
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-generation
- text-classification
task_ids:
- language-modeling
- text-scoring
---
Turkish poems scraped from antoloji.com. Features consist of id, poet name, poem rating, and the poem text.
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-a84cddd6-12085614 | 2022-07-31T14:34:01.000Z | null | false | 4c51ddbf5fdb05d80db8466d2a7eb9253e240dcf | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-a84cddd6-12085614/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
succinctly | null | null | null | false | 1 | false | succinctly/medium-titles-and-images | 2022-07-31T17:44:16.000Z | null | false | 053020686dfa791746f5f3f463e4bc2875ba5ab2 | [] | [
"arxiv:2104.08691",
"license:apache-2.0"
] | https://huggingface.co/datasets/succinctly/medium-titles-and-images/resolve/main/README.md | ---
license: apache-2.0
---
This dataset contains `<title, encoded_image>` pairs from [Medium](https://medium.com) articles. It was processed from the [Medium Articles Dataset (128k): Metadata + Images](https://www.kaggle.com/datasets/succinctlyai/medium-data) dataset on Kaggle.
The original images were processed in the following way:
1. Given an image of size `(w, h)`, we cropped a square of size `(n, n)` from the center of the image, where `n = min(w, h)`.
2. The resulting `(n, n)` image was resized to `(256, 256)`.
3. The resulting `(256, 256)` image was encoded into image tokens via the [dalle-mini/vqgan\_imagenet\_f16\_16384](https://huggingface.co/dalle-mini/vqgan_imagenet_f16_16384) model.
Note that this dataset contains ~128k entries and is too small for training a text-to-image model end to end; it is more suitable for operations on a pre-trained model
like [dalle-mini](https://huggingface.co/dalle-mini/dalle-mini) (fine-tuning, [prompt tuning](https://arxiv.org/pdf/2104.08691.pdf), etc.). |
VSPuzzler | null | null | null | false | 5 | false | VSPuzzler/SemevalClickbaitSpoilingTrainingData | 2022-08-16T17:57:47.000Z | null | false | d8b94e4250dd83bfdc7375f2e29327ead0888e69 | [] | [] | https://huggingface.co/datasets/VSPuzzler/SemevalClickbaitSpoilingTrainingData/resolve/main/README.md | |
gsganden | null | null | null | false | 1 | false | gsganden/lpz_2016_2017_processed | 2022-07-31T21:21:21.000Z | null | false | bafd9e2c4c9c0f5767641c249b0c10ffab96b781 | [] | [
"license:bsd-3-clause"
] | https://huggingface.co/datasets/gsganden/lpz_2016_2017_processed/resolve/main/README.md | ---
license: bsd-3-clause
---
|
AI-Growth-Lab | null | null | null | false | 1 | false | AI-Growth-Lab/patents_claims_1.5m_traim_test_embeddings | 2022-07-31T20:45:39.000Z | null | false | db3f6f363ae48cd3de82d070906e95719fc48c74 | [] | [
"license:other"
] | https://huggingface.co/datasets/AI-Growth-Lab/patents_claims_1.5m_traim_test_embeddings/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-samsum-0c52930e-12115616 | 2022-08-01T00:59:32.000Z | null | false | ba1ab3571cae2263de50e79e0325852a4208ff53 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-0c52930e-12115616/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
BirdL | null | null | null | false | 13 | false | BirdL/DALL-E-Dogs | 2022-09-28T21:09:11.000Z | null | false | 96ef0d44f0763412ece4a22244a7dbb75aa4e316 | [] | [
"license:other",
"size_categories:1K<n<10K",
"task_categories:image-classification",
"task_categories:unconditional-image-generation"
] | https://huggingface.co/datasets/BirdL/DALL-E-Dogs/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Dogs Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Dogs is a synthetic animal (dog) image dataset and a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs on synthetic data. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) |
Jang-Hyun | null | null | null | false | 1 | false | Jang-Hyun/EfficientDatasetCondensation | 2022-08-01T06:53:14.000Z | null | false | f5c77a95e61267d03a9235414f5389e2aa721e30 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Jang-Hyun/EfficientDatasetCondensation/resolve/main/README.md | ---
license: mit
---
|
ICML2022 | null | null | null | false | 1 | false | ICML2022/EfficientDatasetCondensation | 2022-08-01T07:12:52.000Z | null | false | 773323193e80d60a61ee816e58e24b7564bbb98c | [] | [
"license:mit",
"data_type:image (0-1 ranged float)"
] | https://huggingface.co/datasets/ICML2022/EfficientDatasetCondensation/resolve/main/README.md | ---
license: mit
data_type: image (0-1 ranged float)
---
### Data summary
- This repository contains small synthetic data condensed from image datasets: MNIST, SVHN, and CIFAR-10.
- Each torch file contains the images and corresponding labels, with 1, 10, or 50 images per class (IPC).
- For more details, please refer to our GitHub page and the paper below.
### Reference
https://github.com/snu-mllab/Efficient-Dataset-Condensation
### Citation
```
@inproceedings{kimICML22,
title = {Dataset Condensation via Efficient Synthetic-Data Parameterization},
author = {Kim, Jang-Hyun and Kim, Jinuk and Oh, Seong Joon and Yun, Sangdoo and Song, Hwanjun and Jeong, Joonhyun and Ha, Jung-Woo and Song, Hyun Oh},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2022}
}
``` |
joelito | null | TODO add citation | The LEXTREME Benchmark is a collection of multilingual datasets for evaluating model performance
across a diverse set of legal NLU tasks. | false | 8,837 | false | joelito/lextreme | 2022-10-25T16:16:35.000Z | null | false | 50b36958fb66068b0f752a2f4a0bfdd27be9e4c1 | [] | [
"annotations_creators:other",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
... | https://huggingface.co/datasets/joelito/lextreme/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<100K
source_datasets:
- original
task_categories:
- token-classification
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
- named-entity-recognition
- topic-classification
pretty_name: 'LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding'
tags:
- named-entity-recognition-and-classification
- judgement-prediction
---
# Dataset Card for LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The dataset consists of 11 diverse multilingual legal NLU tasks. 6 tasks have one single configuration and 5 tasks have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).
### Supported Tasks and Leaderboards
The dataset supports the tasks of text classification and token classification.
In detail, we support the following tasks and configurations:
| task | task type | configurations | link |
|:---------------------------|--------------------------:|--------------------------------:|-------------------------------------------------------------------------------------------------------:|
| Brazilian Court Decisions | Judgment Prediction | (judgment, unanimity) | [joelito/brazilian_court_decisions](https://huggingface.co/datasets/joelito/brazilian_court_decisions) |
| Swiss Judgment Prediction | Judgment Prediction | default | [joelito/swiss_judgment_prediction](https://huggingface.co/datasets/swiss_judgment_prediction) |
| German Argument Mining | Argument Mining | default | [joelito/german_argument_mining](https://huggingface.co/datasets/joelito/german_argument_mining) |
| Greek Legal Code | Topic Classification | (volume, chapter, subject) | [greek_legal_code](https://huggingface.co/datasets/greek_legal_code) |
| Online Terms of Service    | Unfairness Classification | (unfairness level, clause topic) | [online_terms_of_service](https://huggingface.co/datasets/joelito/online_terms_of_service)             |
| Covid 19 Emergency Event | Event Classification | default | [covid19_emergency_event](https://huggingface.co/datasets/joelito/covid19_emergency_event) |
| MultiEURLEX | Topic Classification | (level 1, level 2, level 3) | [multi_eurlex](https://huggingface.co/datasets/multi_eurlex) |
| LeNER BR | Named Entity Recognition | default | [lener_br](https://huggingface.co/datasets/lener_br) |
| LegalNERo | Named Entity Recognition | default | [legalnero](https://huggingface.co/datasets/joelito/legalnero) |
| Greek Legal NER | Named Entity Recognition | default | [greek_legal_ner](https://huggingface.co/datasets/joelito/greek_legal_ner) |
| MAPA | Named Entity Recognition | (coarse, fine) | [mapa](https://huggingface.co/datasets/joelito/mapa) |
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present for each configuration (train, validation and test).
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-samsum-db063b78-12135617 | 2022-08-01T09:28:59.000Z | null | false | 6ce1c304556d5f62c1c7ad2378ec3dcbebdd4474 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-db063b78-12135617/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-xsum-4118bb33-12145618 | 2022-08-01T13:41:09.000Z | null | false | 32fba0b0ee59bc29ea13ff25f7029ca19b48f410 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-4118bb33-12145618/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
ziwenyd | null | null | null | false | 1 | false | ziwenyd/transcoder-geeksforgeeks | 2022-08-03T14:59:08.000Z | null | false | 6e28526de611e2cce102546dc19ee2aa5c4d9606 | [] | [
"license:mit"
] | https://huggingface.co/datasets/ziwenyd/transcoder-geeksforgeeks/resolve/main/README.md | ---
license: mit
---
# Statistics
- cpp-java: 627 pairs
- python-java: 616 pairs
- cpp-python: 545 pairs
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-b454c496-12155619 | 2022-08-01T15:27:24.000Z | null | false | b48f43ffb8808a1d3797ad2f9c112fc743fc37a9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cnn_dailymail-b454c496-12155619/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-a771a5f9-12165620 | 2022-08-01T13:47:31.000Z | null | false | cd0823496bbf167f176f6239a9ee8c0985247853 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ml6team/cnn_dailymail_nl"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-a771a5f9-12165620/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ml6team/cnn_dailymail_nl
eval_info:
task: summarization
model: yhavinga/t5-v1.1-base-dutch-cnn-test
metrics: []
dataset_name: ml6team/cnn_dailymail_nl
dataset_config: default
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/t5-v1.1-base-dutch-cnn-test
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
daekeun-ml | null | null | null | false | 20 | false | daekeun-ml/naver-news-summarization-ko | 2022-08-01T15:03:07.000Z | null | false | 4aca624ae98573b6246a2821b97cc89aeb9f0d23 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko/resolve/main/README.md | ---
license: apache-2.0
---
This dataset is a custom dataset created by the author by crawling Naver News (https://news.naver.com) for Korean NLP model hands-on labs.
- Period: July 1, 2022 - July 10, 2022
- Subject: IT, economics
```
DatasetDict({
    train: Dataset({
        features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
        num_rows: 22194
    })
    test: Dataset({
        features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
        num_rows: 2740
    })
    validation: Dataset({
        features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
        num_rows: 2466
    })
})
```
|
graphs-datasets | null | null | null | false | 36 | false | graphs-datasets/ZINC | 2022-08-01T15:40:00.000Z | null | false | 87bbc54eb7de3162a7be97489f382187f0a345df | [] | [
"license:unknown"
] | https://huggingface.co/datasets/graphs-datasets/ZINC/resolve/main/README.md | ---
license: unknown
---
# Dataset Card for ZINC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://zinc15.docking.org/)**
- **[Repository](https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1)**
- **Paper:** ZINC 15 – Ligand Discovery for Everyone (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-regression-on-zinc)
### Dataset Summary
The `ZINC` dataset is a "curated collection of commercially available chemical compounds prepared especially for virtual screening" (Wikipedia).
### Supported Tasks and Leaderboards
`ZINC` should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The metric used is the mean absolute error (MAE).
The associated leaderboard is here: [Papers with code leaderboard](https://paperswithcode.com/sota/graph-regression-on-zinc).
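The MAE reported on the leaderboard is simply the mean absolute difference between predicted and true constrained-solubility values. A minimal sketch (the `mae` helper is illustrative, not part of the dataset tooling):

```python
def mae(preds, targets):
    """Mean absolute error between two equal-length sequences."""
    assert len(preds) == len(targets)
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

print(mae([0.5, -1.0, 2.0], [0.0, -1.5, 1.0]))  # (0.5 + 0.5 + 1.0) / 3
```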
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

def to_data(g):  # convert one graph dict's fields into tensors
    return Data(x=torch.tensor(g["node_feat"]),
                edge_index=torch.tensor(g["edge_index"]),
                edge_attr=torch.tensor(g["edge_attr"]),
                y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])

# For the train set (replace by valid or test as needed)
dataset_pg_list = [to_data(g) for g in load_dataset("graphs-datasets/ZINC")["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 220011 |
| average #nodes | 23.15 |
| average #edges | 49.81 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the target(s) to predict (here 1 per graph: its constrained solubility, a float)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset, and follows the provided data splits.
This information can be found back using
```python
from torch_geometric.datasets import ZINC
dataset = ZINC(root = '', split='train') # valid, test
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license. Please open an issue if you know what the license of this dataset is.
### Citation Information
```bibtex
@article{doi:10.1021/acs.jcim.5b00559,
author = {Sterling, Teague and Irwin, John J.},
title = {ZINC 15 -- Ligand Discovery for Everyone},
journal = {Journal of Chemical Information and Modeling},
volume = {55},
number = {11},
pages = {2324-2337},
year = {2015},
doi = {10.1021/acs.jcim.5b00559},
note ={PMID: 26479676},
URL = {
https://doi.org/10.1021/acs.jcim.5b00559
},
eprint = {
https://doi.org/10.1021/acs.jcim.5b00559
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
|
graphs-datasets | null | null | null | false | 901 | false | graphs-datasets/PROTEINS | 2022-08-01T16:06:36.000Z | null | false | 09db5d4f43ddbc794711ba75d9e6ee435cda16b2 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/graphs-datasets/PROTEINS/resolve/main/README.md | ---
license: unknown
---
# Dataset Card for PROTEINS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://academic.oup.com/bioinformatics/article/21/suppl_1/i47/202991)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/PROTEINS.zip)**
- **Paper:** Protein function prediction via graph kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-proteins)
### Dataset Summary
The `PROTEINS` dataset is a medium-scale molecular property prediction dataset.
### Supported Tasks and Leaderboards
`PROTEINS` should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The metric used is accuracy, with 10-fold cross-validation.
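The accuracy figures above are averaged over 10 cross-validation folds. Below is a minimal sketch of one deterministic way to build such folds; the `ten_fold_splits` helper and the round-robin assignment are ours for illustration (published results typically shuffle indices first):

```python
def ten_fold_splits(n_graphs, n_folds=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Graph i is assigned to fold i % n_folds, a simple deterministic scheme."""
    folds = [[] for _ in range(n_folds)]
    for i in range(n_graphs):
        folds[i % n_folds].append(i)
    for k in range(n_folds):
        train_idx = [i for f, fold in enumerate(folds) if f != k for i in fold]
        yield train_idx, folds[k]

splits = list(ten_fold_splits(1113))  # PROTEINS has 1113 graphs
```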
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

def to_data(g):  # convert one graph dict's fields into tensors
    return Data(x=torch.tensor(g["node_feat"]),
                edge_index=torch.tensor(g["edge_index"]),
                edge_attr=torch.tensor(g["edge_attr"]),
                y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])

dataset_pg_list = [to_data(g) for g in load_dataset("graphs-datasets/PROTEINS")["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1113 |
| average #nodes | 39.06 |
| average #edges | 72.82 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the label(s) to predict (here 1 per graph, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
dataset = TUDataset(root='', name = 'PROTEINS')
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{10.1093/bioinformatics/bti1007,
author = {Borgwardt, Karsten M. and Ong, Cheng Soon and Schönauer, Stefan and Vishwanathan, S. V. N. and Smola, Alex J. and Kriegel, Hans-Peter},
title = "{Protein function prediction via graph kernels}",
journal = {Bioinformatics},
volume = {21},
number = {suppl_1},
pages = {i47-i56},
year = {2005},
month = {06},
abstract = "{Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.Availability: More information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html.Contact:borgwardt@dbs.ifi.lmu.de}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/bti1007},
url = {https://doi.org/10.1093/bioinformatics/bti1007},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/21/suppl\_1/i47/524364/bti1007.pdf},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
|
graphs-datasets | null | null | null | false | 841 | false | graphs-datasets/MUTAG | 2022-08-01T16:05:28.000Z | null | false | e154c283df9d76e78fed3034ce4578a71e8863e8 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/graphs-datasets/MUTAG/resolve/main/README.md | ---
license: unknown
---
# Dataset Card for MUTAG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://pubs.acs.org/doi/abs/10.1021/jm00106a046)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/MUTAG.zip)**
- **Paper:** Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-mutag)
### Dataset Summary
The `MUTAG` dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
`MUTAG` should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
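The scoring protocol above can be sketched in plain Python; `predict_fold` below is a hypothetical stand-in for a classifier trained on the other nine folds, not part of any benchmark tooling:

```python
# A minimal sketch of 10-fold cross-validated accuracy over graph indices;
# in a real run the model is (re)trained on the remaining nine folds each time.
def ten_fold_accuracy(labels, predict_fold):
    """Average accuracy over 10 contiguous folds of the graph indices."""
    n = len(labels)
    fold_size = n // 10
    fold_accuracies = []
    for k in range(10):
        test_idx = list(range(k * fold_size, (k + 1) * fold_size))
        preds = predict_fold(test_idx)  # hypothetical trained classifier
        correct = sum(p == labels[i] for p, i in zip(preds, test_idx))
        fold_accuracies.append(correct / fold_size)
    return sum(fold_accuracies) / 10
```

Reported leaderboard numbers are averages of this kind, so single-fold scores are not directly comparable.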
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]  # each row is a dict of fields; unpack as keyword arguments
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 187 |
| average #nodes | 18.03 |
| average #edges | 39.80 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): node feature vectors
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): features of the aforementioned edges
- `y` (list: 1 x #labels): the label(s) to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
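As a concrete illustration, a single row could look like the following toy graph (all values invented to show the shapes only; they are not taken from MUTAG):

```python
# A hypothetical row matching the field descriptions above (values are made up).
graph = {
    "node_feat": [[1, 0], [0, 1], [1, 0]],       # #nodes x #node-features
    "edge_index": [[0, 1, 1, 2], [1, 0, 2, 1]],  # 2 x #edges (source row, target row)
    "edge_attr": [[0], [0], [1], [1]],           # #edges x #edge-features
    "y": [1],                                    # the binary graph label
    "num_nodes": 3,
}

# Consistency checks implied by the field descriptions
assert len(graph["edge_index"]) == 2
assert len(graph["edge_index"][0]) == len(graph["edge_attr"])
assert len(graph["node_feat"]) == graph["num_nodes"]
```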
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset, and follows the provided data splits.
This information can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="MUTAG")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{doi:10.1021/jm00106a046,
author = {Debnath, Asim Kumar and Lopez de Compadre, Rosa L. and Debnath, Gargi and Shusterman, Alan J. and Hansch, Corwin},
title = {Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity},
journal = {Journal of Medicinal Chemistry},
volume = {34},
number = {2},
pages = {786-797},
year = {1991},
doi = {10.1021/jm00106a046},
URL = {https://doi.org/10.1021/jm00106a046},
eprint = {https://doi.org/10.1021/jm00106a046}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
|
graphs-datasets | null | null | null | false | 26 | false | graphs-datasets/IMDB-BINARY | 2022-08-01T16:23:46.000Z | null | false | f73543f3d16d510070fd40f5726f0991649b2f51 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/graphs-datasets/IMDB-BINARY/resolve/main/README.md | ---
license: unknown
---
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip)**
- **Paper:** Deep Graph Kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]  # each row is a dict of fields; unpack as keyword arguments
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): the label(s) to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
This information can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="IMDB-BINARY")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
|
bigscience | null | @article{scao2022bloom,
title={BLOOM: A 176B-Parameter Open-Access Multilingual Language Model},
author={Scao, Teven Le and Fan, Angela and Akiki, Christopher and Pavlick, Ellie and Ili{\'c}, Suzana and Hesslow, Daniel and Castagn{\'e}, Roman and Luccioni, Alexandra Sasha and Yvon, Fran{\c{c}}ois and Gall{\'e}, Matthias and others},
journal={arXiv preprint arXiv:2211.05100},
year={2022}
} | false | 30 | false | bigscience/evaluation-results | 2022-11-15T20:23:19.000Z | null | false | 7a18cb4f8f96e6a7477263a8658ff3fc43138236 | [] | [
"size_categories:100M<n<1B",
"task_categories:other"
] | https://huggingface.co/datasets/bigscience/evaluation-results/resolve/main/README.md | ---
pretty_name: evaluation-results
size_categories:
- 100M<n<1B
task_categories:
- other
---
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For example generations (.jsonl files), you need to manually browse the repository.
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
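Assuming a local clone, that layout can be navigated by joining the four documented components into a relative path; the helper and the example components below are illustrative, not actual repository contents:

```python
# Hypothetical helper mirroring the documented layout:
# model_name > evaluation_framework > checkpoint_type > dataset_name
KNOWN_FRAMEWORKS = {"bigsciencelmevalharness", "lmevalharness", "codeeval"}

def result_dir(model_name, framework, checkpoint_type, dataset_name):
    """Build the relative directory for one set of evaluation results."""
    if framework not in KNOWN_FRAMEWORKS:
        raise ValueError(f"unknown evaluation framework: {framework}")
    return "/".join([model_name, framework, checkpoint_type, dataset_name])
```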
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
| |
BirdL | null | null | null | false | 2 | false | BirdL/DALL-E-Cats | 2022-09-28T21:07:37.000Z | null | false | 00649413018d64c58ab9b9e9008c51c84e3d1919 | [] | [
"license:other",
"size_categories:1K<n<10K",
"task_categories:image-classification",
"task_categories:unconditional-image-generation"
] | https://huggingface.co/datasets/BirdL/DALL-E-Cats/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Cats Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Cats is a synthetic animal dataset, and the successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) |
Corran | null | null | null | false | 2 | false | Corran/pexelvideos | 2022-08-08T13:22:04.000Z | null | false | a9f7f1ac75934a7c01d3ca02217544251939c881 | [] | [] | https://huggingface.co/datasets/Corran/pexelvideos/resolve/main/README.md | **Pexel Videos**
*358,551 video URLs, average length 19.5s, and associated metadata from pexels.com.*
Data was extracted from their video sitemaps (pexels.com/robots.txt) on 01/08/2022.
Data is stored in PexelVideos.parquet.gzip as a gzipped parquet file.
To get this data, ensure you have git and git-lfs installed and run:

```
git lfs clone https://huggingface.co/datasets/Corran/pexelvideos/
```

In Python, the recommended way to read the file is with pandas:

```python
# pip install pandas
import pandas as pd

data = pd.read_parquet('PexelVideos.parquet.gzip')
```

Each row holds a video URL and its metadata; access a row with `data.iloc[i]` and read it like a Python dict. For example, to get the URL for index `i`, run:

```python
url = data.iloc[i]["content_loc"]
```
https://pandas.pydata.org/pandas-docs/version/1.1/getting_started/index.html#getting-started
**Explore this dataset using Open-Clip**
https://colab.research.google.com/drive/1m3_KfPKOC_oivqoruaseiNUlP-_MqqyX#scrollTo=bNngcd8UAOma
**License**
According to the Pexels license, these videos are free to use for personal or commercial purposes; attribution is appreciated but not required. However:

- Identifiable people may not appear in a bad light or in a way that is offensive.
- Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product, without modifying it first.
- Don't imply endorsement of your product by people or brands on the imagery.
- Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms.
License: https://www.pexels.com/license/
|
Kadarxwoody | null | null | null | false | 2 | false | Kadarxwoody/artistic-2.0 | 2022-08-02T04:03:09.000Z | null | false | 6d6899645fe698f33873fb1e5f8f1b4166289715 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Kadarxwoody/artistic-2.0/resolve/main/README.md | ---
license: artistic-2.0
---
|
NX2411 | null | null | null | false | 2 | false | NX2411/AIhub-korean-speech-data | 2022-08-03T09:13:28.000Z | null | false | 08de04b777c94502ac34a514e79652ba0086425b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/NX2411/AIhub-korean-speech-data/resolve/main/README.md | ---
license: apache-2.0
---
|
ShreySavaliya | null | null | null | false | 1 | false | ShreySavaliya/TextSummarisation | 2022-08-17T06:03:10.000Z | null | false | 923d33d0d849afee9887b1f80e71e686bb5a68af | [] | [
"tags:autotrain",
"tags:summarization",
"language:unk",
"datasets:vishw2703/autotrain-data-unisumm_3"
] | https://huggingface.co/datasets/ShreySavaliya/TextSummarisation/resolve/main/README.md | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vishw2703/autotrain-data-unisumm_3
co2_eq_emissions:
emissions: 1368.894142563709
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1228646724
- CO2 Emissions (in grams): 1368.8941
## Validation Metrics
- Loss: 2.319
- Rouge1: 43.703
- Rouge2: 16.106
- RougeL: 23.715
- RougeLsum: 38.984
- Gen Len: 141.091
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/vishw2703/autotrain-unisumm_3-1228646724
``` |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185622 | 2022-08-02T12:11:44.000Z | null | false | 7e7d231c127baf5185b7e25b3086591df61c5b07 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ml6team/cnn_dailymail_nl"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185622/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ml6team/cnn_dailymail_nl
eval_info:
task: summarization
model: yhavinga/mt5-base-cnn-nl
metrics: []
dataset_name: ml6team/cnn_dailymail_nl
dataset_config: default
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-cnn-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185623 | 2022-08-02T12:32:01.000Z | null | false | fbc605ed17bc3f3930bce6489c04f4cf3546cf91 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ml6team/cnn_dailymail_nl"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185623/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ml6team/cnn_dailymail_nl
eval_info:
task: summarization
model: yhavinga/mt5-base-mixednews-nl
metrics: []
dataset_name: ml6team/cnn_dailymail_nl
dataset_config: default
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-mixednews-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-1cd241d3-12195624 | 2022-08-02T10:42:07.000Z | null | false | 19cda222ed39522c3b1b340261a5ba09766d9d4b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-1cd241d3-12195624/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/roberta-large-squad2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205625 | 2022-08-02T10:42:37.000Z | null | false | 681f907c1bfc909157ce2fb38f101ab336764137 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205625/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/xlm-roberta-large-squad2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205626 | 2022-08-02T10:41:34.000Z | null | false | 4c021cc32cf68644cdf094a49154425f1089a8ec | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205626/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/roberta-base-squad2-distilled
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205627 | 2022-08-02T10:41:51.000Z | null | false | 8b13664c3be80d2efe8e51c4d2f9404d854d9872 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205627/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/xlm-roberta-base-squad2-distilled
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205628 | 2022-08-02T10:41:46.000Z | null | false | 7af19d4b60ccd712521d35090b9a032bda03374c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205628/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/tinybert-6l-768d-squad2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205629 | 2022-08-02T10:41:55.000Z | null | false | 687b60cfba2df04d63b009179832de2e6b5e2db6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205629/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/bert-base-uncased-squad2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-sms_spam-216c1ded-12215630 | 2022-08-02T10:41:15.000Z | null | false | 39c4d334cad8018816b024476a85c85a11f082c2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:sms_spam"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-sms_spam-216c1ded-12215630/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- sms_spam
eval_info:
task: binary_classification
model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
metrics: []
dataset_name: sms_spam
dataset_config: plain_text
dataset_split: train
col_mapping:
text: sms
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
* Dataset: sms_spam
* Config: plain_text
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Al-Ip](https://huggingface.co/Al-Ip) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235635 | 2022-08-02T11:31:13.000Z | null | false | 6500ed59d1b0764caa2b526bb72c66f097e95f8d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235635/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_pubmed_sumpubmed_1
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_1
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235636 | 2022-08-02T11:29:01.000Z | null | false | 28e036a2c5176b700ef625b46740702b23034dd1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235636/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_pubmed_sumpubmed_2
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_2
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235637 | 2022-08-02T13:26:39.000Z | null | false | 25e614252e9ce89fcf8cc4af6e918711cbb3c528 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235637/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/long_t5_global_large_pubmed_explanatory
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255638 | 2022-08-02T12:01:02.000Z | null | false | 61b61341f2e6e3ff845cbb5c2a6a8ecf5f798cc9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255638/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_large_baseline_pubmed
metrics: []
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255639 | 2022-08-02T19:47:37.000Z | null | false | 18d6acb7b5eb51e83b9c02b70eed7f33c76c8075 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255639/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/long_t5_global_large_baseline_pubmed
metrics: []
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265641 | 2022-08-02T19:53:52.000Z | null | false | 4f333c302ff8acf17091c65ea016973bea5b55fd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265641/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/long_t5_global_large_baseline_pubmed
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265640 | 2022-08-02T12:23:15.000Z | null | false | 4d959d3ddcccbcdc6bd5eb9263a0bfe1ac4c21bf | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265640/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_large_baseline_pubmed
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-cuad-e5412c0a-12275642 | 2022-08-02T11:21:22.000Z | null | false | 47c39cc6f07bdfdb281cfe463ec5fa20b6d51a47 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cuad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-cuad-e5412c0a-12275642/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cuad
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: []
dataset_name: cuad
dataset_config: default
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: cuad
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@halima](https://huggingface.co/halima) for evaluating this model. |
NitishKarra | null | null | null | false | 2 | false | NitishKarra/invoice-bills | 2022-08-02T13:27:10.000Z | null | false | d6c3f2be38076d596dfa083a987c86466634ea8d | [] | [] | https://huggingface.co/datasets/NitishKarra/invoice-bills/resolve/main/README.md | |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335643 | 2022-08-02T17:24:17.000Z | null | false | 2dbc0d5727ee0cfa7704021bc39a9480f8ee1a7d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335643/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_pubmed_sumpubmed_3
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_3
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335644 | 2022-08-02T17:43:11.000Z | null | false | 691cb00d999c35d401985121f2ee489b2b8f5de6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335644/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_pubmed_sumpubmed_4
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_4
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335645 | 2022-08-02T17:55:50.000Z | null | false | e3fe65be167f5aa4698afaa58d32d3eeaf834c71 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335645/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led_pubmed_sumpubmed_5
metrics: ['bertscore']
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_5
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-07d54673-12345646 | 2022-08-03T21:34:30.000Z | null | false | 42a9884a2e30084417f497d64829ff3d7162492f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-07d54673-12345646/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-xsum-19ae30f1-12355647 | 2022-08-04T03:41:57.000Z | null | false | 0761f2c5a7799569a8662dcc39a352206225b43d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-19ae30f1-12355647/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
owaiskha9654 | null | null | null | false | 16 | false | owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH | 2022-10-25T07:49:12.000Z | null | false | 9f44f8510dd6ef22d5083f48c7227f2114485386 | [] | [
"language:en",
"license:afl-3.0",
"source_datasets:BioASQ Task A",
"task_categories:text-classification",
"task_ids:multi-label-classification"
] | https://huggingface.co/datasets/owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH/resolve/main/README.md | ---
language:
- en
license: afl-3.0
source_datasets:
- BioASQ Task A
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
This dataset is a collection of approximately 50k research articles from the **PubMed** repository. The documents were manually annotated by biomedical experts with their MeSH labels, and each article is described by 10-15 MeSH labels. The raw data contains a huge number of labels (MeSH majors), which raises the problems of an extremely large output space and severe label sparsity. To address this, the dataset has been processed so that each label is mapped to its root category, as described in the figure below.


|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-f53a4404-12415653 | 2022-08-02T22:14:52.000Z | null | false | ba2fde998044a29968fa13af93c291be5626bff5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-f53a4404-12415653/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Blaise-g/led-large-sumpubmed
metrics: []
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led-large-sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. |
RKocielnik | null | null | null | false | 2 | false | RKocielnik/bias_repo | 2022-09-01T16:32:22.000Z | null | false | 79c3bbb133e7c3d4a44f5daf34d8c8ccac65b11d | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/RKocielnik/bias_repo/resolve/main/README.md | ---
license: afl-3.0
---
|
Danitg95 | null | null | null | false | 2 | false | Danitg95/feedback | 2022-08-02T20:45:40.000Z | null | false | 609f0b21763fac0105020450bdd279714085c03f | [] | [
"license:other"
] | https://huggingface.co/datasets/Danitg95/feedback/resolve/main/README.md | ---
license: other
---
|
NitishKarra | null | null | null | false | 1 | false | NitishKarra/dMART_BILL | 2022-08-03T07:19:10.000Z | null | false | f651060737f968bb62fe942495da2dde61b9f75f | [] | [] | https://huggingface.co/datasets/NitishKarra/dMART_BILL/resolve/main/README.md | |
Bingsu | null | null | null | false | 2 | false | Bingsu/laion2B-multi-korean-subset | 2022-10-14T05:23:17.000Z | null | false | 162574e34bf5cd64881b2689909f43b0aa971a0b | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ko",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"task_categories:feature-extraction"
] | https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion2B-multi-korean-subset
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
---
# laion2B-multi-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## About dataset
A subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi) that includes only the Korean examples.
### License
CC-BY-4.0
## Data Structure
### Data Instance
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
train: Dataset({
features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
num_rows: 11376263
})
})
```
```py
>>> dataset["train"].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
'URL': Value(dtype='string', id=None),
'TEXT': Value(dtype='string', id=None),
'HEIGHT': Value(dtype='int32', id=None),
'WIDTH': Value(dtype='int32', id=None),
'LICENSE': Value(dtype='string', id=None),
'LANGUAGE': Value(dtype='string', id=None),
'NSFW': Value(dtype='string', id=None),
'similarity': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.56 GiB<br>
generated: 2.37 GiB<br>
total: 3.93 GiB
### Data Field
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`
### Data Splits
| | train |
| --------- | -------- |
| # of data | 11376263 |
## Note
### Height, Width
μ΄λ―Έμ§μ κ°λ‘κ° `HEIGHT`λ‘, μΈλ‘κ° `WIDTH`λ‘ λμ΄μλ κ² κ°μ΅λλ€.
```pycon
>>> dataset["train"][98]
{'SAMPLE_ID': 2937471001780,
'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
'TEXT': 'μΈμ²μκ΅μ‘μ², μΈμ² μꡰꡬλ°μ νμν μμμ§κ³Όμ κ°λ΄ν κ°μ΅',
'HEIGHT': 640,
'WIDTH': 321,
'LICENSE': '?',
'LANGUAGE': 'ko',
'NSFW': 'UNLIKELY',
'similarity': 0.33347243070602417}
```

### csv file, pandas
```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url
url = hf_hub_url("Bingsu/laion2B-multi-korean-subset", filename="laion2B-multi-korean-subset.csv.zst", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst"
df = pd.read_csv(url)
```
<https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst>
778 MB
### Code used to generate
```py
import csv
import re
from datasets import load_dataset
from tqdm import tqdm
pattern = re.compile(r"[κ°-ν£]")
def quote(s: str) -> str:
s = s.replace('"""', "")
return s
def filter_func(example) -> bool:
lang = example.get("LANGUAGE")
text = example.get("TEXT")
if not isinstance(lang, str) or not isinstance(text, str):
return False
return lang == "ko" or pattern.search(text) is not None
file = open("./laion2B-mulit_korean_subset.csv", "w", encoding="utf-8", newline="")
ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
dsf = ds.filter(filter_func)
header = [
"SAMPLE_ID",
"URL",
"TEXT",
"HEIGHT",
"WIDTH",
"LICENSE",
"LANGUAGE",
"NSFW",
"similarity",
]
writer = csv.DictWriter(file, fieldnames=header)
writer.writeheader()
try:
for data in tqdm(dsf): # total=11378843
data["TEXT"] = quote(data.get("TEXT", ""))
if data["TEXT"]:
writer.writerow(data)
finally:
file.close()
print("Done!")
```
μ€νμ μ½ 8μκ°μ΄ μμλμμ΅λλ€. μ΄νμ `HEIGHT`λ `WIDTH`κ° NoneμΈ λ°μ΄ν°λ₯Ό μ κ±°νκ³ μ
λ‘λνμμ΅λλ€.
### img2dataset
[img2dataset](https://github.com/rom1504/img2dataset) can be used to download the images from the URLs and build them into an image dataset.
|
NitishKarra | null | null | null | false | 1 | false | NitishKarra/mayds | 2022-08-03T07:02:13.000Z | null | false | 5e4902d05a661db4ce45d0297930102ebb3d4ebf | [] | [] | https://huggingface.co/datasets/NitishKarra/mayds/resolve/main/README.md | maydataset |
NitishKarra | null | null | null | false | 2 | false | NitishKarra/mydsssss | 2022-08-03T07:11:43.000Z | null | false | 9a801afa9c04957bcc709e5a8e298ffd6a660a3e | [] | [] | https://huggingface.co/datasets/NitishKarra/mydsssss/resolve/main/README.md | newwdATAASEt
|
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-project-imdb-f49f2e4f-12435655 | 2022-08-03T07:51:43.000Z | null | false | 454f4d08791516ecf455762cea2a931a1e3b2650 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:imdb"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-imdb-f49f2e4f-12435655/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- imdb
eval_info:
task: binary_classification
model: lvwerra/distilbert-imdb
metrics: []
dataset_name: imdb
dataset_config: plain_text
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lvwerra](https://huggingface.co/lvwerra) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-imdb-ed2a920e-12445656 | 2022-08-03T07:52:28.000Z | null | false | 00ebce44a5ccead88cdad67882e7ecc32ae3debd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:imdb"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-imdb-ed2a920e-12445656/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- imdb
eval_info:
task: binary_classification
model: lvwerra/distilbert-imdb
metrics: []
dataset_name: imdb
dataset_config: plain_text
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lvwerra](https://huggingface.co/lvwerra) for evaluating this model. |
its5Q | null | null | null | false | 11 | false | its5Q/panorama | 2022-08-05T18:18:10.000Z | null | false | 8aec023a4a4de9d01302f33f9fc1d7331c2ca7ca | [] | [
"annotations_creators:no-annotation",
"language:ru",
"language_creators:other",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:news",
"tags:articles",
"tags:newspapers",
"tags:panorama",
"task_categories:text-generation",
"ta... | https://huggingface.co/datasets/its5Q/panorama/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- ru
language_creators:
- other
license:
- unknown
multilinguality:
- monolingual
pretty_name: Dataset of satirical news from "Panorama", Russian "The Onion".
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- news
- articles
- newspapers
- panorama
task_categories:
- text-generation
task_ids:
- language-modeling
---
### Dataset Summary
A dataset of satirical news from "Panorama", the Russian counterpart of "The Onion".
### Dataset Format
The dataset is in JSON Lines format, where "title" is the article title and "body" contains the text of the article. |
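A minimal, self-contained sketch of reading such a JSON Lines file (the records and the in-memory buffer below are invented placeholders standing in for the dataset file):

```python
import io
import json

# Stand-in for a file handle opened over the dataset's .jsonl file.
jsonl = io.StringIO(
    '{"title": "Example headline", "body": "Example article text."}\n'
    '{"title": "Second headline", "body": "More text."}\n'
)

# One JSON object per line; skip any blank lines.
articles = [json.loads(line) for line in jsonl if line.strip()]

print(len(articles), articles[0]["title"])  # 2 Example headline
```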
NitishKarra | null | null | null | false | 2 | false | NitishKarra/Dmart_ds | 2022-08-03T10:52:16.000Z | null | false | dfc66dcde3cd3a6d09c28da9890115ae6c3e807e | [] | [] | https://huggingface.co/datasets/NitishKarra/Dmart_ds/resolve/main/README.md | dmartbillsdsd |
NitishKarra | null | null | null | false | 2 | false | NitishKarra/billings | 2022-08-03T11:26:01.000Z | null | false | ad839f2ed3ffb1517959cebbefad45c24708d027 | [] | [] | https://huggingface.co/datasets/NitishKarra/billings/resolve/main/README.md | |
NitishKarra | null | null | null | false | 2 | false | NitishKarra/mydata | 2022-08-03T11:29:39.000Z | null | false | 07c671179e16ce4eb6ea2689a2f497603fda8d85 | [] | [] | https://huggingface.co/datasets/NitishKarra/mydata/resolve/main/README.md | |
ijunc2 | null | null | null | false | 2 | false | ijunc2/nnnnnn | 2022-08-03T12:54:59.000Z | null | false | e2fe219a800a2f953a3e533a5fd1db8218cd2604 | [] | [] | https://huggingface.co/datasets/ijunc2/nnnnnn/resolve/main/README.md | |
Danitg95 | null | null | null | false | 2 | false | Danitg95/feedback-kaggle-sentiment | 2022-08-06T19:59:40.000Z | null | false | 00a8f376de5befacd9cb8be6c30cc87de99ee6ab | [] | [] | https://huggingface.co/datasets/Danitg95/feedback-kaggle-sentiment/resolve/main/README.md | |
NitishKarra | null | null | null | false | 1 | false | NitishKarra/Nitishh | 2022-08-03T14:13:17.000Z | null | false | 4d2a8a09970f5779705814c8b0aef7e4eed2244d | [] | [] | https://huggingface.co/datasets/NitishKarra/Nitishh/resolve/main/README.md | bus_bills |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-squad-6abc415f-12465657 | 2022-08-03T14:56:07.000Z | null | false | 9da612b1bb02c71e04e79758e84bf9f81b9cb93d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-6abc415f-12465657/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: deepset/deberta-v3-large-squad2
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. |
NitishKarra | null | null | null | false | 8 | false | NitishKarra/invoioc | 2022-08-03T15:20:08.000Z | null | false | 05b0e1d75a024403670f58bdcc28e87d7930d1c1 | [] | [] | https://huggingface.co/datasets/NitishKarra/invoioc/resolve/main/README.md | invoice bills |
allenai | null | @article{Shen2022MultiLexSum,
author = {Zejiang Shen and
Kyle Lo and
Lauren Yu and
Nathan Dahlberg and
Margo Schlanger and
Doug Downey},
title = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
journal = {CoRR},
volume = {abs/2206.10883},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2206.10883},
doi = {10.48550/arXiv.2206.10883}
} | Multi-LexSum is a multi-doc summarization dataset for civil rights litigation lawsuits with summaries of three granularities. | false | 251 | false | allenai/multi_lexsum | 2022-08-03T18:59:06.000Z | null | false | 51309677c66038a64f020358669cf8b4e80787cb | [] | [
"arxiv:2206.10883",
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:odc-by",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization"
] | https://huggingface.co/datasets/allenai/multi_lexsum/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- odc-by
multilinguality:
- monolingual
pretty_name: Multi-LexSum
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- summarization
task_ids: []
---
# Dataset Card for Multi-LexSum
## Table of Contents
- [Dataset Card for Multi-LexSum](#dataset-card-for-multi-lexsum)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset](#dataset)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Sheet (Datasheet)](#dataset-sheet-datasheet)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Release History](#release-history)
## Dataset Description
- **Homepage:** https://multilexsum.github.io
- **Repository:** https://github.com/multilexsum/dataset
- **Paper:** https://arxiv.org/abs/2206.10883
<a href="https://multilexsum.github.io" style="display: inline-block;"><img src="https://img.shields.io/badge/-homepage-informational.svg?logo=jekyll" title="Multi-LexSum Paper" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a> <a href="https://github.com/multilexsum/dataset" style="display: inline-block;"><img src="https://img.shields.io/badge/-multilexsum-lightgrey.svg?logo=github" title="Multi-LexSum Github Repo" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a> <a href="https://arxiv.org/abs/2206.10883" style="display: inline-block;"><img src="https://img.shields.io/badge/arXiv-2206.10883-b31b1b.svg" title="Multi-LexSum Paper" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
</p>
### Dataset Summary
The Multi-LexSum dataset is a collection of 9,280 legal case summaries. Multi-LexSum is distinct from other datasets in its **multiple target summaries, each at a different granularity** (ranging from one-sentence "extreme" summaries to multi-paragraph narrations of over five hundred words). It presents a challenging multi-document summarization task given **the long length of the source documents**, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of **expert-authored summaries**: the experts (lawyers and law students) are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.
### Languages
English
## Dataset
### Data Fields
The dataset contains a list of instances (cases); each instance contains the following data:
| Field | Description |
| ------------: | -------------------------------------------------------------------------------: |
| id | `(str)` The case ID |
| sources | `(List[str])` A list of strings for the text extracted from the source documents |
| summary/long | `(str)` The long (multi-paragraph) summary for this case |
| summary/short | `(Optional[str])` The short (one-paragraph) summary for this case |
| summary/tiny | `(Optional[str])` The tiny (one-sentence) summary for this case |
Please check the exemplar usage below for loading the data:
```python
from datasets import load_dataset
multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20220616")
# Download multi_lexsum locally and load it as a Dataset object
example = multi_lexsum["validation"][0] # The first instance of the dev set
example["sources"] # A list of source document text for the case
for sum_len in ["long", "short", "tiny"]:
print(example["summary/" + sum_len]) # Summaries of three lengths
```
### Data Splits
| | Instances | Source Documents (D) | Long Summaries (L) | Short Summaries (S) | Tiny Summaries (T) | Total Summaries |
| ----------: | --------: | -------------------: | -----------------: | ------------------: | -----------------: | --------------: |
| Train (70%) | 3,177 | 28,557 | 3,177 | 2,210 | 1,130 | 6,517 |
| Test (20%) | 908 | 7,428 | 908 | 616 | 312 | 1,836 |
| Dev (10%) | 454 | 4,134 | 454 | 312 | 161 | 927 |
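As a quick sanity check, the per-split summary totals in the table can be reproduced by summing the long, short, and tiny counts (a minimal sketch using only the figures above):

```python
# Per-split summary counts taken from the table above.
splits = {
    "train": {"long": 3177, "short": 2210, "tiny": 1130},
    "test": {"long": 908, "short": 616, "tiny": 312},
    "dev": {"long": 454, "short": 312, "tiny": 161},
}

def total_summaries(split_counts):
    # Total summaries for a split = long + short + tiny.
    return sum(split_counts.values())

totals = {name: total_summaries(counts) for name, counts in splits.items()}
print(totals)  # {'train': 6517, 'test': 1836, 'dev': 927}
```

Each total matches the "Total Summaries" column above.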
## Dataset Sheet (Datasheet)
Please check our [dataset sheet](https://multilexsum.github.io/datasheet) for details regarding dataset creation, source data, annotation, and considerations for use.
## Additional Information
### Dataset Curators
The dataset was created through a collaboration between the Civil Rights Litigation Clearinghouse (CRLC, University of Michigan) and the Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.
### Licensing Information
The Multi-LexSum dataset is distributed under the [Open Data Commons Attribution License (ODC-By)](https://opendatacommons.org/licenses/by/1-0/).
The case summaries and metadata are licensed under the [Creative Commons Attribution-NonCommercial License (CC BY-NC)](https://creativecommons.org/licenses/by-nc/4.0/), and the source documents are already in the public domain.
Commercial users who desire a license for summaries and metadata can contact [info@clearinghouse.net](mailto:info@clearinghouse.net), which will allow free use but limit summary re-posting.
The corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.
### Citation Information
```
@article{Shen2022MultiLexSum,
author = {Zejiang Shen and
Kyle Lo and
Lauren Yu and
Nathan Dahlberg and
Margo Schlanger and
Doug Downey},
title = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
journal = {CoRR},
volume = {abs/2206.10883},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2206.10883},
doi = {10.48550/arXiv.2206.10883}
}
```
## Release History
| Version | Description |
| ----------: | -----------------------: |
| `v20220616` | The initial v1.0 release | |
alex-apostolo | null | null | null | false | 2 | false | alex-apostolo/filtered-cuad | 2022-08-04T06:24:04.000Z | cuad | false | f5e8e0a268c18fa828f2ba41ea459bfeb8ceb12e | [] | [
"arxiv:2103.06268",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cuad",
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive... | https://huggingface.co/datasets/alex-apostolo/filtered-cuad/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cuad
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
paperswithcode_id: cuad
pretty_name: CUAD
train-eval-index:
- config: default
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: test
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: cuad
name: CUAD
---
# Dataset Card for filtered_cuad
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org)
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD: it excludes legal contracts with an agreement date prior to 2002 and contracts that are not business-to-business. Of the original 41 categories, we kept the 12 we considered most crucial.
We wanted a small dataset for quickly fine-tuning different models without sacrificing the categories we deemed important. Most questions had to be removed because they have no answer, which is problematic since unanswerable questions can skew the resulting metrics, such as the F1 score and the AUPR curve.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
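As a rough illustration of the filtering rationale above, an answerability check over SQuAD-style records can be sketched as follows (the function names are assumptions for illustration; the record shape mirrors the fields documented below):

```python
def is_answerable(example):
    # An example counts as answerable when at least one answer span was annotated.
    return len(example["answers"]["text"]) > 0

def keep_answerable(examples):
    # Drop unanswerable questions so metrics such as F1 are not skewed.
    return [ex for ex in examples if is_answerable(ex)]
```

This is a sketch of the idea, not the exact filtering script used to build the dataset.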
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to \"Document Name\" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 5442 | 936 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract | # of Docs |
| -------------------------------------------------- | --------- |
| Affiliate Agreement | 8 |
| Agency Agreement | 8 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 6 |
| Consulting Agreement | 11 |
| Development Agreement | 28 |
| Distributor Agreement | 23 |
| Endorsement Agreement | 10 |
| Franchise Agreement | 14 |
| Hosting Agreement | 12 |
| IP Agreement | 16 |
| Joint Venture Agreement | 22 |
| License Agreement | 32 |
| Maintenance Agreement | 24 |
| Manufacturing Agreement | 6 |
| Marketing Agreement | 16 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 12 |
| Promotion Agreement | 9 |
| Reseller Agreement | 12 |
| Service Agreement | 24 |
| Sponsorship Agreement | 17 |
| Supply Agreement | 13 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 1 |
| **Total** | **385** |
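The per-type document counts can be checked against the stated total with a quick sum (a sketch using only the figures above):

```python
# Document counts per contract type, in the order listed above.
doc_counts = [8, 8, 26, 6, 11, 28, 23, 10, 14, 12, 16, 22, 32, 24, 6,
              16, 3, 12, 9, 12, 24, 17, 13, 32, 1]
total = sum(doc_counts)
print(total)  # 385, matching the stated total
```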
Categories:

- Document Name
- Parties
- Agreement Date
- Effective Date
- Expiration Date
- Renewal Term
- Notice Period To Terminate Renewal
- Governing Law
- Non-Compete
- Exclusivity
- Change Of Control
- Anti-Assignment
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the βStudent Labelβ step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category by category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the βYes/Noβ answer column to categories that do not contain an answer.
#### Who are the annotators?
See the description of the annotation process in the section above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of βYes/Noβ, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of βfrom period to periodβ.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, βPartiesβ may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of βParty A Inc. (βParty Aβ); Party B Corp. (βParty Bβ)β.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a βTermination for Convenienceβ clause starts with βEach Party may terminate this Agreement ifβ followed by three subparts β(a), (b) and (c)β, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>β. Another example is for βEffective Dateβ, the contract includes a sentence βThis Agreement is effective as of the date written aboveβ that appears after the date βJanuary 1, 2010β. The annotation is as follows: βJanuary 1, 2010 <omitted> This Agreement is effective as of the date written above.β
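The `<omitted>` convention described above can be handled with a small helper when post-processing answers (a sketch; the function name is an assumption, not part of the dataset tooling):

```python
def split_on_omitted(answer_text):
    # Break an annotated answer into its non-contiguous contract segments,
    # using the "<omitted>" marker inserted by the annotators.
    segments = answer_text.split("<omitted>")
    return [seg.strip() for seg in segments if seg.strip()]
```

For the "Effective Date" example above, this yields the two segments "January 1, 2010" and "This Agreement is effective as of the date written above."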
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
Plashkar | null | null | null | false | 2 | false | Plashkar/diabetes-predict-db | 2022-08-03T16:22:23.000Z | null | false | eb29fab27c5ca7b37d973b117f82ae60bedb1bad | [] | [] | https://huggingface.co/datasets/Plashkar/diabetes-predict-db/resolve/main/README.md | ---
{}
---
# AutoTrain Dataset for project: sample-diabetes-predict
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sample-diabetes-predict.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"feat_HighBP": 0.0,
"feat_HighChol": 0.0,
"feat_CholCheck": 1.0,
"feat_BMI": 34.0,
"feat_Smoker": 1.0,
"feat_Stroke": 0.0,
"feat_HeartDiseaseorAttack": 0.0,
"feat_PhysActivity": 1.0,
"feat_Fruits": 1.0,
"feat_Veggies": 1.0,
"feat_HvyAlcoholConsump": 0.0,
"feat_AnyHealthcare": 1.0,
"feat_NoDocbcCost": 0.0,
"feat_GenHlth": 3.0,
"feat_MentHlth": 0.0,
"feat_PhysHlth": 0.0,
"feat_DiffWalk": 0.0,
"feat_Sex": 0.0,
"feat_Age": 6.0,
"feat_Education": 6.0,
"feat_Income": 7.0
},
{
"target": 1,
"feat_HighBP": 0.0,
"feat_HighChol": 0.0,
"feat_CholCheck": 1.0,
"feat_BMI": 46.0,
"feat_Smoker": 1.0,
"feat_Stroke": 0.0,
"feat_HeartDiseaseorAttack": 0.0,
"feat_PhysActivity": 1.0,
"feat_Fruits": 1.0,
"feat_Veggies": 1.0,
"feat_HvyAlcoholConsump": 0.0,
"feat_AnyHealthcare": 1.0,
"feat_NoDocbcCost": 0.0,
"feat_GenHlth": 2.0,
"feat_MentHlth": 1.0,
"feat_PhysHlth": 0.0,
"feat_DiffWalk": 0.0,
"feat_Sex": 1.0,
"feat_Age": 10.0,
"feat_Education": 6.0,
"feat_Income": 5.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0.0', '1.0'], id=None)",
"feat_HighBP": "Value(dtype='float64', id=None)",
"feat_HighChol": "Value(dtype='float64', id=None)",
"feat_CholCheck": "Value(dtype='float64', id=None)",
"feat_BMI": "Value(dtype='float64', id=None)",
"feat_Smoker": "Value(dtype='float64', id=None)",
"feat_Stroke": "Value(dtype='float64', id=None)",
"feat_HeartDiseaseorAttack": "Value(dtype='float64', id=None)",
"feat_PhysActivity": "Value(dtype='float64', id=None)",
"feat_Fruits": "Value(dtype='float64', id=None)",
"feat_Veggies": "Value(dtype='float64', id=None)",
"feat_HvyAlcoholConsump": "Value(dtype='float64', id=None)",
"feat_AnyHealthcare": "Value(dtype='float64', id=None)",
"feat_NoDocbcCost": "Value(dtype='float64', id=None)",
"feat_GenHlth": "Value(dtype='float64', id=None)",
"feat_MentHlth": "Value(dtype='float64', id=None)",
"feat_PhysHlth": "Value(dtype='float64', id=None)",
"feat_DiffWalk": "Value(dtype='float64', id=None)",
"feat_Sex": "Value(dtype='float64', id=None)",
"feat_Age": "Value(dtype='float64', id=None)",
"feat_Education": "Value(dtype='float64', id=None)",
"feat_Income": "Value(dtype='float64', id=None)"
}
```
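Since every feature column is a `float64` and the label lives in `target`, a sample can be turned into a (features, label) pair with a few lines (a sketch; the helper name and the sorted feature ordering are assumptions):

```python
def to_xy(sample):
    # Collect the "feat_*" columns in a deterministic (sorted) order
    # and keep "target" separate as the class label.
    feature_names = sorted(k for k in sample if k.startswith("feat_"))
    features = [sample[k] for k in feature_names]
    return features, sample["target"]
```

A consistent feature ordering matters when feeding rows to a model; sorting the column names is one simple way to get it.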
### Dataset Splits
This dataset is split into train and validation sets. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 56552 |
| valid | 14140 |
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-bfaf23ee-12505670 | 2022-08-03T21:16:04.000Z | null | false | 88c8874b5003f5defca3f2aad8031d5925ac3f8c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ml6team/cnn_dailymail_nl"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-bfaf23ee-12505670/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ml6team/cnn_dailymail_nl
eval_info:
task: summarization
model: yhavinga/long-t5-tglobal-small-dutch-cnn
metrics: []
dataset_name: ml6team/cnn_dailymail_nl
dataset_config: default
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/long-t5-tglobal-small-dutch-cnn
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
falabrasil | null | null | null | false | 2 | false | falabrasil/lapsbm | 2022-08-03T23:42:22.000Z | null | false | 2f8542b79f4abad2802809e98f87fee6745a3ecd | [] | [
"license:mit"
] | https://huggingface.co/datasets/falabrasil/lapsbm/resolve/main/README.md | ---
license: mit
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-ben-yu__ms2_combined-823f066f-12515671 | 2022-08-04T20:56:42.000Z | null | false | 30cebad0e823eb4ab1becef422f44931b4da5b7e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ben-yu/ms2_combined"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-ben-yu__ms2_combined-823f066f-12515671/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ben-yu/ms2_combined
eval_info:
task: summarization
model: Blaise-g/long_t5_global_large_pubmed_explanatory
metrics: []
dataset_name: ben-yu/ms2_combined
dataset_config: ben-yu--ms2_combined
dataset_split: train
col_mapping:
text: Abstract
target: Target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: ben-yu/ms2_combined
* Config: ben-yu--ms2_combined
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ben-yu](https://huggingface.co/ben-yu) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-samsum-99725515-12535673 | 2022-08-04T05:08:25.000Z | null | false | bd3854b9bb621424168dbfa48790db5385bc7a65 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-99725515-12535673/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-xsum-d2b9e56c-12525674 | 2022-08-05T10:19:15.000Z | null | false | cf293f7e3c683acacd14e082b236f5145eb3f85e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-xsum-d2b9e56c-12525674/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Auriga | null | null | null | false | 3 | false | Auriga/rules3 | 2022-08-04T08:43:51.000Z | null | false | cf7ebf787de65e4aac566ae8f3866d9d3d3a62e2 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Auriga/rules3/resolve/main/README.md | ---
license: artistic-2.0
---
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-d94a9931-12545675 | 2022-08-04T15:28:56.000Z | null | false | aaa27bc25d2c67b9870d4a6390ba6cbd30a7e558 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Blaise-g/SumPubmed"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-d94a9931-12545675/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Blaise-g/SumPubmed
eval_info:
task: summarization
model: Jacobsith/autotrain-Hello_there-1209845735
metrics: []
dataset_name: Blaise-g/SumPubmed
dataset_config: Blaise-g--SumPubmed
dataset_split: test
col_mapping:
text: text
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Jacobsith/autotrain-Hello_there-1209845735
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jacobsith](https://huggingface.co/Jacobsith) for evaluating this model. |
merkalo-ziri | null | null | null | false | 2 | false | merkalo-ziri/trans_dataset | 2022-08-04T10:29:44.000Z | null | false | 8e38dfd0ff467955d47a0369af72b2a536a2e3c4 | [] | [] | https://huggingface.co/datasets/merkalo-ziri/trans_dataset/resolve/main/README.md |
---
annotations_creators:
- found
language:
- Russian
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: trans_dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- trans
- piska
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
Silvia | null | null | null | false | 86 | false | Silvia/WITS | 2022-08-04T13:33:31.000Z | null | false | 24b1ead0d5b681ab6350c049a5b6720bfddf384c | [] | [] | https://huggingface.co/datasets/Silvia/WITS/resolve/main/README.md | # Dataset Card for WITS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/slvcsl/WITS
- **Paper:** http://ceur-ws.org/Vol-3033/paper65.pdf
### Dataset Summary
WITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.
Compared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.
### Languages
The dataset is in Italian.
### Licensing Information
The dataset uses text from Wikipedia. Please refer to Wikipedia's license.
### Citation Information
If you use the dataset, please cite:
```
@inproceedings{DBLP:conf/clic-it/CasolaL21,
author={Silvia Casola and Alberto Lavelli},
title={WITS: Wikipedia for Italian Text Summarization},
year={2021},
cdate={1609459200000},
url={http://ceur-ws.org/Vol-3033/paper65.pdf},
booktitle={CLiC-it},
crossref={conf/clic-it/2021}
}
``` |
conceptofmind | null | null | null | false | 13 | false | conceptofmind/wikitext-2-v1-clean | 2022-08-04T20:25:15.000Z | null | false | b641d5a558424e1d41a3acf9f2cb76099ef65923 | [] | [
"arxiv:1609.07843",
"license:mit"
] | https://huggingface.co/datasets/conceptofmind/wikitext-2-v1-clean/resolve/main/README.md | ---
license: mit
---
### Citation Information
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
conceptofmind | null | null | null | false | 7 | false | conceptofmind/wikitext-103-v1-clean | 2022-08-04T20:25:36.000Z | null | false | 5c53abfb11079a51c7c0bb278f56e217f04d5066 | [] | [
"arxiv:1609.07843",
"license:mit"
] | https://huggingface.co/datasets/conceptofmind/wikitext-103-v1-clean/resolve/main/README.md | ---
license: mit
---
### Citation Information
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
NX2411 | null | null | null | false | 4 | false | NX2411/AIhub-korean-speech-data-large | 2022-08-04T16:23:54.000Z | null | false | 54d98901eeeb9a922ec290c90a9989be5166ac6d | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/NX2411/AIhub-korean-speech-data-large/resolve/main/README.md | ---
license: apache-2.0
---
|
Qilex | null | null | null | false | 1 | false | Qilex/EN-ME | 2022-08-11T21:25:34.000Z | null | false | 1cc5ff914b850c92808d1a9c92082d8d6101b165 | [] | [
"language:en",
"language:me",
"license:afl-3.0",
"multilinguality:translation",
"size_categories:10K<n<100K",
"tags:middle english",
"task_categories:translation"
] | https://huggingface.co/datasets/Qilex/EN-ME/resolve/main/README.md | ---
language:
- en
- me
license:
- afl-3.0
multilinguality:
- translation
pretty_name: EN-ME
size_categories:
- 10K<n<100K
tags:
- middle english
task_categories:
- translation
---
EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
This dataset reflects the spelling inconsistencies characteristic of Middle English.
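Because the corpus keeps archaic characters such as þ, a tokenizer trained on modern English may split them oddly. A minimal, optional normalization pass can fold them into modern approximations; the helper and its character mapping below are illustrative assumptions, not part of this dataset:

```python
# Hypothetical normalization helper for Middle English text.
# The character mapping is an illustrative assumption and is not
# part of the dataset; extend or adjust it for your own use.
ARCHAIC_MAP = {
    "\u00fe": "th",  # thorn (lowercase)
    "\u00de": "Th",  # thorn (uppercase)
    "\u00f0": "th",  # eth (lowercase)
    "\u00d0": "Th",  # eth (uppercase)
}

def normalize_middle_english(text: str) -> str:
    """Fold archaic characters into rough modern equivalents."""
    for archaic, modern in ARCHAIC_MAP.items():
        text = text.replace(archaic, modern)
    return text
```

Applying such a pass before tokenization can reduce vocabulary fragmentation, though it discards orthographic information that a model of Middle English may want to keep.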
|
Sunny5555 | null | null | null | false | 2 | false | Sunny5555/My_First_Dataset | 2022-08-04T19:29:33.000Z | null | false | cb045b4eb1028845609fa74d1a8b3d5072978b90 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Sunny5555/My_First_Dataset/resolve/main/README.md | ---
license: afl-3.0
---
|
sacculifer | null | null | null | false | 2 | false | sacculifer/dimbat_disaster_detection | 2022-08-05T13:18:31.000Z | null | false | 231c30478949022b78907358aa07d7eefee8c7e4 | [] | [] | https://huggingface.co/datasets/sacculifer/dimbat_disaster_detection/resolve/main/README.md | The dataset contains the well balanced disaster and none disaster tweets selected from 2011, 2012, 2013, 2014, 2015, 2017 and 2018.
The predicted label is shown in predict column |
sacculifer | null | null | null | false | 1 | false | sacculifer/dimbat_disaster_type_detection | 2022-08-05T09:35:34.000Z | null | false | cd13d68018443758e8888f79e79709e69e032e73 | [] | [] | https://huggingface.co/datasets/sacculifer/dimbat_disaster_type_detection/resolve/main/README.md | ## Labels
- biological --- 1
- earthquake --- 2
- flood --- 3
- hurricane & tornado --- 4
- wildfire --- 5
- industrial --- 6
- societal --- 7
- transportation --- 8
- meteor --- 9
- haze --- 10 |
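The numeric label ids above can be captured in a small lookup table; a minimal sketch in Python (the helper name is illustrative, not part of the dataset):

```python
# Map numeric label ids from the dataset to disaster-type names,
# mirroring the label list in the card above.
DISASTER_TYPES = {
    1: "biological",
    2: "earthquake",
    3: "flood",
    4: "hurricane & tornado",
    5: "wildfire",
    6: "industrial",
    7: "societal",
    8: "transportation",
    9: "meteor",
    10: "haze",
}

def label_name(label_id: int) -> str:
    """Return the human-readable type for a label id, or 'unknown'."""
    return DISASTER_TYPES.get(label_id, "unknown")
```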
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-project-wmt16-d9e39a12-12565676 | 2022-08-05T02:14:27.000Z | null | false | 9f538df29ee297a7750ec7d270ab49b7810e8b31 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:wmt16"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-wmt16-d9e39a12-12565676/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt16
eval_info:
task: translation
model: Lvxue/finetuned-mt5-small-10epoch
metrics: []
dataset_name: wmt16
dataset_config: ro-en
dataset_split: test
col_mapping:
source: translation.en
target: translation.ro
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: Lvxue/finetuned-mt5-small-10epoch
* Dataset: wmt16
* Config: ro-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Lvxue](https://huggingface.co/Lvxue) for evaluating this model. |
Cxrles | null | null | null | false | 2 | false | Cxrles/uh | 2022-08-05T03:21:54.000Z | null | false | 13196e0a2763b1dc617a5acbfaa808a8f2e94b54 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/Cxrles/uh/resolve/main/README.md | ---
license: bsd
---
|
vector | null | null | null | false | 3 | false | vector/test_demo | 2022-08-15T09:09:12.000Z | null | false | 403489eb4daef8bd3b2f1b54cfb0ca07c1490ee5 | [] | [
"annotators:found",
"language_creators:found",
"language:cn"
] | https://huggingface.co/datasets/vector/test_demo/resolve/main/README.md | ---
annotators:
- found
language_creators:
- found
language:
- cn
---
### Dataset Summary
Placeholder
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
faisal@cs.columbia.edu, esdurmus@stanford.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-3.0: Creative Commons Attribution 3.0 Unported
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
gem_id -- The id for the data instance.
source_language -- The language of the source article.
target_language -- The language of the target summary.
source -- The source document.
target -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{
  "gem_id": "wikilingua_crosslingual-train-12345",
  "gem_parent_id": "wikilingua_crosslingual-train-12345",
  "source_language": "fr",
  "target_language": "de",
  "source": "Document in fr",
  "target": "Summary in de"
}
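A minimal sketch of reading the fields of such an instance; the dict below mirrors the example above, and the helper function is an illustration rather than part of the data loader:

```python
# Access the fields of a WikiLingua cross-lingual instance.
# The dict mirrors the example instance shown in the card.
instance = {
    "gem_id": "wikilingua_crosslingual-train-12345",
    "gem_parent_id": "wikilingua_crosslingual-train-12345",
    "source_language": "fr",
    "target_language": "de",
    "source": "Document in fr",
    "target": "Summary in de",
}

def language_pair(example: dict) -> str:
    """Format the translation direction, e.g. 'fr->de'."""
    return f"{example['source_language']}->{example['target_language']}"
```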
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it also provides a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
Previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created in order to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied, as well as to open up interesting new directions for research in summarization, e.g., exploring multi-source cross-lingual architectures (models that can summarize from multiple source languages into a target language) and building models that can summarize articles from any language to any other language for a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, which is an online resource of how-to guides (written and reviewed by human authors), is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license, therefore there are no restrictions about downstream uses as long is it's for non-commercial purposes.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
lingophilonaut | null | null | null | false | 3 | false | lingophilonaut/dummy_json | 2022-08-05T07:27:14.000Z | null | false | d7b7d99d2d2617f6f1c221d1729ad6a9373a8ee0 | [] | [] | https://huggingface.co/datasets/lingophilonaut/dummy_json/resolve/main/README.md | TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
maja601 | null | null | null | false | 3 | false | maja601/EuroCrops | 2022-08-05T07:24:29.000Z | null | false | 96c0b32022d28f8128e6586a1792376ca9790b29 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/maja601/EuroCrops/resolve/main/README.md | ---
license: cc-by-4.0
---
|
jeremy2599 | null | null | null | false | 2 | false | jeremy2599/stripai | 2022-08-05T08:15:05.000Z | null | false | 0fb384e4ddaede7fef6694a127d7adfa39927a5d | [] | [] | https://huggingface.co/datasets/jeremy2599/stripai/resolve/main/README.md | |
arize-ai | null | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists on product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed by reviews written in english. However, the production set has
some reviews written in spanish. At Arize, we work to surface this issue and
help you solve it. | false | 3 | false | arize-ai/human_actions_quality_drift | 2022-08-05T08:41:46.000Z | null | false | 287b269de92b7833e9d2a27177dfad0d1dec0eff | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"task_categories:image-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/arize-ai/human_actions_quality_drift/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
task_categories:
- image-classification
task_ids:
- multi-class-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
tau | null | @inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
Note that each SLED dataset has its own citation. Please see the source to
get the correct citation for each contained dataset (and also cite the SCROLLS dataset on which it is based). | Efficient Long-Text Understanding with Short-Text Models.
Our SLiding-Encoder and Decoder uses any pretrained encoder-decoder model to independently encode overlapping chunks of
the inputs, and performs fusion-in-decoder to achieve a linear memory requirement for long-range natural language understanding. | false | 317 | false | tau/sled | 2022-10-25T07:33:44.000Z | null | false | e6a474f5ff4133338b4b9b5e393bad65e787b152 | [] | [
"arxiv:2208.00748",
"arxiv:2201.03533",
"arxiv:2104.02112",
"arxiv:2104.07091",
"arxiv:2104.05938",
"arxiv:1712.07040",
"arxiv:2105.03011",
"arxiv:2112.08608",
"arxiv:2110.01799",
"arxiv:1606.05250",
"arxiv:1809.09600",
"language:en",
"license:mit",
"task_categories:question-answering",
... | https://huggingface.co/datasets/tau/sled/resolve/main/README.md | ---
language:
- en
license:
- mit
task_categories:
- question-answering
- summarization
- text-generation
task_ids:
- multiple-choice-qa
- natural-language-inference
configs:
- gov_report
- summ_screen_fd
- qmsum
- qasper
- narrative_qa
- quality
- contract_nli
- squad
- squad_shuffled_distractors
- squad_ordered_distractors
- hotpotqa
- hotpotqa_second_only
tags:
- multi-hop-question-answering
- query-based-summarization
- long-texts
---
## Dataset Description
- **Repository:** [SLED Github repository](https://github.com/Mivg/SLED)
- **Paper:** [Efficient Long-Text Understanding with Short-Text Models
](https://arxiv.org/pdf/2208.00748.pdf)
# Dataset Card for SLED
## Overview
This dataset is based on the [SCROLLS](https://huggingface.co/datasets/tau/scrolls) dataset ([paper](https://arxiv.org/pdf/2201.03533.pdf)), the [SQuAD 1.1](https://huggingface.co/datasets/squad) dataset and the [HotpotQA](https://huggingface.co/datasets/hotpot_qa) dataset.
It doesn't contain any unpublished data, but includes the configuration needed for the [Efficient Long-Text Understanding with Short-Text Models
](https://arxiv.org/pdf/2208.00748.pdf) paper.
## Tasks
The tasks included are:
#### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf))
GovReport is a summarization dataset of reports addressing various national policy issues published by the
Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.
The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets;
for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively.
#### SummScreenFD ([Chen et al., 2021](https://arxiv.org/pdf/2104.07091.pdf))
SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).
Given a transcript of a specific episode, the goal is to produce the episode's recap.
The original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts.
For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows,
making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows.
Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.
#### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf))
QMSum is a query-based summarization dataset, consisting of 232 meetings transcripts from multiple domains.
The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control,
and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.
Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions,
while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.
#### NarrativeQA ([Kočiský et al., 2021](https://arxiv.org/pdf/1712.07040.pdf))
NarrativeQA (Kočiský et al., 2021) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.
Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs,
resulting in about 30 questions and answers for each of the 1,567 books and scripts.
They were encouraged to use their own words rather than copying, and to avoid asking yes/no questions or ones about the cast.
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).
#### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf))
Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).
Questions were written by NLP practitioners after reading only the title and abstract of the papers,
while another set of NLP practitioners annotated the answers given the entire document.
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.
#### QuALITY ([Pang et al., 2021](https://arxiv.org/pdf/2112.08608.pdf))
QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg,
the Open American National Corpus, and more.
Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them,
human annotators must read large portions of the given document.
Reference answers were then determined by a majority vote among the annotators' and writer's answers.
To measure the difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a short period of time to skim through the document.
As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.
#### ContractNLI ([Koreeda and Manning, 2021](https://arxiv.org/pdf/2110.01799.pdf))
Contract NLI is a natural language inference dataset in the legal domain.
Given a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract.
The NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google.
The dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples.
#### SQuAD 1.1 ([Rajpurkar et al., 2016](https://arxiv.org/pdf/1606.05250.pdf))
Stanford Question Answering Dataset (SQuAD) is a reading comprehension
dataset, consisting of questions posed by crowdworkers on a set of Wikipedia
articles, where the answer to every question is a segment of text, or span,
from the corresponding reading passage, or the question might be unanswerable.
#### HotpotQA ([Yang et al., 2018](https://arxiv.org/pdf/1809.09600.pdf))
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features:
(1) the questions require finding and reasoning over multiple supporting documents to answer;
(2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas;
(3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions;
(4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison.
## Data Fields
All the datasets in the benchmark are in the same input-output format
- `input`: a `string` feature. The input document.
- `input_prefix`: an optional `string` feature, for the datasets containing prefix (e.g. question)
- `output`: a `string` feature. The target.
- `id`: a `string` feature. Unique per input.
- `pid`: a `string` feature. Unique per input-output pair (can differ from `id` in NarrativeQA and Qasper, where there is more than one valid target).
The datasets that contain `input_prefix` are:
- SQuAD - the question
- HotpotQA - the question
- qmsum - the query
- qasper - the question
- narrative_qa - the question
- quality - the question + the four choices
- contract_nli - the hypothesis
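As an illustration, the shared format above can be turned into a single model input string by prepending the prefix when it is present. This is a hedged sketch; the helper name, separator, and example values are assumptions, not SLED's actual preprocessing code:

```python
# Illustrative helper (an assumption, not part of the SLED codebase): combine
# the optional prefix (question/query/hypothesis) with the input document.
def build_model_input(example):
    """Prepend `input_prefix` to `input` when present, separated by a newline."""
    prefix = example.get("input_prefix")
    if prefix:
        return f"{prefix}\n{example['input']}"
    return example["input"]

# A Qasper-like example record with hypothetical content.
qasper_like = {
    "id": "q-001",
    "pid": "q-001-a",  # may differ from `id` when there are multiple valid targets
    "input_prefix": "What baselines are compared?",
    "input": "Full paper text ...",
    "output": "BART and LED.",
}

print(build_model_input(qasper_like).startswith("What baselines"))  # True
```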
## Controlled experiments
To test multiple properties of SLED, we modify SQuAD 1.1 [Rajpurkar et al., 2016](https://arxiv.org/pdf/1606.05250.pdf)
and HotpotQA [Yang et al., 2018](https://arxiv.org/pdf/1809.09600.pdf) to create a few controlled experiments settings.
Those are accessible via the following configurations:
- squad - Contains the original version of SQuAD 1.1 (question + passage)
- squad_ordered_distractors - For each example, 9 random distractor passages are concatenated (separated by '\n')
- squad_shuffled_distractors - For each example, 9 random distractor passages are added (separated by '\n'), and the 10 passages are then jointly shuffled at random
- hotpotqa - A clean version of HotpotQA, where each input contains only the two gold passages (separated by '\n')
- hotpotqa_second_only - In each example, the input contains only the second gold passage
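The shuffled-distractors construction described above can be sketched as follows. This is a hedged illustration; the function name, seeding, and example passages are assumptions, not the actual generation code:

```python
import random

# Sketch of the squad_shuffled_distractors setup: one gold passage plus
# 9 distractors, jointly shuffled, then joined with '\n'.
def shuffled_distractors_input(gold, distractors, seed=0):
    """Shuffle the gold passage in among the distractors and join with newlines."""
    assert len(distractors) == 9, "the configuration adds 9 distractor passages"
    passages = [gold] + list(distractors)
    random.Random(seed).shuffle(passages)  # seeded for reproducibility
    return "\n".join(passages)

gold = "Gold passage containing the answer span."
distractors = [f"Distractor passage {i}." for i in range(9)]
combined = shuffled_distractors_input(gold, distractors)
print(combined.count("\n"))  # 9 separators join the 10 passages
```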
## Citation
If you use this dataset, **please make sure to cite all the original dataset papers as well as SCROLLS.** [[bibtex](https://drive.google.com/uc?export=download&id=1IUYIzQD9DPsECw0JWkwk4Ildn8JOMtuU)]
```
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
``` |
rony | null | null | null | false | 3 | false | rony/VANiLLa | 2022-08-05T11:45:29.000Z | null | false | ac1325919b9de7b6daf6bff34d77ccff838ea52d | [] | [
"license:mit"
] | https://huggingface.co/datasets/rony/VANiLLa/resolve/main/README.md | ---
license: mit
---
Knowledge graph triple to answer verbalization dataset.
VANiLLa: Verbalized answers in natural language at large scale
|
yogiyulianto | null | null | null | false | 3 | false | yogiyulianto/twitter-sentiment-dataset-en | 2022-08-05T09:42:41.000Z | null | false | 0ab2e4c58f80f646893202b866331719f6cdb358 | [] | [
"license:other"
] | https://huggingface.co/datasets/yogiyulianto/twitter-sentiment-dataset-en/resolve/main/README.md | ---
license: other
---
|