author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
rousses | null | null | null | false | 1 | false | rousses/imagine | 2022-09-28T22:16:15.000Z | null | false | 57e5044606ea180cd495a3c301c25a19fde3d7ff | [] | [
"license:other"
] | https://huggingface.co/datasets/rousses/imagine/resolve/main/README.md | ---
license: other
---
|
poeticoncept | null | null | null | false | 3 | false | poeticoncept/autoportrait2 | 2022-09-28T22:06:11.000Z | null | false | 8e28a20ff6b87b1740201fab0035b05f044fa2be | [] | [
"license:other"
] | https://huggingface.co/datasets/poeticoncept/autoportrait2/resolve/main/README.md | ---
license: other
---
|
Franmg | null | null | null | false | 1 | false | Franmg/Fotos | 2022-09-28T22:37:01.000Z | null | false | cc27350c690c3bf84e52554a42e7e6af62d917c3 | [] | [] | https://huggingface.co/datasets/Franmg/Fotos/resolve/main/README.md | |
cattoroboto | null | null | null | false | 1 | false | cattoroboto/waifudiffusion-marine-textual-inversion | 2022-09-29T00:06:45.000Z | null | false | 1786207ffebfbe62211179fccbd4d0566ace37a9 | [] | [] | https://huggingface.co/datasets/cattoroboto/waifudiffusion-marine-textual-inversion/resolve/main/README.md | This textual inversion has been trained on WaifuDiffusion v1.2 (`[45dee52b]`). This will probably not work well with the standard Stable Diffusion model.
# How to use (with webui)
- create an `embeddings` folder in the root directory of the webui
- place the downloaded `.bin` file in that folder
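The steps above can be sketched as a shell snippet (the webui path and the embedding filename are illustrative — adjust them to your installation and to the actual `.bin` you downloaded):

```shell
WEBUI_DIR=stable-diffusion-webui            # illustrative path; adjust to your install
mkdir -p "$WEBUI_DIR/embeddings"            # step 1: create the embeddings folder
touch marine.bin                            # stand-in for the downloaded embedding file
cp marine.bin "$WEBUI_DIR/embeddings/"      # step 2: place the .bin in embeddings/
```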
**keyword: `<marine>`** |
AmliArt | null | null | null | false | 3 | false | AmliArt/face | 2022-09-28T23:55:28.000Z | null | false | 048a873dc8ee97644ef250ff3e5fdec23e635a68 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/AmliArt/face/resolve/main/README.md | ---
license: unknown
---
|
Jonnyck | null | null | null | false | 1 | false | Jonnyck/myself | 2022-09-29T00:14:28.000Z | null | false | 774d821c1bb64c62c0eef7204ff19776946d9892 | [] | [
"license:other"
] | https://huggingface.co/datasets/Jonnyck/myself/resolve/main/README.md | ---
license: other
---
|
Limbicnation | null | null | null | false | 3 | false | Limbicnation/pixelart | 2022-09-29T00:03:03.000Z | null | false | ff362105035ab3d6251d4fd0dbb65bb826d3e357 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Limbicnation/pixelart/resolve/main/README.md | ---
license: artistic-2.0
---
|
JorgeAcevedx | null | null | null | false | 1 | false | JorgeAcevedx/portrait | 2022-09-29T00:17:43.000Z | null | false | 6fd8ede7dbde80c793cf5a335a3f5ccf431f9890 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/JorgeAcevedx/portrait/resolve/main/README.md | ---
license: afl-3.0
---
|
Pitagorak | null | null | null | false | 1 | false | Pitagorak/Yo | 2022-10-01T04:21:10.000Z | null | false | 0c52d74f1f27559051c13c40bcbdc0ea22e5dac9 | [] | [
"license:other"
] | https://huggingface.co/datasets/Pitagorak/Yo/resolve/main/README.md | ---
license: other
---
|
waifu-research-department | null | null | null | false | 5 | false | waifu-research-department/regularization | 2022-09-29T22:00:10.000Z | null | false | 337ec38c58a30812c0944d807f5acdc1f86f4bc3 | [] | [
"license:mit"
] | https://huggingface.co/datasets/waifu-research-department/regularization/resolve/main/README.md | ---
license: mit
---
# Info
> This is a repository for anime regularization images. If you wish to contribute to the dataset, contact me at naotsue#9786 and I will add your images and update the dataset.
# Criteria
> 512x512
> No excessive deformations
> Vaguely resembles an anime art style
# Contribution Leaderboard
> 1. bWm_nubby: 5838 images
> 2. naotsue: 888 images
 |
Brayant115 | null | null | null | false | 1 | false | Brayant115/yo | 2022-09-29T02:47:35.000Z | null | false | 501e676071e2bde888b80b52227f0aedc4f82d81 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Brayant115/yo/resolve/main/README.md | ---
license: apache-2.0
---
|
vialibre | null | @misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | null | false | 1 | false | vialibre/spanish3bwc_vocab | 2022-09-29T08:37:10.000Z | null | false | fdc29b49751501be120fbcbc6afa1e4c6cd0887a | [] | [
"arxiv:2207.06591",
"language:es",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"license:mit"
] | https://huggingface.co/datasets/vialibre/spanish3bwc_vocab/resolve/main/README.md | ---
language:
- 'es'
multilinguality:
- monolingual
size_categories:
- "1M<n<10M"
pretty_name: "Vocabulary info - Spanish 3 Billion Words Corpora"
license: mit
---
# Dataset Card for "Vocabulary info - Spanish 3 Billion Words Corpora"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Source Data](#source-data)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** https://arxiv.org/abs/2207.06591
### Dataset Summary
* Number of words: 1,529,876 (1.5M)
* Only words with an absolute frequency greater than or equal to four are included.
### Languages
* Spanish
### Source Data
- **Repository:** https://huggingface.co/datasets/nanom/splittedspanish3bwc
## Dataset Structure
### Data Fields
| Field | Description |
|-------|-------------|
| word | The word |
| freq | Absolute word frequency |
| cum_freq | Cumulative frequency relative to the absolute frequencies of the words |
| percentile | percentile = $cum\_freq_i / \max(cum\_freq)$ |
| splits | Names of the blocks of the complete dataset that contain the word |
| splits_freq | Number of occurrences of the word in each block |
| in_subset | Dictionary mapping each parent set name (from which blocks are extracted) to the number of occurrences of the word |
Example:
```
{"word":"crisantemo",
"freq":714,
"cum_freq":255249,
"percentile":0.00009,
"splits":["OpenSubtitles2018_501","allwikis_41","allwikis_242","allwikis_173","allwikis_327","allwikis_331","OpenSubtitles2018_494","allwikis_131","OpenSubtitles2018_263","allwikis_316","ParaCrawl_66","OpenSubtitles2018_43","ParaCrawl_15","allwikis_300","OpenSubtitles2018_617","allwikis_4","ParaCrawl_143","allwikis_178","ParaCrawl_39","allwikis_96","OpenSubtitles2018_555","allwikis_265","OpenSubtitles2018_393","OpenSubtitles2018_589","allwikis_200","OpenSubtitles2018_152","allwikis_273","allwikis_153","OpenSubtitles2018_272","OpenSubtitles2018_37","allwikis_120","allwikis_311","OpenSubtitles2018_217","allwikis_136","OpenSubtitles2018_421","allwikis_169","ParaCrawl_137","allwikis_199","allwikis_258","allwikis_77","allwikis_91","OpenSubtitles2018_348","OpenSubtitles2018_372","OpenSubtitles2018_126","OpenSubtitles2018_143","OpenSubtitles2018_364","allwikis_292","allwikis_211","OpenSubtitles2018_598","allwikis_269","allwikis_46","allwikis_299","allwikis_245","DOGC_13","allwikis_253","OpenSubtitles2018_559","allwikis_192","OpenSubtitles2018_230","allwikis_345","ParaCrawl_35","allwikis_184","OpenSubtitles2018_637","OpenSubtitles2018_279","allwikis_158","OpenSubtitles2018_485","allwikis_197","allwikis_340","allwikis_114","allwikis_333","allwikis_181","allwikis_171","allwikis_325","allwikis_138","OpenSubtitles2018_219","OpenSubtitles2018_624","OpenSubtitles2018_596","OpenSubtitles2018_503","OpenSubtitles2018_112","OpenSubtitles2018_539","allwikis_89","OpenSubtitles2018_350","OpenSubtitles2018_104","OpenSubtitles2018_187","OpenSubtitles2018_524","allwikis_82","OpenSubtitles2018_557","allwikis_94","OpenSubtitles2018_377","allwikis_297","OpenSubtitles2018_135","allwikis_214","allwikis_48","allwikis_267","allwikis_125","allwikis_302","OpenSubtitles2018_212","allwikis_63","OpenSubtitles2018_546","DGT_24","allwikis_93","allwikis_75","DGT_32","OpenSubtitles2018_132","OpenSubtitles2018_519","OpenSubtitles2018_366","allwikis_59","allwikis_205","OpenSubtitles2018_23","OpenSubtitles20
18_280","OpenSubtitles2018_628","allwikis_118","OpenSubtitles2018_423","ParaCrawl_123","allwikis_105","allwikis_186","OpenSubtitles2018_241","allwikis_160","allwikis_113","OpenSubtitles2018_623","OpenSubtitles2018_208","allwikis_129","OpenSubtitles2018_28","OpenSubtitles2018_635","OpenSubtitles2018_504","ParaCrawl_2","OpenSubtitles2018_587","allwikis_44","OpenSubtitles2018_591","OpenSubtitles2018_180","allwikis_68","OpenSubtitles2018_115","ParaCrawl_134","GlobalVoices_0","allwikis_328","allwikis_312","allwikis_146","OpenSubtitles2018_297","OpenSubtitles2018_271","OpenSubtitles2018_202","allwikis_261","OpenSubtitles2018_133","allwikis_58","allwikis_204","OpenSubtitles2018_125","OpenSubtitles2018_522","allwikis_84","OpenSubtitles2018_333","allwikis_246","allwikis_235","OpenSubtitles2018_197","allwikis_53","allwikis_128","OpenSubtitles2018_429","allwikis_323","allwikis_104","OpenSubtitles2018_13","allwikis_335","OpenSubtitles2018_538","allwikis_88","allwikis_224","allwikis_257","allwikis_78","OpenSubtitles2018_322","EUBookShop_29","allwikis_54","ParaCrawl_102","OpenSubtitles2018_625","allwikis_308","ParaCrawl_138","allwikis_180","OpenSubtitles2018_222","allwikis_103","allwikis_324","ParaCrawl_31","OpenSubtitles2018_67","GlobalVoices_7","ParaCrawl_65","allwikis_124","allwikis_303","allwikis_157","OpenSubtitles2018_276","allwikis_141","ParaCrawl_83","allwikis_203","OpenSubtitles2018_305","allwikis_49","OpenSubtitles2018_329","allwikis_239","OpenSubtitles2018_533","allwikis_95","allwikis_285","allwikis_29","allwikis_210","OpenSubtitles2018_553","allwikis_76","OpenSubtitles2018_420","OpenSubtitles2018_453","OpenSubtitles2018_249","allwikis_168","allwikis_152","OpenSubtitles2018_45","allwikis_121","allwikis_310","OpenSubtitles2018_469","allwikis_137","OpenSubtitles2018_645","OpenSubtitles2018_636","OpenSubtitles2018_288","allwikis_163","GlobalVoices_9","allwikis_321","allwikis_175","allwikis_106","OpenSubtitles2018_354","OpenSubtitles2018_4","OpenSubtitles2018_562","allwiki
s_47","EUBookShop_2","OpenSubtitles2018_574","allwikis_51","ParaCrawl_1","JRC_5","OpenSubtitles2018_465","allwikis_148","OpenSubtitles2018_269","allwikis_101","allwikis_194","OpenSubtitles2018_245","ParaCrawl_149","OpenSubtitles2018_162","allwikis_56","OpenSubtitles2018_595","OpenSubtitles2018_565","allwikis_217","allwikis_294","OpenSubtitles2018_145","allwikis_264","allwikis_282","OpenSubtitles2018_374","allwikis_81","allwikis_5","OpenSubtitles2018_78","allwikis_189","OpenSubtitles2018_258","allwikis_130","OpenSubtitles2018_211","allwikis_143","OpenSubtitles2018_42","ParaCrawl_97"],
"splits_freq":[8,3,1,1,5,1,26,1,1,1,8,7,1,1,1,1,3,4,1,3,1,1,1,27,2,4,1,1,2,1,3,2,2,4,1,1,1,1,1,1,8,1,1,18,1,2,1,1,3,1,2,11,1,1,1,2,2,1,1,1,5,1,2,1,3,8,2,1,2,4,3,1,2,2,1,3,1,5,1,1,4,3,2,81,1,5,2,3,1,4,1,2,1,2,1,9,2,1,7,1,1,2,1,1,1,1,1,5,1,1,5,1,12,3,3,1,1,1,6,4,2,4,9,12,1,1,1,1,1,1,2,2,4,1,2,2,1,2,6,1,2,2,1,1,9,1,2,2,1,2,1,2,4,1,1,15,1,5,1,1,1,1,2,2,1,1,1,1,1,2,1,1,1,4,3,1,4,1,3,1,5,1,1,1,3,2,1,1,2,1,1,4,1,3,1,3,6,2,1,2,1,2,8,2,1,2,1,2,1,1,1,2,1,3,1,1,2,1,1,6,1,2,1,1,3,1,1,3,1,1,1,2,1,3,1,6,2,2,2,1,1,4,1,1,1,3,8,1,2],
"in_subset":{"OpenSubtitles2018":417,"allwikis":231,"ParaCrawl":45,"DOGC":1,"DGT":9,"GlobalVoices":6,"EUBookShop":3,"JRC":2}}'
```
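The `cum_freq` and `percentile` fields can be reproduced from the raw frequencies; a minimal sketch with toy data (the words and counts are illustrative, not drawn from the actual corpus, and words are assumed sorted by ascending frequency):

```python
# Toy (word, absolute frequency) pairs, sorted by ascending frequency.
freqs = [("crisantemo", 4), ("flor", 40), ("de", 400)]

rows = []
cum = 0
for word, freq in freqs:
    cum += freq  # cumulative frequency up to and including this word
    rows.append({"word": word, "freq": freq, "cum_freq": cum})

max_cum = rows[-1]["cum_freq"]  # maximum cumulative frequency = total count
for row in rows:
    # percentile = cum_freq_i / max(cum_freq)
    row["percentile"] = row["cum_freq"] / max_cum

print(rows[-1]["percentile"])  # → 1.0 (last word reaches the full cumulative total)
```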
### Data Splits
| name |train|
|------------------------|----:|
|full |1529876|
|mini |20|
## Additional Information
### Licensing Information
* [MIT License](https://huggingface.co/datasets/vialibre/spanish3bwc_vocab/resolve/main/LICENSE)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
Xitari | null | null | null | false | 2 | false | Xitari/soyyo | 2022-09-29T03:49:31.000Z | null | false | f2c96e0553b980a0f6d6660dac79b7c8b2e8b0a7 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Xitari/soyyo/resolve/main/README.md | ---
license: artistic-2.0
---
|
leizu | null | null | null | false | 1 | false | leizu/face1 | 2022-09-29T04:33:47.000Z | null | false | 0277cb91dfecc95f779b25bbd9223bc770b276e1 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/leizu/face1/resolve/main/README.md | ---
license: openrail
---
|
sudapop | null | null | null | false | 1 | false | sudapop/test | 2022-09-29T04:51:38.000Z | null | false | b4adca9c6281d8076dcd2f1d30d83f991cdca1ec | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/sudapop/test/resolve/main/README.md | ---
license: afl-3.0
---
|
ruffusplay | null | null | null | false | 1 | false | ruffusplay/ajolote | 2022-09-29T05:27:47.000Z | null | false | fb9b4efc3c14b039c5012ad7d7de29bca88e4a0b | [] | [
"license:openrail"
] | https://huggingface.co/datasets/ruffusplay/ajolote/resolve/main/README.md | ---
license: openrail
---
|
ruffusplay | null | null | null | false | 1 | false | ruffusplay/ajolote2 | 2022-09-29T05:30:34.000Z | null | false | 3fbc3b7f095455dcbfa990c7cd9840bca953aceb | [] | [
"license:openrail"
] | https://huggingface.co/datasets/ruffusplay/ajolote2/resolve/main/README.md | ---
license: openrail
---
|
ruffusplay | null | null | null | false | 1 | false | ruffusplay/ajo | 2022-09-29T05:31:46.000Z | null | false | d821a66d1ea7e1a1c3d0f41d2b214d53af651cde | [] | [
"license:c-uda"
] | https://huggingface.co/datasets/ruffusplay/ajo/resolve/main/README.md | ---
license: c-uda
---
|
Metalistenia | null | null | null | false | 1 | false | Metalistenia/daniel | 2022-09-29T05:54:18.000Z | null | false | 2066097f3c1e270598bdeb8376f45e4d55bfdeb3 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Metalistenia/daniel/resolve/main/README.md | ---
license: openrail
---
|
Johannesemme | null | null | This dataset is designed to solve the task of categorizing a text with respect to 14 different categories obtained from the Wikipedia category hierarchy. | false | 1 | false | Johannesemme/wiki_kategori | 2022-10-07T16:38:13.000Z | null | false | cf7ba9db383e5ef5a05295a5efd55058000c73ef | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Johannesemme/wiki_kategori/resolve/main/README.md | ---
license: apache-2.0
---
|
joelito | null | false | 1,592 | false | joelito/eurlex_resources | 2022-11-15T20:07:54.000Z | null | false | 8f3732b8372fcad3aad678caea942f637566e304 | [] | [
"annotations_creators:other",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
... | https://huggingface.co/datasets/joelito/eurlex_resources/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "EurlexResources: A Corpus Covering the Largest EURLEX Resources"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/eurlex)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The following resource types are supported: caselaw, decision, directive, intagr, proposal, recommendation, regulation
More information about the resource types can be found here:
- Caselaw: [EU](https://eur-lex.europa.eu/collection/eu-law/eu-case-law.html)
- Decision: [EU](https://eur-lex.europa.eu/EN/legal-content/summary/european-union-decisions.html), [Wikipedia](https://en.wikipedia.org/wiki/Decision_(European_Union))
- Directive: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Directive_(European_Union))
- Recommendation: [EU](https://eur-lex.europa.eu/EN/legal-content/glossary/recommendation.html), [Wikipedia](https://en.wikipedia.org/wiki/Recommendation_(European_Union))
- Regulation: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Regulation_(European_Union))
- Intagr: [EU](https://eur-lex.europa.eu/collection/eu-law/inter-agree.html), [Wikipedia](https://en.wikipedia.org/wiki/Treaties_of_the_European_Union)
- Proposal: No resource found
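Since the files are `jsonl.xz`, they can be streamed line by line without decompressing to disk; a minimal sketch using only the standard library (the filename and record fields below are illustrative — check the repository for the actual file names and schema):

```python
import json
import lzma

def read_jsonl_xz(path):
    """Yield one JSON object per line from an xz-compressed JSONL file."""
    with lzma.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Illustrative round trip: write a tiny jsonl.xz file, then read it back.
with lzma.open("sample.jsonl.xz", "wt", encoding="utf-8") as f:
    f.write(json.dumps({"language": "en", "type": "regulation", "text": "..."}) + "\n")

docs = list(read_jsonl_xz("sample.jsonl.xz"))
print(docs[0]["type"])  # → regulation
```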
| Language | Source | Size (MB) | Tokens | Documents | Tokens/Document |
|:-----------|:---------------|------------:|------------:|------------:|------------------:|
| all | all | 176558 | 21526462547 | 8267651 | 2603 |
| all | caselaw | 32320 | 5465831562 | 2428444 | 2250 |
| all | decision | 27974 | 3054210877 | 1267253 | 2410 |
| all | directive | 4689 | 682257277 | 103020 | 6622 |
| all | intagr | 11264 | 1360585785 | 271055 | 5019 |
| all | proposal | 26526 | 3801600560 | 702392 | 5412 |
| all | recommendation | 1886 | 293127733 | 80277 | 3651 |
| all | regulation | 71899 | 6868848753 | 3415210 | 2011 |
| bg | all | 7819 | 683675086 | 348691 | 1960 |
| bg | caselaw | 1588 | 179453583 | 104434 | 1718 |
| bg | decision | 1248 | 102781999 | 54075 | 1900 |
| bg | directive | 263 | 28004882 | 4388 | 6382 |
| bg | intagr | 603 | 54294524 | 11581 | 4688 |
| bg | proposal | 1083 | 103035900 | 29251 | 3522 |
| bg | recommendation | 89 | 9061227 | 3321 | 2728 |
| bg | regulation | 2943 | 207042971 | 141641 | 1461 |
| cs | all | 8360 | 934087127 | 449793 | 2076 |
| cs | caselaw | 1163 | 208895704 | 104519 | 1998 |
| cs | decision | 1102 | 114330830 | 54075 | 2114 |
| cs | directive | 186 | 28119441 | 4388 | 6408 |
| cs | intagr | 449 | 55008615 | 11581 | 4749 |
| cs | proposal | 840 | 121367313 | 29252 | 4149 |
| cs | recommendation | 64 | 10216343 | 3323 | 3074 |
| cs | regulation | 4557 | 396148881 | 242655 | 1632 |
| da | all | 7676 | 927426056 | 303594 | 3054 |
| da | caselaw | 491 | 46535791 | 59328 | 784 |
| da | decision | 1356 | 161197980 | 54085 | 2980 |
| da | directive | 207 | 32246780 | 4388 | 7348 |
| da | intagr | 506 | 64738297 | 11582 | 5589 |
| da | proposal | 1399 | 213540972 | 29257 | 7298 |
| da | recommendation | 100 | 16169558 | 3352 | 4823 |
| da | regulation | 3618 | 392996678 | 141602 | 2775 |
| de | all | 9607 | 1259810918 | 348290 | 3617 |
| de | caselaw | 1930 | 337759616 | 104228 | 3240 |
| de | decision | 1449 | 169687102 | 53980 | 3143 |
| de | directive | 218 | 32100715 | 4385 | 7320 |
| de | intagr | 531 | 67188513 | 11580 | 5802 |
| de | proposal | 1556 | 227441945 | 29219 | 7784 |
| de | recommendation | 109 | 16914583 | 3318 | 5097 |
| de | regulation | 3813 | 408718444 | 141580 | 2886 |
| el | all | 12469 | 1378107391 | 349667 | 3941 |
| el | caselaw | 2951 | 386130014 | 105138 | 3672 |
| el | decision | 1823 | 182486411 | 54150 | 3370 |
| el | directive | 321 | 38513818 | 4390 | 8773 |
| el | intagr | 701 | 77738992 | 11584 | 6710 |
| el | proposal | 2085 | 251594184 | 29290 | 8589 |
| el | recommendation | 145 | 17742091 | 3357 | 5285 |
| el | regulation | 4443 | 423901881 | 141758 | 2990 |
| en | all | 9217 | 1161691945 | 348641 | 3332 |
| en | caselaw | 1846 | 316782142 | 104422 | 3033 |
| en | decision | 1504 | 176469698 | 54054 | 3264 |
| en | directive | 204 | 28841387 | 4388 | 6572 |
| en | intagr | 499 | 60504759 | 11581 | 5224 |
| en | proposal | 1538 | 209851759 | 29242 | 7176 |
| en | recommendation | 97 | 14694658 | 3320 | 4426 |
| en | regulation | 3530 | 354547542 | 141634 | 2503 |
| es | all | 8588 | 1077415710 | 348443 | 3092 |
| es | caselaw | 1870 | 312501137 | 104312 | 2995 |
| es | decision | 1334 | 147730710 | 54001 | 2735 |
| es | directive | 221 | 31679902 | 4385 | 7224 |
| es | intagr | 516 | 64131203 | 11581 | 5537 |
| es | proposal | 1366 | 197060809 | 29224 | 6743 |
| es | recommendation | 82 | 12355655 | 3319 | 3722 |
| es | regulation | 3199 | 311956294 | 141621 | 2202 |
| et | all | 6090 | 712559412 | 349615 | 2038 |
| et | caselaw | 1074 | 188899781 | 105111 | 1797 |
| et | decision | 1069 | 107752981 | 54159 | 1989 |
| et | directive | 177 | 25983417 | 4390 | 5918 |
| et | intagr | 436 | 51558677 | 11584 | 4450 |
| et | proposal | 810 | 114597516 | 29283 | 3913 |
| et | recommendation | 61 | 9717239 | 3355 | 2896 |
| et | regulation | 2464 | 214049801 | 141733 | 1510 |
| fi | all | 7346 | 926601752 | 349633 | 2650 |
| fi | caselaw | 1596 | 280391862 | 105119 | 2667 |
| fi | decision | 1227 | 133158370 | 54163 | 2458 |
| fi | directive | 204 | 30439964 | 4389 | 6935 |
| fi | intagr | 463 | 56305341 | 11584 | 4860 |
| fi | proposal | 1075 | 161285908 | 29288 | 5506 |
| fi | recommendation | 73 | 11623296 | 3356 | 3463 |
| fi | regulation | 2707 | 253397011 | 141734 | 1787 |
| fr | all | 9937 | 1383610076 | 348295 | 3972 |
| fr | caselaw | 2158 | 400304923 | 104228 | 3840 |
| fr | decision | 1473 | 182025567 | 53981 | 3372 |
| fr | directive | 222 | 34239059 | 4385 | 7808 |
| fr | intagr | 536 | 71340724 | 11580 | 6160 |
| fr | proposal | 1592 | 245293973 | 29218 | 8395 |
| fr | recommendation | 112 | 18413965 | 3318 | 5549 |
| fr | regulation | 3845 | 431991865 | 141585 | 3051 |
| ga | all | 1028 | 129394560 | 349778 | 369 |
| ga | caselaw | 11 | 1322015 | 105205 | 12 |
| ga | decision | 87 | 9132798 | 54189 | 168 |
| ga | directive | 18 | 2881950 | 4390 | 656 |
| ga | intagr | 19 | 3544433 | 11586 | 305 |
| ga | proposal | 289 | 51138741 | 29298 | 1745 |
| ga | recommendation | 10 | 1770503 | 3361 | 526 |
| ga | regulation | 594 | 59604120 | 141749 | 420 |
| hr | all | 4594 | 479668409 | 348691 | 1375 |
| hr | caselaw | 617 | 108134448 | 104434 | 1035 |
| hr | decision | 596 | 61443063 | 54075 | 1136 |
| hr | directive | 156 | 19255268 | 4388 | 4388 |
| hr | intagr | 450 | 44372755 | 11581 | 3831 |
| hr | proposal | 552 | 60718165 | 29251 | 2075 |
| hr | recommendation | 40 | 6313739 | 3321 | 1901 |
| hr | regulation | 2183 | 179430971 | 141641 | 1266 |
| hu | all | 6653 | 744676866 | 349605 | 2130 |
| hu | caselaw | 1278 | 206954585 | 105144 | 1968 |
| hu | decision | 1147 | 112655714 | 54156 | 2080 |
| hu | directive | 200 | 27421410 | 4389 | 6247 |
| hu | intagr | 470 | 54481362 | 11586 | 4702 |
| hu | proposal | 912 | 120493483 | 29291 | 4113 |
| hu | recommendation | 70 | 10294766 | 3357 | 3066 |
| hu | regulation | 2576 | 212375546 | 141682 | 1498 |
| it | all | 8222 | 963423333 | 303187 | 3177 |
| it | caselaw | 526 | 46071081 | 59116 | 779 |
| it | decision | 1445 | 166154664 | 53983 | 3077 |
| it | directive | 217 | 31786252 | 4385 | 7248 |
| it | intagr | 528 | 66036352 | 11580 | 5702 |
| it | proposal | 1533 | 224727845 | 29218 | 7691 |
| it | recommendation | 109 | 16724986 | 3318 | 5040 |
| it | regulation | 3865 | 411922153 | 141587 | 2909 |
| lt | all | 4909 | 590271724 | 220817 | 2673 |
| lt | caselaw | 1137 | 202588185 | 105477 | 1920 |
| lt | decision | 551 | 53711077 | 21841 | 2459 |
| lt | directive | 88 | 13428712 | 2072 | 6481 |
| lt | intagr | 294 | 33148829 | 4051 | 8182 |
| lt | proposal | 850 | 121316064 | 29272 | 4144 |
| lt | recommendation | 64 | 10187341 | 3363 | 3029 |
| lt | regulation | 1926 | 155891516 | 54741 | 2847 |
| lv | all | 6349 | 752446195 | 349919 | 2150 |
| lv | caselaw | 1153 | 205473532 | 105242 | 1952 |
| lv | decision | 1103 | 112930883 | 54224 | 2082 |
| lv | directive | 186 | 27612314 | 4392 | 6286 |
| lv | intagr | 452 | 54724543 | 11630 | 4705 |
| lv | proposal | 846 | 120571107 | 29298 | 4115 |
| lv | recommendation | 64 | 10221637 | 3361 | 3041 |
| lv | regulation | 2545 | 220912179 | 141772 | 1558 |
| mt | all | 6540 | 1141585121 | 350292 | 3258 |
| mt | caselaw | 1164 | 320156230 | 105479 | 3035 |
| mt | decision | 1109 | 161249825 | 54280 | 2970 |
| mt | directive | 203 | 45493266 | 4392 | 10358 |
| mt | intagr | 470 | 79787487 | 11675 | 6834 |
| mt | proposal | 878 | 192699148 | 29274 | 6582 |
| mt | recommendation | 65 | 16698859 | 3363 | 4965 |
| mt | regulation | 2650 | 325500306 | 141829 | 2295 |
| nl | all | 9586 | 1317883702 | 349407 | 3771 |
| nl | caselaw | 1847 | 338694761 | 105005 | 3225 |
| nl | decision | 1456 | 178362332 | 54152 | 3293 |
| nl | directive | 217 | 33850801 | 4388 | 7714 |
| nl | intagr | 529 | 70124352 | 11584 | 6053 |
| nl | proposal | 1540 | 239464702 | 29279 | 8178 |
| nl | recommendation | 111 | 18213240 | 3355 | 5428 |
| nl | regulation | 3886 | 439173514 | 141644 | 3100 |
| pl | all | 6677 | 780658463 | 350349 | 2228 |
| pl | caselaw | 1231 | 212977774 | 105479 | 2019 |
| pl | decision | 1125 | 115926181 | 54287 | 2135 |
| pl | directive | 197 | 29102885 | 4392 | 6626 |
| pl | intagr | 466 | 55384447 | 11680 | 4741 |
| pl | proposal | 886 | 125097572 | 29317 | 4267 |
| pl | recommendation | 68 | 10633172 | 3363 | 3161 |
| pl | regulation | 2703 | 231536432 | 141831 | 1632 |
| pt | all | 8450 | 1075496120 | 348449 | 3086 |
| pt | caselaw | 1763 | 303574704 | 104312 | 2910 |
| pt | decision | 1327 | 148950694 | 54007 | 2757 |
| pt | directive | 217 | 31807446 | 4385 | 7253 |
| pt | intagr | 504 | 61127624 | 11581 | 5278 |
| pt | proposal | 1361 | 200827190 | 29224 | 6871 |
| pt | recommendation | 81 | 12520469 | 3319 | 3772 |
| pt | regulation | 3197 | 316687993 | 141621 | 2236 |
| ro | all | 6315 | 713047860 | 350300 | 2035 |
| ro | caselaw | 1110 | 187613531 | 105516 | 1778 |
| ro | decision | 1047 | 103349951 | 54281 | 1903 |
| ro | directive | 206 | 27651600 | 4392 | 6295 |
| ro | intagr | 481 | 54663108 | 11675 | 4682 |
| ro | proposal | 805 | 106000393 | 29274 | 3620 |
| ro | recommendation | 63 | 9634151 | 3363 | 2864 |
| ro | regulation | 2603 | 224135126 | 141799 | 1580 |
| sk | all | 6484 | 763317735 | 350570 | 2177 |
| sk | caselaw | 1160 | 205490717 | 105608 | 1945 |
| sk | decision | 1111 | 114735132 | 54349 | 2111 |
| sk | directive | 188 | 27728158 | 4393 | 6311 |
| sk | intagr | 458 | 54700961 | 11676 | 4684 |
| sk | proposal | 859 | 123177145 | 29290 | 4205 |
| sk | recommendation | 66 | 10522604 | 3364 | 3128 |
| sk | regulation | 2642 | 226963018 | 141890 | 1599 |
| sl | all | 6222 | 719535411 | 350574 | 2052 |
| sl | caselaw | 1071 | 192339474 | 105608 | 1821 |
| sl | decision | 1075 | 108465814 | 54349 | 1995 |
| sl | directive | 176 | 25833250 | 4393 | 5880 |
| sl | intagr | 441 | 51487014 | 11676 | 4409 |
| sl | proposal | 812 | 114959046 | 29290 | 3924 |
| sl | recommendation | 62 | 9802044 | 3364 | 2913 |
| sl | regulation | 2585 | 216648769 | 141894 | 1526 |
| sv | all | 7419 | 910071575 | 351051 | 2592 |
| sv | caselaw | 1585 | 276785972 | 105980 | 2611 |
| sv | decision | 1213 | 129521101 | 54357 | 2382 |
| sv | directive | 195 | 28234600 | 4393 | 6427 |
| sv | intagr | 463 | 54192873 | 11676 | 4641 |
| sv | proposal | 1059 | 155339680 | 29292 | 5303 |
| sv | recommendation | 79 | 12681607 | 3366 | 3767 |
| sv | regulation | 2825 | 253315742 | 141987 | 1784 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data has been downloaded using the R package [eurlex](https://cran.r-project.org/web/packages/eurlex/vignettes/eurlexpkg.html) between June and August 2022.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| ||
pablohorch | null | null | null | false | 2 | false | pablohorch/miFaceHorch | 2022-09-29T07:42:48.000Z | null | false | a9873510cff4ae717264cf96e403b4ac71548080 | [] | [] | https://huggingface.co/datasets/pablohorch/miFaceHorch/resolve/main/README.md | |
Algp123 | null | null | null | false | 1 | false | Algp123/seansimon | 2022-09-29T08:06:44.000Z | null | false | b5776b60b9d42f79b41260579d0e7d3420b045ee | [] | [
"license:cc"
] | https://huggingface.co/datasets/Algp123/seansimon/resolve/main/README.md | ---
license: cc
---
|
bergr7 | null | null | null | false | 4 | false | bergr7/weakly_supervised_ag_news | 2022-10-06T12:51:52.000Z | null | false | e5f041fc5d507821b395ff746d57f97818bd8db1 | [] | [
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ag_news",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/bergr7/weakly_supervised_ag_news/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators:
- other
license: []
multilinguality:
- monolingual
pretty_name: Weakly supervised AG News Dataset
size_categories:
- 1K<n<10K
source_datasets:
- extended|ag_news
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Weakly supervised AG News Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The Weakly supervised AG News Dataset was created by Team 44 of the FSDL 2022 course for the sole purpose of experimenting with weak supervision techniques. It was assumed that only the labels of the original test set and 20% of the training set were available. The labels in the training set were obtained by creating weak labels with labeling functions (LFs) and denoising them with Snorkel's label model.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text`: a string feature.
- `label`: a classification label, with possible values including World (0), Sports (1), Business (2), Sci/Tech (3).
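The integer ids above can be mapped back to class names with a small lookup; the sketch below is illustrative only and not part of the dataset's own tooling:

```python
# Label schema taken from the field description above.
LABEL_NAMES = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}

def label_name(label_id: int) -> str:
    """Map an integer class label to its human-readable name."""
    return LABEL_NAMES[label_id]

print(label_name(2))  # Business
```

Since the training labels are probabilistic, a hard label would typically be taken as the argmax over the class probabilities before this lookup.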
### Data Splits
- Training set with probabilistic labels from weak supervision: 37,340
- Unlabeled data: 58,660
- Validation set: 24,000
- Test set: 7,600
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to Xiang Zhang (xiang.zhang@nyu.edu) for adding this dataset to the HF Dataset Hub. |
Doudou69 | null | null | null | false | 1 | false | Doudou69/Cloud_Recognition | 2022-09-29T10:19:04.000Z | null | false | f6323032886e971c842c7b0b5b9f3592e6e2bd0a | [] | [] | https://huggingface.co/datasets/Doudou69/Cloud_Recognition/resolve/main/README.md | Ces images de nuages sont divisées en 2 classes, les cirrus et les cumulus.
These cloud images are divided into 2 classes, cirrus and cumulus. |
Fhantomchaos | null | null | null | false | 1 | false | Fhantomchaos/testing | 2022-09-29T09:53:27.000Z | null | false | e6fb52c53dc1e653addb69adfa0113d171f221ab | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Fhantomchaos/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
liuweihug | null | null | null | false | 1 | false | liuweihug/da | 2022-09-29T09:56:08.000Z | null | false | aa6c355c4ac69c8e28fe1db0a5b5c194839328aa | [] | [
"license:openrail"
] | https://huggingface.co/datasets/liuweihug/da/resolve/main/README.md | ---
license: openrail
---
|
merkalo-ziri | null | null | null | false | 1 | false | merkalo-ziri/vsosh2022 | 2022-09-29T11:02:34.000Z | null | false | 81b731b90a2a11229c78e6791d0d8c1ccf6833d4 | [] | [
"annotations_creators:found",
"language:ru",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/merkalo-ziri/vsosh2022/resolve/main/README.md | ---
annotations_creators:
- found
language:
- ru
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: vsosh_dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
joelito | null | false | 53 | false | joelito/mc4_legal | 2022-11-12T06:51:49.000Z | null | false | fd5d204aba18bcc343a1b39271bfe606be220b1a | [] | [
"annotations_creators:other",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hu",
"language:it",
"language:lt",
"language:lv",
... | https://huggingface.co/datasets/joelito/mc4_legal/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/mc4_legal)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~133GB in total) from mc4 filtered for legal data that can be used for pretraining language models.
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
| Source | Size (MB) | Tokens | Documents | Words/Document |
|-------:|----------:|-------:|----------:|---------------:|
| bg | 13.9 | xxx | xxx | xxx |
| cs | 7252.4 | xxx | xxx | xxx |
| da | 37.7 | xxx | xxx | xxx |
| de | 27651.3 | xxx | xxx | xxx |
| el | 69.2 | xxx | xxx | xxx |
| en | 22043.9 | xxx | xxx | xxx |
| es | 21410.2 | xxx | xxx | xxx |
| et | 552.7 | xxx | xxx | xxx |
| fi | 8074.0 | xxx | xxx | xxx |
| fr | 10974.1 | xxx | xxx | xxx |
| ga | 3.5 | xxx | xxx | xxx |
| hu | 1450.0 | xxx | xxx | xxx |
| it | 12368.6 | xxx | xxx | xxx |
| lt | 175.9 | xxx | xxx | xxx |
| lv | 0.3 | xxx | xxx | xxx |
| mt | 227.7 | xxx | xxx | xxx |
| nl | 82.9 | xxx | xxx | xxx |
| pl | 10008.0 | xxx | xxx | xxx |
| pt | 4799.3 | xxx | xxx | xxx |
| ro | 2124.3 | xxx | xxx | xxx |
| sk | 1388.5 | xxx | xxx | xxx |
| sl | 404.6 | xxx | xxx | xxx |
| sv | 1888.1 | xxx | xxx | xxx |
| total | 133001.1 | xxx | xxx | xxx |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
The dataset was created by filtering mc4 for legal data.
We identified legal texts using terms indicating legal citations.
Note that this dataset can be quite noisy and its quality has not been assessed.
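As a toy illustration of this kind of citation-term filtering — the actual per-language term lists are not published in this card, and the patterns below are invented for the example:

```python
import re

# Hypothetical citation indicators; the real per-language term lists are not given here.
LEGAL_CITATION_PATTERNS = [
    r"\bArt\.\s*\d+",  # e.g. "Art. 5"
    r"§\s*\d+",        # e.g. "§ 12"
]

def looks_legal(text: str) -> bool:
    """Return True if the text contains a citation-like pattern (toy filter)."""
    return any(re.search(pattern, text) for pattern in LEGAL_CITATION_PATTERNS)

print(looks_legal("Under Art. 5 of the regulation, the committee shall ..."))  # True
```

A filter this crude also explains why the resulting corpus "can be quite noisy": citation-like strings occur in plenty of non-legal web text.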
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| ||
coredeveloper | null | null | null | false | 1 | false | coredeveloper/test | 2022-09-29T11:20:58.000Z | null | false | 383519c3d7d40b9b9374f12a075b1c594d206888 | [] | [
"license:other"
] | https://huggingface.co/datasets/coredeveloper/test/resolve/main/README.md | ---
license: other
---
|
INAI | null | null | null | false | 2 | false | INAI/svet | 2022-09-29T12:36:43.000Z | null | false | d5ed1a1b69fc5d8f027273a4686fc3bff6c6c05f | [] | [] | https://huggingface.co/datasets/INAI/svet/resolve/main/README.md | |
DannyHane | null | null | null | false | 1 | false | DannyHane/test | 2022-09-29T13:43:52.000Z | null | false | d8c978c8b79d61393b9036a9bf09e76a83b39345 | [] | [] | https://huggingface.co/datasets/DannyHane/test/resolve/main/README.md | |
fredguth | null | null | null | false | 1 | false | fredguth/aisegmentcn-matting-human | 2022-09-29T15:18:42.000Z | null | false | f400ef054edf219b2529b673de34ff6c49f9ac9c | [] | [
"annotations_creators:Beijing Wanxing Convergence Technology Co",
"license:mit",
"size_categories:10K<n<100K",
"tags:binary",
"tags:aisegment.cn",
"task_categories:image-segmentation",
"task_ids:semantic-segmentation"
] | https://huggingface.co/datasets/fredguth/aisegmentcn-matting-human/resolve/main/README.md | ---
annotations_creators:
- Beijing Wanxing Convergence Technology Co
license:
- mit
pretty_name: aisegmentcn-matting-human
size_categories:
- 10K<n<100K
tags:
- binary
- aisegment.cn
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# Dataset Card for AISegment.cn - Matting Human datasets
## Table of Contents
- [Dataset Card for AISegment.cn - Matting Human datasets](#dataset-card-for-aisegmentcn---matting-human-datasets)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Licensing Information](#licensing-information)
## Dataset Description
Quoting the [dataset's github](https://github.com/aisegmentcn/matting_human_datasets) (translated by Apple Translator):
> This dataset is currently the largest portrait matting dataset, containing 34,427 images and corresponding matting results.
> The dataset was annotated with high quality by Beijing Play Star Convergence Technology Co., Ltd., and the portrait soft-segmentation model trained on this dataset has been commercialized.
> The original images in the dataset are from `Flickr`, `Baidu`, and `Taobao`. After face detection and area cropping, half-length portraits of 600\*800 pixels were generated.
> The clip_img directory contains the half-length portrait images in jpg format; the matting directory contains the corresponding matting files (convenient for checking matting quality) in png format. You should first extract the alpha map from the png image before training.
- **Repository:** [aisegmentcn/matting_human_datasets](https://github.com/aisegmentcn/matting_human_datasets)
## Dataset Structure
```text
└── data/
├── clip_img/
│ └── {group-id}/
│ └── clip_{subgroup-id}/
│ └── {group-id}-{img-id}.jpg
└── matting/
└── {group-id}/
└── matting_{subgroup-id}/
└── {group-id}-{img-id}.png
```
The input `data/clip_img/1803151818/clip_00000000/1803151818-00000003.jpg` matches the label `data/matting/1803151818/matting_00000000/1803151818-00000003.png`
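Given this naming scheme, an input path can be mapped to its label path mechanically. The sketch below assumes the directory layout shown above:

```python
from pathlib import Path

def matting_path(clip_path: str) -> str:
    """Map a clip_img jpg path to the corresponding matting png path."""
    p = Path(clip_path)
    # .../clip_img/{group}/clip_{sub}/{group}-{img}.jpg
    # -> .../matting/{group}/matting_{sub}/{group}-{img}.png
    group = p.parts[-3]
    subgroup = p.parts[-2].replace("clip_", "matting_", 1)
    root = Path(*p.parts[:-4])  # everything before clip_img/
    return str(root / "matting" / group / subgroup / (p.stem + ".png"))

print(matting_path("data/clip_img/1803151818/clip_00000000/1803151818-00000003.jpg"))
# data/matting/1803151818/matting_00000000/1803151818-00000003.png
```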
### Licensing Information
See authors [Github](https://github.com/aisegmentcn/matting_human_datasets)
|
airnicco8 | null | null | null | false | 1 | false | airnicco8/umls_sent_trans | 2022-09-29T14:04:52.000Z | null | false | 3293876da7c613c9e5c603411139d2c8933319e5 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/airnicco8/umls_sent_trans/resolve/main/README.md | ---
license: gpl-3.0
---
|
Gossher | null | null | null | false | 1 | false | Gossher/GossherImages | 2022-09-29T14:51:25.000Z | null | false | a80bf0644d4149cbe69d2e57b0517c86975dd1fa | [] | [
"license:other"
] | https://huggingface.co/datasets/Gossher/GossherImages/resolve/main/README.md | ---
license: other
---
|
miracl | null | null | null | false | 534 | false | miracl/miracl-corpus | 2022-11-01T20:46:42.000Z | null | false | 36d9415332e89ae44a65e411c3a2bfa512d741e4 | [] | [
"arxiv:2210.09984",
"annotations_creators:expert-generated",
"language:ar",
"language:bn",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:hi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th"... | https://huggingface.co/datasets/miracl/miracl-corpus/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- ar
- bn
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- zh
multilinguality:
- multilingual
pretty_name: MIRACL-corpus
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Dataset Card for MIRACL Corpus
## Dataset Description
* **Homepage:** http://miracl.ai
* **Repository:** https://github.com/project-miracl/miracl
* **Paper:** https://arxiv.org/abs/2210.09984
MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
This dataset contains the collection data of the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.
The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
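The discourse-unit segmentation described above can be sketched as follows; this is a toy stand-in for WikiExtractor, shown only to make the passage and docid construction concrete:

```python
def segment_article(article_text: str) -> list[str]:
    """Split plain article text into passages on blank lines (natural discourse units)."""
    return [p.strip() for p in article_text.split("\n\n") if p.strip()]

def make_docids(article_id: str, passages: list[str]) -> list[str]:
    """Number passages sequentially within an article, giving docids of the form X#Y."""
    return [f"{article_id}#{i}" for i in range(len(passages))]

passages = segment_article("Albedo is a measure ...\n\nIt is dimensionless ...")
print(make_docids("39", passages))  # ['39#0', '39#1']
```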
## Dataset Structure
Each retrieval unit contains three fields: `docid`, `title`, and `text`. Consider an example from the English corpus:
```
{
"docid": "39#0",
"title": "Albedo",
"text": "Albedo (meaning 'whiteness') is the measure of the diffuse reflection of solar radiation out of the total solar radiation received by an astronomical body (e.g. a planet like Earth). It is dimensionless and measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation)."
}
```
The `docid` has the schema `X#Y`, where all passages with the same `X` come from the same Wikipedia article, whereas `Y` denotes the passage within that article, numbered sequentially. The `text` field contains the text of the passage. The `title` field contains the name of the article the passage comes from.
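Splitting on `#` recovers the article id and passage index from a docid; a small sketch, not part of the official tooling:

```python
def parse_docid(docid: str) -> tuple[str, int]:
    """Split a MIRACL docid 'X#Y' into (article id, passage index)."""
    article_id, passage_index = docid.split("#", 1)
    return article_id, int(passage_index)

print(parse_docid("39#0"))  # ('39', 0)
```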
The collection can be loaded using:
```
import datasets

lang = 'ar'  # or any of the 16 languages
miracl_corpus = datasets.load_dataset('miracl/miracl-corpus', lang)['train']
for doc in miracl_corpus:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
```
## Dataset Statistics and Links
The following table contains the number of passage and Wikipedia articles in the collection of each language, along with the links to the datasets and raw Wikipedia dumps.
| Language | # of Passages | # of Articles | Links | Raw Wiki Dump |
|:----------------|--------------:|--------------:|:------|:------|
| Arabic (ar) | 2,061,414 | 656,982 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ar) | [🌏](https://archive.org/download/arwiki-20190201/arwiki-20190201-pages-articles-multistream.xml.bz2)
| Bengali (bn) | 297,265 | 63,762 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-bn) | [🌏](https://archive.org/download/bnwiki-20190201/bnwiki-20190201-pages-articles-multistream.xml.bz2)
| English (en) | 32,893,221 | 5,758,285 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-en) | [🌏](https://archive.org/download/enwiki-20190201/enwiki-20190201-pages-articles-multistream.xml.bz2)
| Spanish (es) | 10,373,953 | 1,669,181 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-es) | [🌏](https://archive.org/download/eswiki-20220301/eswiki-20220301-pages-articles-multistream.xml.bz2)
| Persian (fa) | 2,207,172 | 857,827 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fa) | [🌏](https://archive.org/download/fawiki-20220301/fawiki-20220301-pages-articles-multistream.xml.bz2)
| Finnish (fi) | 1,883,509 | 447,815 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fi) | [🌏](https://archive.org/download/fiwiki-20190201/fiwiki-20190201-pages-articles-multistream.xml.bz2)
| French (fr) | 14,636,953 | 2,325,608 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-fr) | [🌏](https://archive.org/download/frwiki-20220301/frwiki-20220301-pages-articles-multistream.xml.bz2)
| Hindi (hi) | 506,264 | 148,107 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-hi) | [🌏](https://archive.org/download/hiwiki-20220301/hiwiki-20220301-pages-articles-multistream.xml.bz2)
| Indonesian (id) | 1,446,315 | 446,330 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-id) | [🌏](https://archive.org/download/idwiki-20190201/idwiki-20190201-pages-articles-multistream.xml.bz2)
| Japanese (ja) | 6,953,614 | 1,133,444 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ja) | [🌏](https://archive.org/download/jawiki-20190201/jawiki-20190201-pages-articles-multistream.xml.bz2)
| Korean (ko) | 1,486,752 | 437,373 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ko) | [🌏](https://archive.org/download/kowiki-20190201/kowiki-20190201-pages-articles-multistream.xml.bz2)
| Russian (ru) | 9,543,918 | 1,476,045 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-ru) | [🌏](https://archive.org/download/ruwiki-20190201/ruwiki-20190201-pages-articles-multistream.xml.bz2)
| Swahili (sw) | 131,924 | 47,793 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-sw) | [🌏](https://archive.org/download/swwiki-20190201/swwiki-20190201-pages-articles-multistream.xml.bz2)
| Telugu (te) | 518,079 | 66,353 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-te) | [🌏](https://archive.org/download/tewiki-20190201/tewiki-20190201-pages-articles-multistream.xml.bz2)
| Thai (th) | 542,166 | 128,179 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-th) | [🌏](https://archive.org/download/thwiki-20190101/thwiki-20190101-pages-articles-multistream.xml.bz2)
| Chinese (zh) | 4,934,368 | 1,246,389 | [🤗](https://huggingface.co/datasets/miracl/miracl-corpus/tree/main/miracl-corpus-v1.0-zh) | [🌏](https://archive.org/download/zhwiki-20220301/zhwiki-20220301-pages-articles-multistream.xml.bz2)
|
riogerz | null | null | null | false | 1 | false | riogerz/florz | 2022-09-29T14:54:13.000Z | null | false | 59ced5f474e574d107b1b669e745b047f33d2947 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/riogerz/florz/resolve/main/README.md | ---
license: openrail
---
|
Shinadayu | null | null | null | false | 1 | false | Shinadayu/test | 2022-09-29T15:21:16.000Z | null | false | aa4f6645451098df234769f89af1fcccd16d567f | [] | [] | https://huggingface.co/datasets/Shinadayu/test/resolve/main/README.md | ---
license: other
---
|
KamiNoGi | null | null | null | false | 1 | false | KamiNoGi/pochi | 2022-09-29T15:39:50.000Z | null | false | 6eb9f5c5ce5375d1620a1809cd1d0490d5318342 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/KamiNoGi/pochi/resolve/main/README.md | ---
license: openrail
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456336 | 2022-09-29T18:00:45.000Z | null | false | f0f37162e31f17be4a703fc555be1a965b77adf5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456336/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456333 | 2022-09-29T15:47:19.000Z | null | false | 4ab783c3e7e2cc5ca9ea75ab922b856f096e6b9e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456333/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456332 | 2022-09-29T15:36:34.000Z | null | false | 8881f6b4ef7d33351a0e5b73d482b280bf35992e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456332/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456329 | 2022-09-29T15:32:08.000Z | null | false | 4f0ea713c9fbb0e90fb46605a9d6fa40045c0cb7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456329/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456331 | 2022-09-29T15:34:41.000Z | null | false | 26eec3ffb27c97bfd5b123dae4f046a6c6cb2676 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456331/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456330 | 2022-09-29T15:32:36.000Z | null | false | 4cf687b19fb10893ab4f13a9e2bec3323150897b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456330/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456335 | 2022-09-29T16:38:50.000Z | null | false | 40811b7d45e7be647accbaad064231273e3d5ff0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456335/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456334 | 2022-09-29T15:59:04.000Z | null | false | 1201e7301176c674f1f05bd1d01787c919b1ea76 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456334/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
Gr3en | null | null | null | false | 1 | false | Gr3en/m3 | 2022-09-29T17:13:42.000Z | null | false | 9704434c1038783fb4eb69ffc76b029e2ea43643 | [] | [] | https://huggingface.co/datasets/Gr3en/m3/resolve/main/README.md | annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: m3 dataset (a dataset with my face in it)
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: [] |
Dopamina | null | null | null | false | 2 | false | Dopamina/dopamina | 2022-09-29T17:03:03.000Z | null | false | 35b7e3c042a42e312b44b2f327a889939436ed62 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Dopamina/dopamina/resolve/main/README.md | ---
license: artistic-2.0
---
|
Ivanrex | null | null | null | false | 1 | false | Ivanrex/images | 2022-09-29T17:12:35.000Z | null | false | 6e20e114326dd6e209339bc47f392d5906aeb931 | [] | [] | https://huggingface.co/datasets/Ivanrex/images/resolve/main/README.md | yes |
Ivanrex | null | null | null | false | 1 | false | Ivanrex/fotos | 2022-09-29T17:16:51.000Z | null | false | 60fb7ce6cb24b741122dd9e40a5e59a0659181ab | [] | [] | https://huggingface.co/datasets/Ivanrex/fotos/resolve/main/README.md | |
linarez83 | null | null | null | false | 1 | false | linarez83/fotos_mias | 2022-09-29T17:20:12.000Z | null | false | e83d8655bfe879dc84a5cc298550f0d4dfdf4d40 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/linarez83/fotos_mias/resolve/main/README.md | ---
license: afl-3.0
---
|
jmhessel | null | @article{hessel2022androids,
title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
journal={arXiv preprint arXiv:2209.06293},
year={2022}
}
www.capcon.dev
Our data contributions are:
- The cartoon-level annotations;
- The joke explanations;
- and the framing of the tasks
We release these data we contribute under CC-BY (see DATASET_LICENSE).
If you find this data useful in your work, in addition to citing our contributions, please also cite the following, from which the cartoons/captions in our corpus are derived:
@misc{newyorkernextmldataset,
author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott},
title={The {N}ew {Y}orker Cartoon Caption Contest Dataset},
year={2020},
url={https://nextml.github.io/caption-contest-data/}
}
@inproceedings{radev-etal-2016-humor,
title = "Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest",
author = "Radev, Dragomir and
Stent, Amanda and
Tetreault, Joel and
Pappu, Aasish and
Iliakopoulou, Aikaterini and
Chanfreau, Agustin and
de Juan, Paloma and
Vallmitjana, Jordi and
Jaimes, Alejandro and
Jha, Rahul and
Mankoff, Robert",
booktitle = "LREC",
year = "2016",
}
@inproceedings{shahaf2015inside,
title={Inside jokes: Identifying humorous cartoon captions},
author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert},
booktitle={KDD},
year={2015},
} | There are 3 caption contest tasks, described in the paper. In the Matching multiple choice task, models must recognize a caption written about a cartoon (vs. options that were not). In the Quality Ranking task, models must evaluate the quality
of that caption by scoring it more highly than a lower quality option from the same contest. In the Explanation Generation task, models must explain why the joke is funny. | false | 135 | false | jmhessel/newyorker_caption_contest | 2022-11-08T21:50:46.000Z | null | false | 6f77af2d189117301c1324ff481dc2e752110963 | [] | [
"arxiv:2209.06293",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language:en",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"... | https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
- found
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: newyorker_caption_contest
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- humor
- caption contest
- new yorker
task_categories:
- image-to-text
- multiple-choice
- text-classification
- text-generation
- visual-question-answering
- other
- text2text-generation
task_ids:
- multi-class-classification
- language-modeling
- visual-question-answering
- explanation-generation
---
# Dataset Card for New Yorker Caption Contest Benchmarks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [capcon.dev](https://www.capcon.dev)
- **Repository:** [https://github.com/jmhessel/caption_contest_corpus](https://github.com/jmhessel/caption_contest_corpus)
- **Paper:** [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
- **Leaderboard:** No official leaderboard (yet).
- **Point of Contact:** jackh@allenai.org
### Dataset Summary
Data from:
[Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
```
@article{hessel2022androids,
title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
journal={arXiv preprint arXiv:2209.06293},
year={2022}
}
```
If you use this dataset, we would appreciate you citing our work, but also -- several other papers that we build this corpus upon. See [Citation Information](#citation-information).
We challenge AI models to "demonstrate understanding" of the
sophisticated multimodal humor of The New Yorker Caption Contest.
Concretely, we develop three carefully circumscribed tasks for which
it suffices (but is not necessary) to grasp potentially complex and
unexpected relationships between image and caption, and similarly
complex and unexpected allusions to the wide varieties of human
experience.
### Supported Tasks and Leaderboards
Three tasks are supported:
- "Matching": a model must recognize a caption written about a cartoon (vs. options that were not);
- "Quality ranking": a model must evaluate the quality of a caption by scoring it more highly than a lower-quality option from the same contest;
- "Explanation": a model must explain why a given joke is funny.
There are no official leaderboards (yet).
### Languages
English
## Dataset Structure
Here's an example instance from Matching:
```
{'caption_choices': ['Tell me about your childhood very quickly.',
"Believe me . . . it's what's UNDER the ground that's "
'most interesting.',
"Stop me if you've heard this one.",
'I have trouble saying no.',
'Yes, I see the train but I think we can beat it.'],
'contest_number': 49,
'entities': ['https://en.wikipedia.org/wiki/Rule_of_three_(writing)',
'https://en.wikipedia.org/wiki/Bar_joke',
'https://en.wikipedia.org/wiki/Religious_institute'],
'from_description': 'scene: a bar description: Two priests and a rabbi are '
'walking into a bar, as the bartender and another patron '
'look on. The bartender talks on the phone while looking '
'skeptically at the incoming crew. uncanny: The scene '
'depicts a very stereotypical "bar joke" that would be '
'unlikely to be encountered in real life; the skepticism '
'of the bartender suggests that he is aware he is seeing '
'this trope, and is explaining it to someone on the '
'phone. entities: Rule_of_three_(writing), Bar_joke, '
'Religious_institute. choices A: Tell me about your '
"childhood very quickly. B: Believe me . . . it's what's "
"UNDER the ground that's most interesting. C: Stop me if "
"you've heard this one. D: I have trouble saying no. E: "
'Yes, I see the train but I think we can beat it.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=323x231 at 0x7F34F283E9D0>,
'image_description': 'Two priests and a rabbi are walking into a bar, as the '
'bartender and another patron look on. The bartender '
'talks on the phone while looking skeptically at the '
'incoming crew.',
'image_location': 'a bar',
'image_uncanny_description': 'The scene depicts a very stereotypical "bar '
'joke" that would be unlikely to be encountered '
'in real life; the skepticism of the bartender '
'suggests that he is aware he is seeing this '
'trope, and is explaining it to someone on the '
'phone.',
'instance_id': '21125bb8787b4e7e82aa3b0a1cba1571',
'label': 'C',
'n_tokens_label': 1,
'questions': ['What is the bartender saying on the phone in response to the '
'living, breathing, stereotypical bar joke that is unfolding?']}
```
The label "C" indicates that the 3rd choice in the `caption_choices` is correct.
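To make the label encoding concrete, here is a minimal sketch (not from the paper's released code) of mapping the letter label to its entry in `caption_choices`:

```python
def label_to_caption(label, caption_choices):
    """Map a letter label ('A' = first choice, 'B' = second, ...)
    to the corresponding entry of caption_choices."""
    return caption_choices[ord(label) - ord("A")]

choices = ["Tell me about your childhood very quickly.",
           "Stop me if you've heard this one.",
           "I have trouble saying no."]
print(label_to_caption("B", choices))  # Stop me if you've heard this one.
```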
Here's an example instance from Ranking (shown in the from-pixels setting; this task is also available in the from-description setting):
```
{'caption_choices': ['I guess I misunderstood when you said long bike ride.',
'Does your divorce lawyer have any other cool ideas?'],
'contest_number': 582,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=600x414 at 0x7F8FF9F96610>,
'instance_id': 'dd1c214a1ca3404aa4e582c9ce50795a',
'label': 'A',
'n_tokens_label': 1,
'winner_source': 'official_winner'}
```
the label indicates that the first caption choice ("A", here) in the `caption_choices` list was more highly rated.
Here's an example instance from Explanation:
```
{'caption_choices': 'The classics can be so intimidating.',
'contest_number': 752,
'entities': ['https://en.wikipedia.org/wiki/Literature',
'https://en.wikipedia.org/wiki/Solicitor'],
'from_description': 'scene: a road description: Two people are walking down a '
'path. A number of giant books have surrounded them. '
'uncanny: There are book people in this world. entities: '
'Literature, Solicitor. caption: The classics can be so '
'intimidating.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=800x706 at 0x7F90003D0BB0>,
'image_description': 'Two people are walking down a path. A number of giant '
'books have surrounded them.',
'image_location': 'a road',
'image_uncanny_description': 'There are book people in this world.',
'instance_id': 'eef9baf450e2fab19b96facc128adf80',
'label': 'A play on the word intimidating --- usually if the classics (i.e., '
'classic novels) were to be intimidating, this would mean that they '
'are intimidating to read due to their length, complexity, etc. But '
'here, they are surrounded by anthropomorphic books which look '
'physically intimidating, i.e., they are intimidating because they '
'may try to beat up these people.',
'n_tokens_label': 59,
'questions': ['What do the books want?']}
```
The label is an explanation of the joke, which serves as the autoregressive target.
### Data Instances
See above
### Data Fields
See above
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
dset = load_dataset("newyorker_caption_contest", "matching")
dset = load_dataset("newyorker_caption_contest", "ranking")
dset = load_dataset("newyorker_caption_contest", "explanation")
```
Or, in the from pixels setting, e.g.,
```
from datasets import load_dataset
dset = load_dataset("newyorker_caption_contest", "ranking_from_pixels")
```
Because the dataset is small, we initially reported results in a 5-fold cross-validation setting. The default splits are split 0. You can access the other splits, e.g.:
```
from datasets import load_dataset
# the 4th data split
dset = load_dataset("newyorker_caption_contest", "explanation_4")
```
## Dataset Creation
Full details are in the paper.
### Curation Rationale
See the paper for rationale/motivation.
### Source Data
See citation below. We combined 3 sources of data, and added significant annotations of our own.
#### Initial Data Collection and Normalization
Full details are in the paper.
#### Who are the source language producers?
We paid crowdworkers $15/hr to annotate the corpus.
In addition, significant annotation efforts were conducted by the authors of this work.
### Annotations
Full details are in the paper.
#### Annotation process
Full details are in the paper.
#### Who are the annotators?
A mix of crowdworkers and the authors of this paper.
### Personal and Sensitive Information
Has been redacted from the dataset. Images are published in the New Yorker already.
## Considerations for Using the Data
### Social Impact of Dataset
It's plausible that humor could perpetuate negative stereotypes. The jokes in this corpus are a mix of highly rated crowdsourced entries and ones published in The New Yorker.
### Discussion of Biases
Humor is subjective, and some of the jokes may be considered offensive. The images may contain adult themes and minor cartoon nudity.
### Other Known Limitations
More details are in the paper.
## Additional Information
### Dataset Curators
The dataset was curated by researchers at AI2
### Licensing Information
The annotations we provide are CC-BY-4.0. See www.capcon.dev for more info.
### Citation Information
```
@article{hessel2022androids,
title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
journal={arXiv preprint arXiv:2209.06293},
year={2022}
}
```
Our data contributions are:
- The cartoon-level annotations;
- The joke explanations;
- and the framing of the tasks
We release the data we contribute under CC-BY (see DATASET_LICENSE). If you find this data useful in your work, in addition to citing our contributions, please also cite the following, from which the cartoons/captions in our corpus are derived:
```
@misc{newyorkernextmldataset,
author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott},
title={The {N}ew {Y}orker Cartoon Caption Contest Dataset},
year={2020},
url={https://nextml.github.io/caption-contest-data/}
}
@inproceedings{radev-etal-2016-humor,
title = "Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest",
author = "Radev, Dragomir and
Stent, Amanda and
Tetreault, Joel and
Pappu, Aasish and
Iliakopoulou, Aikaterini and
Chanfreau, Agustin and
de Juan, Paloma and
Vallmitjana, Jordi and
Jaimes, Alejandro and
Jha, Rahul and
Mankoff, Robert",
booktitle = "LREC",
year = "2016",
}
@inproceedings{shahaf2015inside,
title={Inside jokes: Identifying humorous cartoon captions},
author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert},
booktitle={KDD},
year={2015},
}
``` |
Grizz | null | null | null | false | 1 | false | Grizz/gothic | 2022-09-29T17:57:56.000Z | null | false | b72dd5646b9a7d3b3eb60ab0f73479d1869c67ef | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Grizz/gothic/resolve/main/README.md | ---
license: afl-3.0
---
|
marcosfevre | null | null | null | false | 1 | false | marcosfevre/stromberg | 2022-09-30T19:02:56.000Z | null | false | baf5387cd27f305a07ce560081cac4b525526355 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/marcosfevre/stromberg/resolve/main/README.md | ---
license: cc-by-4.0
---
|
Gianpaolo | null | null | null | false | 1 | false | Gianpaolo/ORGANIC_TYPOGRAPHY | 2022-09-29T20:31:32.000Z | null | false | 427887a50d4bb85b86723440d15fe4889bd5f020 | [] | [] | https://huggingface.co/datasets/Gianpaolo/ORGANIC_TYPOGRAPHY/resolve/main/README.md | |
badmaiky | null | null | null | false | 1 | false | badmaiky/images | 2022-09-29T20:22:21.000Z | null | false | 4b887241ee3f0c2efa31a7f04596bd1042a0ef05 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/badmaiky/images/resolve/main/README.md | ---
license: openrail
---
|
imranraad | null | null | null | false | 1 | false | imranraad/github-emotion-surprise | 2022-10-20T10:18:22.000Z | null | false | d2e593d645e8b7d71ab76738be13269f96b0139b | [] | [
"arxiv:2208.05573",
"doi:10.57967/hf/0050",
"task_categories:text-classification"
] | https://huggingface.co/datasets/imranraad/github-emotion-surprise/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: github-emotion-surprise
## Dataset Description
Dataset used in the paper: Imran et al., ["Data Augmentation for Improving Emotion Recognition in Software Engineering Communication"](https://arxiv.org/abs/2208.05573), ASE-2022.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": 704844644,
"text": "This change doesn't affect anything but makes the code more clear. If you look at the line about, `currentUrlTree` is set to `urlAfterRedirects`.",
"feat_Anger": 0,
"feat_Love": 0,
"feat_Fear": 0,
"feat_Joy": 1,
"feat_Sadness": 0,
"target": 0
},
{
"feat_id": 886568180,
"text": "Thanks very much for your feedback [USER] Your point is totally fair. My intention was to highlight that camelCase or dash-case class names are perfectly fine to use in Angular templates. Most people, especially beginners, do not know that and end up using the `ngClass` directive. Do you think that rewording the alert towards that direction would make sense?",
"feat_Anger": 0,
"feat_Love": 1,
"feat_Fear": 0,
"feat_Joy": 0,
"feat_Sadness": 0,
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_Anger": "Value(dtype='int64', id=None)",
"feat_Love": "Value(dtype='int64', id=None)",
"feat_Fear": "Value(dtype='int64', id=None)",
"feat_Joy": "Value(dtype='int64', id=None)",
"feat_Sadness": "Value(dtype='int64', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1600 |
| valid | 400 |
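As a rough illustration of the field layout above (a sketch, and the mapping of the binary `target` column to the Surprise label is an assumption based on the project name), a record's active emotion columns can be read off like this:

```python
# Indicator columns described in "Dataset Fields"; the mapping of
# `target` to Surprise is an assumption based on the project name.
EMOTION_FIELDS = {
    "feat_Anger": "Anger",
    "feat_Love": "Love",
    "feat_Fear": "Fear",
    "feat_Joy": "Joy",
    "feat_Sadness": "Sadness",
    "target": "Surprise",  # assumed
}

def active_emotions(record):
    """Return the emotion names whose indicator columns equal 1."""
    return [name for field, name in EMOTION_FIELDS.items()
            if record.get(field) == 1]

sample = {
    "feat_id": 704844644,
    "text": "This change doesn't affect anything but makes the code more clear.",
    "feat_Anger": 0, "feat_Love": 0, "feat_Fear": 0,
    "feat_Joy": 1, "feat_Sadness": 0,
    "target": 0,
}
print(active_emotions(sample))  # ['Joy']
```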
|
wallyg | null | null | null | false | 15 | false | wallyg/Pictures | 2022-09-29T21:20:59.000Z | null | false | 8d818753c4d4b3541433a20d2a7008e4e3cfa427 | [] | [] | https://huggingface.co/datasets/wallyg/Pictures/resolve/main/README.md | pictures |
Enoch2090 | null | null | null | false | null | false | Enoch2090/github_semantic_search | 2022-11-14T08:12:54.000Z | null | false | 84668ac11b9bcc8186cf41c1fdaa334bf4a1ed1c | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/Enoch2090/github_semantic_search/resolve/main/README.md | ---
license: gpl-3.0
---
|
Jose888888 | null | null | null | false | 1 | false | Jose888888/helloeee | 2022-11-07T19:15:12.000Z | null | false | cd2e95ae08dc82f53588e91d63a8817dd4f5b553 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Jose888888/helloeee/resolve/main/README.md | ---
license: openrail
---
|
mvb6969 | null | null | null | false | 1 | false | mvb6969/Fotos_mvb6969 | 2022-09-30T00:00:41.000Z | null | false | 4ef74c321f3c474a1549b42545cda4e74b3870ae | [] | [
"license:openrail"
] | https://huggingface.co/datasets/mvb6969/Fotos_mvb6969/resolve/main/README.md | ---
license: openrail
---
|
sati93 | null | null | null | false | 1 | false | sati93/fotos | 2022-10-02T20:19:26.000Z | null | false | 6a151c5d80f3c0d00af267e030daca4f42df9012 | [] | [] | https://huggingface.co/datasets/sati93/fotos/resolve/main/README.md | Imagenes:
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me1.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me2.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me3.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me4.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me5.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me6.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me7.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me8.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me9.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me10.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me11.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me12.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me13.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me14.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me15.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me16.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me17.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me18.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me19.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me20.jpg",
"https://huggingface.co/datasets/sati93/fotos/resolve/main/me21.jpg",
Configuration:
instance_prompt: sati
prior_preservation_class_prompt: person
|
jamesluc007 | null | null | null | false | null | false | jamesluc007/test | 2022-09-30T01:04:38.000Z | null | false | 2b8181cec3b249fea71bc0f09abef2861b020417 | [] | [] | https://huggingface.co/datasets/jamesluc007/test/resolve/main/README.md | |
murphyk | null | null | null | false | 1 | false | murphyk/dogs-cats-small-clip-embedding | 2022-09-30T03:46:33.000Z | null | false | 627e5cc137bcd577a9769bbb108ff97c65cd8aac | [] | [
"license:mit"
] | https://huggingface.co/datasets/murphyk/dogs-cats-small-clip-embedding/resolve/main/README.md | scratch directory for storing image datasets which are processed through a clip embedding model!
---
license: mit
---
|
betoshogun | null | null | null | false | 1 | false | betoshogun/ME | 2022-09-30T02:58:12.000Z | null | false | 71bb2850bd238a3290e8e38d349b2e9373e33620 | [] | [] | https://huggingface.co/datasets/betoshogun/ME/resolve/main/README.md | |
trevfran | null | null | null | false | 1 | false | trevfran/perfil | 2022-09-30T02:07:57.000Z | null | false | 0cdde8c9b4daeb3b70b9269cbe1fbbf613b927a6 | [] | [
"license:other"
] | https://huggingface.co/datasets/trevfran/perfil/resolve/main/README.md | ---
license: other
---
|
Nian | null | null | null | false | null | false | Nian/DreamBooth_Test | 2022-09-30T02:19:29.000Z | null | false | 1fdb3a86e97900ed57af51b62880ba504e6d91a8 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Nian/DreamBooth_Test/resolve/main/README.md | ---
license: mit
---
|
virfuji | null | null | null | false | null | false | virfuji/connor | 2022-09-30T02:25:04.000Z | null | false | 61a5f43d7178da5e7a43a372acbeab6212db5e96 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/virfuji/connor/resolve/main/README.md | ---
license: afl-3.0
---
|
joemmile | null | null | null | false | 1 | false | joemmile/Lia | 2022-09-30T17:01:58.000Z | null | false | 5088e9b8a20afb6797b4ddff4d7b014b573818bb | [] | [
"license:cc"
] | https://huggingface.co/datasets/joemmile/Lia/resolve/main/README.md | ---
license: cc
---
|
emoneil | null | null | null | false | 1 | false | emoneil/reflections-in-peer-counseling | 2022-10-14T03:59:04.000Z | null | false | e99e27c90f20307ebbefd7e79e35255a62de3118 | [] | [
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"tags:gpt3",
"tags:natural language processing",
"tags:natural language generation",
"tags:peer counseling",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:conversational",
"task_ids:dialogue... | https://huggingface.co/datasets/emoneil/reflections-in-peer-counseling/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license: []
pretty_name: Reflections in Peer Counseling
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- gpt3
- natural language processing
- natural language generation
- peer counseling
task_categories:
- summarization
- text-generation
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for Reflections in Peer Counseling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper: Automatic Reflection Generation for Peer-to-Peer Counseling**
- **Point of Contact: emoneil@sas.upenn.edu**
### Dataset Summary
The dataset derives from conversations between clients and counselors on a large peer-to-peer online counseling service. There are a total of 1061 observations across training and testing datasets, with 50 additional randomly sampled examples used in defining the few-shot learning prompt or for validation purposes in tuning hyperparameters, thus totaling 1111 observations across these sets. These observations were sourced from a larger dataset consisting of annotations of several different clinical counseling skills. We thus focus on the annotations of counselor reflections. The counselor reflections were annotated at utterance level with counselor verbal behaviors using the Motivational Interviewing Treatment Integrity 4.2 (MITI) and the Motivational Interviewing Skill Code 2.5 (MISC) manuals. Thus, the entire dataset consists of conversational context-counselor reflection pairs.
### Supported Tasks and Leaderboards
The dataset was used for conditioning and tuning generative models for generating reflection statements in the domain of peer-to-peer counseling.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance consists of the chat room id of the conversation in which the dialogue occurred, the prompt which is the conversational context that immediately precedes the counselor reflection (including previous utterances from either the client or counselor up until and including the most recent prior client message that immediately followed a counselor’s message), and the completion which is the counselor reflection.
```
{
'chat_id': "1234567",
'prompt': "Client: I'm 19, he's 25. He's not very considerate of how I feel but says he cares about me and loves me.\nCounselor:",
'completion': " The words are easy, actions are needed. Guys who are 25 just desire to have different experiences.\n\n",
}
```
### Data Fields
* `chat_id`: an integer defining the chat id of the conversation
* `prompt`: a string corresponding to the conversational context preceding the counselor reflection with the messages separated by new line characters and each utterance prepended by 'Client:' or 'Counselor:'. The string ends with 'Counselor:' to indicate that it is followed by the counselor completion described below.
* `completion`: a string corresponding to the counselor reflection
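As a minimal sketch (the helper name is hypothetical, not part of the dataset tooling), a `prompt` string in the format above can be assembled like this:

```python
# Hypothetical sketch of building a `prompt` in the described format:
# speaker-prefixed turns separated by newlines, ending with "Counselor:"
# so a model can complete the reflection.
def build_prompt(turns):
    """turns: list of (speaker, text) pairs; speaker is 'Client' or 'Counselor'."""
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Counselor:")  # open slot for the reflection completion
    return "\n".join(lines)

conversation = [
    ("Client", "I'm 19, he's 25. He's not very considerate of how I feel "
               "but says he cares about me and loves me."),
]
print(build_prompt(conversation))
```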
### Data Splits
The dataset is split into training, testing, and a small set of 50 examples used either for designing the few-shot learning prompt or tuning hyperparameters. 911 examples were used for training. 350 of these examples also constitute a reduced training set used in comparative experiments. 150 examples were used for testing. 50 of these testing examples (randomly selected) were used in the human evaluation. We ensured that the chat identifiers for messages in the test set did not overlap with those in the training set.
## Dataset Creation
### Curation Rationale
Reflective listening is a critical skill in peer-to-peer counseling that is only effective when tailored to the context. Thus, we wanted to home in on this particular skill and explore the potential of state-of-the-art language models for text generation in this domain.
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by filtering the larger dataset of utterances annotated for many different counseling skills to only those counselor messages annotated as reflections. Then, the prompt instances were created by identifying the preceding messages for each of these counselor reflection instances. After the prompts were initially created, prompts with less than or equal to five words were removed.
The author created reference reflections for each of the 350 training example prompts in the reduced training set and each of the 150 testing example prompts. In creating a reference reflection given each conversational context, the author intended to simulate responding to the client in roughly the same time a counselor would, as if the turn were embedded in a conversation the client was having with the author. This gauging of time is based on the author's experience volunteering as a counselor at crisis hotlines. The reference reflections may have been created in even less time than an average counselor response, given that there were hundreds of conversational contexts for which reflections needed to be created.
#### Who are the source language producers?
The 'client' messages are utterances of those seeking mental health support on a large online counseling service platform. The 'counselor' messages are utterances of minimally-trained peer counselors of this large online counseling service.
For each of the 350 training example prompts in the reduced training set and each of the 150 testing example prompts, a reference reflection was also created by the author.
### Annotations
#### Annotation process
The human evaluation examined text of generative models fine-tuned on the full training set, a reduced training set, and reference reflections; a few-shot learning model; the actual counselor; and the reference reflection.
We administered a survey through Amazon Mechanical Turk Developer Sandbox. 50 of the testing prompts were provided along with the corresponding six response sources. Provided with the conversational context, the annotators evaluated responses based on three criteria: fluency, resemblance of reflection, and overall preference. Thus, for each context, evaluators measured the fluency, reflection resemblance, and overall preference for all six candidate responses.
We used a variation of Efficient Annotation of Scalar Labels (EASL), a hybrid approach between direct assessment and online pairwise ranking aggregation and rank-based magnitude estimation. Evaluators saw all six responses at once (without knowledge of each response’s origin) and used a sliding scale from 1 to 5 to rate the responses based on each of the three dimensions. The order of the model responses for each conversational context was randomized. We provided examples of response ratings for ratings of 1 and 5 on the overall fluency and reflection resemblance dimensions. However, we did not include an example for overall preference, noting its subjectivity.
Fluency refers to the response's overall fluency and human-likeness. In the instructions, we noted non-capitalized words and colloquial language are acceptable and not to be considered fluency errors. Reflection resemblance refers to whether the response captures and returns to the client something the client has said. Overall preference refers to the extent to which the evaluator likes the response.
Using Krippendorff’s alpha, we measured inter-annotator agreement, obtaining alpha values of -0.0369, 0.557, and 0.358 for overall fluency, reflection resemblance, and overall preference, respectively. Although these agreement values are low, the 0.557 inter-annotator agreement we obtained for reflection resemblance is notably higher than the inter-annotator agreement obtained for reflection likeness in the most relevant prior work.
#### Who are the annotators?
The three annotators recruited for the human evaluation were familiar with counseling reflections. All three annotators have worked with this large online counseling service dataset with IRB approval. They are quite familiar with motivational interviewing codes, annotating messages and using large language models for mass labeling.
### Personal and Sensitive Information
Due to the sensitive nature of this dataset and privacy concerns, we are unable to publicly share the data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset of reflections in peer-to-peer counseling can be used as a reference point in understanding and evaluating counselor clinical skills and furthering the potential of language technology to be applied in this space. Given the sensitive nature of the mental health care context and the minimal training of these counselors, the use of such data requires care in understanding the limitations of technology defined based on this language.
### Discussion of Biases
Much of the language of conversations on this online counseling service platform is very informal and some client and counselor utterances may also contain pejorative language.
As for the generated text assessed in the human evaluation of this work, it is important to note that GPT-3 was trained on over 45 terabytes of data from the internet and books, and large volumes of data collected from online sources will inevitably contain biases that may be captured. There may thus be inadvertent discrimination against subclasses of particular protected groups. Using generated responses as a source of guidance rather than using generative systems as the counselors themselves may be able to balance the benefits and risks of using artificial intelligence in delicate mental health settings. It is imperative that such systems are not misused by companies seeking to maximize efficiency and minimize cost.
The reference reflections in this work were created by the author, whose experience with counseling and motivational interviewing derives from over one hundred hours of training at a teen-to-teen crisis hotline and textline service and experience through a research fellowship developing and user testing a platform for nurses to practice and grow their motivational interviewing skills. Therefore, the reference reflections may not be as clinically precise as are possible from a medical professional, and the diversity of reflections is inherently limited.
### Other Known Limitations
## Additional Information
### Dataset Curators
Developed by Emma O'Neil, João Sedoc, Diyi Yang, Haiyi Zhu, Lyle Ungar.
### Licensing Information
### Citation Information
### Contributions
Thanks to [@emoneil](https://github.com/emoneil) for adding this dataset. |
skytnt | null | null | A segmentation dataset for anime character | false | 20 | false | skytnt/anime-segmentation | 2022-10-03T01:35:40.000Z | null | false | 6685505e1e3c02ac0483398e633922b31de89fb0 | [] | [
"license:cc0-1.0",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:image-segmentation",
"task_ids:semantic-segmentation"
] | https://huggingface.co/datasets/skytnt/anime-segmentation/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license:
- cc0-1.0
multilinguality: []
pretty_name: Anime Segmentation
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
## Dataset Description
A segmentation dataset for anime characters
My project: [anime-segmentation](https://github.com/SkyTNT/anime-segmentation)
### Dataset Summary
| Dir | Description | Format | Images |
| ---- | ---- | ---- | ---- |
| bg | background images | jpg | 8057 |
| fg | foreground images, transparent background | png | 11802 |
| imgs | real images with background and foreground | jpg | 1111 |
| masks| labels for imgs | jpg | 1111 |
Total size: 18GB
### Collection Method
Collect background from [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download)
Collect foreground from danbooru website.
Collect imgs and masks from [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models) and danbooru website.
I use [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to restore the background images.
I cleaned the dataset using [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) first, then manually, to make sure every foreground image is an anime character.
### Contributions
Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset.
Thanks to [@ShuhongChen](https://github.com/ShuhongChen) for [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download)
Thanks to [@jerryli27](https://github.com/jerryli27) for [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models)
|
neibla | null | null | null | false | 3 | false | neibla/debates | 2022-09-30T08:51:33.000Z | null | false | 909545ffdb20e7d356b95c561f54afa9e12f7a3c | [] | [
"license:mit"
] | https://huggingface.co/datasets/neibla/debates/resolve/main/README.md | ---
license: mit
---
|
skatemonke | null | null | null | false | null | false | skatemonke/bartek | 2022-09-30T10:54:05.000Z | null | false | 56d4be6b894907642fa235e00540b257b303b2fc | [] | [
"license:unknown"
] | https://huggingface.co/datasets/skatemonke/bartek/resolve/main/README.md | ---
license: unknown
---
|
khaclinh | null | null | null | false | 1 | false | khaclinh/testdata | 2022-10-11T05:31:45.000Z | null | false | d27f0f48476bc24ee60e2bd50c0a7f002d6f2eea | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"task_categories:object-detection",
"task_ids:face-detection",
"task_ids:license-plate-detection"
] | https://huggingface.co/datasets/khaclinh/testdata/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- object-detection
task_ids:
- face-detection
- license-plate-detection
pretty_name: PP4AV
---
# Dataset Card for PP4AV
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/khaclinh/pp4av
- **Repository:**
- **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving]
- **Point of Contact:** linhtk.dhbk@gmail.com
### Dataset Summary
PP4AV is the first public dataset with faces and license plates annotated in driving scenarios. PP4AV provides 3,447 driving images annotated for both faces and license plates. For normal camera data, images were sampled from existing videos in which cameras were mounted on moving vehicles driving around European cities. The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime. For fisheye camera data, this dataset uses fisheye images from the WoodScape dataset, selecting 244 images from the front, rear, left, and right cameras. The PP4AV dataset can be used as a benchmark suite (evaluation dataset) for data anonymization models in autonomous driving.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its face and license plate annotations.
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1920x1080 at 0x19FA12186D8>,
  'objects': {
    'bbox': [
      [0, 0.230078, 0.317081, 0.239062, 0.331367],
      [1, 0.5017185, 0.0306425, 0.5185935, 0.0410975],
      [1, 0.695078, 0.0710145, 0.7109375, 0.0863355],
      [1, 0.4089065, 0.31646, 0.414375, 0.32764],
      [0, 0.1843745, 0.403416, 0.201093, 0.414182],
      [0, 0.7132, 0.3393474, 0.717922, 0.3514285]
]
}
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `objects`: a dictionary of face and license plate bounding boxes present on the image
- `bbox`: the bounding box of each face and license plate (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format). Basically, each row in annotation `.txt` file for each image `.png` file consists of data in format: `<object-class> <x_center> <y_center> <width> <height>`:
    - `object-class`: an integer from 0 to 1, where 0 indicates a face object and 1 indicates a license plate object
- `x_center`: normalized x-axis coordinate of the center of the bounding box.
`x_center = <absolute_x_center> / <image_width>`
- `y_center`: normalized y-axis coordinate of the center of the bounding box.
`y_center = <absolute_y_center> / <image_height>`
- `width`: normalized width of the bounding box.
`width = <absolute_width> / <image_width>`
    - `height`: normalized height of the bounding box.
`height = <absolute_height> / <image_height>`
- Example lines in a YOLO v1.1 format `.txt` annotation file:
```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
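As a minimal sketch of the normalization above (the helper and its argument names are illustrative assumptions, not part of the PP4AV tooling), an absolute pixel box can be converted to the YOLO fields like this:

```python
# Hedged sketch: converting an absolute pixel bounding box
# (x_min, y_min, x_max, y_max) into the normalized YOLO fields
# <object-class> <x_center> <y_center> <width> <height> described above.
def to_yolo(object_class, x_min, y_min, x_max, y_max, image_width, image_height):
    x_center = (x_min + x_max) / 2 / image_width
    y_center = (y_min + y_max) / 2 / image_height
    width = (x_max - x_min) / image_width
    height = (y_max - y_min) / image_height
    return object_class, x_center, y_center, width, height

# A 192x80 px license plate (class 1) centered at (960, 540) in a 1920x1080 image:
print(to_yolo(1, 864, 500, 1056, 580, 1920, 1080))
```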
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from existing videos in which cameras were mounted on moving vehicles driving around European cities. We focus on sampling data in urban areas rather than highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. The source data from the 6 European cities is described as follows:
- `Paris`: This subset contains **1450** images of the car driving down a Parisian street during the day. The video frame rate is 30 frames per second. The video is longer than one hour. We cut a shorter video for sampling and annotation. The original video can be found at the following URL:
URL: [paris_youtube_video](https://www.youtube.com/watch?v=nqWtGWymV6c)
- `Netherland day time`: This subset consists of **388** daytime images of The Hague and Amsterdam. The images in this subset are sampled from the original video below:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=Xuo4uCZxNrE)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Netherland night time`: This subset consists of **824** nighttime images of The Hague and Amsterdam, sampled from the following original video:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=eAy9eHsynhM)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Switzerland`: This subset consists of **372** images of Switzerland, sampled from the following video:
URL: [switzerland_youtube_video](https://www.youtube.com/watch?v=0iw5IP94m0Q)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than one hour.
- `Zurich`: This subset consists of **50** images of Zurich city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Stuttgart`: This subset consists of **69** images of Stuttgart city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Strasbourg`: This subset consists of **50** images of Strasbourg city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
We use the fisheye images from the WoodScape dataset to select **244** images from the front, rear, left, and right cameras for fisheye camera data.
The source of fisheye data for sampling is located at WoodScape's [Fisheye images](https://woodscape.valeo.com/download).
In total, **3,447** images were selected and annotated in PP4AV.
### Annotations
#### Annotation process
Annotators annotated facial and license plate objects in images. For facial objects, bounding boxes were drawn around all detectable human faces, from the forehead to the chin and ear to ear. Faces were labelled across diverse sizes and skin tones, including faces partially obscured by a transparent material such as a car windshield. For license plate objects, bounding boxes cover all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles. License plates were annotated for vehicles involved in moving traffic. To ensure annotation quality, a two-step process was used. In the first phase, two teams of annotators independently annotated identical image sets. After their annotation output was complete, a merging method based on the IoU score between the two bounding boxes of the two annotations was applied: pairs of annotations with IoU scores above a threshold were merged and saved as a single annotation, while pairs with IoU scores below the threshold were considered conflicting. In the second phase, two teams of reviewers inspected the conflicting pairs of annotations for revision before a second merging step, similar to the first, was applied. The results of these two phases were combined to form the final annotation. All work was conducted with the CVAT tool: https://github.com/openvinotoolkit/cvat.
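The IoU-based merge step can be illustrated as follows; the 0.7 threshold and the box-averaging rule are assumptions for the sketch, not the authors' exact procedure:

```python
# Illustrative sketch of merging two annotators' boxes by IoU. Boxes are
# (x_min, y_min, x_max, y_max) in pixels. The threshold and averaging rule
# are assumed for illustration only.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_if_agreeing(box1, box2, threshold=0.7):
    """Return the averaged box if the pair agrees, else None
    (a conflict that would go to the second review phase)."""
    if iou(box1, box2) >= threshold:
        return tuple((p + q) / 2 for p, q in zip(box1, box2))
    return None

print(merge_if_agreeing((100, 100, 200, 200), (105, 102, 198, 205)))
```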
#### Who are the annotators?
Vantix Data Science team
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Linh Trinh
### Licensing Information
[Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
@article{PP4AV2022,
title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
author = {Linh Trinh, Phuong Pham, Hoang Trinh, Nguyen Bach, Dung Nguyen, Giang Nguyen, Huy Nguyen},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year = {2023}
}
```
### Contributions
Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
|
davanstrien | null | null | null | false | null | false | davanstrien/autotrain-data-metadata_eval | 2022-09-30T09:19:44.000Z | null | true | 290545cfbfb328e0d38b3af0a52662c1801d8681 | [] | [
"language:it",
"task_categories:text-classification"
] | https://huggingface.co/datasets/davanstrien/autotrain-data-metadata_eval/resolve/main/README.md | |
Speedy02 | null | null | null | false | 1 | false | Speedy02/eric | 2022-09-30T09:55:02.000Z | null | false | ec76e2bfdd7bfbd9d04b24b5d0cbefb424e0b5c9 | [] | [] | https://huggingface.co/datasets/Speedy02/eric/resolve/main/README.md | Eric pics |
delima87 | null | null | null | false | 80 | false | delima87/beesvsants | 2022-09-30T09:34:41.000Z | null | false | 56584831fefeb7d6cef37df192c05f4ad8b8fc00 | [] | [] | https://huggingface.co/datasets/delima87/beesvsants/resolve/main/README.md | This dataset contains images for the classification of bees and ants |
DFKI-SLT | null | @inproceedings{zhang-etal-2017-position,
title = "Position-aware Attention and Supervised Data Improve Slot Filling",
author = "Zhang, Yuhao and
Zhong, Victor and
Chen, Danqi and
Angeli, Gabor and
Manning, Christopher D.",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1004",
doi = "10.18653/v1/D17-1004",
pages = "35--45",
}
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
} | TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire
and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges.
Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created
by combining available human annotations from the TAC KBP challenges and crowdsourcing.
Please see our EMNLP paper, or our EMNLP slides for full details.
Note: There is currently a label-corrected version of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the TACRED Revisited paper
published at ACL 2020.
NOTE: This Datasetreader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is that we want to support additional languages, for which these fields were not required
or available. The reader expects the specification of a language-specific configuration specifying the variant
(original or revised) and the language (as a two-letter iso code). The default config is 'original-en'.
The Datasetreader changes the offsets of the following fields, to conform with standard Python usage (see
#_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive) | false | 6 | false | DFKI-SLT/multilingual_tacred | 2022-11-14T12:47:44.000Z | null | false | 1d506d8f768532f65d9529a21d5ba0ef3fdbf861 | [] | [
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fi",
"language:fr",
"language:hi",
"language:hu",
"language:ja",
"language:pl",
"language:ru",
"language:tr",
"language:zh",
"language_cre... | https://huggingface.co/datasets/DFKI-SLT/multilingual_tacred/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- ar
- de
- en
- es
- fi
- fr
- hi
- hu
- ja
- pl
- ru
- tr
- zh
language_creators:
- found
license:
- other
multilinguality:
- translation
pretty_name: The Multilingual TAC Relation Extraction Dataset
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for "multilingual_tacred"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nlp.stanford.edu/projects/tacred](https://nlp.stanford.edu/projects/tacred)
- **Paper:** [Position-aware Attention and Supervised Data Improve Slot Filling](https://aclanthology.org/D17-1004/)
- **Point of Contact:** See [https://nlp.stanford.edu/projects/tacred/](https://nlp.stanford.edu/projects/tacred/)
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 40.9 MB
- **Total amount of disk used:** 103.2 MB
### Dataset Summary
NOTE: This Datasetreader supports a reduced version of the original TACRED JSON format with the following changes:
- Removed fields: stanford_pos, stanford_ner, stanford_head, stanford_deprel, docid
The motivation for this is that we want to support additional languages, for which these fields were not required
or available. The reader expects the specification of a language-specific configuration specifying the variant
(original or revised) and the language (as a two-letter iso code). The default config is 'original-en'.
You can find the TACRED dataset reader for the original version of the
dataset [here](https://huggingface.co/datasets/DFKI-SLT/tacred).
The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created by combining available human annotations from the TAC
KBP challenges and crowdsourcing. Please see [Stanford's EMNLP paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf), or their [EMNLP slides](https://nlp.stanford.edu/projects/tacred/files/position-emnlp2017.pdf) for full details.
Note: There is currently a [label-corrected version](https://github.com/DFKI-NLP/tacrev) of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the [TACRED Revisited paper](https://aclanthology.org/2020.acl-main.142/)
published at ACL 2020.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-tacred](https://paperswithcode.com/sota/relation-extraction-on-tacred)
### Languages
The languages in the dataset are Arabic, German, English, Spanish, Finnish, French, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
All languages except English are machine-translated using either Deepl's or Google's translation APIs.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 40.9 MB
- **Total amount of disk used:** 103.2 MB
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"relation": "org:founded_by",
"token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
"subj_start": 10,
"subj_end": 13,
"obj_start": 0,
"obj_end": 2,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON"
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, obtained with the StanfordNLP toolkit, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
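As a worked sketch of the span fields, the snippet below (plain Python, no external libraries) recovers the subject and object mention text from the `train` example shown above, treating `subj_end`/`obj_end` as exclusive per the field descriptions:

```python
# Instance copied from the Data Instances example above (NER types omitted).
instance = {
    "relation": "org:founded_by",
    "token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year",
              "to", "form", "the", "All", "Basotho", "Convention", "-LRB-",
              "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17",
              "members", "of", "parliament", ",", "causing", "constitutional",
              "monarch", "King", "Letsie", "III", "to", "dissolve",
              "parliament", "and", "call", "the", "snap", "election", "."],
    "subj_start": 10, "subj_end": 13,
    "obj_start": 0, "obj_end": 2,
}

def mention(tokens, start, end):
    """Join the tokens of a mention span; `end` is exclusive."""
    return " ".join(tokens[start:end])

subj = mention(instance["token"], instance["subj_start"], instance["subj_end"])
obj = mention(instance["token"], instance["obj_start"], instance["obj_end"])
print(subj)  # All Basotho Convention
print(obj)   # Tom Thabane
```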
### Data Splits
To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run.
Language statistics for the splits differ because not all instances could be translated with the
subject and object entity markup still intact; instances where the markup was lost were discarded.
| Language (Translation Engine - D = DeepL, G = Google) | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| English (en) | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
| ar (G) | 67,736 | 22,502 | 15,425 |
| de (D) | 67,205 | 22,343 | 15,282 |
| es (D) | 65,247 | 21,697 | 14,908 |
| fi (D) | 66,751 | 22,268 | 15,083 |
| fr (D) | 66,856 | 22,248 | 15,237 |
| hi (G) | 67,751 | 22,511 | 15,440 |
| hu (G) | 67,766 | 22,519 | 15,436 |
| ja (D) | 61,571 | 20,290 | 13,701 |
| pl (G) | 68,124 | 22,631 | 15,509 |
| ru (D) | 66,413 | 21,998 | 14,995 |
| tr (G) | 67,652 | 22,510 | 15,429 |
| zh (D) | 65,211 | 21,490 | 14,694 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
See the Stanford paper and the TACRED Revisited paper, plus their appendices.
To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
are labeled as no_relation.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
To respect the copyright of the underlying TAC KBP corpus, TACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download TACRED from the [LDC TACRED webpage](https://catalog.ldc.upenn.edu/LDC2018T24).
If you are an LDC member, the access will be free; otherwise, an access fee of $25 is needed.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
### Contributions
Thanks to [@leonhardhennig](https://github.com/leonhardhennig) for adding this dataset.
|
holen | null | null | null | false | 1 | false | holen/Finite_element_crash_data | 2022-09-30T16:35:49.000Z | null | false | ca9324836eefa4c1d7bc835afcaace6759dc3202 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/holen/Finite_element_crash_data/resolve/main/README.md | ---
license: apache-2.0
---
The data contains three different vehicles from CCSA (https://www.ccsa.gmu.edu/models/):
A Toyota Yaris
A Chevy Silverado
And an ADS vehicle
These vehicles were tested at different speeds, and the binout files were stored.
The car models were used to develop an AI that could estimate a full frontal impact for different cars at different speeds.
This can then be used to predict the force of an impact for an Autonomous car simulator. |
pking | null | null | null | false | 1 | false | pking/SMG-NFT | 2022-10-04T19:31:50.000Z | null | false | 22ed42ff72e12eac2938306f120987e9b3e4c711 | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/pking/SMG-NFT/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'SMG-NFT'
size_categories:
- n<1K
source_datasets:
-
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for SMG-NFT
## Examples
## Citation
|
alkzar90 | null | @inproceedings{Wang_2017,
doi = {10.1109/cvpr.2017.369},
url = {https://doi.org/10.1109%2Fcvpr.2017.369},
year = 2017,
month = {jul},
publisher = {{IEEE}
},
author = {Xiaosong Wang and Yifan Peng and Le Lu and Zhiyong Lu and Mohammadhadi Bagheri and Ronald M. Summers},
title = {{ChestX}-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases},
booktitle = {2017 {IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})}
} | The NIH Chest X-ray dataset consists of 100,000 de-identified images of chest x-rays. The images are in PNG format.
The data is provided by the NIH Clinical Center and is available through the NIH download site: https://nihcc.app.box.com/v/ChestXray-NIHCC | false | 105 | false | alkzar90/NIH-Chest-X-ray-dataset | 2022-11-07T16:35:14.000Z | chestx-ray14 | false | e7ec57d45c19c155619cd21d9aac81d36899da00 | [] | [
"arxiv:1705.02315",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"task_categories:image-c... | https://huggingface.co/datasets/alkzar90/NIH-Chest-X-ray-dataset/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: NIH-CXR14
paperswithcode_id: chestx-ray14
size_categories:
- 100K<n<1M
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for NIH Chest X-ray dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NIH Chest X-ray Dataset of 10 Common Thorax Disease Categories](https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345)
- **Repository:**
- **Paper:** [ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases](https://arxiv.org/abs/1705.02315)
- **Leaderboard:**
- **Point of Contact:** rms@nih.gov
### Dataset Summary
_ChestX-ray dataset comprises 112,120 frontal-view X-ray images of 30,805 unique patients with the text-mined fourteen disease image labels (where each image can have multi-labels), mined from the associated radiological reports using natural language processing. Fourteen common thoracic pathologies include Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural_thickening, Cardiomegaly, Nodule, Mass and Hernia, which is an extension of the 8 common disease patterns listed in our CVPR2017 paper. Note that original radiology reports (associated with these chest x-ray studies) are not meant to be publicly shared for many reasons. The text-mined disease labels are expected to have accuracy >90%.Please find more details and benchmark performance of trained models based on 14 disease labels in our arxiv paper: [1705.02315](https://arxiv.org/abs/1705.02315)_

## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/95db46f21d556880cf0ecb11d45d5ba0b58fcb113c9a0fff2234eba8f74fe22a/images/00000798_022.png',
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=1024x1024 at 0x7F2151B144D0>,
'labels': [9, 3]}
```
### Data Fields
The data instances have the following fields:
- `image_file_path` a `str` with the image path
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"No Finding": 0,
"Atelectasis": 1,
"Cardiomegaly": 2,
"Effusion": 3,
"Infiltration": 4,
"Mass": 5,
"Nodule": 6,
"Pneumonia": 7,
"Pneumothorax": 8,
"Consolidation": 9,
"Edema": 10,
"Emphysema": 11,
"Fibrosis": 12,
"Pleural_Thickening": 13,
"Hernia": 14
}
```
</details>
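The integer `labels` of the training example above can be decoded with the mapping in the fold-out; a minimal sketch in plain Python:

```python
# Class-label names in index order, copied from the mapping above.
label_names = [
    "No Finding", "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration",
    "Mass", "Nodule", "Pneumonia", "Pneumothorax", "Consolidation",
    "Edema", "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia",
]

sample = {"labels": [9, 3]}  # from the training example shown above

# Each image can carry multiple findings, so `labels` is a list of ids.
decoded = [label_names[i] for i in sample["labels"]]
print(decoded)  # ['Consolidation', 'Effusion']
```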
**Label distribution on the dataset:**
| labels | obs | freq |
|:-------------------|------:|-----------:|
| No Finding | 60361 | 0.426468 |
| Infiltration | 19894 | 0.140557 |
| Effusion | 13317 | 0.0940885 |
| Atelectasis | 11559 | 0.0816677 |
| Nodule | 6331 | 0.0447304 |
| Mass | 5782 | 0.0408515 |
| Pneumothorax | 5302 | 0.0374602 |
| Consolidation | 4667 | 0.0329737 |
| Pleural_Thickening | 3385 | 0.023916 |
| Cardiomegaly | 2776 | 0.0196132 |
| Emphysema | 2516 | 0.0177763 |
| Edema | 2303 | 0.0162714 |
| Fibrosis | 1686 | 0.0119121 |
| Pneumonia | 1431 | 0.0101104 |
| Hernia | 227 | 0.00160382 |
### Data Splits
| |train| test|
|-------------|----:|----:|
|# of examples|86524|25596|
**Label distribution by dataset split:**
| labels | ('Train', 'obs') | ('Train', 'freq') | ('Test', 'obs') | ('Test', 'freq') |
|:-------------------|-------------------:|--------------------:|------------------:|-------------------:|
| No Finding | 50500 | 0.483392 | 9861 | 0.266032 |
| Infiltration | 13782 | 0.131923 | 6112 | 0.164891 |
| Effusion | 8659 | 0.082885 | 4658 | 0.125664 |
| Atelectasis | 8280 | 0.0792572 | 3279 | 0.0884614 |
| Nodule | 4708 | 0.0450656 | 1623 | 0.0437856 |
| Mass | 4034 | 0.038614 | 1748 | 0.0471578 |
| Consolidation | 2852 | 0.0272997 | 1815 | 0.0489654 |
| Pneumothorax | 2637 | 0.0252417 | 2665 | 0.0718968 |
| Pleural_Thickening | 2242 | 0.0214607 | 1143 | 0.0308361 |
| Cardiomegaly | 1707 | 0.0163396 | 1069 | 0.0288397 |
| Emphysema | 1423 | 0.0136211 | 1093 | 0.0294871 |
| Edema | 1378 | 0.0131904 | 925 | 0.0249548 |
| Fibrosis | 1251 | 0.0119747 | 435 | 0.0117355 |
| Pneumonia | 876 | 0.00838518 | 555 | 0.0149729 |
| Hernia | 141 | 0.00134967 | 86 | 0.00232012 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### License and attribution
There are no restrictions on the use of the NIH chest x-ray images. However, the dataset has the following attribution requirements:
- Provide a link to the NIH download site: https://nihcc.app.box.com/v/ChestXray-NIHCC
- Include a citation to the CVPR 2017 paper (see Citation information section)
- Acknowledge that the NIH Clinical Center is the data provider
### Citation Information
```
@inproceedings{Wang_2017,
doi = {10.1109/cvpr.2017.369},
url = {https://doi.org/10.1109%2Fcvpr.2017.369},
year = 2017,
month = {jul},
publisher = {{IEEE}
},
author = {Xiaosong Wang and Yifan Peng and Le Lu and Zhiyong Lu and Mohammadhadi Bagheri and Ronald M. Summers},
title = {{ChestX}-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases},
booktitle = {2017 {IEEE} Conference on Computer Vision and Pattern Recognition ({CVPR})}
}
```
### Contributions
Thanks to [@alcazar90](https://github.com/alcazar90) for adding this dataset.
|
TurkuNLP | null |
Please cite the article and also acknowledge Filip Ginter / TurkuNLP for the machine translated version
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
} | This dataset is a DeepL -based machine translation of a part of the English section of the XLSum dataset:[https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum) In the present version, only examples where the full version is at most 10x the summary in length are included. We might translate more later. | false | 12 | false | TurkuNLP/xlsum-fi | 2022-10-25T06:30:19.000Z | null | false | aa33b87297442d3bf9aa64ac8db2f1f14bd76b4f | [] | [
"annotations_creators:found",
"language_creators:machine translated",
"language:fi",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:xlsum",
"task_categories:summarization",
"task_categories:text2text-generation",
"tags:conditional-text-gene... | https://huggingface.co/datasets/TurkuNLP/xlsum-fi/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- machine translated
language:
- fi
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- xlsum
task_categories:
- summarization
- text2text-generation
task_ids: []
pretty_name: XL-Sum-FI
tags:
- conditional-text-generation
---
# Dataset Card for "XL-Sum-FI"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/TurkuNLP/xlsum-fi
- **Point of Contact:** [Filip Ginter](mailto:figint@utu.fi)
### Dataset Summary
This dataset is a DeepL -based machine translation of a part of the English section of the XLSum dataset:[https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum) In the present version, only examples where the full version is at most 10x the summary in length are included. We might translate more later.
### Supported Tasks and Leaderboards
### Languages
- `finnish`
## Dataset Structure
### Data Instances
One example from the `Finnish` dataset is given below in JSON format.
```
{
"id": "technology-17657859",
"url": "https://www.bbc.com/news/technology-17657859",
"title": "Walesin myrskytuulien vuoksi annettu säävaroitus",
"summary": "Tuulet voivat yltyä Walesissa myrskytuuliin, ja myrskysää on luvassa koko maahan tällä viikolla.",
"text": "Met Office on antanut Walesin ja Englannin kattavan keltaisen tuulivaroituksen keskiviikkoillasta kello 21.00 GMT alkaen. Matkustaminen ja sähkönjakelu todennäköisesti häiriintyvät, ja varoitus on voimassa torstaihin kello 15:00 asti. Puuskat ovat todennäköisesti nopeudeltaan 88 kilometriä tunnissa, ja rannikoilla ja kukkuloilla puuskat voivat nousta jopa 70 kilometriin tunnissa, ja lisäksi voi esiintyä rankkasateita ja myrskyisiä sadekuuroja."
}
```
### Data Fields
- 'id': A string representing the article ID, matched to the XLSum dataset original
- 'url': A string representing the article URL as in the original XLSum dataset
- 'title': A string containing the article title, machine-translated to Finnish
- 'summary': A string containing the article summary, machine-translated to Finnish
- 'text' : A string containing the article text, machine-translated to Finnish
### Data Splits
Follows the XLSum dataset.
## Dataset Creation
### Curation Rationale
### Source Data
[BBC News](https://www.bbc.co.uk/ws/languages)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) For this present dataset, only English was used as the source and only examples where the full text is at maximum 10x in length compared to the summary are preserved. This 10x cutoff is naturally measured on English.
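The 10x cutoff described above can be sketched as a simple filter. This is a hedged illustration: the card does not say whether length was measured in characters or tokens, so character length is an assumption here.

```python
def within_cutoff(example, ratio=10):
    # Keep an example only if the full text is at most `ratio` times the
    # summary in length (measured in characters -- an assumption).
    return len(example["text"]) <= ratio * len(example["summary"])

# Two toy examples: the first passes the cutoff, the second does not.
examples = [
    {"summary": "short", "text": "short" * 5},   # 25 chars vs. 5 * 10 = 50
    {"summary": "short", "text": "short" * 20},  # 100 chars vs. 50
]
kept = [e for e in examples if within_cutoff(e)]
print(len(kept))  # 1
```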
#### Who are the source language producers?
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
### Annotations
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) DeepL was used to machine-translate from English to Finnish
#### Annotation process
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
#### Who are the annotators?
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/xl-sum)
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation system development and evaluation of any kind. In general, we ask that you do not pair the original English data with the translations except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
## Additional Information
### Dataset Curators
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the original XL-Sum paper below as well as acknowledge Filip Ginter and the TurkuNLP group for the Finnish machine translated version.
```
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
}
```
### Contributions
Thanks to the creators of the XLSum dataset! |
lhoestq | null | null | null | false | 2 | false | lhoestq/tmp-test-4 | 2022-09-30T13:10:31.000Z | null | false | 8e66387b3fec7bc49d92307483c2baa1070aadbb | [] | [] | https://huggingface.co/datasets/lhoestq/tmp-test-4/resolve/main/README.md | ---
dataset_info:
features:
- name: a
dtype: int64
splits:
- name: train
num_bytes: 40
num_examples: 5
download_size: 587
dataset_size: 40
---
Dataset Card for "tmp-test-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhoestq | null | null | null | false | 1 | false | lhoestq/tmp-test-5 | 2022-09-30T13:12:59.000Z | null | false | 0ca85c42aa2ddcab55609afa2d9dba28cc51fd45 | [] | [] | https://huggingface.co/datasets/lhoestq/tmp-test-5/resolve/main/README.md | ---
dataset_info:
features:
- name: a
dtype: int64
splits:
- name: train
num_bytes: 40
num_examples: 5
download_size: 587
dataset_size: 40
---
Dataset Card for "tmp-test-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Besedo | null | null | null | false | 1 | false | Besedo/random-dataset-10000 | 2022-09-30T15:27:40.000Z | null | false | 5e63d4fc3c1140553c27f8db01e881011147b0b6 | [] | [] | https://huggingface.co/datasets/Besedo/random-dataset-10000/resolve/main/README.md | This dataset was pushed to Hub through the UI. |
Besedo | null | null | null | false | 1 | false | Besedo/random-dataset-1000000 | 2022-09-30T15:25:51.000Z | null | false | 882bcea9e7a2a6c83e55fee2f9021b4bdf4f95f2 | [] | [] | https://huggingface.co/datasets/Besedo/random-dataset-1000000/resolve/main/README.md | This dataset was programmatically uploaded to this repo using huggingface-hub Python API |
Marcelpribu | null | null | null | false | 1 | false | Marcelpribu/stabledifusion | 2022-10-03T19:20:28.000Z | null | false | 907bfd21480fca99235f17dc91de2e65dde63960 | [] | [
"license:other"
] | https://huggingface.co/datasets/Marcelpribu/stabledifusion/resolve/main/README.md | ---
license: other
---
|
din0s | null | null | null | false | 4 | false | din0s/msmarco-nlgen | 2022-10-01T12:30:18.000Z | null | false | 27624246741bea210f5f437820169dc2e39d41d4 | [] | [
"arxiv:1611.09268",
"annotations_creators:expert-generated",
"language:en",
"language_creators:crowdsourced",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|ms_marco",
"tags:msmarco",
"tags:natural language generation",
"tags:question a... | https://huggingface.co/datasets/din0s/msmarco-nlgen/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: MSMARCO NLGEN
size_categories:
- 100K<n<1M
source_datasets:
- extended|ms_marco
tags:
- msmarco
- natural language generation
- question answering
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for MSMARCO - Natural Language Generation Task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://microsoft.github.io/msmarco/
- **Repository:** https://github.com/microsoft/MSMARCO-Question-Answering
- **Paper:** https://arxiv.org/abs/1611.09268
- **Leaderboard:** https://microsoft.github.io/msmarco#qnadataset
### Dataset Summary
The original focus of MSMARCO was to provide a corpus for training and testing systems that, given a real domain user query, provide the most likely candidate answer in language which is natural and conversational. All questions have been generated from real anonymized Bing user queries, which grounds the dataset in a real-world problem and exposes researchers to the constraints their models might face in practice. The context passages, from which the answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated.
### Supported Tasks and Leaderboards
Question Answering & Natural Language Generation. [Leaderboard](https://microsoft.github.io/msmarco#qnadataset)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"query_id":604568,
"query":"what county is columbus city in",
"passages":[
{
"is_selected":0,
"passage_text":"WELCOME TO COLUMBUS! The City of Columbus includes a mix of residential, rural and commercial property. Columbus boasts large tracts of public land, including Carlos Avery Wildlife Management Area and Lamprey Pass.",
"url":"http://www.ci.columbus.mn.us/"
},
{
"is_selected":0,
"passage_text":"The ratio of number of residents in Columbus to the number of sex offenders is 488 to 1. The number of registered sex offenders compared to the number of residents in this city is near the state average. Nearest city with pop. 50,000+: Bloomington, IN (33.3 miles , pop. 69,291).",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"Phone Number: Columbus-Muscogee, the first consolidated city-county in Georgia, began development in 1826, building on ceded Creek Indian territory. Muscogee is the name of a branch of the Creek Nation. Columbus, of course, is named for Christopher Columbus.",
"url":"https://georgia.gov/cities-counties/columbus-muscogee-county"
},
{
"is_selected":1,
"passage_text":"Sponsored Topics. Columbus ( /kəlʌmbəs/) is a city in and the county seat of Bartholomew County, Indiana, United States. The population was 44,061 at the 2010 census, and the current mayor is Fred Armstrong. Located approximately 40 miles (64 km) south of Indianapolis, on the east fork of the White River, it is the state's 20th largest city.",
"url":"https://www.mapquest.com/us/in/columbus-282032817"
},
{
"is_selected":0,
"passage_text":"Columbus, Ohio. Columbus (/kəˈlʌmbəs/; kə-LUM-bəs) is the capital and largest city of the U.S. state of Ohio. It is the 15th-largest city in the United States, with a population of 850,106 as of 2015 estimates. This makes Columbus the fourth-most populous state capital in the United States, and the third-largest city in the Midwestern United States.",
"url":"https://en.wikipedia.org/wiki/Columbus,_Ohio"
},
{
"is_selected":0,
"passage_text":"Phone Number: Columbus-Muscogee, the first consolidated city-county in Georgia, began development in 1826, building on ceded Creek Indian territory. Muscogee is the name of a branch of the Creek Nation. Columbus, of course, is named for Christopher Columbus.",
"url":"https://georgia.gov/cities-counties/columbus"
},
{
"is_selected":0,
"passage_text":"Latest news from Columbus, IN collected exclusively by city-data.com from local newspapers, TV, and radio stations. Ancestries: American (30.5%), German (13.7%), English (7.7%), Irish (5.3%), European (2.4%), Scottish (1.2%).",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"Columbus, Indiana. 1 Columbus: covered Bridge at Mill Race Park. 2 Columbus: A statue in cloumbus. 3 Columbus. Columbus: Bartholomew County Courthouse. Columbus: Tipton Lakes - A wonderful planned 1 community! Columbus: Barthalomew county memorial for veterans. Columbus: A sculpter called summer storm in 1 columbus. Columbus: Downtown Columbus.",
"url":"http://www.city-data.com/city/Columbus-Indiana.html"
},
{
"is_selected":0,
"passage_text":"The City owns and operates a volunteer fire department through a joint powers agreement with the City of Forest Lake. Police protection is provided through a contract with the Anoka County Sheriff’s Department. Columbus is located within the Forest Lake Area School District (ISD #831).",
"url":"http://www.ci.columbus.mn.us/"
},
{
"is_selected":0,
"passage_text":"Acceptable ID for children: State ID, Birth Certificate, or Health Insurance Card. Effective June 27, 2016, the Franklin County Sheriff's Office will be implementing changes to ensure the safety of inmates, staff, and visitors. Printed materials (magazines, books, pamphlets, leaflets, or catalogues) MUST fit all the below criteria:",
"url":"https://sheriff.franklincountyohio.gov/services/inmate-information.cfm"
}
],
"query_type":"LOCATION",
"answers":[
"Columbus is a city in Bartholomew County."
]
}
```
### Data Fields
- `query_id`: a unique id for each query that is used in evaluation
- `query`: a unique query based on initial Bing usage
- `passages`: a list of 10 passages (`passage_text`), URLs (`url`), and an annotation if they were used to formulate the answer (`is_selected`)
- `query_type`: a basic division of queries based on a trained classifier (`LOCATION`,`NUMERIC`,`PERSON`,`DESCRIPTION`,`ENTITY`)
- `answers`: a list of "well-formed" answers generated by human annotators using natural language
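A minimal sketch (plain Python) of pulling the annotator-selected passage out of an instance with the fields above; the instance here is a trimmed, illustrative version of the example shown earlier, with passage texts abbreviated:

```python
instance = {
    "query": "what county is columbus city in",
    "passages": [
        {"is_selected": 0, "passage_text": "WELCOME TO COLUMBUS! ..."},
        {"is_selected": 1,
         "passage_text": "Columbus is a city in and the county seat of "
                         "Bartholomew County, Indiana, United States. ..."},
    ],
    "answers": ["Columbus is a city in Bartholomew County."],
}

# `is_selected` marks the passages the annotator used to write the answer.
selected = [p["passage_text"] for p in instance["passages"]
            if p["is_selected"] == 1]
print(len(selected))           # 1
print(instance["answers"][0])  # Columbus is a city in Bartholomew County.
```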
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 153725 |
| Dev | 12467 |
## Dataset Creation
### Curation Rationale
What are the differences between MS MARCO and other MRC datasets?
- Real questions: All questions have been sampled from real, anonymized Bing queries.
- Real Documents: Most of the URLs that the passages were sourced from contain the full web documents (passages).
- Human Generated Well-Formed Answers: All questions have an answer written by a human in natural language.
### Annotations
#### Annotation process
The MS MARCO dataset is generated by a well-oiled pipeline optimized for the highest-quality examples. The general process runs as follows:
1. Bing logs are sampled, filtered, and anonymized to make sure the queries are both useful to the research community and respectful to Bing users.
2. Using the sampled and anonymized queries, Bing generates the 10 most relevant passages for each query.
3. Highly trained judges read the query and its related passages and, if an answer is present, annotate the supporting passages and generate a natural language answer.
4. A smaller proportion of queries (~17% of the overall dataset, covering 182,887 unique queries) is then passed on to a second round of judges, who are asked to verify that the answer is correct and to rewrite it (if possible) into a well-formed answer. These answers are designed to be understood without perfect context, with smart speakers and digital assistants in mind.
## Additional Information
### Licensing Information
MS MARCO is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
```
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. |
DrHalom | null | null | null | false | 1 | false | DrHalom/hk-grey | 2022-09-30T15:21:38.000Z | null | false | 286c357e880bb89dabca70d160a43860517f875b | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/DrHalom/hk-grey/resolve/main/README.md | ---
license: afl-3.0
---
|
DavLeonardo | null | null | null | false | 1 | false | DavLeonardo/fotitos | 2022-09-30T16:58:50.000Z | null | false | 65fe91d79e2e3360048c78eae0905634cf57bb99 | [] | [] | https://huggingface.co/datasets/DavLeonardo/fotitos/resolve/main/README.md | |
Alexvval | null | null | null | false | 1 | false | Alexvval/alexvalval | 2022-09-30T17:22:36.000Z | null | false | 93f49f7324347fc8ea13e5d6ff99de978292b293 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Alexvval/alexvalval/resolve/main/README.md | ---
license: cc
---
|
Hellisotherpeople | null | null | null | false | 1 | false | Hellisotherpeople/Lipogram-e | 2022-09-30T18:04:43.000Z | null | false | 5e3ddde521c24727a134e4825d2927de25784c41 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:expert-generated",
"license:mit",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:ctgs",
"tags:CTGS",
"tags:constrained-text-generation",
"tags:lipogram",
"tags:i-hate-the-lett... | https://huggingface.co/datasets/Hellisotherpeople/Lipogram-e/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: 'Lipogram-e from Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio'
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- ctgs
- CTGS
- constrained-text-generation
- lipogram
- i-hate-the-letter-e
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for Lipogram-e
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio
- **Repository**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio
- **Paper** Most Language Models can be Poets too: An AI Writing Assistant
and Constrained Text Generation Studio
- **Leaderboard**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio
- **Point of Contact**: https://www.linkedin.com/in/allen-roush-27721011b/
### Dataset Summary



This is a dataset of 3 English books which do not contain the letter "e". It includes all of "Gadsby" by Ernest Vincent Wright, all of "A Void" by Georges Perec, and almost all of "Eunoia" by Christian Bok (except for the single chapter that uses the letter "e").
This dataset is contributed as part of a paper titled "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" to appear at COLING 2022.
This dataset and the works within them are examples of Lipograms, which are works where a letter or string is systematically omitted. Lipograms are an example of hard-constrained writing.
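The defining constraint is easy to check mechanically. A minimal sketch (the function name is illustrative, not part of the dataset's tooling):

```python
# Minimal check of the lipogram constraint described above: a text is an
# "e"-lipogram if the letter "e" never occurs, case-insensitively.
def is_e_lipogram(text: str) -> bool:
    return "e" not in text.lower()

# The famous opening words of "Gadsby" satisfy the constraint:
print(is_e_lipogram("If Youth, throughout all history, had had a champion"))  # → True
print(is_e_lipogram("the letter e"))  # → False
```

A check like this is also a cheap way to catch OCR artifacts in the converted text.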
### Supported Tasks and Leaderboards
The main task for this dataset is Constrained Text Generation - but all types of language modeling are suitable.
### Languages
English
## Dataset Structure
### Data Instances
Each book is extracted directly from the available PDF or EPUB documents, converted to txt using pandoc.
### Data Fields
Text. The name of each work appears before the work starts and again at the end, so the books can be trivially split again if necessary.
### Data Splits
None given. The way I do so in the paper is to extract the final 20% of each book and concatenate these together. This may not be the ideal way to do a train/test split, but I couldn't think of a better one. I did not believe random sampling was appropriate, but I could be wrong.
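Under that description, the split might look roughly like the sketch below; the character-based cut point is an assumption for illustration, and the paper's exact procedure may differ:

```python
# Sketch of the split described above: hold out the final 20% of each
# book (here, by character count) and concatenate the held-out tails.
def split_books(books, test_fraction=0.2):
    train_parts, test_parts = [], []
    for text in books:
        cut = int(len(text) * (1 - test_fraction))
        train_parts.append(text[:cut])
        test_parts.append(text[cut:])
    return "".join(train_parts), "".join(test_parts)
```

Splitting per book before concatenating keeps material from all three authors in both splits, which random sampling of the concatenated corpus would not guarantee at the passage level.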
## Dataset Creation
### Curation Rationale
One way that we could extract text from datasets that doesn't use the letter "e" in it would be to simply computationally parse through large existing datasets for blocks or sentences which don't have the letter "e" in them. Unfortunately, this is extremely unlikely to lead to coherent or meaningful text. Doing so over increasingly large blocks or spans is likely to result in fewer and fewer examples. While the preparation of such a dataset would be fascinating in its own right - it is more interesting from the perspective of fine-tuning language models to have large scale prose narratives which fulfill the given constraint. This constraint of omitting the letter "e" is attractive because several book length works exist which do this.
### Source Data
#### Initial Data Collection and Normalization
Project Gutenberg
#### Who are the source language producers?
Ernest Vincent Wright
Georges Perec
Christian Bok
### Annotations
#### Annotation process
None
#### Who are the annotators?
n/a
### Personal and Sensitive Information
None
## Considerations for Using the Data
There may be conversion artifacts. I noticed 3 cases of the letter "e" being hallucinated during the PDF conversion of "A Void" that I had to fix manually. The conversion read special characters as the letter "e"; these were not mistakes by the authors themselves. This implies that at least a few OCR errors exist.
### Social Impact of Dataset
These books have existed for a while now, so it's unlikely that this dataset will have a dramatic social impact.
### Discussion of Biases
This dataset is 100% biased against the letter "e". There may be biases present in the contents of these works. It's recommended to read the books before using this in any non-research application to verify that they are not problematic.
### Other Known Limitations
It's possible that more works exist but were not well known enough for the authors to find and include them. Finding such works would be grounds for an iteration of this dataset (e.g. a version 1.1 would be released). The goal of this project is to eventually encompass all book-length English-language "e" lipograms.
## Additional Information
n/a
### Dataset Curators
Allen Roush
### Licensing Information
MIT
### Citation Information
TBA
### Contributions
Thanks to [@Hellisotherpeople](https://github.com/Hellisotherpeople) for adding this dataset.
|
Freemanvk1 | null | null | null | false | 1 | false | Freemanvk1/debbie | 2022-10-01T07:41:44.000Z | null | false | 4827f9e34855e68a8001a2970110d4999ac4488c | [] | [] | https://huggingface.co/datasets/Freemanvk1/debbie/resolve/main/README.md | |
loubnabnl | null | null | null | false | 5 | false | loubnabnl/code_pii_data | 2022-10-25T13:58:57.000Z | null | false | b369078e58e45a2481b4f4be8ce0e1193a4b6923 | [] | [
"language:code",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/loubnabnl/code_pii_data/resolve/main/README.md | ---
pinned: True
language: ["code"]
multilinguality:
- monolingual
---
This is the result of running PII detection on 500 samples of Python programs. The dataset contains samples for which sensitive information was detected. We look for emails, IP addresses, and SSH keys and hide them.
````python
Dataset({
features: ['path', 'size', 'content', 'regex_metadata', 'old_text', 'modified'],
num_rows: 110
})
```` |
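A rough sketch of the kind of regex-based detection and redaction described above; these patterns are simplified illustrations, not the ones used to build this dataset (which also covers SSH keys):

```python
import re

# Simplified, illustrative patterns for two of the PII categories named
# above. The real pipeline's regexes are more elaborate.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact(code: str) -> str:
    # Replace each detected span with a placeholder tag.
    code = EMAIL.sub("<EMAIL>", code)
    return IPV4.sub("<IP>", code)

print(redact("# maintainer: jane.doe@example.com, host 192.168.0.1"))
# → # maintainer: <EMAIL>, host <IP>
```

Keeping the original text alongside the modified one (as the `old_text` and `content` columns suggest) makes redactions auditable.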
ggerganov | null | null | null | false | 2 | false | ggerganov/whisper.cpp | 2022-11-16T05:09:22.000Z | null | false | 8984b58e901ba78ce9a92caa68c836746c443b2f | [] | [
"license:mit"
] | https://huggingface.co/datasets/ggerganov/whisper.cpp/resolve/main/README.md | ---
license: mit
---
# OpenAI's Whisper models converted to ggml format
[Available models](https://huggingface.co/datasets/ggerganov/whisper.cpp/tree/main)
| Model | Disk | Mem | SHA |
| --- | --- | --- | --- |
| tiny | 75 MB | ~390 MB | `bd577a113a864445d4c299885e0cb97d4ba92b5f` |
| tiny.en | 75 MB | ~390 MB | `c78c86eb1a8faa21b369bcd33207cc90d64ae9df` |
| base | 142 MB | ~500 MB | `465707469ff3a37a2b9b8d8f89f2f99de7299dac` |
| base.en | 142 MB | ~500 MB | `137c40403d78fd54d454da0f9bd998f78703390c` |
| small | 466 MB | ~1.0 GB | `55356645c2b361a969dfd0ef2c5a50d530afd8d5` |
| small.en | 466 MB | ~1.0 GB | `db8a495a91d927739e50b3fc1cc4c6b8f6c2d022` |
| medium | 1.5 GB | ~2.6 GB | `fd9727b6e1217c2f614f9b698455c4ffd82463b4` |
| medium.en | 1.5 GB | ~2.6 GB | `8c30f0e44ce9560643ebd10bbe50cd20eafd3723` |
| large | 2.9 GB | ~4.7 GB | `b1caaf735c4cc1429223d5a74f0f4d0b9b59a299` |
For more information, visit:
https://github.com/ggerganov/whisper.cpp/tree/master/models |
Freemanvk1 | null | null | null | false | 1 | false | Freemanvk1/Debbie1 | 2022-09-30T20:01:36.000Z | null | false | 491755a196fee0a932d93cfa390809f2aeb616d1 | [] | [] | https://huggingface.co/datasets/Freemanvk1/Debbie1/resolve/main/README.md | |
laion | null | null | null | false | 74 | false | laion/laion-coco | 2022-10-23T18:55:09.000Z | null | false | d22869de3ccd39dfec1507f7ded32e4a518dad24 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion-coco/resolve/main/README.md | ---
license: cc-by-4.0
---
# LAION COCO: 600M SYNTHETIC CAPTIONS FROM LAION2B-EN
by: Christoph Schuhmann, Andreas Köpf, Richard Vencu, Theo Coombes, Romain Beaumont, 10 Oct, 2022
Authors: Christoph Schuhmann, Andreas Köpf, Theo Coombes, Richard Vencu, Benjamin Trom, Romain Beaumont
We present LAION-COCO, the world’s largest dataset of 600M generated high-quality captions for publicly available web images.
Laion5B has five billion natural captions. They provide a lot of information, but could synthetic captions complement them? To answer this question, we use a combination of existing, publicly available models to produce high-quality captions for images in the style of MS COCO. We captioned 600M images from the English subset of Laion-5B with an ensemble of BLIP L/14 and 2 CLIP versions (L/14 and RN50x64).
This will make it possible to investigate the value of generated captions for training models. We're curious how these synthetic captions could impact models trained on them!
The 600M samples are provided in parquet files. Columns include the original caption, the url, the top caption and a list of alternative captions with lower CLIP-similarity scores.
## Method
The method we used to generate these captions was:
- Use BLIP L/14 to generate 40 candidate captions
- Rank them using the OpenAI CLIP L/14 model and select the best 5 captions
- Rank those using the OpenAI CLIP RN50x64 model to select the best one
- Use a small, fine-tuned T0 model to roughly repair the grammar and punctuation of the texts
The hyperparameters were chosen through a grid search (settings) by Andreas Köpf to best match the style (ROUGE scores) of MS COCO texts.
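The two-stage ranking above can be sketched as follows. `pick_caption` and the scoring callables are hypothetical stand-ins: the real pipeline uses CLIP image-text similarity scores, which are not reproduced here:

```python
# Sketch of the two-stage ranking described above. `score_l14` and
# `score_rn50x64` stand in for CLIP similarity scores against the image.
def pick_caption(candidates, score_l14, score_rn50x64, shortlist=5):
    # Stage 1: keep the 5 captions that CLIP L/14 scores highest.
    top = sorted(candidates, key=score_l14, reverse=True)[:shortlist]
    # Stage 2: let CLIP RN50x64 pick the single best of those.
    return max(top, key=score_rn50x64)
```

Using a cheaper model to shortlist and a stronger one to pick the winner keeps the expensive scoring to a handful of candidates per image.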
## Evaluation
We evaluated these generated captions by asking human evaluators to guess whether a caption came from a human or an AI model. We also asked them to rate the quality on a scale from 0 (bad) to 5 (good).
In a first round we presented each evaluator with 200 samples, containing 100 AI-generated and 100 human-written MS COCO captions.
## Observations
Mean rating and standard deviation by sample group:
| Sample group | Mean rating | Stdev |
| --- | --- | --- |
| Written by a human | 3.98 | 0.99 |
| Written by an AI | 3.89 | 1.12 |
| Annotator believed it was written by a human | 4.44 | 0.61 |
| Annotator believed it was generated by an AI | 3.50 | 1.15 |
## Interpretation
It is very interesting that the mean scores of the samples written by humans and those generated by the model are very similar. We also notice that the standard deviation of the generated captions is a little higher.
We hypothesize that in most cases the quality of the generated captions is perceived as being as good as the quality of the human-written captions.
But sometimes the captioning model obviously fails and the quality of the results is pretty low, because the model doesn't understand relevant concepts about what is going on in the picture: its knowledge is not grounded in a sufficiently sophisticated world model.
## Failure cases
“Two people posing for the camera in their wedding attire, one with an umbrella over his head and another with long red hair.”
“An older man having a heart attack, with his hand on the chest.”
When we remove all samples from the evaluations that have ratings of either 0 or 1, we observe that the mean ratings and standard deviations move closer together.
Scores without ratings of 0 and 1:
| Sample group | Mean rating | Stdev |
| --- | --- | --- |
| Written by a human | 4.07 | 0.81 |
| Written by an AI | 4.02 | 0.94 |
The mean ratings of the generated captions are still a little bit lower and the standard deviation is still a little bit higher, but the trend is pretty clear. By removing samples with rating 2, the gap between the qualities would probably decrease even further.
Presenting only generated captions:
In a next step, we presented the human evaluators 400 captions that were all generated by the model (no human-written captions in between):
| Metric | Value |
| --- | --- |
| Mean rating of all samples | 3.81 |
| Standard deviation of all samples | 0.94 |
| % rated as human | 47.5 |
| % rated as AI | 52.5 |
We observe that in 47.5% of all cases the human evaluators thought the captions were written by a human. This makes us confident that our captions are on average pretty good. When we told the evaluators afterwards that all captions were generated by the model, they told us that it was very hard for them to judge whether a caption was written by a model or a human, and that it was only easy in obvious failure cases.
## Conclusions
We conclude that our ensemble of BLIP and CLIP is already pretty good and capable of generating captions whose quality is, on average, pretty close to the human-written captions of MS COCO.
It would be very interesting for future work to let people rate our generated captions at a larger scale and then filter out the samples with low rating values. These results could be used to train models to rate the quality of captions and to predict whether a caption looks generated or human-written.
And even without further automated filtering, an ensemble of our captions and human evaluators would be a pretty good workflow for curating high-quality captions at much lower cost than asking humans to write them from scratch.
## Credit assignments
- Christoph Schuhmann lead the project, implemented a first version of the code, ran most of the generations & conducted the human evaluations
- Andreas Köpf conducted the hyperparameter search & wrote the code to execute BLIP + CLIP filtering at scale
- Theo Coombes managed the server that coordinated which GPU worker got which part of LAION to work on
- Romain Beaumont packaged the .json into parquet files, sent to HF and wrote the first draft of this post
- Richard Vencu provided the infrastructure to use the idle compute for this project
- Benjamin Trom wrote code that helped us convert the .json files to parquet
We thank stability.ai for providing the compute used to generate the captions in the dataset. |
Kasuzu | null | null | null | false | 1 | false | Kasuzu/522 | 2022-09-30T21:23:27.000Z | null | false | 3ff14c818bc168cb674b5f954b3933fc05f55e50 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Kasuzu/522/resolve/main/README.md | ---
license: unknown
---
|
luciolrv | null | null | null | false | 1 | false | luciolrv/lener_br_finetuning_language_model | 2022-11-07T12:00:44.000Z | null | false | 62a4824960539c437641d91c356703f21246c1c7 | [] | [] | https://huggingface.co/datasets/luciolrv/lener_br_finetuning_language_model/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1254026
num_examples: 2659
- name: validation
num_bytes: 574627
num_examples: 665
download_size: 989047
dataset_size: 1828653
---
# Dataset Card for "lener_br_finetuning_language_model"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |