author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anton-l | null | @misc{https://doi.org/10.48550/arxiv.2203.15591,
doi = {10.48550/ARXIV.2203.15591},
url = {https://arxiv.org/abs/2203.15591},
author = {Del Rio, Miguel and Ha, Peter and McNamara, Quinten and Miller, Corey and Chandra, Shipra},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {Earnings-22: A Practical Benchmark for Accents in the Wild},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
} | The Earnings 22 dataset (also referred to as earnings22) is a 119-hour corpus of English-language earnings calls collected from global companies.
The primary purpose is to serve as a benchmark for industrial and academic automatic speech recognition (ASR) models on real-world accented speech. | false | 8 | false | anton-l/earnings22_baseline_5_gram | 2022-10-17T18:35:04.000Z | null | false | deb6287d02a3b1465a6ea16f6a99f04bac73b348 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram/resolve/main/README.md | ---
license: apache-2.0
---
|
Shushant | null | null | null | false | 8 | false | Shushant/CovidNepaliTweets | 2022-09-17T15:44:00.000Z | null | false | 78e631ea285b694dd251681beb36808bb6f0c58e | [] | [
"license:other"
] | https://huggingface.co/datasets/Shushant/CovidNepaliTweets/resolve/main/README.md | ---
license: other
---
|
igorknez | null | null | null | false | 7 | false | igorknez/clth_dset | 2022-09-17T18:50:13.000Z | null | false | f0cff768b955f714ee7bb948d66c083937eab6a4 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/igorknez/clth_dset/resolve/main/README.md | ---
license: afl-3.0
---
|
dadtheimpaler | null | null | null | false | 7 | false | dadtheimpaler/test | 2022-09-17T19:10:36.000Z | null | false | 03d627dd1196682431ae80cb27d20f066925d43c | [] | [
"license:cc"
] | https://huggingface.co/datasets/dadtheimpaler/test/resolve/main/README.md | ---
license: cc
---
|
bzh-dataset | null | null | null | false | 8 | false | bzh-dataset/Korpus-frazennou-brezhonek | 2022-09-17T21:26:30.000Z | null | false | 9f7a6cacd22203e821ffdb3470f1575eb71eedc5 | [] | [
"language:fr",
"language:br",
"license:unknown"
] | https://huggingface.co/datasets/bzh-dataset/Korpus-frazennou-brezhonek/resolve/main/README.md | ---
language:
- fr
- br
license: unknown
---
# Korpus-frazennou-brezhonek
A corpus of 4,532 aligned, rights-free bilingual (French-Breton) sentences from the Office Public de la Langue Bretonne.
More information [here](https://www.fr.brezhoneg.bzh/212-donnees-libres-de-droits.htm)
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("bzh-dataset/Korpus-frazennou-brezhonek", sep=";")
```
|
klimbat85 | null | null | null | false | 7 | false | klimbat85/AnthonyEdwards | 2022-09-17T21:36:18.000Z | null | false | 155f133311b4694856b26627cbc61850cee07484 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/klimbat85/AnthonyEdwards/resolve/main/README.md | ---
license: afl-3.0
---
|
lapix | null | null | null | false | 7 | false | lapix/UFSC_OCPap | 2022-09-17T22:08:59.000Z | null | false | 4993f4d62b5c8ccb21a1458b3d1fddbe18c09466 | [] | [
"license:cc-by-nc-3.0"
] | https://huggingface.co/datasets/lapix/UFSC_OCPap/resolve/main/README.md | ---
license: cc-by-nc-3.0
---
|
skytnt | null | null | FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512. | false | 22 | false | skytnt/fbanimehq | 2022-10-23T14:02:23.000Z | null | false | 493d1d86e7977892b60f8eeb901a10fe84fd1fc7 | [] | [
"license:cc0-1.0",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:unconditional-image-generation"
] | https://huggingface.co/datasets/skytnt/fbanimehq/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license:
- cc0-1.0
multilinguality: []
pretty_name: Full Body Anime HQ
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- unconditional-image-generation
task_ids: []
---
## Dataset Description
FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512.
### Dataset Summary
The dataset contains 112,806 images.
All images are on a white background.
### Collection Method
#### v1.0
Collected from the Danbooru website.
Used YOLOv5 to detect and crop images.
Used anime-segmentation to remove backgrounds.
Used DeepDanbooru to filter images.
Finally, cleaned the dataset manually.
#### v2.0
Based on v1.0, used NovelAI image-to-image to enhance and expand the dataset.
### Contributions
Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset. |
taskmasterpeace | null | null | null | false | 7 | false | taskmasterpeace/taskmasterpeace | 2022-09-18T01:44:17.000Z | null | false | 46f8cc73be38aac9b95090801882532336b56a1b | [] | [
"license:other"
] | https://huggingface.co/datasets/taskmasterpeace/taskmasterpeace/resolve/main/README.md | ---
license: other
---
|
taskmasterpeace | null | null | null | false | 2 | false | taskmasterpeace/andrea | 2022-09-18T03:17:11.000Z | null | false | f81b067a153d11f2a7375d1cb74186cae21cf8d5 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/taskmasterpeace/andrea/resolve/main/README.md | ---
license: unknown
---
|
taskmasterpeace | null | null | null | false | 2 | false | taskmasterpeace/andrea1 | 2022-09-18T03:19:04.000Z | null | false | ad4d52140c484e159ff5c9ffc3484aba6e46d933 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/taskmasterpeace/andrea1/resolve/main/README.md | ---
license: apache-2.0
---
|
mediabiasgroup | null | null | null | false | 2 | false | mediabiasgroup/BABE | 2022-09-18T14:20:25.000Z | null | false | 3c2026e55331a5b360d8d8c26169171b046d90ed | [] | [
"license:agpl-3.0"
] | https://huggingface.co/datasets/mediabiasgroup/BABE/resolve/main/README.md | ---
license: agpl-3.0
---
# Please cite as
```
@InProceedings{Spinde2021f,
title = "Neural Media Bias Detection Using Distant Supervision With {BABE} - Bias Annotations By Experts",
author = "Spinde, Timo and
Plank, Manuel and
Krieger, Jan-David and
Ruas, Terry and
Gipp, Bela and
Aizawa, Akiko",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.101",
doi = "10.18653/v1/2021.findings-emnlp.101",
pages = "1166--1177",
}
``` |
premhuggingface | null | null | null | false | 1 | false | premhuggingface/prem | 2022-09-18T08:50:31.000Z | null | false | 6b1af94c41e300f43a41ec578499df68033f6b14 | [] | [] | https://huggingface.co/datasets/premhuggingface/prem/resolve/main/README.md | prem |
firqaaa | null | null | null | false | 8 | false | firqaaa/snli-id | 2022-09-18T09:20:31.000Z | null | false | 74573b05a2bc0afcf4a9c698b982437076f5c7db | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/firqaaa/snli-id/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
emma7033 | null | null | null | false | null | false | emma7033/test | 2022-09-18T08:55:02.000Z | null | false | f058f77c166f37556bf04f99ab1a89ef35007e85 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/emma7033/test/resolve/main/README.md | ---
license: afl-3.0
---
|
acaciaca | null | null | null | false | null | false | acaciaca/VR1 | 2022-09-18T09:57:24.000Z | null | false | 7ecd400426ef7354c6a167e5282b0db424706333 | [] | [] | https://huggingface.co/datasets/acaciaca/VR1/resolve/main/README.md | |
manter | null | null | null | false | 7 | false | manter/autotrain-data-dfd | 2022-09-27T08:41:50.000Z | null | false | e8baee2a0abd5c4b2c33b9284256088dad1b3e67 | [] | [] | https://huggingface.co/datasets/manter/autotrain-data-dfd/resolve/main/README.md | Don't use this. |
Gustavosta | null | null | null | false | 986 | false | Gustavosta/Stable-Diffusion-Prompts | 2022-09-18T22:38:59.000Z | null | false | d816d4a05cb89bde39dd99284c459801e1e7e69a | [] | [
"license:unknown",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"source_datasets:original"
] | https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts/resolve/main/README.md | ---
license:
- unknown
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
source_datasets:
- original
---
# Stable Diffusion Dataset
This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". Extracting the data was somewhat difficult, since the search engine still doesn't have a public API that isn't protected by Cloudflare.
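As a quick way to get started, the prompts can be loaded with `datasets` (a minimal sketch; it assumes the repository exposes a `train` split, and the column layout should be inspected rather than assumed):
```python
from datasets import load_dataset

# Assumes a "train" split; adjust if the repository exposes different splits.
ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts", split="train")

# Print the first few rows to discover the column layout before using it.
for row in ds.select(range(3)):
    print(row)
```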
If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)".
If you want to see the model, go to: "[Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)". |
din0s | null | null | null | false | 2 | false | din0s/ccmatrix_en-ro | 2022-09-19T22:42:56.000Z | null | false | 61a5b55d423a65338145f63a0247e2d1c0552cd0 | [] | [
"language:en",
"language:ro",
"multilinguality:translation",
"size_categories:100K<n<1M",
"task_categories:translation"
] | https://huggingface.co/datasets/din0s/ccmatrix_en-ro/resolve/main/README.md | ---
annotations_creators: []
language:
- en
- ro
language_creators: []
license: []
multilinguality:
- translation
pretty_name: CCMatrix (en-ro)
size_categories:
- 100K<n<1M
source_datasets: []
tags: []
task_categories:
- translation
task_ids: []
---
A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the English-Romanian pair, containing 1M train entries.
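A minimal loading sketch (assuming a `train` split; the exact column layout should be inspected on first use):
```python
from datasets import load_dataset

# Stream to avoid materializing all 1M rows at once; assumes a "train" split.
ds = load_dataset("din0s/ccmatrix_en-ro", streaming=True, split="train")

# Inspect the first example to see how the en/ro pair is exposed.
print(next(iter(ds)))
```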
Please refer to the original for more info. |
BramD | null | null | null | false | null | false | BramD/TextInversionTest | 2022-09-21T15:14:53.000Z | null | false | fe34485c03a7ea0d7228ca28a68a1a8e6f538662 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/BramD/TextInversionTest/resolve/main/README.md | ---
license: unknown
---
|
rajistics | null | null | null | false | 1 | false | rajistics/electricity_demand | 2022-10-19T21:03:02.000Z | null | false | 4a08d21e2e71ce0106721aa1c3bca936049fccf6 | [] | [
"task_categories:time-series-forecasting"
] | https://huggingface.co/datasets/rajistics/electricity_demand/resolve/main/README.md | ---
task_categories:
- time-series-forecasting
---
The Victoria electricity demand dataset from the [MAPIE github repository](https://github.com/scikit-learn-contrib/MAPIE/tree/master/examples/data).
It consists of hourly electricity demand (in GW) for the state of Victoria, Australia, together with the temperature (in degrees Celsius).
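A minimal loading sketch (assuming a `train` split; the column names are not documented here, so inspect them first):
```python
from datasets import load_dataset

# Assumes a "train" split; print the schema before relying on column names.
ds = load_dataset("rajistics/electricity_demand", split="train")
print(ds)     # column names and row count
print(ds[0])  # first hourly observation
```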
|
TheGreatRambler | null | null | null | false | 20 | false | TheGreatRambler/mm2_level | 2022-11-11T08:07:34.000Z | null | false | c53dad48e14e0df066905a4e4bd5893b9e790e49 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text... | https://huggingface.co/datasets/TheGreatRambler/mm2_level/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 levels
tags:
- text-mining
---
# Mario Maker 2 levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 levels dataset consists of 26.6 million levels from Nintendo's online service totaling around 100GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 levels dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
Level data is a binary blob describing the actual level and is equivalent to the level format Nintendo uses in-game. It is gzip compressed and needs to be decompressed to be read. To read it, you only need the provided `level.ksy` Kaitai Struct file and the Kaitai Struct runtime to parse it into an object:
```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))
# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
obj = level.overworld.objects[i]
print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))
#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```
Rendering the level data into an image can be done using [Toost](https://github.com/TheGreatRambler/toost) if desired.
You can also download the full dataset. Note that this will download ~100GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|Data IDs are unique identifiers; gaps in the table are due to levels deleted by users or Nintendo|
|name|string|Course name|
|description|string|Course description|
|uploaded|int|UTC timestamp for when the level was uploaded|
|created|int|Local timestamp for when the level was created|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|difficulty|int|Difficulty, enum below|
|tag1|int|The first tag, if it exists, enum below|
|tag2|int|The second tag, if it exists, enum below|
|game_version|int|The version of the game this level was made on|
|world_record|int|The world record in milliseconds|
|upload_time|int|The upload time in milliseconds|
|upload_attempts|int|The number of attempts it took the uploader to upload|
|num_comments|int|Number of comments, may not reflect the archived comments if there were more than 1000 comments|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|timer|int|The timer of the level|
|autoscroll_speed|int|A unit of how fast the configured autoscroll speed is for the level|
|clears|int|Course clears|
|attempts|int|Course attempts|
|clear_rate|float|Course clear rate as a percentage between 0 and 100|
|plays|int|Course plays, or "footprints"|
|versus_matches|int|Course versus matches|
|coop_matches|int|Course coop matches|
|likes|int|Course likes|
|boos|int|Course boos|
|unique_players_and_versus|int|All unique players that have ever played this level, including the number of versus matches|
|weekly_likes|int|The weekly likes on this course|
|weekly_plays|int|The weekly plays on this course|
|uploader_pid|string|The player ID of the uploader|
|first_completer_pid|string|The player ID of the user who first cleared this course|
|record_holder_pid|string|The player ID of the user who held the world record at time of archival |
|level_data|bytes|The GZIP compressed decrypted level data, kaitai struct file is provided for reading|
|unk2|int|Unknown|
|unk3|bytes|Unknown|
|unk9|int|Unknown|
|unk10|int|Unknown|
|unk11|int|Unknown|
|unk12|int|Unknown|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. The following mappings can be used to convert them back to their string equivalents:
```python
GameStyles = {
0: "SMB1",
1: "SMB3",
2: "SMW",
3: "NSMBU",
4: "SM3DW"
}
Difficulties = {
0: "Easy",
1: "Normal",
2: "Expert",
3: "Super expert"
}
CourseThemes = {
0: "Overworld",
1: "Underground",
2: "Castle",
3: "Airship",
4: "Underwater",
5: "Ghost house",
6: "Snow",
7: "Desert",
8: "Sky",
9: "Forest"
}
TagNames = {
0: "None",
1: "Standard",
2: "Puzzle solving",
3: "Speedrun",
4: "Autoscroll",
5: "Auto mario",
6: "Short and sweet",
7: "Multiplayer versus",
8: "Themed",
9: "Music",
10: "Art",
11: "Technical",
12: "Shooter",
13: "Boss battle",
14: "Single player",
15: "Link"
}
ClearConditions = {
137525990: "Reach the goal without landing after leaving the ground.",
199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
375673178: "Reach the goal without taking damage.",
426197923: "Reach the goal as Boomerang Mario.",
436833616: "Reach the goal while wearing a Shoe.",
713979835: "Reach the goal as Fire Mario.",
744927294: "Reach the goal as Frog Mario.",
751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
900050759: "Reach the goal as Raccoon Mario.",
947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
976173462: "Reach the goal as Propeller Mario.",
994686866: "Reach the goal while wearing a Propeller Box.",
998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
1051433633: "Reach the goal while holding a Koopa Shell.",
1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
1151250770: "Reach the goal while wearing a Goomba Mask.",
1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
1283945123: "Reach the goal on a Lakitu's Cloud.",
1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
1429902736: "Reach the goal while holding a Trampoline.",
1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
1780278293: "Reach the goal as Superball Mario.",
1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
2089161429: "Reach the goal as Big Mario.",
2111528319: "Reach the goal as Cat Mario.",
2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
2549654281: "Reach the goal while wearing a Dry Bones Shell.",
2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
2855236681: "Reach the goal as Flying Squirrel Mario.",
3036298571: "Reach the goal as Buzzy Mario.",
3074433106: "Reach the goal as Builder Mario.",
3146932243: "Reach the goal as Cape Mario.",
3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
3206222275: "Reach the goal while wearing a Cannon Box.",
3314955857: "Reach the goal as Link.",
3342591980: "Reach the goal while you have Super Star invincibility.",
3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
3513732174: "Reach the goal as SMB2 Mario.",
3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
3725246406: "Reach the goal as Spiny Mario.",
3730243509: "Reach the goal in a Koopa Troopa Car.",
3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
3977257962: "Reach the goal as Super Mario.",
4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
4153835197: "Reach the goal as Balloon Mario.",
4172105156: "Reach the goal while wearing a Red POW Box.",
4209535561: "Reach the Goal while riding Yoshi.",
4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of levels from many different Mario Maker 2 players globally and as such their titles and descriptions could contain harmful language. Harmful depictions could also be present in the level data, should you choose to render it.
|
TheGreatRambler | null | null | null | false | 2 | false | TheGreatRambler/mm2_level_comments | 2022-11-11T08:06:48.000Z | null | false | e1ded9a5fb0f1d052d0a7a44ec46f79a4b27903a | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text... | https://huggingface.co/datasets/TheGreatRambler/mm2_level_comments/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 level comments
tags:
- text-mining
---
# Mario Maker 2 level comments
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level comment dataset consists of 31.9 million level comments from Nintendo's online service totaling around 20GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 level comment dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000006,
'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6',
'type': 2,
'pid': '3471680967096518562',
'posted': 1561652887,
'clear_required': 0,
'text': '',
'reaction_image_id': 10,
'custom_image': [some binary data],
'has_beaten': 0,
'x': 557,
'y': 64,
'reaction_face': 0,
'unk8': 0,
'unk10': 0,
'unk12': 0,
'unk14': [some binary data],
'unk17': 0
}
```
Comments can be one of three types: text, reaction image or custom image. `type` can be used with the enum below to identify different kinds of comments. Custom images are binary PNGs.
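As a sketch of working with custom images, the snippet below streams until it finds a comment of type 0 ("Custom Image" per the enum below) and writes its PNG bytes to disk:
```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_comments", streaming=True, split="train")

# Type 0 is "Custom Image" per the enum below; its PNG bytes are in `custom_image`.
for comment in ds:
    if comment["type"] == 0:
        with open("comment_%s.png" % comment["comment_id"], "wb") as f:
            f.write(comment["custom_image"])
        break
```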
You can also download the full dataset. Note that this will download ~20GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_comments", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000006,
'comment_id': '20200430072710528979_302de3722145c7a2_2dc6c6',
'type': 2,
'pid': '3471680967096518562',
'posted': 1561652887,
'clear_required': 0,
'text': '',
'reaction_image_id': 10,
'custom_image': [some binary data],
'has_beaten': 0,
'x': 557,
'y': 64,
'reaction_face': 0,
'unk8': 0,
'unk10': 0,
'unk12': 0,
'unk14': [some binary data],
'unk17': 0
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this comment appears on|
|comment_id|string|Comment ID|
|type|int|Type of comment, enum below|
|pid|string|Player ID of the comment creator|
|posted|int|UTC timestamp of when this comment was created|
|clear_required|bool|Whether this comment requires a clear to view|
|text|string|If the comment type is text, the text of the comment|
|reaction_image_id|int|If this comment is a reaction image, the id of the reaction image, enum below|
|custom_image|bytes|If this comment is a custom drawing, the custom drawing as a PNG binary|
|has_beaten|int|Whether the user had beaten the level when they created the comment|
|x|int|The X position of the comment in game|
|y|int|The Y position of the comment in game|
|reaction_face|int|The reaction face of the mii of this user, enum below|
|unk8|int|Unknown|
|unk10|int|Unknown|
|unk12|int|Unknown|
|unk14|bytes|Unknown|
|unk17|int|Unknown|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. The following mappings can be used to convert them back to their string equivalents:
```python
CommentType = {
0: "Custom Image",
1: "Text",
2: "Reaction Image"
}
CommentReactionImage = {
0: "Nice!",
1: "Good stuff!",
2: "So tough...",
3: "EASY",
4: "Seriously?!",
5: "Wow!",
6: "Cool idea!",
7: "SPEEDRUN!",
8: "How?!",
9: "Be careful!",
10: "So close!",
11: "Beat it!"
}
CommentReactionFace = {
0: "Normal",
16: "Wink",
1: "Happy",
4: "Surprised",
18: "Scared",
3: "Confused"
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of comments from many different Mario Maker 2 players globally and as such their text could contain harmful language. Harmful depictions could also be present in the custom images.
|
TheGreatRambler | null | null | null | false | 3 | false | TheGreatRambler/mm2_level_played | 2022-11-11T08:05:36.000Z | null | false | a2edf6a4a9588b3e81830cac3bd8659e12bdf8a2 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-g... | https://huggingface.co/datasets/TheGreatRambler/mm2_level_played/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 level plays
tags:
- text-mining
---
# Mario Maker 2 level plays
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level plays dataset consists of 1 billion level plays from Nintendo's online service totaling around 20GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 level plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000004,
'pid': '6382913755133534321',
'cleared': 1,
'liked': 0
}
```
Each row is a unique play of the level denoted by `data_id` by the player denoted by `pid`; `pid` is a 64-bit integer stored as a string due to database limitations. `cleared` and `liked` denote whether the player cleared and/or liked the level during their play. Every level has only one unique play per player.
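For example, per-level clear rates can be recomputed by tallying plays while streaming (a sketch over the first 100,000 rows only; a full pass would stream all ~1 billion rows):
```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")

plays, clears = Counter(), Counter()
# Tally plays and clears per level over a small streamed sample.
for play in islice(iter(ds), 100_000):
    plays[play["data_id"]] += 1
    clears[play["data_id"]] += play["cleared"]

data_id, n = plays.most_common(1)[0]
print("Level %d: %d plays, %.1f%% cleared" % (data_id, n, 100 * clears[data_id] / n))
```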
You can also download the full dataset. Note that this will download ~20GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_played", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000004,
'pid': '6382913755133534321',
'cleared': 1,
'liked': 0
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this play occurred in|
|pid|string|Player ID of the player|
|cleared|bool|Whether the player cleared the level during their play|
|liked|bool|Whether the player liked the level during their play|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 2 | false | TheGreatRambler/mm2_level_deaths | 2022-11-11T08:05:52.000Z | null | false | 1f06c2b8cd09144b775cd328ed16b2033275cdc8 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-... | https://huggingface.co/datasets/TheGreatRambler/mm2_level_deaths/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 level deaths
tags:
- text-mining
---
# Mario Maker 2 level deaths
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level deaths dataset consists of 564 million level deaths from Nintendo's online service totaling around 2.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 level deaths dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000382,
'x': 696,
'y': 0,
'is_subworld': 0
}
```
Each row is a unique death in the level denoted by `data_id`, occurring at the provided coordinates. `is_subworld` denotes whether the death happened in the main world or the subworld.
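Because each row is just a coordinate, death "hotspots" reduce to counting coordinates; a sketch over a streamed sample:
```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")

# Tally deaths per (level, world, x, y) over a small streamed sample.
hotspots = Counter(
    (d["data_id"], d["is_subworld"], d["x"], d["y"])
    for d in islice(iter(ds), 100_000)
)
for (data_id, sub, x, y), n in hotspots.most_common(5):
    print("level %d, %s world, (%d, %d): %d deaths"
          % (data_id, "sub" if sub else "main", x, y, n))
```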
You can also download the full dataset. Note that this will download ~2.5GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level_deaths", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000382,
'x': 696,
'y': 0,
'is_subworld': 0
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this death occurred in|
|x|int|X coordinate of death|
|y|int|Y coordinate of death|
|is_subworld|bool|Whether the death happened in the main world or the subworld|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 18 | false | TheGreatRambler/mm2_user | 2022-11-11T08:04:51.000Z | null | false | 0c95c15ed4e4ea278f0fbd57475381eae14eca2b | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-g... | https://huggingface.co/datasets/TheGreatRambler/mm2_user/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 users
tags:
- text-mining
---
# Mario Maker 2 users
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 users dataset consists of 6 million users from Nintendo's online service totaling around 1.2GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 users dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14608829447232141607',
'data_id': 1,
'region': 0,
'name': 'げんまい',
'country': 'JP',
'last_active': 1578384457,
'mii_data': [some binary data],
'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
'pose': 0,
'hat': 0,
'shirt': 0,
'pants': 0,
'wearing_outfit': 0,
'courses_played': 12,
'courses_cleared': 10,
'courses_attempted': 23,
'courses_deaths': 13,
'likes': 0,
'maker_points': 0,
'easy_highscore': 0,
'normal_highscore': 0,
'expert_highscore': 0,
'super_expert_highscore': 0,
'versus_rating': 0,
'versus_rank': 1,
'versus_won': 0,
'versus_lost': 1,
'versus_win_streak': 0,
'versus_lose_streak': 1,
'versus_plays': 1,
'versus_disconnected': 0,
'coop_clears': 1,
'coop_plays': 1,
'recent_performance': 1383,
'versus_kills': 0,
'versus_killed_by_others': 0,
'multiplayer_unk13': 286,
'multiplayer_unk14': 5999927,
'first_clears': 0,
'world_records': 0,
'unique_super_world_clears': 0,
'uploaded_levels': 0,
'maximum_uploaded_levels': 100,
'weekly_maker_points': 0,
'last_uploaded_level': 1561555201,
'is_nintendo_employee': 0,
'comments_enabled': 1,
'tags_enabled': 0,
'super_world_id': '',
'unk3': 0,
'unk12': 0,
'unk16': 0
}
```
Each row is a unique user denoted by the `pid`. `data_id` is not used internally by Nintendo but, like levels, it counts up sequentially and can be used to determine account age. `mii_data` is a `charinfo` type Switch Mii. `mii_image` can be used with Nintendo's online studio API to generate images:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
mii_image = next(iter(ds))["mii_image"]
print("Face: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=1" % mii_image)
print("Body: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=1" % mii_image)
print("Face (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=16" % mii_image)
print("Body (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=16" % mii_image)
```
`pose`, `hat`, `shirt` and `pants` have associated enums described below. `last_active` and `last_uploaded_level` are UTC timestamps. `super_world_id`, if not empty, provides the ID of a super world in `TheGreatRambler/mm2_world`.
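For instance, the timestamps and enum fields can be made human-readable like so (a sketch; the `Regions` mapping is reproduced from the Enums section below):
```python
from datetime import datetime, timezone

from datasets import load_dataset

# Reproduced from the Enums section below.
Regions = {0: "Asia", 1: "Americas", 2: "Europe", 3: "Other"}

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
user = next(iter(ds))

print(user["name"], Regions[user["region"]])
print("Last active:", datetime.fromtimestamp(user["last_active"], tz=timezone.utc))
```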
You can also download the full dataset. Note that this will download ~1.2GB:
```python
ds = load_dataset("TheGreatRambler/mm2_user", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '14608829447232141607',
'data_id': 1,
'region': 0,
'name': 'げんまい',
'country': 'JP',
'last_active': 1578384457,
'mii_data': [some binary data],
'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
'pose': 0,
'hat': 0,
'shirt': 0,
'pants': 0,
'wearing_outfit': 0,
'courses_played': 12,
'courses_cleared': 10,
'courses_attempted': 23,
'courses_deaths': 13,
'likes': 0,
'maker_points': 0,
'easy_highscore': 0,
'normal_highscore': 0,
'expert_highscore': 0,
'super_expert_highscore': 0,
'versus_rating': 0,
'versus_rank': 1,
'versus_won': 0,
'versus_lost': 1,
'versus_win_streak': 0,
'versus_lose_streak': 1,
'versus_plays': 1,
'versus_disconnected': 0,
'coop_clears': 1,
'coop_plays': 1,
'recent_performance': 1383,
'versus_kills': 0,
'versus_killed_by_others': 0,
'multiplayer_unk13': 286,
'multiplayer_unk14': 5999927,
'first_clears': 0,
'world_records': 0,
'unique_super_world_clears': 0,
'uploaded_levels': 0,
'maximum_uploaded_levels': 100,
'weekly_maker_points': 0,
'last_uploaded_level': 1561555201,
'is_nintendo_employee': 0,
'comments_enabled': 1,
'tags_enabled': 0,
'super_world_id': '',
'unk3': 0,
'unk12': 0,
'unk16': 0
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of this user; while not used internally, user codes are generated from it|
|region|int|User region, enum below|
|name|string|User name|
|country|string|User country as a 2 letter ALPHA-2 code|
|last_active|int|UTC timestamp of when this user was last active, not known what constitutes active|
|mii_data|bytes|The CHARINFO blob of this user's Mii|
|mii_image|string|A string that can be fed into Nintendo's studio API to generate an image|
|pose|int|Pose, enum below|
|hat|int|Hat, enum below|
|shirt|int|Shirt, enum below|
|pants|int|Pants, enum below|
|wearing_outfit|bool|Whether this user's shirt is a full-body outfit (see the `UserIsOutfit` enum below)|
|courses_played|int|How many courses this user has played|
|courses_cleared|int|How many courses this user has cleared|
|courses_attempted|int|How many courses this user has attempted|
|courses_deaths|int|How many times this user has died|
|likes|int|How many likes this user has received|
|maker_points|int|Maker points|
|easy_highscore|int|Easy highscore|
|normal_highscore|int|Normal highscore|
|expert_highscore|int|Expert highscore|
|super_expert_highscore|int|Super expert high score|
|versus_rating|int|Versus rating|
|versus_rank|int|Versus rank, enum below|
|versus_won|int|How many courses this user has won in versus|
|versus_lost|int|How many courses this user has lost in versus|
|versus_win_streak|int|Versus win streak|
|versus_lose_streak|int|Versus lose streak|
|versus_plays|int|Versus plays|
|versus_disconnected|int|Times user has disconnected in versus|
|coop_clears|int|Coop clears|
|coop_plays|int|Coop plays|
|recent_performance|int|Unknown variable relating to versus performance|
|versus_kills|int|Kills in versus, unknown what activities constitute a kill|
|versus_killed_by_others|int|Deaths in versus from other users, little is known about what activities constitute a death|
|multiplayer_unk13|int|Unknown, relating to multiplayer|
|multiplayer_unk14|int|Unknown, relating to multiplayer|
|first_clears|int|First clears|
|world_records|int|World records|
|unique_super_world_clears|int|Super world clears|
|uploaded_levels|int|Number of uploaded levels|
|maximum_uploaded_levels|int|Maximum number of levels this user may upload|
|weekly_maker_points|int|Weekly maker points|
|last_uploaded_level|int|UTC timestamp of when this user last uploaded a level|
|is_nintendo_employee|bool|Whether this user is an official Nintendo account|
|comments_enabled|bool|Whether this user has comments enabled on their levels|
|tags_enabled|bool|Whether this user has tags enabled on their levels|
|super_world_id|string|The ID of this user's super world, blank if they do not have one|
|unk3|int|Unknown|
|unk12|int|Unknown|
|unk16|int|Unknown|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. The following mappings can be used to convert them back to their string equivalents:
```python
Regions = {
0: "Asia",
1: "Americas",
2: "Europe",
3: "Other"
}
MultiplayerVersusRanks = {
1: "D",
2: "C",
3: "B",
4: "A",
5: "S",
6: "S+"
}
UserPose = {
0: "Normal",
15: "Fidgety",
17: "Annoyed",
18: "Buoyant",
19: "Thrilled",
20: "Let's go!",
21: "Hello!",
29: "Show-Off",
31: "Cutesy",
39: "Hyped!"
}
UserHat = {
0: "None",
1: "Mario Cap",
2: "Luigi Cap",
4: "Mushroom Hairclip",
5: "Bowser Headpiece",
8: "Princess Peach Wig",
11: "Builder Hard Hat",
12: "Bowser Jr. Headpiece",
13: "Pipe Hat",
15: "Cat Mario Headgear",
16: "Propeller Mario Helmet",
17: "Cheep Cheep Hat",
18: "Yoshi Hat",
21: "Faceplant",
22: "Toad Cap",
23: "Shy Cap",
24: "Magikoopa Hat",
25: "Fancy Top Hat",
26: "Doctor Headgear",
27: "Rocky Wrench Manhold Lid",
28: "Super Star Barrette",
29: "Rosalina Wig",
30: "Fried-Chicken Headgear",
31: "Royal Crown",
32: "Edamame Barrette",
33: "Superball Mario Hat",
34: "Robot Cap",
35: "Frog Cap",
36: "Cheetah Headgear",
37: "Ninji Cap",
38: "Super Acorn Hat",
39: "Pokey Hat",
40: "Snow Pokey Hat"
}
UserShirt = {
0: "Nintendo Shirt",
1: "Mario Outfit",
2: "Luigi Outfit",
3: "Super Mushroom Shirt",
5: "Blockstripe Shirt",
8: "Bowser Suit",
12: "Builder Mario Outfit",
13: "Princess Peach Dress",
16: "Nintendo Uniform",
17: "Fireworks Shirt",
19: "Refreshing Shirt",
21: "Reset Dress",
22: "Thwomp Suit",
23: "Slobbery Shirt",
26: "Cat Suit",
27: "Propeller Mario Clothes",
28: "Banzai Bill Shirt",
29: "Staredown Shirt",
31: "Yoshi Suit",
33: "Midnight Dress",
34: "Magikoopa Robes",
35: "Doctor Coat",
37: "Chomp-Dog Shirt",
38: "Fish Bone Shirt",
40: "Toad Outfit",
41: "Googoo Onesie",
42: "Matrimony Dress",
43: "Fancy Tuxedo",
44: "Koopa Troopa Suit",
45: "Laughing Shirt",
46: "Running Shirt",
47: "Rosalina Dress",
49: "Angry Sun Shirt",
50: "Fried-Chicken Hoodie",
51: "? Block Hoodie",
52: "Edamame Camisole",
53: "I-Like-You Camisole",
54: "White Tanktop",
55: "Hot Hot Shirt",
56: "Royal Attire",
57: "Superball Mario Suit",
59: "Partrick Shirt",
60: "Robot Suit",
61: "Superb Suit",
62: "Yamamura Shirt",
63: "Princess Peach Tennis Outfit",
64: "1-Up Hoodie",
65: "Cheetah Tanktop",
66: "Cheetah Suit",
67: "Ninji Shirt",
68: "Ninji Garb",
69: "Dash Block Hoodie",
70: "Fire Mario Shirt",
71: "Raccoon Mario Shirt",
72: "Cape Mario Shirt",
73: "Flying Squirrel Mario Shirt",
74: "Cat Mario Shirt",
75: "World Wear",
76: "Koopaling Hawaiian Shirt",
77: "Frog Mario Raincoat",
78: "Phanto Hoodie"
}
UserPants = {
0: "Black Short-Shorts",
1: "Denim Jeans",
5: "Denim Skirt",
8: "Pipe Skirt",
9: "Skull Skirt",
10: "Burner Skirt",
11: "Cloudwalker",
12: "Platform Skirt",
13: "Parent-and-Child Skirt",
17: "Mario Swim Trunks",
22: "Wind-Up Shoe",
23: "Hoverclown",
24: "Big-Spender Shorts",
25: "Shorts of Doom!",
26: "Doorduroys",
27: "Antsy Corduroys",
28: "Bouncy Skirt",
29: "Stingby Skirt",
31: "Super Star Flares",
32: "Cheetah Runners",
33: "Ninji Slacks"
}
# Checked against user's shirt
UserIsOutfit = {
0: False,
1: True,
2: True,
3: False,
5: False,
8: True,
12: True,
13: True,
16: False,
17: False,
19: False,
21: True,
22: True,
23: False,
26: True,
27: True,
28: False,
29: False,
31: True,
33: True,
34: True,
35: True,
37: False,
38: False,
40: True,
41: True,
42: True,
43: True,
44: True,
45: False,
46: False,
47: True,
49: False,
50: False,
51: False,
52: False,
53: False,
54: False,
55: False,
56: True,
57: True,
59: False,
60: True,
61: True,
62: False,
63: True,
64: False,
65: False,
66: True,
67: False,
68: True,
69: False,
70: False,
71: False,
72: False,
73: False,
74: False,
75: True,
76: False,
77: True,
78: False
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of many different Mario Maker 2 players globally and as such their names could contain harmful language. Harmful depictions could also be present in their Miis, should you choose to render it.
|
TheGreatRambler | null | null | null | false | 1 | false | TheGreatRambler/mm2_user_badges | 2022-11-11T08:05:05.000Z | null | false | 75d9ee5258f795a705fdbfe9fa51e6956df0b71f | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:1k<10K",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-gen... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_badges/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user badges
tags:
- text-mining
---
# Mario Maker 2 user badges
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user badges dataset consists of 9328 user badges (they are capped to 10k globally) from Nintendo's online service and adds onto `TheGreatRambler/mm2_user`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '1779763691699286988',
'type': 4,
'rank': 6
}
```
Each row is a badge awarded to the player denoted by `pid`. `TheGreatRambler/mm2_user` contains these players.
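A sketch that prints badges in human-readable form, using the enums listed below:
```python
from datasets import load_dataset

# Reproduced from the Enums section below.
BadgeTypes = {
    0: "Maker Points (All-Time)", 1: "Endless Challenge (Easy)",
    2: "Endless Challenge (Normal)", 3: "Endless Challenge (Expert)",
    4: "Endless Challenge (Super Expert)", 5: "Multiplayer Versus",
    6: "Number of Clears", 7: "Number of First Clears",
    8: "Number of World Records", 9: "Maker Points (Weekly)",
}
BadgeRanks = {
    6: "Bronze", 5: "Silver", 4: "Gold",
    3: "Bronze Ribbon", 2: "Silver Ribbon", 1: "Gold Ribbon",
}

ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")
for badge in ds.select(range(5)):
    print(badge["pid"], BadgeRanks[badge["rank"]], BadgeTypes[badge["type"]])
```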
## Data Structure
### Data Instances
```python
{
'pid': '1779763691699286988',
'type': 4,
'rank': 6
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|Player ID|
|type|int|The kind of badge, enum below|
|rank|int|The rank of badge, enum below|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. The following mappings can be used to convert them back to their string equivalents:
```python
BadgeTypes = {
0: "Maker Points (All-Time)",
1: "Endless Challenge (Easy)",
2: "Endless Challenge (Normal)",
3: "Endless Challenge (Expert)",
4: "Endless Challenge (Super Expert)",
5: "Multiplayer Versus",
6: "Number of Clears",
7: "Number of First Clears",
8: "Number of World Records",
9: "Maker Points (Weekly)"
}
BadgeRanks = {
6: "Bronze",
5: "Silver",
4: "Gold",
3: "Bronze Ribbon",
2: "Silver Ribbon",
1: "Gold Ribbon"
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 3 | false | TheGreatRambler/mm2_user_played | 2022-11-11T08:04:07.000Z | null | false | 44cde6a1c6338d7706bdabd2bbc42182073b9414 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_played/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user plays
tags:
- text-mining
---
# Mario Maker 2 user plays
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user plays dataset consists of 329.8 million user plays from Nintendo's online service totaling around 2GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '4920036968545706712',
'data_id': 25548552
}
```
Each row is a unique play of the level denoted by `data_id` by the player denoted by `pid`.
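For example, the most active players in a streamed sample can be found by counting rows per `pid` (a sketch; the full data is ~330M rows):
```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")

# Count plays per player over a small streamed sample.
plays_per_user = Counter(row["pid"] for row in islice(iter(ds), 100_000))
for pid, n in plays_per_user.most_common(5):
    print(pid, n)
```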
You can also download the full dataset. Note that this will download ~2GB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_played", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '4920036968545706712',
'data_id': 25548552
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user played|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 17 | false | TheGreatRambler/mm2_user_liked | 2022-11-11T08:04:21.000Z | null | false | a953a5eeb81d18f6b8dd6c525934797fd2b43248 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_liked/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user likes
tags:
- text-mining
---
# Mario Maker 2 user likes
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user likes dataset consists of 105.5 million user likes from Nintendo's online service totaling around 630MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_liked", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14510618610706594411',
'data_id': 25861713
}
```
Each row is a unique like on the level denoted by the `data_id`, given by the player denoted by the `pid`.
You can also download the full dataset. Note that this will download ~630MB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_liked", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '14510618610706594411',
'data_id': 25861713
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user liked|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 3 | false | TheGreatRambler/mm2_user_posted | 2022-11-11T08:03:53.000Z | null | false | 35e87e12b511552496fa9ccecd601629fa7f2a1c | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_posted/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user uploaded
tags:
- text-mining
---
# Mario Maker 2 user uploaded
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '10491033288855085861',
'data_id': 27359486
}
```
Each row is a unique level, denoted by the `data_id`, uploaded by the player denoted by the `pid`.
You can also download the full dataset. Note that this will download ~215MB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_posted", split="train")
```
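Similar sample aggregates work here too; a minimal sketch (an illustrative analysis, not part of the original card) of finding the most prolific uploaders in a sample:
```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")

# Count uploaded levels per creator over a 100,000-row sample
uploads_per_creator = Counter(row["pid"] for row in islice(ds, 100_000))
print(uploads_per_creator.most_common(5))
```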
## Data Structure
### Data Instances
```python
{
'pid': '10491033288855085861',
'data_id': 27359486
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user uploaded|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 1 | false | TheGreatRambler/mm2_user_first_cleared | 2022-11-11T08:04:34.000Z | null | false | 15ec37e8e8d6f4806c2fe5947defa8d3e9b41250 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_first_cleared/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user first clears
tags:
- text-mining
---
# Mario Maker 2 user first clears
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user first clears dataset consists of 17.8 million first clears from Nintendo's online service totaling around 157MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14510618610706594411',
'data_id': 25199891
}
```
Each row is a unique first clear of the level denoted by the `data_id`, achieved by the player denoted by the `pid`.
You can also download the full dataset. Note that this will download ~157MB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '14510618610706594411',
'data_id': 25199891
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user first cleared|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 1 | false | TheGreatRambler/mm2_user_world_record | 2022-11-11T08:03:39.000Z | null | false | f653680f7713e6f89eea9fc82bd96cbd498010cc | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text... | https://huggingface.co/datasets/TheGreatRambler/mm2_user_world_record/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user world records
tags:
- text-mining
---
# Mario Maker 2 user world records
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user world records dataset consists of 15.3 million world records from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_world_record", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14510618610706594411',
'data_id': 24866513
}
```
Each row is a unique world record on the level denoted by the `data_id`, set by the player denoted by the `pid`.
You can also download the full dataset. Note that this will download ~215MB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_world_record", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '14510618610706594411',
'data_id': 24866513
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user got a world record on|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 3 | false | TheGreatRambler/mm2_world | 2022-11-11T08:08:15.000Z | null | false | 8640ff2491a3298963d72a0f15d28af1919b8b19 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-... | https://huggingface.co/datasets/TheGreatRambler/mm2_world/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 super worlds
tags:
- text-mining
---
# Mario Maker 2 super worlds
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 super worlds dataset consists of 289 thousand super worlds from Nintendo's online service totaling around 13.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14510618610706594411',
'world_id': 'c96012bef256ba6b_20200513204805563301',
'worlds': 1,
'levels': 5,
'planet_type': 0,
'created': 1589420886,
'unk1': [some binary data],
'unk5': 3,
'unk6': 1,
'unk7': 1,
'thumbnail': [some binary data]
}
```
Each row is a unique super world, denoted by the `world_id`, created by the player denoted by the `pid`. Thumbnails are binary JPEGs (see the field table below). `unk1` describes the super world itself, including the world map, but its format is unknown as of now.
You can also download the full dataset. Note that this will download ~13.5GB:
```python
ds = load_dataset("TheGreatRambler/mm2_world", split="train")
```
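The thumbnails can be decoded straight from the raw bytes; a minimal sketch, assuming Pillow is installed:
```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train")
row = next(iter(ds))

# Decode the raw image bytes; Image.open detects the format itself
image = Image.open(BytesIO(row["thumbnail"]))
print(image.format, image.size)

# Or write the raw bytes straight to disk
with open("world_thumbnail.jpg", "wb") as f:
    f.write(row["thumbnail"])
```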
## Data Structure
### Data Instances
```python
{
'pid': '14510618610706594411',
'world_id': 'c96012bef256ba6b_20200513204805563301',
'worlds': 1,
'levels': 5,
'planet_type': 0,
'created': 1589420886,
'unk1': [some binary data],
'unk5': 3,
'unk6': 1,
'unk7': 1,
'thumbnail': [some binary data]
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of the user who created this super world|
|world_id|string|World ID|
|worlds|int|Number of worlds|
|levels|int|Number of levels|
|planet_type|int|Planet type, enum below|
|created|int|UTC timestamp of when this super world was created|
|unk1|bytes|Unknown|
|unk5|int|Unknown|
|unk6|int|Unknown|
|unk7|int|Unknown|
|thumbnail|bytes|The thumbnail, as a JPEG binary|
|thumbnail_url|string|The old URL of this thumbnail|
|thumbnail_size|int|The filesize of this thumbnail|
|thumbnail_filename|string|The filename of this thumbnail|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. The following mapping can be used to convert them back to their string equivalents:
```python
SuperWorldPlanetType = {
0: "Earth",
1: "Moon",
2: "Sand",
3: "Green",
4: "Ice",
5: "Ringed",
6: "Red",
7: "Spiral"
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of super worlds from many different Mario Maker 2 players globally and as such harmful depictions could be present in their super world thumbnails.
|
TheGreatRambler | null | null | null | false | 10 | false | TheGreatRambler/mm2_world_levels | 2022-11-11T08:03:22.000Z | null | false | acd1e2f4c3e10eeb4315d04d44371cf531e31bcf | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-g... | https://huggingface.co/datasets/TheGreatRambler/mm2_world_levels/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 super world levels
tags:
- text-mining
---
# Mario Maker 2 super world levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 super world levels dataset consists of 3.3 million super world levels from Nintendo's online service and supplements `TheGreatRambler/mm2_world`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '14510618610706594411',
'data_id': 19170881,
'ninjis': 23
}
```
Each row is a level, denoted by `data_id`, within a super world owned by player `pid`. Each level records a number of ninjis (`ninjis`), a rough metric of its popularity.
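For example, a minimal sketch (an illustrative aggregate, not part of the original card) of the average ninji count per level:
```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train")

# Average the ninji popularity metric over all super world levels
print(sum(ds["ninjis"]) / len(ds))
```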
## Data Structure
### Data Instances
```python
{
'pid': '14510618610706594411',
'data_id': 19170881,
'ninjis': 23
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of the user who created the super world with this level|
|data_id|int|The data ID of the level|
|ninjis|int|Number of ninjis shown on this level|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 14 | false | TheGreatRambler/mm2_ninji | 2022-11-11T08:05:22.000Z | null | false | 14d9b109a50274f2a278c22c01af335da683965a | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-g... | https://huggingface.co/datasets/TheGreatRambler/mm2_ninji/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 ninjis
tags:
- text-mining
---
# Mario Maker 2 ninjis
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 ninjis dataset consists of 3 million ninji replays from Nintendo's online service totaling around 12.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 ninjis dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 12171034,
'pid': '4748613890518923485',
'time': 83388,
'replay': [some binary data]
}
```
Each row is a ninji run in the level denoted by the `data_id`, performed by the player denoted by the `pid`. The length of this ninji run is `time` in milliseconds.
`replay` is a gzip-compressed binary file describing the animation frames and coordinates of the player throughout the run. It can be parsed as follows:
```python
from datasets import load_dataset
import zlib
import struct
ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
row = next(iter(ds))
replay = zlib.decompress(row["replay"])
frames = struct.unpack(">I", replay[0x10:0x14])[0]
character = replay[0x14]
character_mapping = {
0: "Mario",
1: "Luigi",
2: "Toad",
3: "Toadette"
}
# player_state is between 0 and 14 and varies between gamestyles
# as outlined below. Determining the gamestyle of a particular run
# and rendering the level being played requires TheGreatRambler/mm2_ninji_level
player_state_base = {
0: "Run/Walk",
1: "Jump",
2: "Swim",
3: "Climbing",
5: "Sliding",
7: "Dry bones shell",
8: "Clown car",
9: "Cloud",
10: "Boot",
11: "Walking cat"
}
player_state_nsmbu = {
4: "Sliding",
6: "Turnaround",
10: "Yoshi",
12: "Acorn suit",
13: "Propeller active",
14: "Propeller neutral"
}
player_state_sm3dw = {
4: "Sliding",
6: "Turnaround",
7: "Clear pipe",
8: "Cat down attack",
13: "Propeller active",
14: "Propeller neutral"
}
player_state_smb1 = {
4: "Link down slash",
5: "Crouching"
}
player_state_smw = {
10: "Yoshi",
12: "Cape"
}
print("Frames: %d\nCharacter: %s" % (frames, character_mapping[character]))
current_offset = 0x3C
# Ninji updates are reported every 4 frames
for i in range((frames + 2) // 4):
flags = replay[current_offset] >> 4
player_state = replay[current_offset] & 0x0F
current_offset += 1
x = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
current_offset += 2
y = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
current_offset += 2
if flags & 0b00000110:
unk1 = replay[current_offset]
current_offset += 1
in_subworld = flags & 0b00001000
print("Frame %d:\n Flags: %s,\n Animation state: %d,\n X: %d,\n Y: %d,\n In subworld: %s"
% (i, bin(flags), player_state, x, y, in_subworld))
#OUTPUT:
Frames: 5006
Character: Mario
Frame 0:
Flags: 0b0,
Animation state: 0,
X: 2672,
Y: 2288,
In subworld: 0
Frame 1:
Flags: 0b0,
Animation state: 0,
X: 2682,
Y: 2288,
In subworld: 0
Frame 2:
Flags: 0b0,
Animation state: 0,
X: 2716,
Y: 2288,
In subworld: 0
...
Frame 1249:
Flags: 0b0,
Animation state: 1,
X: 59095,
Y: 3749,
In subworld: 0
Frame 1250:
Flags: 0b0,
Animation state: 1,
X: 59246,
Y: 3797,
In subworld: 0
Frame 1251:
Flags: 0b0,
Animation state: 1,
X: 59402,
Y: 3769,
In subworld: 0
```
You can also download the full dataset. Note that this will download ~12.5GB:
```python
ds = load_dataset("TheGreatRambler/mm2_ninji", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 12171034,
'pid': '4748613890518923485',
'time': 83388,
'replay': [some binary data]
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this run occurred in|
|pid|string|Player ID of the player|
|time|int|Length in milliseconds of the run|
|replay|bytes|Replay file of this run|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
TheGreatRambler | null | null | null | false | 1 | false | TheGreatRambler/mm2_ninji_level | 2022-11-11T08:08:00.000Z | null | false | b5f8a698461f84a65ae06ce54705913b6e0928b8 | [] | [
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-gener... | https://huggingface.co/datasets/TheGreatRambler/mm2_ninji_level/resolve/main/README.md | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 ninji levels
tags:
- text-mining
---
# Mario Maker 2 ninji levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 ninji levels dataset consists of 21 ninji levels from Nintendo's online service and accompanies `TheGreatRambler/mm2_ninji`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 12171034,
'name': 'Rolling Snowballs',
'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. Play this course as many times as you want,\nand see if you can find the fastest way to the finish!',
'uploaded': 1575532800,
'ended': 1576137600,
'gamestyle': 3,
'theme': 6,
'medal_time': 26800,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'unk3_0': 1309513,
'unk3_1': 62629737,
'unk3_2': 4355893,
'unk5': 1,
'unk6': 0,
'unk9': 0,
'level_data': [some binary data]
}
```
Each row is a ninji level denoted by `data_id`. `TheGreatRambler/mm2_ninji` refers to these levels. `level_data` uses the same format as `TheGreatRambler/mm2_level`; the provided Kaitai struct file and `level.py` can be used to decode it:
```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib
ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))
# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
obj = level.overworld.objects[i]
print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))
#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```
## Data Structure
### Data Instances
```python
{
'data_id': 12171034,
'name': 'Rolling Snowballs',
'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. Play this course as many times as you want,\nand see if you can find the fastest way to the finish!',
'uploaded': 1575532800,
'ended': 1576137600,
'gamestyle': 3,
'theme': 6,
'medal_time': 26800,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'unk3_0': 1309513,
'unk3_1': 62629737,
'unk3_2': 4355893,
'unk5': 1,
'unk6': 0,
'unk9': 0,
'level_data': [some binary data]
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of this ninji level|
|name|string|Name|
|description|string|Description|
|uploaded|int|UTC timestamp of when this was uploaded|
|ended|int|UTC timestamp of when this event ended|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|medal_time|int|Time to get a medal in milliseconds|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|unk3_0|int|Unknown|
|unk3_1|int|Unknown|
|unk3_2|int|Unknown|
|unk5|int|Unknown|
|unk6|int|Unknown|
|unk9|int|Unknown|
|level_data|bytes|The GZIP compressed decrypted level data, a kaitai struct file is provided to read this|
|one_screen_thumbnail|bytes|The one screen course thumbnail, as a JPEG binary|
|one_screen_thumbnail_url|string|The old URL of this thumbnail|
|one_screen_thumbnail_size|int|The filesize of this thumbnail|
|one_screen_thumbnail_filename|string|The filename of this thumbnail|
|entire_thumbnail|bytes|The entire course thumbnail, as a JPEG binary|
|entire_thumbnail_url|string|The old URL of this thumbnail|
|entire_thumbnail_size|int|The filesize of this thumbnail|
|entire_thumbnail_filename|string|The filename of this thumbnail|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. They match those used by `TheGreatRambler/mm2_level` for the most part, but they are reproduced below:
```python
GameStyles = {
0: "SMB1",
1: "SMB3",
2: "SMW",
3: "NSMBU",
4: "SM3DW"
}
CourseThemes = {
0: "Overworld",
1: "Underground",
2: "Castle",
3: "Airship",
4: "Underwater",
5: "Ghost house",
6: "Snow",
7: "Desert",
8: "Sky",
9: "Forest"
}
ClearConditions = {
137525990: "Reach the goal without landing after leaving the ground.",
199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
375673178: "Reach the goal without taking damage.",
426197923: "Reach the goal as Boomerang Mario.",
436833616: "Reach the goal while wearing a Shoe.",
713979835: "Reach the goal as Fire Mario.",
744927294: "Reach the goal as Frog Mario.",
751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
900050759: "Reach the goal as Raccoon Mario.",
947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
976173462: "Reach the goal as Propeller Mario.",
994686866: "Reach the goal while wearing a Propeller Box.",
998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
1051433633: "Reach the goal while holding a Koopa Shell.",
1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
1151250770: "Reach the goal while wearing a Goomba Mask.",
1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
1283945123: "Reach the goal on a Lakitu's Cloud.",
1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
1429902736: "Reach the goal while holding a Trampoline.",
1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
1780278293: "Reach the goal as Superball Mario.",
1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
2089161429: "Reach the goal as Big Mario.",
2111528319: "Reach the goal as Cat Mario.",
2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
2549654281: "Reach the goal while wearing a Dry Bones Shell.",
2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
2855236681: "Reach the goal as Flying Squirrel Mario.",
3036298571: "Reach the goal as Buzzy Mario.",
3074433106: "Reach the goal as Builder Mario.",
3146932243: "Reach the goal as Cape Mario.",
3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
3206222275: "Reach the goal while wearing a Cannon Box.",
3314955857: "Reach the goal as Link.",
3342591980: "Reach the goal while you have Super Star invincibility.",
3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
3513732174: "Reach the goal as SMB2 Mario.",
3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
3725246406: "Reach the goal as Spiny Mario.",
3730243509: "Reach the goal in a Koopa Troopa Car.",
3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
3977257962: "Reach the goal as Super Mario.",
4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
4153835197: "Reach the goal as Balloon Mario.",
4172105156: "Reach the goal while wearing a Red POW Box.",
4209535561: "Reach the Goal while riding Yoshi.",
4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```
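For example, using the mappings above on the data instance shown earlier (a minimal sketch):
```python
# Values taken from the 'Rolling Snowballs' data instance above
row = {'gamestyle': 3, 'theme': 6}
print("Gamestyle: %s, Theme: %s" % (GameStyles[row['gamestyle']], CourseThemes[row['theme']]))
#OUTPUT:
# Gamestyle: NSMBU, Theme: Snow
```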
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
As these 21 levels were made and vetted by Nintendo the dataset contains no harmful language or depictions.
|
yerevann | null | null | null | false | 1 | false | yerevann/coco-karpathy | 2022-10-31T11:24:01.000Z | null | false | 448fdb1bc7b2d09e46881c4541a14d796a3d41e8 | [] | [
"language:en",
"task_categories:image-to-text",
"task_ids:image-captioning",
"tags:coco",
"tags:image-captioning"
] | https://huggingface.co/datasets/yerevann/coco-karpathy/resolve/main/README.md | ---
language:
- en
task_categories:
- image-to-text
task_ids:
- image-captioning
pretty_name: COCO Karpathy split
tags:
- coco
- image-captioning
---
# Dataset Card for "yerevann/coco-karpathy"
The Karpathy split of COCO for image captioning.
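A minimal loading sketch (streaming and the `train` split are assumptions; this card does not document usage):
```python
from datasets import load_dataset

# Split name is an assumption; inspect the repository for the actual configuration
ds = load_dataset("yerevann/coco-karpathy", streaming=True, split="train")
print(next(iter(ds)))
```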
|
J236 | null | null | null | false | 1 | false | J236/testing | 2022-09-18T23:11:04.000Z | null | false | 98f01722de4b3d391834c5c3afd256598728e170 | [] | [
"license:agpl-3.0"
] | https://huggingface.co/datasets/J236/testing/resolve/main/README.md | ---
license: agpl-3.0
---
|
bdotloh | null | null | null | false | 22 | false | bdotloh/empathetic-dialogues-contexts | 2022-09-21T06:12:44.000Z | null | false | 8447c236d6c6bf4986eb3e4330a41d258b727362 | [] | [
"annotations_creators:crowdsourced",
"language:en",
"multilinguality:monolingual",
"task_categories:text-classification"
] | https://huggingface.co/datasets/bdotloh/empathetic-dialogues-contexts/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
---
# Dataset Description
This is a dataset of emotional contexts retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event associated with a particular emotion label (i.e., p(event|emotion)).
There are 32 emotion labels in total.
There are 19,209, 2,756, and 2,542 instances of emotional descriptions in the train, validation, and test sets, respectively.
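A minimal loading sketch (column names are not documented on this card and would need inspection):
```python
from datasets import load_dataset

# The description above reports train/validation/test splits
ds = load_dataset("bdotloh/empathetic-dialogues-contexts")
print(ds)
```
|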
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-xsum-default-ca7304-1504954794 | 2022-09-19T08:01:07.000Z | null | false | 7d5077a33a8336d2f53095765e22cf9987443996 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-xsum-default-ca7304-1504954794/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: morenolq/bart-base-xsum
metrics: ['bertscore']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: morenolq/bart-base-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
hkgkjg111 | null | null | null | false | 1 | false | hkgkjg111/ai_paint_2 | 2022-09-19T09:53:25.000Z | null | false | 8818654486d5eed521811ebebbb84cdce5ce3bb1 | [] | [] | https://huggingface.co/datasets/hkgkjg111/ai_paint_2/resolve/main/README.md | |
j0hngou | null | null | null | false | 1 | false | j0hngou/ccmatrix_de-en | 2022-09-26T16:35:03.000Z | null | false | 95b112abeaf5782f4326d869e1081816556a5d16 | [] | [
"language:en",
"language:de"
] | https://huggingface.co/datasets/j0hngou/ccmatrix_de-en/resolve/main/README.md | ---
language:
- en
- de
---
A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the German-English pair, containing 1M train entries.
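A minimal loading sketch (streaming and the `train` split are assumptions; this card does not document usage):
```python
from datasets import load_dataset

# Split/config names are assumptions; check the repository files for specifics
ds = load_dataset("j0hngou/ccmatrix_de-en", streaming=True, split="train")
print(next(iter(ds)))
```
|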
biglam | null | @dataset{langlais_pierre_carl_2021_4751204,
author = {Langlais, Pierre-Carl},
title = {{Fictions littéraires de Gallica / Literary
fictions of Gallica}},
month = apr,
year = 2021,
publisher = {Zenodo},
version = 1,
doi = {10.5281/zenodo.4751204},
url = {https://doi.org/10.5281/zenodo.4751204}
} | The collection "Fiction littéraire de Gallica" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…)
This corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes.
The extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of "novel" that is generally contemporary with the publication.
A French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. It also gives several examples of possible uses for computational humanities projects. | false | 2 | false | biglam/gallica_literary_fictions | 2022-09-19T13:58:06.000Z | null | false | 0f9bec2b0fbbfc8643ae5442903d63dd701ff51b | [] | [
"language:fr",
"license:cc0-1.0",
"multilinguality:monolingual",
"source_datasets:original",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/biglam/gallica_literary_fictions/resolve/main/README.md | ---
language: fr
license: cc0-1.0
multilinguality:
- monolingual
pretty_name: Literary fictions of Gallica
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Literary fictions of Gallica
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doi.org/10.5281/zenodo.4660197
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The collection "Fiction littéraire de Gallica" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…)
This corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes.
The extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of "novel" that is generally contemporary with the publication.
A French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. It also gives several examples of possible uses for computational humanities projects.
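A minimal loading sketch (streaming is an assumption chosen here because of the corpus size; the card confirms a single `train` split below):
```python
from datasets import load_dataset

# Stream to avoid materializing all 5,723,986 pages at once
ds = load_dataset("biglam/gallica_literary_fictions", streaming=True, split="train")
print(next(iter(ds))["titre"])
```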
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```
{
'main_id': 'bpt6k97892392_p174',
'catalogue_id': 'cb31636383z',
'titre': "L'île du docteur Moreau",
'nom_auteur': 'Wells',
'prenom_auteur': 'Herbert George',
'date': 1946,
'document_ocr': 99,
'date_enligne': '07/08/2017',
'gallica': 'http://gallica.bnf.fr/ark:/12148/bpt6k97892392/f174',
'page': 174,
'texte': "_p_ dans leur expression et leurs gestes souples, d au- c tres semblables à des estropiés, ou si étrangement i défigurées qu'on eût dit les êtres qui hantent nos M rêves les plus sinistres. Au delà, se trouvaient d 'un côté les lignes onduleuses -des roseaux, de l'autre, s un dense enchevêtrement de palmiers nous séparant du ravin des 'huttes et, vers le Nord, l horizon brumeux du Pacifique. - _p_ — Soixante-deux, soixante-trois, compta Mo- H reau, il en manque quatre. J _p_ — Je ne vois pas l'Homme-Léopard, dis-je. | Tout à coup Moreau souffla une seconde fois dans son cor, et à ce son toutes les bêtes humai- ' nes se roulèrent et se vautrèrent dans la poussière. Alors se glissant furtivement hors des roseaux, rampant presque et essayant de rejoindre le cercle des autres derrière le dos de Moreau, parut l'Homme-Léopard. Le dernier qui vint fut le petit Homme-Singe. Les autres, échauffés et fatigués par leurs gesticulations, lui lancèrent de mauvais regards. _p_ — Assez! cria Moreau, de sa voix sonore et ferme. Toutes les bêtes s'assirent sur leurs talons et cessèrent leur adoration. - _p_ — Où est celui |qui enseigne la Loi? demanda Moreau."
}
```
### Data Fields
- `main_id`: Unique identifier of the page of the roman.
- `catalogue_id`: Identifier of the edition in the BNF catalogue.
- `titre`: Title of the edition as it appears in the catalog.
- `nom_auteur`: Author's name.
- `prenom_auteur`: Author's first name.
- `date`: Year of edition.
- `document_ocr`: Estimated quality of ocerization for the whole document as a percentage of words probably well recognized (from 1-100).
- `date_enligne`: Date of the online publishing of the digitization on Gallica.
- `gallica`: URL of the document on Gallica.
- `page`: Document page number (this is the pagination of the digital file, not the one of the original document).
- `texte`: Page text, as rendered by OCR.
### Data Splits
The dataset contains a single "train" split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/legalcode).
### Citation Information
```
@dataset{langlais_pierre_carl_2021_4751204,
author = {Langlais, Pierre-Carl},
title = {{Fictions littéraires de Gallica / Literary
fictions of Gallica}},
month = apr,
year = 2021,
publisher = {Zenodo},
version = 1,
doi = {10.5281/zenodo.4751204},
url = {https://doi.org/10.5281/zenodo.4751204}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-xsum-default-d5c7a7-1507154810 | 2022-09-19T13:45:50.000Z | null | false | 559e6e78c86a66b7353e87f78b2eaf5b487e0744 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-xsum-default-d5c7a7-1507154810/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: morenolq/bart-base-xsum
metrics: ['bertscore']
dataset_name: xsum
dataset_config: default
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: morenolq/bart-base-xsum
* Dataset: xsum
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-552ce2-1507654811 | 2022-09-19T13:41:56.000Z | null | false | 8e4813d4198fd5da65377f6757b4a420c8a6eb5b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-552ce2-1507654811/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: navteca/roberta-large-squad2
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: navteca/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@tvdermeer](https://huggingface.co/tvdermeer) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-emotion-default-2be497-1508254837 | 2022-09-19T14:17:42.000Z | null | false | 76fb3cdf9ae1951b111ed14ef24d58d24c39d46c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-emotion-default-2be497-1508254837/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: morenolq/distilbert-base-cased-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: morenolq/distilbert-base-cased-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-emotion-default-f266e6-1508354838 | 2022-09-19T14:17:45.000Z | null | false | 2c0ff370938b073a6e0e894789f0697c701e4f3d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-emotion-default-f266e6-1508354838/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: morenolq/distilbert-base-cased-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: validation
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: morenolq/distilbert-base-cased-emotion
* Dataset: emotion
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-glue-rte-157f21-1508454839 | 2022-09-19T14:17:54.000Z | null | false | 675263df9cdf386ecb16016c1434cf90108914d5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-rte-157f21-1508454839/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/bert-base-uncased-rte
metrics: []
dataset_name: glue
dataset_config: rte
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-rte
* Dataset: glue
* Config: rte
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-glue-qqp-b620ce-1508754840 | 2022-09-19T14:20:34.000Z | null | false | a4302a5208a75bd5eafff39c433c0073cf7b649e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-qqp-b620ce-1508754840/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/bert-base-uncased-qqp
metrics: []
dataset_name: glue
dataset_config: qqp
dataset_split: validation
col_mapping:
text1: question1
text2: question2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-glue-mnli_matched-c9e0cb-1508854842 | 2022-09-19T14:18:46.000Z | null | false | e16f043921522ca6271d5174bfdc22889c7b446e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-mnli_matched-c9e0cb-1508854842/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/bert-base-uncased-mnli
metrics: []
dataset_name: glue
dataset_config: mnli_matched
dataset_split: validation
col_mapping:
text1: premise
text2: hypothesis
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-mnli
* Dataset: glue
* Config: mnli_matched
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-glue-cola-b911f0-1508954843 | 2022-09-19T14:49:27.000Z | null | false | 400174f5e633d5a97f599969362628c5b028794f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-cola-b911f0-1508954843/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: multi_class_classification
model: JeremiahZ/roberta-base-cola
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: cola
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JeremiahZ/roberta-base-cola
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-glue-cola-b911f0-1508954844 | 2022-09-19T14:49:28.000Z | null | false | 9509b6529ed2a785841e86bf1637353291e8ddab | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-cola-b911f0-1508954844/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: multi_class_classification
model: JeremiahZ/bert-base-uncased-cola
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: cola
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JeremiahZ/bert-base-uncased-cola
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845 | 2022-09-19T14:49:33.000Z | null | false | 5b7b1e9a55331e18543b14c0ba25aaf38985337a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/roberta-base-mrpc
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/roberta-base-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054846 | 2022-09-19T14:49:35.000Z | null | false | bf06c398b669a4cb58387c071e8e4bf84eefd64f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054846/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: JeremiahZ/bert-base-uncased-mrpc
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. |
j0hngou | null | null | null | false | 32 | false | j0hngou/ccmatrix_en-it | 2022-09-26T16:34:54.000Z | null | false | 148a5dacde77aa5e337fdfaf0afbe75586dc86f9 | [] | [
"language:en",
"language:it"
] | https://huggingface.co/datasets/j0hngou/ccmatrix_en-it/resolve/main/README.md | ---
language:
- en
- it
--- |
svyas23 | null | null | null | false | 2 | false | svyas23/GAMa | 2022-09-19T17:34:14.000Z | null | false | e992f84dd6d471143439e0a111e3b9d73ebc5f3a | [] | [
"license:other"
] | https://huggingface.co/datasets/svyas23/GAMa/resolve/main/README.md | ---
license: other
---
GAMa (Ground-video to Aerial-image Matching) dataset
Download at: https://www.crcv.ucf.edu/data1/GAMa/
# GAMa: Cross-view Video Geo-localization
by [Shruti Vyas](https://scholar.google.com/citations?user=15YqUQUAAAAJ&hl=en); [Chen Chen](https://scholar.google.com/citations?user=TuEwcZ0AAAAJ&hl=en); [Mubarak Shah](https://scholar.google.com/citations?user=p8gsO3gAAAAJ&hl=en)
code at: https://github.com/svyas23/GAMa/blob/main/README.md
|
Impe | null | null | null | false | null | false | Impe/Stuff | 2022-09-19T17:31:51.000Z | null | false | 7a7dd4cba7ff2944ded877a9b7064723698c2b6f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Impe/Stuff/resolve/main/README.md | ---
license: afl-3.0
---
|
cjvt | null | @InProceedings{antloga2022gkomet,
title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},
author={Antloga, \v{S}pela},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},
year={2022},
pages={271-277}
} | G-KOMET 1.0 (a corpus of metaphorical expressions in spoken Slovene language) is a corpus of speech transcriptions and
conversations that covers 50,000 lexical units. The corpus contains samples from the Gos corpus of spoken Slovene
and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.
The annotation scheme was based on the MIPVU metaphor identification process.
This protocol was modified and adapted to the specifics of the Slovene language and the specifics of the spoken
language. Corpus was annotated for the following relations to metaphor: indirect metaphor, direct metaphor, borderline
cases and metaphor signals. In addition, the corpus introduces a new ‘frame’ tag, which gives information about the
concept to which it refers. | false | 7 | false | cjvt/gkomet | 2022-10-21T07:37:43.000Z | null | false | f74a75bb732c74e0a892cbfed2a437f134bd7e19 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:sl",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:token-classification",
"tags:metaphor-classification",
"tags:metonymy-classification",
"tags:metaphor-frame-clas... | https://huggingface.co/datasets/cjvt/gkomet/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- token-classification
task_ids: []
pretty_name: G-KOMET
tags:
- metaphor-classification
- metonymy-classification
- metaphor-frame-classification
- multiword-expression-detection
---
# Dataset Card for G-KOMET
### Dataset Summary
G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.
It is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in [KOMET](https://huggingface.co/datasets/cjvt/komet), where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this.
### Supported Tasks and Leaderboards
Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'document_name': 'G-Komet001.xml',
'idx': 3,
'idx_paragraph': 0,
'idx_sentence': 3,
'sentence_words': ['no', 'zdaj', 'samo', 'še', 'za', 'eno', 'orientacijo'],
'met_type': [
{'type': 'MRWi', 'word_indices': [6]}
],
'met_frame': [
{'type': 'spatial_orientation', 'word_indices': [6]}
]
}
```
The sentence comes from the document `G-Komet001.xml`, is the 3rd sentence in the document and is the 3rd sentence inside the 0th paragraph in the document.
The word "orientacijo" is annotated as an indirect metaphor-related word (`MRWi`).
It is also annotated with the frame "spatial_orientation".
### Data Fields
- `document_name`: a string containing the name of the document in which the sentence appears;
- `idx`: a uint32 containing the index of the sentence inside its document;
- `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears;
- `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph;
- `sentence_words`: words in the sentence;
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices.
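The annotations can be mapped back onto the words they cover by indexing `sentence_words` with `word_indices`. The snippet below is a minimal sketch; loading the default configuration with a `train` split is an assumption.
```python
from datasets import load_dataset

# Assumes the default configuration exposes a "train" split.
gkomet = load_dataset("cjvt/gkomet", split="train")

example = gkomet[0]
for annotation in example["met_type"]:
    # word_indices point into sentence_words, so the annotated span
    # can be recovered directly.
    span = " ".join(example["sentence_words"][i] for i in annotation["word_indices"])
    print(annotation["type"], "->", span)
```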
## Dataset Creation
The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words (linguistic expressions that have the potential to be interpreted as metaphors), idioms (multi-word units in which at least one word is used metaphorically), and metonymies (expressions that are used to refer to something else).
For more information, please check out the paper (which is in Slovenian language) or contact the dataset author.
## Additional Information
### Dataset Curators
Špela Antloga.
### Licensing Information
CC BY-NC-SA 4.0
### Citation Information
```
@InProceedings{antloga2022gkomet,
title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},
author={Antloga, \v{S}pela},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},
year={2022},
pages={271-277}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
hemangjoshi37a | null | null | null | false | 1 | false | hemangjoshi37a/token_classification_ratnakar_1300 | 2022-09-19T18:03:46.000Z | null | false | f2b534c65a64e8425f7aa01659af23493d84696e | [] | [
"license:mit"
] | https://huggingface.co/datasets/hemangjoshi37a/token_classification_ratnakar_1300/resolve/main/README.md | ---
license: mit
---
|
asapp | null | @inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
} | Spoken Language Understanding Evaluation (SLUE) benchmark. There are two subsets: (i) SLUE-VoxPopuli which has ASR and NER tasks and (ii) SLUE-VoxCeleb which has ASR and SA tasks. | false | 40 | false | asapp/slue | 2022-09-26T23:08:10.000Z | slue | false | e804f0ad5054f08cd6dd5641fab37d22f162234b | [] | [
"arxiv:2111.10367",
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:cc0-1.0",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:automatic-speech-recognition",
"task_categories... | https://huggingface.co/datasets/asapp/slue/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc0-1.0
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: slue
pretty_name: SLUE (Spoken Language Understanding Evaluation benchmark)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
- audio-classification
- text-classification
- token-classification
task_ids:
- sentiment-analysis
- named-entity-recognition
configs:
- voxpopuli
- voxceleb
---
# Dataset Card for SLUE
## Table of Contents
- [Dataset Card for SLUE](#dataset-card-for-slue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr)
- [Named Entity Recognition (NER)](#named-entity-recognition-ner)
- [Sentiment Analysis (SA)](#sentiment-analysis-sa)
- [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [voxpopuli](#voxpopuli)
- [voxceleb](#voxceleb)
- [Data Fields](#data-fields)
- [voxpopuli](#voxpopuli-1)
- [voxceleb](#voxceleb-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset)
- [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset)
- [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit)
- **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf)
- **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html)
- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 9.59 MB
- **Total amount of disk used:** 1.95 GB
### Dataset Summary
We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to
- Track research progress on multiple SLU tasks
- Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks
- Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.
For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details.
### Supported Tasks and Leaderboards
#### Automatic Speech Recognition (ASR)
Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).
#### Named Entity Recognition (NER)
Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.
#### Sentiment Analysis (SA)
Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.
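As a rough illustration (this is not the official SLUE toolkit code), macro-averaged recall and F1 can be computed with scikit-learn; the hypothetical labels below use the class names from the voxceleb `sentiment` field.
```python
from sklearn.metrics import f1_score, recall_score

# Hypothetical gold labels and predictions over the three sentiment classes.
y_true = ["Negative", "Neutral", "Positive", "Neutral"]
y_pred = ["Negative", "Positive", "Positive", "Neutral"]

macro_recall = recall_score(y_true, y_pred, average="macro")
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro recall: {macro_recall:.3f}, macro F1: {macro_f1:.3f}")
```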
#### How-to-submit for your test set evaluation
See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html
### Languages
The language data in SLUE is in English.
## Dataset Structure
### Data Instances
#### voxpopuli
- **Size of downloaded dataset files:** 398.45 MB
- **Size of the generated dataset:** 5.81 MB
- **Total amount of disk used:** 404.26 MB
An example of 'train' looks as follows.
```
{'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3',
'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg',
'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217,
0.07835515, 0.07845446], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'None',
'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will',
'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will',
'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]},
'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]}}
```
#### voxceleb
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 3.78 MB
- **Total amount of disk used:** 1.55 GB
An example of 'train' looks as follows.
```
{'id': 'id10059_229vKIGbxrI_00004',
'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac',
'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691,
0.00100708, 0.00033569], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'id10059',
'normalized_text': 'of god what is a creator the almighty that uh',
'sentiment': 'Neutral',
'start_second': 0.45,
'end_second': 4.52}
```
### Data Fields
#### voxpopuli
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `raw_text`: a `string` feature that contains the raw transcription of the audio.
- `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**.
- `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes.
- `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes.
- `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`).
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**.
Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity.
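Because the offsets are character-based, entity surface strings can be recovered by slicing the corresponding text. A minimal sketch using the fields described above:
```python
def extract_entities(example):
    """Return (tag, surface string) pairs from the combined NER annotation."""
    text = example["normalized_text"]
    ner = example["normalized_combined_ner"]
    return [
        (tag, text[start:start + length])
        for tag, start, length in zip(ner["type"], ner["start"], ner["length"])
    ]

# On the sample above this yields pairs such as ('WHEN', 'two thousand and twelve').
```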
#### voxceleb
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sampling_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sampling_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
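Putting these fields together, the transcribed segment can be cropped out of the decoded waveform. A minimal sketch (it assumes access to the `voxceleb` config and `train` split):
```python
from datasets import load_dataset

slue = load_dataset("asapp/slue", "voxceleb", split="train")
example = slue[0]

sr = example["audio"]["sampling_rate"]
start = int(example["start_second"] * sr)
end = int(example["end_second"] * sr)
segment = example["audio"]["array"][start:end]  # waveform of the annotated segment
```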
### Data Splits
| |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000| 1753|1842|
|voxceleb | 5777| 1454|3553|
Here we use the standard split names of Hugging Face's `datasets`, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of the SLUE datasets, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### SLUE-VoxPopuli Dataset
The SLUE-VoxPopuli dataset contains a subset of the VoxPopuli dataset, and this subset remains under the original license, CC0. See also the European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/).
Additionally, we provide named entity annotations (the normalized_ner and raw_ner columns in the .tsv files), which are covered by the same CC0 license.
#### SLUE-VoxCeleb Dataset
The SLUE-VoxCeleb dataset contains a subset of the OXFORD VoxCeleb dataset, and this subset remains under the same Creative Commons Attribution 4.0 International license reproduced below. Additionally, we provide transcriptions, sentiment annotations, and timestamps (start, end) under the same license as the OXFORD VoxCeleb dataset.
##### Original License of OXFORD VGG VoxCeleb Dataset
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
The speakers span a wide range of different ethnicities, accents, professions and ages.
We provide Youtube URLs, associated face detections, and timestamps, as
well as cropped audio segments and cropped face videos from the
dataset. The copyright of both the original and cropped versions
of the videos remains with the original owners.
The data is covered under a Creative Commons
Attribution 4.0 International license (Please read the
license terms here. https://creativecommons.org/licenses/by/4.0/).
Downloading this dataset implies agreement to follow the same
conditions for any modification and/or
re-distribution of the dataset in any form.
Additionally any entity using this dataset agrees to the following conditions:
THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Please cite [1,2] below if you make use of the dataset.
[1] J. S. Chung, A. Nagrani, A. Zisserman
VoxCeleb2: Deep Speaker Recognition
INTERSPEECH, 2018.
[2] A. Nagrani, J. S. Chung, A. Zisserman
VoxCeleb: a large-scale speaker identification dataset
INTERSPEECH, 2017
### Citation Information
```
@inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
}
```
### Contributions
Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset. |
ImageIN | null | null | null | false | 4 | false | ImageIN/ImageIn_annotations | 2022-09-26T12:20:03.000Z | null | false | ff88393aa85808a6172b21e19e27a40ab882a734 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/ImageIN/ImageIn_annotations/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: ImageIn hand labelled
size_categories: []
source_datasets: []
tags: []
task_categories:
- image-classification
task_ids: []
---
Initial annotated dataset derived from `ImageIN/IA_unlabelled` |
smkerr | null | null | null | false | null | false | smkerr/lorca | 2022-09-19T20:02:06.000Z | null | false | c76f26430961c9cb3dd896809d3b303225bd6003 | [] | [] | https://huggingface.co/datasets/smkerr/lorca/resolve/main/README.md | A piece of Federico García Lorca's body of work. |
darcksky | null | null | null | false | null | false | darcksky/All-Rings | 2022-09-19T20:13:29.000Z | null | false | 3958a8cdbd470eff2573faad9d0ff7eeac90e6c3 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/darcksky/All-Rings/resolve/main/README.md | ---
license: afl-3.0
---
|
wgarstka | null | null | null | false | null | false | wgarstka/test | 2022-09-19T20:10:45.000Z | null | false | da12b1d9362a363f50e046dd887987142fee4ff8 | [] | [
"license:other"
] | https://huggingface.co/datasets/wgarstka/test/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-1f3143-1511754885 | 2022-09-19T21:08:28.000Z | null | false | 9fbd8304e81d1eadc8eda9738dec458621f25f79 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-1f3143-1511754885/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-30b-copy
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
spacemanidol | null | null | null | false | 1 | false | spacemanidol/rewrite-noisy-queries | 2022-09-19T20:55:24.000Z | null | false | 7b69020abbf7a32f15059b9d57dc576ad84006c5 | [] | [
"license:mit"
] | https://huggingface.co/datasets/spacemanidol/rewrite-noisy-queries/resolve/main/README.md | ---
license: mit
---
|
din0s | null | null | null | false | 1 | false | din0s/asqa | 2022-09-20T16:14:54.000Z | null | false | 084060f16b46f3165318f760b2339208b19a0bde | [] | [
"arxiv:2204.06092",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:expert-generated",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ambig_qa",
"tags:factoid questions",
"tags:long-form answers",
"task_categories... | https://huggingface.co/datasets/din0s/asqa/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: ASQA
size_categories:
- 1K<n<10K
source_datasets:
- extended|ambig_qa
tags:
- factoid questions
- long-form answers
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for ASQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html
### Dataset Summary
ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Unlike previous long-form answer datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.
### Supported Tasks and Leaderboards
Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
"qa_pairs": [
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
"short_answers": [
"the people of the United States"
],
"wikipage": None
},
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
"short_answers": [
"United States government"
],
"wikipage": None
}
],
"wikipages": [
{
"title": "Civil Liberties Act of 1988",
"url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
}
],
"annotations": [
{
"knowledge": [
{
"content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
"wikipage": "Civil Liberties Act of 1988"
}
],
"long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
}
],
"sample_id": -4557617869928758000
}
```
### Data Fields
- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: annotation.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
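A short usage sketch for inspecting these fields (the `train` split name is an assumption based on the split table below):
```python
from datasets import load_dataset

asqa = load_dataset("din0s/asqa")
example = asqa["train"][0]

print(example["ambiguous_question"])
for qa in example["qa_pairs"]:
    print("-", qa["question"], "->", qa["short_answers"])
print(example["annotations"][0]["long_answer"])
```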
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 4353 |
| Dev | 948 |
## Additional Information
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-Tristan__zero-shot-classification-large-test-Tristan__z-8b146c-1511954902 | 2022-09-21T05:08:06.000Z | null | false | c5a4721b5d4ff814a1af2020df60566a313ea67b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:Tristan/zero-shot-classification-large-test"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-Tristan__zero-shot-classification-large-test-Tristan__z-8b146c-1511954902/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- Tristan/zero-shot-classification-large-test
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-30b-copy
metrics: []
dataset_name: Tristan/zero-shot-classification-large-test
dataset_config: Tristan--zero-shot-classification-large-test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. |
vincentchai | null | null | null | false | null | false | vincentchai/b52092000 | 2022-09-20T03:16:34.000Z | null | false | 53485f36c96f2307855b50421da83f27bfff2397 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/vincentchai/b52092000/resolve/main/README.md | ---
license: apache-2.0
---
|
Natmat | null | null | null | false | null | false | Natmat/Test | 2022-10-19T06:59:35.000Z | null | false | 922289449f1fd355224c344759378c53532a2189 | [] | [
"license:other"
] | https://huggingface.co/datasets/Natmat/Test/resolve/main/README.md | ---
license: other
---
|
bongsoo | null | null | null | false | 1 | false | bongsoo/social_science_en_ko | 2022-10-05T00:09:30.000Z | null | false | baa096440c81620325d5c6f774eacb668dbd1db8 | [] | [
"language:ko",
"license:apache-2.0"
] | https://huggingface.co/datasets/bongsoo/social_science_en_ko/resolve/main/README.md | ---
language:
- ko
license: apache-2.0
---
- Social science en-ko translation corpus
|
bongsoo | null | null | null | false | null | false | bongsoo/news_talk_en_ko | 2022-10-05T00:09:50.000Z | null | false | 8ffecf6e6c61389f9c02f13f3875d810ff506fa3 | [] | [
"language:ko",
"license:apache-2.0"
] | https://huggingface.co/datasets/bongsoo/news_talk_en_ko/resolve/main/README.md | ---
language:
- ko
license: apache-2.0
---
- News & everyday conversation en-ko translation corpus
NaturalTeam | null | null | null | false | null | false | NaturalTeam/KoBART_TEST | 2022-09-20T08:41:33.000Z | null | false | e58cab3ab22391abadb7397dcc938c07ec1e91a5 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/NaturalTeam/KoBART_TEST/resolve/main/README.md | ---
license: unknown
---
|
Shushant | null | null | null | false | 4 | false | Shushant/NepaliCovidTweets | 2022-09-20T08:59:06.000Z | null | false | f8da6feede333581902766efa79a7701e0287b44 | [] | [
"license:other"
] | https://huggingface.co/datasets/Shushant/NepaliCovidTweets/resolve/main/README.md | ---
license: other
---
|
firqaaa | null | null | null | false | 21 | false | firqaaa/mednli-id | 2022-09-21T08:39:34.000Z | null | false | 98320e25c9104df9dc4c16d690901cc12b608e0f | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/firqaaa/mednli-id/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-373400-1514054915 | 2022-09-21T15:33:56.000Z | null | false | fcbf84785bd5d498892cf01a322a92bb1a17f9bb | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-373400-1514054915/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
vuksan314 | null | null | null | false | null | false | vuksan314/Lavko | 2022-09-20T11:51:55.000Z | null | false | bec9eb5363a82c6de35a6426842e86f55db7e9c1 | [] | [
"license:cc"
] | https://huggingface.co/datasets/vuksan314/Lavko/resolve/main/README.md | ---
license: cc
---
|
varun-d | null | null | null | false | null | false | varun-d/demo-data | 2022-09-20T13:58:21.000Z | null | false | 773b86a2ed4dee382df30a17ea4e00c490e5d2d1 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/varun-d/demo-data/resolve/main/README.md | ---
license: apache-2.0
---
|
ksang | null | null | null | false | 1 | false | ksang/TwitchStreams | 2022-09-20T14:20:36.000Z | null | false | 3aaacdae72ffce33d77189f33dab28e9e4f7007a | [] | [] | https://huggingface.co/datasets/ksang/TwitchStreams/resolve/main/README.md | |
niallashley | null | null | null | false | null | false | niallashley/regenerate | 2022-09-20T15:00:01.000Z | null | false | a3d4cb163d1cbad84af92ed4f6e9b4ada4cb0d69 | [] | [
"license:cc"
] | https://huggingface.co/datasets/niallashley/regenerate/resolve/main/README.md | ---
license: cc
---
|
cjvt | null | @misc{rsdo4_en_sl,
title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0},
author = {Repar, Andra{\v z} and Lebar Bajec, Iztok},
url = {http://hdl.handle.net/11356/1457},
year = {2021}
} | The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work
package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions
and texts submitted by individual donors through the text collection portal created within the project. The corpus
consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned)
in randomized order which can be used for machine translation training. | false | 1 | false | cjvt/rsdo4_en_sl | 2022-09-20T17:38:33.000Z | null | false | 97139a9fbab6912b3fd89604427d4304d20847e6 | [] | [
"annotations_creators:expert-generated",
"annotations_creators:found",
"language:en",
"language:sl",
"language_creators:crowdsourced",
"license:cc-by-sa-4.0",
"multilinguality:translation",
"size_categories:100K<n<1M",
"tags:parallel data",
"tags:rsdo",
"task_categories:translation",
"task_cat... | https://huggingface.co/datasets/cjvt/rsdo4_en_sl/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- found
language:
- en
- sl
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- translation
pretty_name: RSDO4 en-sl parallel corpus
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- parallel data
- rsdo
task_categories:
- translation
- text2text-generation
- text-generation
task_ids: []
---
# Dataset Card for RSDO4 en-sl parallel corpus
### Dataset Summary
The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order, which can be used for machine translation training.
### Supported Tasks and Leaderboards
Machine translation.
### Languages
English, Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'en_seq': 'the total value of its assets exceeds EUR 30000000000;',
'sl_seq': 'skupna vrednost njenih sredstev presega 30000000000 EUR'
}
```
### Data Fields
- `en_seq`: a string containing the English sequence;
- `sl_seq`: a string containing the Slovene sequence.
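A minimal usage sketch (the `train` split name is an assumption; since the pairs are already in randomized order, no extra shuffling is needed before MT training):
```python
from datasets import load_dataset

rsdo4 = load_dataset("cjvt/rsdo4_en_sl", split="train")
pair = rsdo4[0]
print(pair["en_seq"])  # English side
print(pair["sl_seq"])  # Slovene side
```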
## Additional Information
### Dataset Curators
Andraž Repar and Iztok Lebar Bajec.
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{rsdo4_en_sl,
title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0},
author = {Repar, Andra{\v z} and Lebar Bajec, Iztok},
url = {http://hdl.handle.net/11356/1457},
year = {2021}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
nonnon | null | null | null | false | null | false | nonnon/test | 2022-09-25T13:59:28.000Z | null | false | 9ee9719a3ff0a5ef8d5e31eff4f5dd81a08fe47b | [] | [
"license:other"
] | https://huggingface.co/datasets/nonnon/test/resolve/main/README.md | ---
license: other
---
|
THUDM | null | null | HumanEval-X is a benchmark for the evaluation of the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks. | false | 1,242 | false | THUDM/humaneval-x | 2022-10-25T06:08:38.000Z | null | false | 62c78627f3072a1454fa0cb0184737cafe5e4198 | [] | [
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:code",
"license:apache-2.0",
"multilinguality:multilingual",
"size_categories:unknown",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/THUDM/humaneval-x/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: HumanEval-X
---
# HumanEval-X
## Dataset Description
[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.
## Dataset Structure
To load the dataset you need to specify a subset among the 5 existing languages `[python, cpp, go, java, js]`. By default `python` is loaded.
```python
from datasets import load_dataset
load_dataset("THUDM/humaneval-x", "js")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
num_rows: 164
})
})
```
```python
next(iter(data["test"]))
{'task_id': 'JavaScript/0',
'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```
## Data Fields
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
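For execution-based evaluation these fields are typically stitched into a single program: the `prompt` (or, for translation, the `declaration`) plus a model completion plus the hidden `test`. A sketch, using the canonical solution as a stand-in for a model output:
```python
from datasets import load_dataset

data = load_dataset("THUDM/humaneval-x", "js")["test"]
sample = data[0]

completion = sample["canonical_solution"]  # stand-in for a model-generated completion
program = sample["prompt"] + completion + "\n" + sample["test"]
# Executing `program` in a sandbox and checking for assertion failures
# gives the pass/fail signal used for functional-correctness scoring.
```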
## Data Splits
Each subset has one split: test.
## Citation Information
Refer to https://github.com/THUDM/CodeGeeX. |
nlp-guild | null | null | null | false | 1 | false | nlp-guild/medical-data | 2022-09-20T16:47:13.000Z | null | false | 884ea34ad5711abf4fa430a58eed5fcaf6bebaea | [] | [
"license:mit"
] | https://huggingface.co/datasets/nlp-guild/medical-data/resolve/main/README.md | ---
license: mit
---
|
open-source-metrics | null | null | null | false | 1 | false | open-source-metrics/pytorch-image-models-dependents | 2022-11-09T16:13:01.000Z | null | false | 89a9d53b170ebb71ac075f010c167e2bee3a5d70 | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/pytorch-image-models-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: pytorch-image-models metrics
tags:
- github-stars
---
# pytorch-image-models metrics
This dataset contains metrics about the huggingface/pytorch-image-models package.
Number of repositories in the dataset: 3615
Number of packages in the dataset: 89
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/pytorch-image-models/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 18 packages that have more than 1000 stars.
There are 39 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 70536
[fastai/fastai](https://github.com/fastai/fastai): 22776
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390
[MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 6424
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115
[awslabs/autogluon](https://github.com/awslabs/autogluon): 4818
[neuml/txtai](https://github.com/neuml/txtai): 2531
[open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 2357
[open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 2271
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1999
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70536
[commaai/openpilot](https://github.com/commaai/openpilot): 35919
[facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 22287
[ray-project/ray](https://github.com/ray-project/ray): 22057
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390
[NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 9260
[microsoft/unilm](https://github.com/microsoft/unilm): 6664
[pytorch/tutorials](https://github.com/pytorch/tutorials): 6331
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115
[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI): 4944
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 12 packages that have more than 200 forks.
There are 28 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 16175
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791
[fastai/fastai](https://github.com/fastai/fastai): 7296
[MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 1765
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217
[open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 787
[awslabs/autogluon](https://github.com/awslabs/autogluon): 638
[open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 321
[rwightman/efficientdet-pytorch](https://github.com/rwightman/efficientdet-pytorch): 265
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 247
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16175
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791
[commaai/openpilot](https://github.com/commaai/openpilot): 6603
[facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 6033
[ray-project/ray](https://github.com/ray-project/ray): 3879
[pytorch/tutorials](https://github.com/pytorch/tutorials): 3478
[NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 2499
[microsoft/unilm](https://github.com/microsoft/unilm): 1223
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217
[layumi/Person_reID_baseline_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch): 928
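To reproduce counts like those above, a minimal sketch (the repo id comes from this card; the split name and the `stars` column name are assumptions, so inspect `ds.column_names` against the actual schema first):
```python
from datasets import load_dataset

# Load the dependents metrics; the split and column names are assumptions.
ds = load_dataset("open-source-metrics/pytorch-image-models-dependents", split="train")

# Count dependents with more than 1000 stars, mirroring the summary above.
popular = ds.filter(lambda row: row["stars"] > 1000)
print(f"{len(popular)} dependents have more than 1000 stars")
```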
|
huggingface-projects | null | null | null | false | 29 | false | huggingface-projects/color-palettes-sd | 2022-11-15T13:10:21.000Z | null | false | 5809bfe0f26c5c281ece70f87aae259564ded886 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/huggingface-projects/color-palettes-sd/resolve/main/README.md | ---
license: cc-by-4.0
---
|
gexai | null | @InProceedings{ko2020inquisitive,
author = {Ko, Wei-Jen and Chen, Te-Yuan and Huang, Yiyan and Durrett, Greg and Li, Junyi Jessy},
title = {Inquisitive Question Generation for High Level Text Comprehension},
booktitle = {Proceedings of EMNLP},
year = {2020},
} | A dataset of about 20k questions elicited from readers as they naturally read through a document sentence by sentence. Compared to existing datasets, INQUISITIVE questions are aimed more at high-level (semantic and discourse) comprehension of text. Because these questions are generated while readers are processing the information, they directly communicate gaps between the reader’s and writer’s knowledge about the events described in the text, and are not necessarily answered in the document itself. This type of question reflects a real-world scenario: if one has questions while reading, some are answered by the text later on and the rest are not, but any of them would further the reader’s understanding at the point where they arose. This resource could enable question generation models to simulate human-like curiosity and cognitive processing, which may open up a new realm of applications. | false | 7 | false | gexai/inquisitiveqg | 2022-09-20T21:22:53.000Z | null | false | deed3ddd239c882afb8c65feebe82015ba82bcb5 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/gexai/inquisitiveqg/resolve/main/README.md | ---
license: unknown
---
|
j0hngou | null | null | null | false | 2 | false | j0hngou/ccmatrix_en-fr | 2022-09-26T16:35:19.000Z | null | false | 4a8f8026a4dc86f31a7576da3a12b48008a6565a | [] | [
"language:en",
"language:fr"
] | https://huggingface.co/datasets/j0hngou/ccmatrix_en-fr/resolve/main/README.md | ---
language:
- en
- fr
--- |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-431a89-1518654983 | 2022-09-20T23:13:17.000Z | null | false | f0f93f25d29f82efdd73689b88b36c8fc85d4e41 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-431a89-1518654983/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
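To inspect the stored predictions locally, a minimal sketch (the repo id is taken from this card, but the split name and row schema are assumptions; check `ds.column_names` against the actual files):
```python
from datasets import load_dataset

# The predictions live in this repository; the split name is an assumption.
ds = load_dataset(
    "autoevaluate/autoeval-eval-samsum-samsum-431a89-1518654983",
    split="train",
)

# Print one row to see the generated summary next to the reference.
print(ds[0])
```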
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-7e8d42-1518754984 | 2022-09-20T23:20:18.000Z | null | false | 5a6a80994c21d0d9b4f87e828633e9aa549a4a8c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-7e8d42-1518754984/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-61a81c-1518854985 | 2022-09-22T02:29:45.000Z | null | false | 850f60cb653353971f22827cf61e6b1d1a2a53a5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-61a81c-1518854985/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-billsum-default-4428b0-1518954986 | 2022-09-22T04:13:05.000Z | null | false | bc5a20bfe51eff9d9e3e6bfe9d02ccb09cd15f72 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-billsum-default-4428b0-1518954986/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-samsum-samsum-b534aa-1519254997 | 2022-09-21T00:18:15.000Z | null | false | eb2885f64a337ab00115293d9856a96f80b30d40 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-samsum-samsum-b534aa-1519254997/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: pszemraj/pegasus-x-large-book-summary
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Moussab | null | null | null | false | 2 | false | Moussab/ORKG-training-evaluation-set | 2022-10-12T13:44:47.000Z | null | false | a760d3533762a423ca38cb5f4d1d59a31f016a68 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Moussab/ORKG-training-evaluation-set/resolve/main/README.md | ---
license: afl-3.0
---
|
slartibartfast | null | null | null | false | 60 | false | slartibartfast/emojis2 | 2022-09-21T14:16:56.000Z | null | false | aac811df777aae214beb430564b14042ac1b4618 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/slartibartfast/emojis2/resolve/main/README.md | ---
license: openrail
---
|
Moussab | null | null | null | false | null | false | Moussab/evaluation-vanilla-models | 2022-09-21T00:44:35.000Z | null | false | 35887c2231bd760062d6b0089c0f147ae61a111e | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Moussab/evaluation-vanilla-models/resolve/main/README.md | ---
license: afl-3.0
---
|
Moussab | null | null | null | false | null | false | Moussab/evaluation-results-fine-tuned-models | 2022-09-21T00:46:23.000Z | null | false | 4eb43f034eb3fac376bb1c84851523adb09029f0 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Moussab/evaluation-results-fine-tuned-models/resolve/main/README.md | ---
license: afl-3.0
---
|
umm-maybe | null | null | null | false | 1 | false | umm-maybe/artificial-vs-human-art | 2022-10-04T16:57:44.000Z | null | false | 9d748fb6c40f4a59597a68968dc3d535a71e2292 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/umm-maybe/artificial-vs-human-art/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: ai-image-detector
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ai-image-detector.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 1
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['artificial', 'human'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 4283 |
| valid | 1072 |
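For orientation, a minimal loading sketch (the repo id and the `train`/`valid` split names are taken from this card; if loading fails, confirm the actual split keys with `ds.keys()`):
```python
from datasets import load_dataset

# Load both splits as described in the table above.
ds = load_dataset("umm-maybe/artificial-vs-human-art")

sample = ds["train"][0]
label_names = ds["train"].features["target"].names  # ['artificial', 'human']

# "image" decodes to a PIL image, so .size is available.
print(label_names[sample["target"]], sample["image"].size)
```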
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-billsum-default-dd03f7-1519455003 | 2022-09-21T17:34:50.000Z | null | false | ae75e6b3d921b85c9a7f5510181d1a32fc140c3c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-billsum-default-dd03f7-1519455003/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/pegasus-x-large-book-summary
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-launch__gov_report-plain_text-4ad6c8-1519755004 | 2022-09-21T07:37:56.000Z | null | false | 84e95341fadae3179e6f9418e04ab530f0411814 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-launch__gov_report-plain_text-4ad6c8-1519755004/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/pegasus-x-large-book-summary
metrics: []
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |