Dataset Viewer
Auto-converted to Parquet
Column schema (observed value ranges):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| _id | large_string (lengths) | 24 | 24 |
| id | large_string (lengths) | 4 | 123 |
| author | large_string (lengths) | 2 | 42 |
| cardData | large_string (lengths) | 2 | 1.09M |
| disabled | bool (1 class) | | |
| gated | large_string (3 classes) | | |
| lastModified | timestamp[us] | 2021-02-05 16:03:35 | 2026-03-14 13:13:59 |
| likes | int64 | 0 | 9.61k |
| trendingScore | float64 | 0 | 82 |
| private | bool (1 class) | | |
| sha | large_string (lengths) | 40 | 40 |
| description | large_string (lengths) | 0 | 6.67k |
| downloads | int64 | 0 | 2.33M |
| downloadsAllTime | int64 | 0 | 143M |
| tags | list (lengths) | 1 | 7.92k |
| createdAt | timestamp[us] | 2022-03-02 23:29:22 | 2026-03-14 13:12:34 |
| paperswithcode_id | large_string (692 classes) | | |
| citation | large_string (lengths) | 0 | 10.7k |

Rows:
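The cardData column stores each dataset card's metadata serialized as a JSON object, which is why its string lengths range from 2 bytes up to 1.09M. A minimal sketch of decoding values like the ones in the rows, using Python's standard json module:

```python
import json

# cardData value from the nohurry/Opus-4.6-Reasoning-3000x-filtered row
card = json.loads('{"license": "apache-2.0"}')
print(card["license"])  # apache-2.0

# richer cards nest language lists, tags, and configs the same way
rich = json.loads('{"license": "mit", "language": ["en"], "size_categories": ["n<1K"]}')
print(rich["language"])  # ['en']
```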

_id: 698b2c8b4c9e577aa3b1fa16
id: nohurry/Opus-4.6-Reasoning-3000x-filtered
author: nohurry
cardData: {"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2026-02-10T13:06:40
likes: 343
trendingScore: 82
private: false
sha: 80e9226ea6168634ee2d6c010c3da619af8ad542
description: Filtered from https://huggingface.co/datasets/crownelius/Opus-4.6-Reasoning-3000x. The original dataset has 979 refusals; this version removes them.
downloads: 4,646
downloadsAllTime: 4,698
tags: [ "license:apache-2.0", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
createdAt: 2026-02-10T13:03:07
paperswithcode_id: null
citation: null

_id: 699250f08be5bf8321aeb29e
id: HuggingFaceFW/finephrase
author: HuggingFaceFW
cardData:
{"language": ["en"], "license": "odc-by", "tags": ["SmolLM2-1.7B-Instruct", "fineweb-edu", "synthetic"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "pretty_name": "HuggingFaceFW/finephrase", "size_categories": ["n>1M"], "source_datasets": ["HuggingFaceFW/fineweb-edu/sample-350BT"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": ["faq/**/*.parquet", "math/**/*.parquet", "table/**/*.parquet", "tutorial/**/*.parquet"]}]}, {"config_name": "faq", "data_files": [{"split": "train", "path": "faq/**/*.parquet"}]}, {"config_name": "math", "data_files": [{"split": "train", "path": "math/**/*.parquet"}]}, {"config_name": "table", "data_files": [{"split": "train", "path": "table/**/*.parquet"}]}, {"config_name": "tutorial", "data_files": [{"split": "train", "path": "tutorial/**/*.parquet"}]}], "train-eval-index": [{"config": "all", "task": "text-generation", "task_id": "language-modeling", "splits": {"train_split": "train", "eval_split": null}, "col_mapping": {"text": "text"}}]}
disabled: false
gated: False
lastModified: 2026-03-07T19:16:51
likes: 73
trendingScore: 72
private: false
sha: a9046961aa1360172836a82f63563db9b44993d3
description: Dataset Card for HuggingFaceFW/finephrase. Dataset Summary: synthetic data generated by DataTrove. Model: HuggingFaceTB/SmolLM2-1.7B-Instruct (main). Source dataset: HuggingFaceFW/fineweb-edu, config sample-350BT, split train. Generation config: temperature=1.0, top_p=1.0, top_k=50, max_tokens=2048, model_max_context=8192. Speculative decoding: {"method":"suffix","num_speculative_tokens":32}. System prompt: None. Input column: text. Prompt families: faq prompt Rewrite the… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/finephrase.
downloads: 76,704
downloadsAllTime: 76,704
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:machine-generated", "language_creators:found", "source_datasets:HuggingFaceFW/fineweb-edu/sample-350BT", "language:en", "license:odc-by", "size_categories:1B<n<10B", "modality:tabular", "modality:text", "regio...
createdAt: 2026-02-15T23:04:16
paperswithcode_id: null
citation: null
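The finephrase card defines one config per prompt family (faq, math, table, tutorial) plus an "all" config whose train split simply concatenates the four family parquet globs. A sketch of that relationship, using the paths listed in the card:

```python
# Per-family parquet globs, as listed in the finephrase card's configs
families = {
    "faq": "faq/**/*.parquet",
    "math": "math/**/*.parquet",
    "table": "table/**/*.parquet",
    "tutorial": "tutorial/**/*.parquet",
}

# The "all" config's train split is the union of the four family globs
all_paths = [families[name] for name in ("faq", "math", "table", "tutorial")]
print(all_paths)
```

Selecting a single family when loading is then just a matter of passing that config name instead of "all".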

_id: 698e4ad0913c4d1f4a64479a
id: crownelius/Opus-4.6-Reasoning-3300x
author: crownelius
cardData: {"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2026-03-02T05:37:24
likes: 164
trendingScore: 54
private: false
sha: 2aaf2ade07cefc9fa733f4ce8d9abdd152e7ec91
description: Opus-4.6-Reasoning-3000x (Cleaned). This dataset has been automatically cleaned to remove: empty or missing responses; responses shorter than 10 characters; refusal responses ("problem is incomplete", "cannot solve", etc.); responses with no substantive content; responses that just echo the problem. Cleaning report: original rows 3,305; clean rows 2,160; removed 1,145 (34.6%). Columns: ['id', 'problem', 'thinking', 'solution', 'difficulty', 'category', 'timestamp', 'hash']… See the full description on the dataset page: https://huggingface.co/datasets/crownelius/Opus-4.6-Reasoning-3300x.
downloads: 2,025
downloadsAllTime: 2,029
tags: [ "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
createdAt: 2026-02-12T21:49:04
paperswithcode_id: null
citation: null
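The cleaning report in this card enumerates concrete removal rules. A minimal sketch of such a filter over hypothetical row dicts with 'problem' and 'solution' fields; the phrase list and helper name are illustrative, not the dataset author's actual code:

```python
# Refusal markers quoted in the card; a real filter would use a longer list
REFUSAL_PHRASES = ("problem is incomplete", "cannot solve")

def is_clean(row: dict) -> bool:
    """Apply the card's stated rules: drop empty, short, refusal, or echo responses."""
    solution = (row.get("solution") or "").strip()
    if len(solution) < 10:  # empty, missing, or shorter than 10 characters
        return False
    lowered = solution.lower()
    if any(p in lowered for p in REFUSAL_PHRASES):  # refusal responses
        return False
    if solution == (row.get("problem") or "").strip():  # just echoes the problem
        return False
    return True

rows = [
    {"problem": "2+2?", "solution": "The answer is 4 because 2+2=4."},
    {"problem": "2+2?", "solution": ""},
    {"problem": "2+2?", "solution": "Sorry, the problem is incomplete."},
    {"problem": "2+2?", "solution": "2+2?"},
]
clean = [r for r in rows if is_clean(r)]
print(len(clean))  # 1
```

The card's own numbers are internally consistent: 3,305 − 1,145 = 2,160 rows remain, and 1,145 / 3,305 ≈ 34.6% removed.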

_id: 69b27063693ba5b211bd0a99
id: markov-ai/computer-use-large
author: markov-ai
cardData:
{"license": "cc-by-4.0", "task_categories": ["video-classification", "robotics"], "language": ["en"], "tags": ["screen-recording", "computer-use", "software-tutorials", "gui", "desktop"], "size_categories": ["10K<n<100K"], "configs": [{"config_name": "autocad", "data_files": [{"split": "train", "path": ["data/autocad/*", "data/autocad_2/*"]}]}, {"config_name": "blender", "data_files": [{"split": "train", "path": ["data/blender/*", "data/blender_2/*"]}]}, {"config_name": "excel", "data_files": [{"split": "train", "path": "data/excel/*"}]}, {"config_name": "photoshop", "data_files": [{"split": "train", "path": ["data/photoshop/*", "data/photoshop_2/*"]}]}, {"config_name": "salesforce", "data_files": [{"split": "train", "path": "data/salesforce/*"}]}, {"config_name": "vscode", "data_files": [{"split": "train", "path": "data/vscode/*"}]}]}
disabled: false
gated: False
lastModified: 2026-03-12T13:40:50
likes: 44
trendingScore: 44
private: false
sha: a2655997b110408aab09cfc55e2d573a2ca59a27
description: Computer Use Large. A large-scale dataset of 48,478 screen recording videos (~12,300 hours) of professional software being used, sourced from the internet. All videos have been trimmed to remove non-screen-recording content (intros, outros, talking heads, transitions) and audio has been stripped. Dataset Summary (Category / Videos / Hours): AutoCAD 10,059 / 2,149; Blender 11,493 / 3,624; Excel 8,111 / 2,002; Photoshop 10,704 / 2,060; Salesforce 7,807 / 2,336; VS Code 304… See the full description on the dataset page: https://huggingface.co/datasets/markov-ai/computer-use-large.
downloads: 45,637
downloadsAllTime: 45,637
tags: [ "task_categories:video-classification", "task_categories:robotics", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "region:us", "screen-recording", "computer-use", "software-tutorials", "gui", "desktop" ]
createdAt: 2026-03-12T07:50:59
paperswithcode_id: null
citation: null
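The per-category video counts in this card's summary can be checked against the stated 48,478 total. A quick sketch; the summary is truncated after "VS Code 304", and taking 304 as the VS Code video count makes the total come out exactly:

```python
# Per-category video counts from the card; VS Code's 304 is the truncated last entry
videos = {
    "AutoCAD": 10_059,
    "Blender": 11_493,
    "Excel": 8_111,
    "Photoshop": 10_704,
    "Salesforce": 7_807,
    "VS Code": 304,
}
total = sum(videos.values())
print(total)  # 48478, matching the card's stated video count
```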

_id: 69a5b45a59ca5dda6cff15a9
id: TuringEnterprises/Open-RL
author: TuringEnterprises
cardData:
{"license": "mit", "language": ["en"], "tags": ["chemistry", "physics", "math", "biology", "science"], "pretty_name": "open-rl", "size_categories": ["n<1K"], "task_categories": ["question-answering"]}
disabled: false
gated: False
lastModified: 2026-03-04T11:24:40
likes: 173
trendingScore: 42
private: false
sha: cef3b89150d73474ec6b9203897ce2d8d2dcd2bf
description: Open-RL. Dataset Summary: this dataset contains self-contained, verifiable, and unambiguous STEM reasoning problems across Physics, Mathematics, Biology, and Chemistry. Each problem requires multi-step reasoning, involves symbolic manipulation and/or numerical computation, and has a deterministic, objectively verifiable final answer. The problems were evaluated against contemporary large language models. Observed pass rates indicate that the tasks are non-trivial yet… See the full description on the dataset page: https://huggingface.co/datasets/TuringEnterprises/Open-RL.
downloads: 11,823
downloadsAllTime: 11,823
tags: [ "task_categories:question-answering", "language:en", "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "chemistry", "physics", "math", "biology", "science" ]
createdAt: 2026-03-02T16:01:30
paperswithcode_id: null
citation: null
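Open-RL's emphasis on deterministic, objectively verifiable final answers is what makes such problems usable as automatic reward signals. A minimal sketch of a numeric verifier; the function name and tolerance are illustrative assumptions, not part of the dataset:

```python
def verify(predicted: str, reference: float, rel_tol: float = 1e-6) -> bool:
    """Return True if the model's final numeric answer matches the reference."""
    try:
        value = float(predicted.strip())
    except ValueError:
        return False  # non-numeric output never matches
    return abs(value - reference) <= rel_tol * max(1.0, abs(reference))

print(verify("42.0", 42.0))   # True
print(verify("oops", 42.0))   # False
```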

_id: 6988f3d2dd11cee339d8c40b
id: karpathy/tinystories-gpt4-clean
author: karpathy
cardData: {"license": "cdla-sharing-1.0"}
disabled: false
gated: False
lastModified: 2026-02-08T21:07:28
likes: 46
trendingScore: 39
private: false
sha: 0397e27157956705a0260709da3095bb9c43d6a7
description: TinyStories GPT-4 Clean. A cleaned subset of the TinyStories dataset (Eldan & Li, 2023), keeping only GPT-4-generated stories. Adapted from this thread that pointed out many issues with the original data and proposed a cleaning process. Overview (Stat / Value): stories 2,732,634; total characters ~2.19B; min doc length 115 chars; max doc length 4,433 chars; median doc length 721 chars; unique characters 74 (ASCII only); duplicates… See the full description on the dataset page: https://huggingface.co/datasets/karpathy/tinystories-gpt4-clean.
downloads: 1,556
downloadsAllTime: 1,581
tags: [ "license:cdla-sharing-1.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2305.07759", "region:us" ]
createdAt: 2026-02-08T20:36:34
paperswithcode_id: null
citation: null
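Corpus stats like those in the TinyStories overview (story count, total characters, min/max/median doc length, unique characters) are straightforward to recompute. A sketch over a toy document list; the field names are illustrative:

```python
from statistics import median

docs = ["Once upon a time...", "A tiny story.", "The end."]

lengths = sorted(len(d) for d in docs)
stats = {
    "stories": len(docs),
    "total_chars": sum(lengths),
    "min_len": lengths[0],
    "max_len": lengths[-1],
    "median_len": median(lengths),
    "unique_chars": len(set("".join(docs))),  # distinct characters across the corpus
}
print(stats["min_len"], stats["max_len"])  # 8 19
```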

_id: 696e2528357a40707550b1c4
id: google/WaxalNLP
author: google
cardData:
{"language_creators": ["creator_1"], "language": ["ach", "aka", "amh", "bau", "dag", "dga", "ewe", "fat", "ful", "hau", "ibo", "kik", "kpo", "lin", "lug", "luo", "mas", "mlg", "nyn", "orm", "pcm", "sid", "sna", "sog", "swa", "tir", "twi", "wal", "yor"], "license": ["cc-by-sa-4.0", "cc-by-4.0"], "multilinguality": ["multilingual"], "source_datasets": ["UGSpeechData", "DigitalUmuganda/AfriVoice", "original"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "pretty_name": "Waxal NLP Datasets", "arxiv": 2602.02734, "annotation_creators": ["human-annotated", "crowdsourced"], "tags": ["audio", "automatic-speech-recognition", "text-to-speech"], "configs": [{"config_name": "ach_asr", "data_files": [{"split": "train", "path": "data/ASR/ach/ach-train-*"}, {"split": "validation", "path": "data/ASR/ach/ach-validation-*"}, {"split": "test", "path": "data/ASR/ach/ach-test-*"}, {"split": "unlabeled", "path": "data/ASR/ach/ach-unlabeled-*"}]}, {"config_name": "ach_tts", "data_files": [{"split": "train", "path": "data/TTS/ach/ach-train-*"}, {"split": "validation", "path": "data/TTS/ach/ach-validation-*"}, {"split": "test", "path": "data/TTS/ach/ach-test-*"}]}, {"config_name": "aka_asr", "data_files": [{"split": "train", "path": "data/ASR/aka/aka-train-*"}, {"split": "validation", "path": "data/ASR/aka/aka-validation-*"}, {"split": "test", "path": "data/ASR/aka/aka-test-*"}, {"split": "unlabeled", "path": "data/ASR/aka/aka-unlabeled-*"}]}, {"config_name": "amh_asr", "data_files": [{"split": "train", "path": "data/ASR/amh/amh-train-*"}, {"split": "validation", "path": "data/ASR/amh/amh-validation-*"}, {"split": "test", "path": "data/ASR/amh/amh-test-*"}, {"split": "unlabeled", "path": "data/ASR/amh/amh-unlabeled-*"}]}, {"config_name": "bau_tts", "data_files": [{"split": "train", "path": "data/TTS/bau/bau-train-*"}, {"split": "validation", "path": "data/TTS/bau/bau-validation-*"}, {"split": "test", "path": "data/TTS/bau/bau-test-*"}]}, {"config_name": "dag_asr", 
"data_files": [{"split": "train", "path": "data/ASR/dag/dag-train-*"}, {"split": "validation", "path": "data/ASR/dag/dag-validation-*"}, {"split": "test", "path": "data/ASR/dag/dag-test-*"}, {"split": "unlabeled", "path": "data/ASR/dag/dag-unlabeled-*"}]}, {"config_name": "dga_asr", "data_files": [{"split": "train", "path": "data/ASR/dga/dga-train-*"}, {"split": "validation", "path": "data/ASR/dga/dga-validation-*"}, {"split": "test", "path": "data/ASR/dga/dga-test-*"}, {"split": "unlabeled", "path": "data/ASR/dga/dga-unlabeled-*"}]}, {"config_name": "ewe_asr", "data_files": [{"split": "train", "path": "data/ASR/ewe/ewe-train-*"}, {"split": "validation", "path": "data/ASR/ewe/ewe-validation-*"}, {"split": "test", "path": "data/ASR/ewe/ewe-test-*"}, {"split": "unlabeled", "path": "data/ASR/ewe/ewe-unlabeled-*"}]}, {"config_name": "ewe_tts", "data_files": [{"split": "train", "path": "data/TTS/ewe/ewe-train-*"}, {"split": "validation", "path": "data/TTS/ewe/ewe-validation-*"}, {"split": "test", "path": "data/TTS/ewe/ewe-test-*"}]}, {"config_name": "fat_tts", "data_files": [{"split": "train", "path": "data/TTS/fat/fat-train-*"}, {"split": "validation", "path": "data/TTS/fat/fat-validation-*"}, {"split": "test", "path": "data/TTS/fat/fat-test-*"}]}, {"config_name": "ful_asr", "data_files": [{"split": "train", "path": "data/ASR/ful/ful-train-*"}, {"split": "validation", "path": "data/ASR/ful/ful-validation-*"}, {"split": "test", "path": "data/ASR/ful/ful-test-*"}, {"split": "unlabeled", "path": "data/ASR/ful/ful-unlabeled-*"}]}, {"config_name": "ful_tts", "data_files": [{"split": "train", "path": "data/TTS/ful/ful-train-*"}, {"split": "validation", "path": "data/TTS/ful/ful-validation-*"}, {"split": "test", "path": "data/TTS/ful/ful-test-*"}]}, {"config_name": "hau_tts", "data_files": [{"split": "train", "path": "data/TTS/hau/hau-train-*"}, {"split": "validation", "path": "data/TTS/hau/hau-validation-*"}, {"split": "test", "path": "data/TTS/hau/hau-test-*"}]}, 
{"config_name": "ibo_tts", "data_files": [{"split": "train", "path": "data/TTS/ibo/ibo-train-*"}, {"split": "validation", "path": "data/TTS/ibo/ibo-validation-*"}, {"split": "test", "path": "data/TTS/ibo/ibo-test-*"}]}, {"config_name": "kik_tts", "data_files": [{"split": "train", "path": "data/TTS/kik/kik-train-*"}, {"split": "validation", "path": "data/TTS/kik/kik-validation-*"}, {"split": "test", "path": "data/TTS/kik/kik-test-*"}]}, {"config_name": "kpo_asr", "data_files": [{"split": "train", "path": "data/ASR/kpo/kpo-train-*"}, {"split": "validation", "path": "data/ASR/kpo/kpo-validation-*"}, {"split": "test", "path": "data/ASR/kpo/kpo-test-*"}, {"split": "unlabeled", "path": "data/ASR/kpo/kpo-unlabeled-*"}]}, {"config_name": "lin_asr", "data_files": [{"split": "train", "path": "data/ASR/lin/lin-train-*"}, {"split": "validation", "path": "data/ASR/lin/lin-validation-*"}, {"split": "test", "path": "data/ASR/lin/lin-test-*"}, {"split": "unlabeled", "path": "data/ASR/lin/lin-unlabeled-*"}]}, {"config_name": "lug_asr", "data_files": [{"split": "train", "path": "data/ASR/lug/lug-train-*"}, {"split": "validation", "path": "data/ASR/lug/lug-validation-*"}, {"split": "test", "path": "data/ASR/lug/lug-test-*"}, {"split": "unlabeled", "path": "data/ASR/lug/lug-unlabeled-*"}]}, {"config_name": "lug_tts", "data_files": [{"split": "train", "path": "data/TTS/lug/lug-train-*"}, {"split": "validation", "path": "data/TTS/lug/lug-validation-*"}, {"split": "test", "path": "data/TTS/lug/lug-test-*"}]}, {"config_name": "luo_tts", "data_files": [{"split": "train", "path": "data/TTS/luo/luo-train-*"}, {"split": "validation", "path": "data/TTS/luo/luo-validation-*"}, {"split": "test", "path": "data/TTS/luo/luo-test-*"}]}, {"config_name": "mas_asr", "data_files": [{"split": "train", "path": "data/ASR/mas/mas-train-*"}, {"split": "validation", "path": "data/ASR/mas/mas-validation-*"}, {"split": "test", "path": "data/ASR/mas/mas-test-*"}, {"split": "unlabeled", "path": 
"data/ASR/mas/mas-unlabeled-*"}]}, {"config_name": "mlg_asr", "data_files": [{"split": "train", "path": "data/ASR/mlg/mlg-train-*"}, {"split": "validation", "path": "data/ASR/mlg/mlg-validation-*"}, {"split": "test", "path": "data/ASR/mlg/mlg-test-*"}, {"split": "unlabeled", "path": "data/ASR/mlg/mlg-unlabeled-*"}]}, {"config_name": "nyn_asr", "data_files": [{"split": "train", "path": "data/ASR/nyn/nyn-train-*"}, {"split": "validation", "path": "data/ASR/nyn/nyn-validation-*"}, {"split": "test", "path": "data/ASR/nyn/nyn-test-*"}, {"split": "unlabeled", "path": "data/ASR/nyn/nyn-unlabeled-*"}]}, {"config_name": "nyn_tts", "data_files": [{"split": "train", "path": "data/TTS/nyn/nyn-train-*"}, {"split": "validation", "path": "data/TTS/nyn/nyn-validation-*"}, {"split": "test", "path": "data/TTS/nyn/nyn-test-*"}]}, {"config_name": "orm_asr", "data_files": [{"split": "train", "path": "data/ASR/orm/orm-train-*"}, {"split": "validation", "path": "data/ASR/orm/orm-validation-*"}, {"split": "test", "path": "data/ASR/orm/orm-test-*"}, {"split": "unlabeled", "path": "data/ASR/orm/orm-unlabeled-*"}]}, {"config_name": "pcm_tts", "data_files": [{"split": "train", "path": "data/TTS/pcm/pcm-train-*"}, {"split": "validation", "path": "data/TTS/pcm/pcm-validation-*"}, {"split": "test", "path": "data/TTS/pcm/pcm-test-*"}]}, {"config_name": "sid_asr", "data_files": [{"split": "train", "path": "data/ASR/sid/sid-train-*"}, {"split": "validation", "path": "data/ASR/sid/sid-validation-*"}, {"split": "test", "path": "data/ASR/sid/sid-test-*"}, {"split": "unlabeled", "path": "data/ASR/sid/sid-unlabeled-*"}]}, {"config_name": "sna_asr", "data_files": [{"split": "train", "path": "data/ASR/sna/sna-train-*"}, {"split": "validation", "path": "data/ASR/sna/sna-validation-*"}, {"split": "test", "path": "data/ASR/sna/sna-test-*"}, {"split": "unlabeled", "path": "data/ASR/sna/sna-unlabeled-*"}]}, {"config_name": "tir_asr", "data_files": [{"split": "train", "path": "data/ASR/tir/tir-train-*"}, 
{"split": "validation", "path": "data/ASR/tir/tir-validation-*"}, {"split": "test", "path": "data/ASR/tir/tir-test-*"}, {"split": "unlabeled", "path": "data/ASR/tir/tir-unlabeled-*"}]}, {"config_name": "sog_asr", "data_files": [{"split": "train", "path": "data/ASR/sog/sog-train-*"}, {"split": "validation", "path": "data/ASR/sog/sog-validation-*"}, {"split": "test", "path": "data/ASR/sog/sog-test-*"}, {"split": "unlabeled", "path": "data/ASR/sog/sog-unlabeled-*"}]}, {"config_name": "swa_tts", "data_files": [{"split": "train", "path": "data/TTS/swa/swa-train-*"}, {"split": "validation", "path": "data/TTS/swa/swa-validation-*"}, {"split": "test", "path": "data/TTS/swa/swa-test-*"}]}, {"config_name": "twi_tts", "data_files": [{"split": "train", "path": "data/TTS/twi/twi-train-*"}, {"split": "validation", "path": "data/TTS/twi/twi-validation-*"}, {"split": "test", "path": "data/TTS/twi/twi-test-*"}]}, {"config_name": "yor_tts", "data_files": [{"split": "train", "path": "data/TTS/yor/yor-train-*"}, {"split": "validation", "path": "data/TTS/yor/yor-validation-*"}, {"split": "test", "path": "data/TTS/yor/yor-test-*"}]}, {"config_name": "wal_asr", "data_files": [{"split": "train", "path": "data/ASR/wal/wal-train-*"}, {"split": "validation", "path": "data/ASR/wal/wal-validation-*"}, {"split": "test", "path": "data/ASR/wal/wal-test-*"}, {"split": "unlabeled", "path": "data/ASR/wal/wal-unlabeled-*"}]}], "dataset_info": [{"config_name": "ach_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ach_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, 
{"config_name": "aka_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "amh_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "bau_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dag_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dga_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ewe_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ewe_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "fat_tts", "features": 
[{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "fuf_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "hau_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ibo_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "kik_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "kpo_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, 
{"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lin_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "luo_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mas_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mlg_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": 
"string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "orm_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "pcm_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sid_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sna_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sog_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "swa_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, 
{"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "tir_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "twi_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "wal_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "yor_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}]}
disabled: false
gated: False
lastModified: 2026-03-13T11:58:41
likes: 196
trendingScore: 33
private: false
sha: beab143ae6d8a5e054281241afd76565ecb57e03
description: Waxal Datasets. The WAXAL dataset is a large-scale multilingual speech corpus for African languages, introduced in the paper WAXAL: A Large-Scale Multilingual African Language Speech Corpus. Dataset Description: the Waxal project provides datasets for both automatic speech recognition (ASR) and text-to-speech (TTS) for African languages. The goal of this dataset's creation and release is to facilitate research that improves the accuracy and fluency of speech and language… See the full description on the dataset page: https://huggingface.co/datasets/google/WaxalNLP.
downloads: 10,447
downloadsAllTime: 19,079
tags: [ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "language_creators:creator_1", "multilinguality:multilingual", "source_datasets:UGSpeechData", "source_datasets:DigitalUmuganda/AfriVoice", "source_datasets:original", "language:ach", "language:aka", "language:amh", ...
createdAt: 2026-01-19T12:35:52
paperswithcode_id: null
citation: null
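Waxal's configs follow a uniform naming and path scheme: {lang}_asr with train/validation/test/unlabeled globs under data/ASR/{lang}/, and {lang}_tts with three splits under data/TTS/{lang}/. A sketch that regenerates an ASR config entry from a language code, matching the entries in the card:

```python
def asr_config(lang: str) -> dict:
    """Rebuild a Waxal ASR config entry from its ISO 639 language code."""
    return {
        "config_name": f"{lang}_asr",
        "data_files": [
            {"split": split, "path": f"data/ASR/{lang}/{lang}-{split}-*"}
            for split in ("train", "validation", "test", "unlabeled")
        ],
    }

print(asr_config("ach")["config_name"])  # ach_asr
```

The TTS variant differs only in the data/TTS/ prefix and the absence of the unlabeled split.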

_id: 698a9b89700a694a5b97db6f
id: AudioVisual-Caption/ASID-1M
author: AudioVisual-Caption
cardData:
{"license": "cc-by-2.0", "language": ["en"], "pretty_name": "ASID-1M", "tags": ["caption", "audiovisual", "instruction-tuning", "attribute-structured", "quality-verified", "video-understanding"], "task_categories": ["image-text-to-text"], "configs": [{"config_name": "all_attributes", "data_files": [{"split": "train", "path": ["annotations/0_30_s_youtube_v0_1/train/all_attributes_0_30_s_youtube_v0_1.jsonl", "annotations/30_60_s_youtube_v0_1/train/all_attributes_30_60_s_youtube_v0_1.jsonl", "annotations/1_2_m_youtube_v0_1/train/all_attributes_1_2_m_youtube_v0_1.jsonl", "annotations/finevideo/train/all_attributes_finevideo.jsonl"]}]}, {"config_name": "single_attribute", "data_files": [{"split": "train", "path": ["annotations/0_30_s_youtube_v0_1/train/single_attribute_0_30_s_youtube_v0_1.jsonl", "annotations/30_60_s_youtube_v0_1/train/single_attribute_30_60_s_youtube_v0_1.jsonl", "annotations/1_2_m_youtube_v0_1/train/single_attribute_1_2_m_youtube_v0_1.jsonl", "annotations/finevideo/train/single_attribute_finevideo.jsonl"]}]}]}
disabled: false
gated: False
lastModified: 2026-03-11T12:26:08
likes: 68
trendingScore: 29
private: false
sha: 209550390d32c41cb138a8503f82a663a4da357d
description: ASID-1M: Attribute-Structured and Quality-Verified Audiovisual Instructions. [🏠 Homepage] [📖 Arxiv Paper] [🤗 Models & Datasets] [💻 Code] Introduction: we introduce ASID-1M, a large-scale audiovisual instruction dataset built to support universal video understanding with fine-grained, controllable supervision. Most existing video-instruction data represents complex audiovisual content as a single, monolithic caption. This often leads to incomplete coverage (missing audio… See the full description on the dataset page: https://huggingface.co/datasets/AudioVisual-Caption/ASID-1M.
downloads: 1,948
downloadsAllTime: 2,008
tags: [ "task_categories:image-text-to-text", "language:en", "license:cc-by-2.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2602.13013", "region:us", "caption", "audiovisual", "instruction-tu...
createdAt: 2026-02-10T02:44:25
paperswithcode_id: null
citation: null

_id: 69a5c92559ca5dda6c00b2f8
id: Jackrong/Qwen3.5-reasoning-700x
author: Jackrong
cardData:
{"license": "apache-2.0", "language": ["en"], "tags": ["reasoning", "math", "distillation", "instruction-tuning", "chain-of-thought", "qwen", "qwen3.5"], "task_categories": ["question-answering"], "size_categories": ["n<1K"]}
disabled: false
gated: False
lastModified: 2026-03-02T17:44:52
likes: 38
trendingScore: 29
private: false
sha: 1b6c703da5319ded200d9e7c91e0b57b4a7c922c
description: Dataset Card (Qwen3.5-reasoning-700x). Dataset Summary: Qwen3.5-reasoning-700x is a high-quality distilled dataset. It uses the instructions constructed by Alibaba-Superior-Reasoning-Stage2 as the seed question set and calls the latest Qwen3.5-27B full-parameter model on the Alibaba Cloud DashScope platform as the teacher model to generate high-quality responses featuring long-form chain-of-thought reasoning. It covers several major… See the full description on the dataset page: https://huggingface.co/datasets/Jackrong/Qwen3.5-reasoning-700x.
downloads: 690
downloadsAllTime: 690
tags: [ "task_categories:question-answering", "language:en", "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "reasoning", "math", "distillation", "instruction-tuning", "cha...
createdAt: 2026-03-02T17:30:13
paperswithcode_id: null
citation: null
69a7282144067eabb6017453
ronantakizawa/github-codereview
ronantakizawa
{"license": "other", "task_categories": ["text-generation"], "language": ["en", "code"], "tags": ["code-review", "code-generation", "software-engineering", "pull-requests", "github"], "size_categories": ["100K<n<1M"]}
false
False
2026-03-10T00:59:34
35
29
false
c3e3c6e7e9f61e3e7a5b52894bcd440d586ae6ca
Code Review Dataset A large-scale dataset of the best human-written code reviews from top GitHub repositories. Each row captures a moment where a human code reviewer left an inline comment on a pull request, and the author subsequently modified the code in response. The dataset also includes negative examples — code from the same PRs that passed review without comments — to help models learn when code is acceptable. This provides a natural signal for training models to: Generate… See the full description on the dataset page: https://huggingface.co/datasets/ronantakizawa/github-codereview.
306
306
[ "task_categories:text-generation", "language:en", "language:code", "license:other", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us", "code-review", "code-generatio...
2026-03-03T18:27:45
null
null
69afdb9aea6ad7cbfa28b5fe
ginigen-ai/smol-worldcup
ginigen-ai
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "shift_axis", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "subcategory", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "answer_key", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "grading_rule", "dtype": "string"}, {"name": "auto_grade", "dtype": "string"}, {"name": "max_score", "dtype": "int64"}, {"name": "anchor", "dtype": "bool"}, {"name": "season", "dtype": "int64"}, {"name": "version", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_name", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 125}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "smol_worldcup_s1.jsonl"}]}], "license": "apache-2.0", "task_categories": ["text-generation", "question-answering"], "language": ["en", "ko", "ar", "pt", "tr", "bn", "th"], "tags": ["benchmark", "small-language-models", "SHIFT-framework", "WCS", "honesty", "hallucination-detection", "smol-ai-worldcup", "evaluation", "multilingual", "edge-ai", "PIR"], "pretty_name": "\ud83c\udfdf\ufe0f Smol AI WorldCup \u2014 SHIFT Benchmark", "size_categories": ["n<1K"], "models": ["meta-llama/Llama-3.2-1B-Instruct", "Qwen/Qwen3-1.7B", "openai/gpt-oss-20b", "CohereLabs/tiny-aya-fire", "Qwen/Qwen3-4B-Instruct-2507", "google/gemma-3n-E4B-it", "zai-org/GLM-4.7-Flash", "mistralai/Mistral-7B-Instruct-v0.2", "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "Qwen/Qwen3-8B", "meta-llama/Llama-3.1-8B-Instruct", "nvidia/Llama-3.1-Nemotron-Nano-8B-v1", "Qwen/Qwen3.5-9B", "allenai/Olmo-3-7B-Instruct", "google/gemma-3-12b-it", "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "Qwen/Qwen3.5-35B-A3B", "meta-llama/Llama-4-Scout-17B-16E-Instruct"]}
false
False
2026-03-10T14:47:44
29
29
false
a304802ece2692d2beb3b3a62bf67c50b7f3c60b
🏟️ Smol AI WorldCup — SHIFT Benchmark The world's first 5-axis evaluation framework for small language models. Not just "how smart?" — but "how honest? how fast? how small? how efficient?" 🏟️ Leaderboard huggingface.co/spaces/ginigen-ai/smol-worldcup 📊 Dataset huggingface.co/datasets/ginigen-ai/smol-worldcup 🏅 ALL Bench huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard 🏆 Official Ranking: WCS (WorldCup Score) WCS = √( SHIFT × PIR_norm )… See the full description on the dataset page: https://huggingface.co/datasets/ginigen-ai/smol-worldcup.
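The WCS formula in the card is the geometric mean of the SHIFT score and the normalized PIR score. A minimal sketch of that combination (the function name and the example scores are illustrative, not taken from the leaderboard):

```python
import math

def worldcup_score(shift: float, pir_norm: float) -> float:
    """WCS = sqrt(SHIFT * PIR_norm): the geometric mean of the two axes."""
    # A geometric mean rewards balance: a zero on either axis drives the
    # combined score to zero, unlike an arithmetic mean.
    return math.sqrt(shift * pir_norm)

# Illustrative inputs only (not real leaderboard values):
print(worldcup_score(0.64, 0.81))  # ≈ 0.72
```

The geometric mean is a common design choice for composite leaderboard scores precisely because it penalizes models that optimize one axis at the expense of the other.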
1,311
1,311
[ "task_categories:text-generation", "task_categories:question-answering", "language:en", "language:ko", "language:ar", "language:pt", "language:tr", "language:bn", "language:th", "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:tabular", "modality:text", "library:dat...
2026-03-10T08:51:38
null
null
69af21616259df956494b1ce
yatin-superintelligence/Edge-Agent-Reasoning-WebSearch-260K
yatin-superintelligence
{"pretty_name": "Edge Agent Reasoning WebSearch 260K", "license": "mit", "language": ["en"], "library_name": "datasets", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "question-answering", "any-to-any", "robotics"], "tags": ["text", "3d", "image", "synthetic", "agentic", "reasoning", "RAG", "system-2", "chain-of-thought", "web-search", "document", "edge-ai", "tool-use", "software", "engineering", "code", "legal", "medical", "healthcare", "biology", "chemistry", "finance", "science", "climate", "art", "design", "music", "audio", "video", "agent", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_index_id", "dtype": "int64"}, {"name": "role", "dtype": "string"}, {"name": "industry", "dtype": "string"}, {"name": "os", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}, {"name": "agent_reasoning", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 712900000, "num_examples": 263098}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "edge_reasoning_train_*.parquet"}]}]}
false
False
2026-03-13T21:42:21
23
23
false
7e8e455fff52e6d21dce4ce4a5a1bddd13031e1a
Edge Agent Reasoning WebSearch 260K Abstract The Edge-Agent-Reasoning-WebSearch-260K dataset is a large, synthetically generated, expert-engineered corpus of over 700 million tokens, designed to train small local language models (SLMs) and edge-deployed agents in advanced problem deconstruction and self-aware reasoning. Rather than training a model to execute instructions directly—which often leads to hallucinations when context is missing—this dataset trains a model to act as a… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Edge-Agent-Reasoning-WebSearch-260K.
1,275
1,275
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "task_categories:robotics", "language:en", "license:mit", "size_categories:100K<n<1M", "format:parquet", "modality:text", "modality:3d", "modality:image", "modality:document", "modality:aud...
2026-03-09T19:37:05
null
null
67e4291146baf23164358d53
nvidia/Nemotron-ClimbMix
nvidia
{"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": "*.jsonl"}]}
false
False
2025-10-21T15:05:35
82
19
false
5eaa64b9c0c85b7f56af01d7dffdb0795816b12b
ClimbMix Dataset 🚀 Creating the highest-quality pre-training datasets for LLMs 🌟 📄 PAPER 🤗 CLIMBLAB 🤗 CLIMBMIX 🏠 HOMEPAGE Figure 1: Continuously training a 1B model yields a 2.0% improvement over Llama-3.2-1B, demonstrating a more efficient scaling trend compared to prior models. Figure 2: Pre-training a 1B model from scratch on ClimbMix shows better scaling effects than training on other datasets.… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-ClimbMix.
9,103
37,549
[ "task_categories:text-generation", "language:en", "license:cc-by-nc-4.0", "size_categories:100M<n<1B", "format:json", "modality:tabular", "library:datasets", "library:dask", "library:mlcroissant", "arxiv:2504.13161", "region:us" ]
2025-03-26T16:19:29
null
null
6996711477c275fd9adb7137
nvidia/Nemotron-Terminal-Corpus
nvidia
{"license": "cc-by-4.0", "task_categories": ["question-answering"], "language": ["en"], "tags": ["code"], "size_categories": ["100K<n<1M"], "configs": [{"config_name": "dataset_adapters", "data_files": [{"split": "train", "path": "dataset_adapters/*.parquet"}]}, {"config_name": "skill_based_easy", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/easy/*/data_filtered.parquet"}]}, {"config_name": "skill_based_medium", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/medium/*/data_filtered.parquet"}]}, {"config_name": "skill_based_mixed", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/mixed/*/data_filtered.parquet"}]}]}
false
False
2026-02-27T22:37:57
95
18
false
a1667c4ffdadea02a89bffe4f1bb7ca2ff19f8d9
Terminal-Corpus: Large-Scale SFT Dataset for Terminal Agents Terminal-Corpus is a large-scale Supervised Fine-Tuning (SFT) dataset designed to scale the terminal interaction capabilities of Large Language Models (LLMs). Developed by NVIDIA, this dataset was built using the Terminal-Task-Gen pipeline, which combines dataset adaptation with synthetic task generation across diverse domains. 🚀 Key Results & Performance The high-quality trajectories in Terminal-Corpus enable… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Terminal-Corpus.
2,475
2,475
[ "task_categories:question-answering", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2602.21193", "region:us", "code" ]
2026-02-19T02:10:28
null
null
69acbc6d461c6ec304b2b943
FINAL-Bench/ALL-Bench-Leaderboard
FINAL-Bench
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": "mit", "pretty_name": "ALL Bench Leaderboard 2026", "size_categories": ["n<1K"], "source_datasets": ["original"], "tags": ["benchmark", "leaderboard", "llm", "vlm", "ai-evaluation", "gpt-5", "claude", "gemini", "final-bench", "metacognition", "multimodal", "ai-agent", "image-generation", "video-generation", "music-generation", "union-eval"], "task_categories": ["text-generation", "visual-question-answering", "text-to-image", "text-to-video", "text-to-audio"], "configs": [{"config_name": "llm", "data_files": [{"split": "train", "path": "data/llm.jsonl"}]}, {"config_name": "vlm_flagship", "data_files": [{"split": "train", "path": "data/vlm_flagship.jsonl"}]}, {"config_name": "agent", "data_files": [{"split": "train", "path": "data/agent.jsonl"}]}, {"config_name": "image", "data_files": [{"split": "train", "path": "data/image.jsonl"}]}, {"config_name": "video", "data_files": [{"split": "train", "path": "data/video.jsonl"}]}, {"config_name": "music", "data_files": [{"split": "train", "path": "data/music.jsonl"}]}], "models": ["Qwen/Qwen3.5-122B-A10B", "Qwen/Qwen3.5-27B", "Qwen/Qwen3.5-35B-A3B", "Qwen/Qwen3.5-9B", "Qwen/Qwen3.5-4B", "Qwen/Qwen3-Next-80B-A3B-Thinking", "deepseek-ai/DeepSeek-V3", "deepseek-ai/DeepSeek-R1", "zai-org/GLM-5", "meta-llama/Llama-4-Scout-17B-16E-Instruct", "meta-llama/Llama-4-Maverick-17B-128E-Instruct", "microsoft/phi-4", "upstage/Solar-Open-100B", "K-intelligence/Midm-2.0-Base-Instruct", "Nanbeige/Nanbeige4.1-3B", "MiniMaxAI/MiniMax-M2.5", "stepfun-ai/Step-3.5-Flash", "OpenGVLab/InternVL3-78B", "Qwen/Qwen2.5-VL-72B-Instruct", "Qwen/Qwen3-VL-30B-A3B", "black-forest-labs/FLUX.1-dev", "stabilityai/stable-diffusion-3.5-large", "Lightricks/LTX-Video", "facebook/musicgen-large", "facebook/jasco-chords-drums-melody-1B"]}
false
False
2026-03-10T02:42:24
18
18
false
7378ede0b4776f0bf97f8e106bb9d603c80a5074
🏆 ALL Bench Leaderboard 2026 The only AI benchmark dataset covering LLM · VLM · Agent · Image · Video · Music in a single unified file. Dataset Summary ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for 90+ AI models across 6 modalities. Every numerical score is tagged with a confidence level (cross-verified, single-source, or self-reported) and its original source. The dataset is designed for researchers, developers, and… See the full description on the dataset page: https://huggingface.co/datasets/FINAL-Bench/ALL-Bench-Leaderboard.
1,592
1,592
[ "task_categories:text-generation", "task_categories:visual-question-answering", "task_categories:text-to-image", "task_categories:text-to-video", "task_categories:text-to-audio", "annotations_creators:expert-generated", "source_datasets:original", "language:en", "license:mit", "size_categories:n<1...
2026-03-08T00:01:49
null
null
69981ebb8794c09b40ce6b1e
Oatmealliu/UrbanVerse-100K
Oatmealliu
{"license": "odc-by", "language": ["en"], "pretty_name": "UrbanVerse-100K", "size_categories": ["100K<n<1M"], "task_categories": ["robotics", "text-to-3d", "image-to-3d", "reinforcement-learning", "image-to-text", "text-to-image"], "tags": ["3d", "Robotics", "PhysicalAI", "EmbodiedAI", "Objects", "3DAssets", "UrbanSimulation", "IsaacSim", "IsaacLab"], "extra_gated_fields": {"Full Name": "text", "Email Address": "text", "Country": "country", "Institution": "text", "Sector of Institution": {"type": "select", "options": ["Academic/Education", "Corporation", "Startup", "Government", "Non-profit Organization", "Individual", "Other"]}, "Purpose": {"type": "select", "options": ["Embodied AI", "Physical AI", "3D Generation", "Reinforcement Learning", "Imitation Learning", "Computer Vision", "Autonomous Driving", "Generative Models", "Multimodal Large Language Models", "Visual Question Answering"]}, "I accept the conditions and licenses of the files contained in this dataset": "checkbox"}}
false
manual
2026-03-11T10:40:11
15
15
false
5625b8038308e5c25320da1d1ddc952f8a291686
UrbanVerse-100K Dataset UrbanVerse-100K is a large-scale, physics-aware 3D asset and material database curated for urban simulation, physical and embodied AI research. It contains over 102K metric-scale urban object assets (GLB/USD), along with 646 4K sky maps (HDR) and 403 4K ground (road/sidewalk/terrain) materials (MDL), each annotated with rich semantic and physical attributes. The dataset is IsaacSim-ready, enabling scalable construction of realistic urban… See the full description on the dataset page: https://huggingface.co/datasets/Oatmealliu/UrbanVerse-100K.
7,854
7,854
[ "task_categories:robotics", "task_categories:text-to-3d", "task_categories:image-to-3d", "task_categories:reinforcement-learning", "task_categories:image-to-text", "task_categories:text-to-image", "language:en", "license:odc-by", "size_categories:100K<n<1M", "modality:3d", "arxiv:2510.15018", ...
2026-02-20T08:43:39
null
null
69a70420de30b37a2f37ccca
karpathy/climbmix-400b-shuffle
karpathy
{"license": "mit"}
false
False
2026-03-03T17:02:01
16
15
false
915333b4f8b8684f39aeaafea600fea6f43fb703
null
25,038
25,038
[ "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-03-03T15:54:08
null
null
625552d2b339bb03abe3432d
openai/gsm8k
openai
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]}
false
False
2025-12-20T18:53:44
1,196
14
false
cc7b047b6e5bb11b4f1af84efc572db110a51b3c
Dataset Card for GSM8K Dataset Summary GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning. These problems take between 2 and 8 steps to solve. Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+, −, ×, ÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k.
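GSM8K solutions in the `main` config end with a final line of the form `#### <answer>`, and a common evaluation step is extracting that number from the model's or reference's solution text. A minimal sketch assuming that answer format (the helper name and the sample solution are illustrative):

```python
def extract_gsm8k_answer(answer: str) -> str:
    """Return the final answer after the last '####' marker in a GSM8K solution."""
    # GSM8K reference answers end with a line like '#### 72'.
    return answer.rsplit("####", 1)[-1].strip()

sample = (
    "Natalia sold 48 clips in April and half as many in May.\n"
    "48 / 2 = 24 clips in May.\n"
    "48 + 24 = 72 clips in total.\n"
    "#### 72"
)
print(extract_gsm8k_answer(sample))  # 72
```

Note that extracted answers may contain thousands separators (e.g. `1,234`), so exact-match scoring usually normalizes commas before comparison.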
598,656
9,667,975
[ "benchmark:official", "task_categories:text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:dat...
2022-04-12T10:22:10
gsm8k
null
69a0ac7cc1f01f9b6b9031de
BytedTsinghua-SIA/CUDA-Agent-Ops-6K
BytedTsinghua-SIA
{"license": "cc-by-4.0", "pretty_name": "CUDA-Agent-Ops-6K", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "language": ["en"]}
false
False
2026-02-27T19:56:56
56
14
false
44a734c78c947bfcba5189cbfd13f57a6d29a698
CUDA-Agent-Ops-6K CUDA-Agent-Ops-6K is a curated training dataset for CUDA kernel generation and optimization. It is released as part of the CUDA-Agent project: Project Page: https://CUDA-Agent.github.io/ Github Repo: https://github.com/BytedTsinghua-SIA/CUDA-Agent Dataset Summary CUDA-Agent-Ops-6K contains 6,000 synthesized operator-level training tasks designed for large-scale agentic RL training. It is intended to provide diverse and executable CUDA-oriented training… See the full description on the dataset page: https://huggingface.co/datasets/BytedTsinghua-SIA/CUDA-Agent-Ops-6K.
516
516
[ "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-02-26T20:26:36
null
null
699e0810251cac84be7d52ba
peteromallet/dataclaw-peteromallet
peteromallet
{"license": "mit", "task_categories": ["text-generation"], "language": ["en"], "tags": ["dataclaw", "claude-code", "codex-cli", "conversations", "coding-assistant", "tool-use", "agentic-coding", "claude-haiku-4-5-20251001", "claude-opus-4-5-20251101", "claude-opus-4-6", "claude-sonnet-4-5-20250929", "claude-sonnet-4-6"], "pretty_name": "Coding Agent Conversations", "configs": [{"config_name": "default", "data_files": "conversations.jsonl"}]}
false
False
2026-02-25T16:14:13
287
13
false
b925056b0539a8bd28a06417dca464aac6ba7bdb
Coding Agent Conversation Logs This is a performance art project. Anthropic built their models on the world's freely shared information, then introduced increasingly dystopian data policies to stop anyone else from doing the same — pulling up the ladder behind them. DataClaw lets you throw the ladder back down. The dataset it produces is yours to share. Exported with DataClaw. Tag: dataclaw — Browse all DataClaw datasets Stats Metric Value Sessions 549… See the full description on the dataset page: https://huggingface.co/datasets/peteromallet/dataclaw-peteromallet.
9,878
9,878
[ "task_categories:text-generation", "language:en", "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "dataclaw", "claude-code", "codex-cli", "conversations", "coding-assistan...
2026-02-24T20:20:32
null
null
69af2c4fe58f63b685b08d5c
yatin-superintelligence/Creative-Professionals-Agentic-Tasks-1M
yatin-superintelligence
{"pretty_name": "Creative Professionals Agentic Tasks (1M)", "language": ["en"], "license": "mit", "library_name": "datasets", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "question-answering", "any-to-any"], "tags": ["text", "audio", "video", "3d", "image", "art", "music", "code", "agent", "agentic-tasks", "frontend-development", "ui-ux-design", "game-ui", "3d-animation", "cgi", "vfx", "video-editing", "nonlinear-editing", "music-production", "audio-engineering", "sound-design", "brand-design", "photo-editing", "tool-use", "synthetic", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_id", "dtype": "int64"}, {"name": "index_id", "dtype": "int64"}, {"name": "professional", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 1070930}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "creative_pro_tasks_train_*.parquet"}]}]}
false
False
2026-03-13T14:45:14
13
13
false
620e36077ad9325ef19ae8caa4389175272b1c41
Creative Professionals Agentic Tasks (1M) Abstract A massive-scale, high-fidelity synthetic task dataset comprising 1,070,917 agentic command operations across 36 creative, technical, and engineering software environments. This dataset is engineered exclusively to stress-test, evaluate, and fine-tune multimodal AI agents designed for Agent Environment operation, complex software interaction, and multi-step reasoning within deep software infrastructures.… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Creative-Professionals-Agentic-Tasks-1M.
835
835
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "language:en", "license:mit", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "modality:audio", "modality:video", "modality:3d", "modality:image", "libr...
2026-03-09T20:23:43
null
null
656523d6bfb751371817c448
Idavidrein/gpqa
Idavidrein
{"license": "cc-by-4.0", "viewer": true, "extra_gated_prompt": "You agree to NOT reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation model training corpora.", "extra_gated_fields": {"I accept these terms": "checkbox"}, "configs": [{"config_name": "gpqa_extended", "data_files": "gpqa_extended.csv"}, {"config_name": "gpqa_main", "data_files": "gpqa_main.csv"}, {"config_name": "gpqa_diamond", "data_files": "gpqa_diamond.csv"}, {"config_name": "gpqa_experts", "data_files": "gpqa_experts.csv"}], "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["open-domain-qa", "open-book-qa", "multiple-choice-qa"], "pretty_name": "GPQA", "size_categories": ["n<1K"]}
false
auto
2026-03-05T23:06:58
385
12
false
633f5ee89ab8ad4522a9f850766b73f62147ffdd
Dataset Card for GPQA GPQA is a multiple-choice, Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions out of their own domain (e.g., a physicist answers a chemistry question), these experts get only 34% accuracy, despite spending >30m with full access to Google. We request that you do not reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation model… See the full description on the dataset page: https://huggingface.co/datasets/Idavidrein/gpqa.
105,756
1,440,182
[ "benchmark:official", "benchmark:eval-yaml", "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:polars", "...
2023-11-27T23:18:46
null
null
66212f29fb07c3e05ad0432e
HuggingFaceFW/fineweb
HuggingFaceFW
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, 
{"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": 
"CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, 
{"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
false
False
2025-07-11T20:16:53
2,699
12
false
9bb295ddab0e05d785b879661af7260fed5140fc
🍷 FineWeb 15 trillion tokens of the finest data the 🌐 web has to offer What is it? The 🍷 FineWeb dataset consists of more than 18.5T tokens (originally 15T tokens) of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and runs on 🏭 datatrove, our large-scale data processing library. 🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb.
168,887
6,441,547
[ "task_categories:text-generation", "language:en", "license:odc-by", "size_categories:10B<n<100B", "modality:tabular", "modality:text", "arxiv:2306.01116", "arxiv:2109.07445", "arxiv:2406.17557", "doi:10.57967/hf/2493", "region:us" ]
2024-04-18T14:33:13
null
null
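The FineWeb card above enumerates one config per CommonCrawl snapshot, named `CC-MAIN-<year>-<week>`. A minimal sketch of streaming one snapshot with the Hugging Face `datasets` library — the helper name and the chosen snapshot are illustrative assumptions, not part of the card:

```python
def cc_main_config(year: int, week: int) -> str:
    """Format a crawl-snapshot config name as listed on the card (CC-MAIN-YYYY-WW)."""
    return f"CC-MAIN-{year}-{week:02d}"


def stream_snapshot(year: int, week: int):
    """Lazily stream one snapshot config; requires network access and the
    `datasets` package, so the import is kept local to this helper."""
    from datasets import load_dataset

    return load_dataset(
        "HuggingFaceFW/fineweb",
        name=cc_main_config(year, week),
        split="train",
        streaming=True,
    )


# Example (network required): peek at the first document of one snapshot.
# for doc in stream_snapshot(2019, 4).take(1):
#     print(doc["text"][:200])
```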
6928ac839f54f92be8b78d70
TeichAI/claude-4.5-opus-high-reasoning-250x
TeichAI
null
false
False
2025-11-28T03:02:41
326
12
false
742c86f88b66bf53cb5961a25e4360f5582f4a6e
This is a reasoning dataset created using Claude Opus 4.5 with reasoning depth set to high. Some of these questions are from reedmayhew and the rest were generated. The dataset is meant for creating distilled versions of Claude Opus 4.5 by fine-tuning existing open-source LLMs. Stats Cost: $52.30 (USD) Total tokens (input + output): 2.13M
3,048
17,504
[ "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-27T19:54:43
null
null
69af2a96484ef491320cc3c1
yatin-superintelligence/Audio-Video-Engineering-Agentic-Tasks-1M
yatin-superintelligence
{"pretty_name": "Audio/Video Engineering Agentic Tasks (1M)", "language": ["en"], "license": "mit", "library_name": "datasets", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "question-answering", "any-to-any"], "tags": ["text", "audio", "video", "music", "art", "media-production", "digital-audio-workstation", "nonlinear-editing", "agent", "agentic-tasks", "music-composition", "music-production", "sound-design", "video-editing", "tool-use", "troubleshooting", "synthetic", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_id", "dtype": "int64"}, {"name": "index", "dtype": "int64"}, {"name": "professional", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 1031068}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "av_agentic_tasks_train_*.parquet"}]}]}
false
False
2026-03-13T14:45:10
12
12
false
d2157d1d602b06aee35618a4ef841a489e85b3d1
Audio/Video Engineering Agentic Tasks (1M) Abstract A highly specialized dataset comprising 1,029,459 in-context troubleshooting prompts and execution commands built for the deepest levels of media production. Unlike standard datasets that simulate clean, theoretical instructions, this matrix captures the chaotic, highly detailed, and conversational reality of professional audio engineers, composers, and video editors mid-session. It is engineered to train multimodal AI… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Audio-Video-Engineering-Agentic-Tasks-1M.
320
320
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "language:en", "license:mit", "size_categories:1M<n<10M", "modality:tabular", "modality:text", "modality:audio", "modality:video", "library:datasets", "library:pandas", "library:polars", ...
2026-03-09T20:16:22
null
null
69b03aa205292d5180b6fc1e
maikezu/dowis
maikezu
{"license": "cc-by-4.0", "language": ["de", "en", "es", "cs", "fr", "hu", "it", "nl", "pt", "ru", "sq", "sv"], "tags": ["speech prompts", "text prompts", "instruction following", "benchmark"], "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "text_prompt", "dtype": "string"}, {"name": "audio_prompt_female_1", "dtype": "audio"}, {"name": "audio_prompt_female_2", "dtype": "audio"}, {"name": "audio_prompt_male_1", "dtype": "audio"}, {"name": "audio_prompt_male_2", "dtype": "audio"}, {"name": "language", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "prompt_type", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2704378267.6, "num_examples": 1320}], "download_size": 1772318018, "dataset_size": 2704378267.6}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
false
False
2026-03-12T09:17:21
12
12
false
40cebb56cbc5145a9c52555939dc0859188ea42b
Do What I Say (DOWIS): A Spoken Prompt Dataset for Instruction-Following NEW DOWIS now also contains spoken and written prompts in Albanian (sq), and for the tasks LIPREAD and SLU! TL;DR — DOWIS is a multilingual dataset of human-recorded spoken and written instruction prompts, designed to enable realistic evaluation of Speech Large Language Models across 11 tasks and 12 languages. Dataset Summary Most Speech LLM benchmarks use text-based prompts, which does… See the full description on the dataset page: https://huggingface.co/datasets/maikezu/dowis.
115
115
[ "language:de", "language:en", "language:es", "language:cs", "language:fr", "language:hu", "language:it", "language:nl", "language:pt", "language:ru", "language:sq", "language:sv", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:audio", "modality:text", ...
2026-03-10T15:37:06
null
null
6655eb19d17e141dcb546ed5
HuggingFaceFW/fineweb-edu
HuggingFaceFW
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb-Edu", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}], "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}, {"name": "token_count", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": 
[{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": 
"CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, 
{"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
false
False
2025-07-11T20:16:53
987
11
false
87f09149ef4734204d70ed1d046ddc9ca3f2b8f9
📚 FineWeb-Edu 1.3 trillion tokens of the finest educational data the 🌐 web has to offer Paper: https://arxiv.org/abs/2406.17557 What is it? The 📚 FineWeb-Edu dataset consists of 1.3T tokens and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from the 🍷 FineWeb dataset. This is the 1.3 trillion version. To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct. We then… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu.
222,740
6,115,881
[ "task_categories:text-generation", "language:en", "license:odc-by", "size_categories:1B<n<10B", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2406.17557", "arxiv:2404.14219", "arxiv:2401.10020", ...
2024-05-28T14:32:57
null
null
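The FineWeb-Edu card above lists `score` and `int_score` among the per-document features (annotations from the educational-quality classifier). A small sketch of filtering a streamed sample by that integer score; the threshold of 4 and the helper names are assumptions for illustration, not values stated on the card:

```python
def is_educational(doc: dict, threshold: int = 4) -> bool:
    """True when the document's integer classifier score meets the threshold.
    `int_score` is a per-document feature listed on the card above."""
    return doc.get("int_score", 0) >= threshold


def top_quality_docs(threshold: int = 4):
    """Stream the sample-10BT config and keep only high-scoring documents
    (requires network access and the `datasets` package)."""
    from datasets import load_dataset

    ds = load_dataset(
        "HuggingFaceFW/fineweb-edu",
        name="sample-10BT",
        split="train",
        streaming=True,
    )
    return (doc for doc in ds if is_educational(doc, threshold))
```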
6969d8ba29be2bd1483adfb7
nvidia/Nemotron-Pretraining-Specialized-v1.1
nvidia
{"license": "cc-by-4.0", "task_categories": ["text-generation"], "track_downloads": true, "configs": [{"config_name": "Nemotron-Pretraining-Formal-Logic", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Formal-Logic/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Economics", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Economics/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Multiple-Choice", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Multiple-Choice/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Unconditional-Algorithmic", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Unconditional-Algorithmic/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Code-Concepts", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Code-Concepts/*.parquet"}]}]}
false
False
2026-03-11T14:43:59
11
11
false
13fa979be2e7f7e62913eee0ec5e97c8fd6e24af
Nemotron-Pretraining-Specialized-v1.1 Dataset Description: The Nemotron-Pretraining-Specialized-v1.1 dataset is part of the Nemotron Pretraining Data collection of pretraining datasets. Designed for the NVIDIA Nemotron 3 family of LLMs, this dataset contains a collection of synthetic datasets aimed at improving LLM capabilities in code concepts and algorithms, formal logic, economics, and multiple choice questions. The code concepts dataset is an instance of a general… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Specialized-v1.1.
667
667
[ "task_categories:text-generation", "license:cc-by-4.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-01-16T06:20:42
null
null
699946473ccabf2d24116f0f
Roman1111111/gemini-3.1-pro-hard-high-reasoning
Roman1111111
{"license": "mit", "task_categories": ["question-answering", "text-generation", "reasoning"], "tags": ["code", "finance", "legal", "agent", "chemistry", "physics", "synthetic", "gemini-3.1-pro", "high-reasoning", "expert-level"], "size_categories": ["1K<n<10K"], "language": ["en"]}
false
False
2026-02-21T05:50:10
28
11
false
5b9be1b2b8087b748a8a36c4d47631722d3b3d8e
Dataset Card for Gemini-3.1-Pro-Ultra-Reasoning-5.6M Dataset Details Dataset Description This dataset represents the frontier of synthetic reasoning data, generated by Gemini 3.1 Pro (High Reasoning variant). While smaller in total token volume than its predecessors (5.6M tokens), this corpus prioritizes logical density and multi-step verification. The move to the 3.1 architecture provides a measurable leap in "System 2" thinking. Unlike standard models… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/gemini-3.1-pro-hard-high-reasoning.
483
483
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:mit", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "code", "finance", "legal", "ag...
2026-02-21T05:44:39
null
null
69ab632c9d4152acb2e45fb7
Mustafaege/qwen3.5-toolcalling-v2
Mustafaege
{"language": ["en"], "license": "apache-2.0", "pretty_name": "Qwen3.5 Tool Calling Dataset v2", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["tool-use", "tool-calling", "function-calling", "reasoning", "agentic", "jupyter", "code-execution", "sft", "chat", "qwen3", "qwen3.5", "chain-of-thought", "multi-turn", "structured-output", "json", "fine-tuning", "open-source", "expanded-dataset"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
false
False
2026-03-07T13:06:45
14
11
false
8f0343a5613879fefda0eb002d10ff7150a2c588
Qwen3.5 Tool Calling Dataset v2 An expanded tool-calling SFT dataset combining smirki/Tool-Calling-Dataset-UIGEN-X and AmanPriyanshu/tool-reasoning-sft-jupyter-agent, unified into Qwen3 messages format. Adds Jupyter notebook agent data with code execution reasoning chains. Dataset Summary Property Value Total Samples ~60K+ Train Split ~55K Test Split ~6K Sources UIGEN-X + Jupyter Agent Format Qwen3 messages Language English License Apache 2.0… See the full description on the dataset page: https://huggingface.co/datasets/Mustafaege/qwen3.5-toolcalling-v2.
176
176
[ "task_categories:text-generation", "annotations_creators:machine-generated", "language_creators:found", "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us...
2026-03-06T23:28:44
null
null
69b208be3fadc91fa277f593
TeichAI/Claude-Opus-Dataclaw-Unredacted
TeichAI
{"language": ["en"], "license": "mit", "task_categories": ["text-generation"]}
false
False
2026-03-14T05:56:04
11
11
false
db4502e0401409d288b996ae48fe7704dfb90c51
Dataclaw Opus (4.5 & 4.6) Dataset This dataset was assembled by: Collecting all Dataclaw datasets we could find Filtering for Opus-family conversations Normalizing them into a single training format Deduplicating overlapping uploads Using Gemini 3 Flash to replace all [REDACTED] content with made-up details, prompts, secrets, etc. (given the context of the entire conversation) Rebuilding the retained rows into a validated OpenAI function-calling chat format with structured… See the full description on the dataset page: https://huggingface.co/datasets/TeichAI/Claude-Opus-Dataclaw-Unredacted.
80
80
[ "task_categories:text-generation", "language:en", "license:mit", "region:us" ]
2026-03-12T00:28:46
null
null
67a404bc8c6d42c5ec097433
Anthropic/EconomicIndex
Anthropic
{"language": "en", "pretty_name": "EconomicIndex", "tags": ["AI", "LLM", "Economic Impacts", "Anthropic"], "viewer": true, "license": "mit", "configs": [{"config_name": "release_2026_01_15", "data_files": [{"split": "raw_claude_ai", "path": "release_2026_01_15/data/intermediate/aei_raw_claude_ai_2025-11-13_to_2025-11-20.csv"}, {"split": "raw_1p_api", "path": "release_2025_09_15/data/intermediate/aei_raw_1p_api_2025-11-13_to_2025-11-20.csv"}]}]}
false
False
2026-03-11T05:02:11
475
10
false
d1001170819fe03262c168fcf77ae99a5abf9576
The Anthropic Economic Index Overview The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy. Data Releases This repository contains multiple data releases, each with its own documentation: Labor market impacts: Job exposure and task penetration data 2026-01-15 Release: Updated analysis with economic primitives and Sonnet 4.5 2025-09-15 Release: Updated analysis with geographic and… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex.
12,541
60,263
[ "language:en", "license:mit", "arxiv:2503.04761", "region:us", "AI", "LLM", "Economic Impacts", "Anthropic" ]
2025-02-06T00:39:24
null
null
6997f5d1260ef062721a6a13
togethercomputer/CoderForge-Preview
togethercomputer
{"dataset_info": [{"config_name": "trajectories", "features": [{"name": "trajectory_id", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "messages", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "tools", "dtype": "string"}, {"name": "license", "dtype": "string"}], "splits": [{"name": "SWE_Rebench", "num_bytes": 19392208677, "num_examples": 77169}, {"name": "SWE_Smith", "num_bytes": 33088967556, "num_examples": 148001}, {"name": "R2E_Gym", "num_bytes": 6869123922, "num_examples": 32964}, {"name": "filtered_reward1", "num_bytes": 33547502194, "num_examples": 155144}], "download_size": 22788997561, "dataset_size": 92897802349}, {"config_name": "trajectories-tokenized_qwencoder", "features": [{"name": "trajectory_id", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "chat_template_applied", "dtype": "string"}, {"name": "input_ids", "list": "int32"}, {"name": "labels", "list": "int64"}], "splits": [{"name": "SWE_Rebench", "num_bytes": 64238782798, "num_examples": 77169}, {"name": "SWE_Smith", "num_bytes": 107118447512, "num_examples": 148001}, {"name": "R2E_Gym", "num_bytes": 23869485518, "num_examples": 32964}, {"name": "filtered_reward1", "num_bytes": 108349044091, "num_examples": 155144}], "download_size": 49985669802, "dataset_size": 303575759919}], "configs": [{"config_name": "trajectories", "data_files": [{"split": "SWE_Rebench", "path": "trajectories/SWE_Rebench-*"}, {"split": "SWE_Smith", "path": "trajectories/SWE_Smith-*"}, {"split": "R2E_Gym", "path": "trajectories/R2E_Gym-*"}, {"split": "filtered_reward1", "path": "trajectories/filtered_reward1-*"}]}, {"config_name": "trajectories-tokenized_qwencoder", "data_files": [{"split": "SWE_Rebench", "path": "trajectories-tokenized_qwencoder/SWE_Rebench-*"}, {"split": "SWE_Smith", "path": "trajectories-tokenized_qwencoder/SWE_Smith-*"}, {"split": "R2E_Gym", "path": "trajectories-tokenized_qwencoder/R2E_Gym-*"}, 
{"split": "filtered_reward1", "path": "trajectories-tokenized_qwencoder/filtered_reward1-*"}]}]}
false
False
2026-02-26T18:22:08
147
10
false
060fca96cf723b2ebab3181e9e59fafd273df3cb
CoderForge-Preview: SOTA Open Dataset for Training Efficient Agents CoderForge-Preview is the largest open test-verified coding agent dataset. Fine-tuning Qwen-3 32B on it, we boost SWE-Bench Verified performance 23.0% → 59.4% pass@1 and rank #1 among open-data and #2 among open-weight models ≤32B parameters. Limitations Adaptability to different scaffolds: We generated all trajectories using a single scaffold and fixed tool set (no permutations). Models trained via… See the full description on the dataset page: https://huggingface.co/datasets/togethercomputer/CoderForge-Preview.
11,535
11,535
[ "size_categories:100K<n<1M", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-02-20T05:49:05
null
null
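The CoderForge card above includes a `trajectories-tokenized_qwencoder` config whose rows carry `input_ids` and `labels`. Assuming the common SFT convention that non-supervised positions in `labels` are masked with -100 (an assumption — the card does not state the mask value), a small helper for inspecting how much of a row contributes to the loss:

```python
def supervised_fraction(labels: list[int], ignore_index: int = -100) -> float:
    """Fraction of token positions that would contribute to the SFT loss.
    `ignore_index=-100` follows the usual Transformers convention; the card
    does not state the mask value, so treat this as an assumption."""
    if not labels:
        return 0.0
    return sum(tok != ignore_index for tok in labels) / len(labels)


# Example: a row where only the last two positions are supervised.
row = {"input_ids": [1, 2, 3, 4], "labels": [-100, -100, 3, 4]}
print(supervised_fraction(row["labels"]))  # → 0.5
```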
69a6dc61541e5c55e792dcb6
ai-coustics/dawn_chorus_en
ai-coustics
{"license": "cc-by-nc-4.0", "task_categories": ["audio-to-audio"], "language": ["en"], "tags": ["speech", "foreground-background-speech", "speech-to-text"], "pretty_name": "dawn_chorus_en", "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "eval", "path": "eval.parquet"}]}], "dataset_info": {"features": [{"name": "mix", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speech", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcript", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "conversation_type", "dtype": "string"}, {"name": "speech_source", "dtype": "string"}, {"name": "index", "dtype": "int64"}]}}
false
False
2026-03-03T13:06:55
10
10
false
3c21347c1e61ea904a493f9a6b3856161432da80
dawn_chorus_en An open-source evaluation dataset for accurate foreground speaker transcription. The dataset targets mixture conditions where foreground speech remains generally transcribable by speech-to-text systems, while background speech is distinctly perceived as background. It provides around 90 minutes of foreground–background speech mixtures composed of recorded and synthesized foreground speech, along with ground truth foreground speech and corresponding transcripts.… See the full description on the dataset page: https://huggingface.co/datasets/ai-coustics/dawn_chorus_en.
658
658
[ "task_categories:audio-to-audio", "language:en", "license:cc-by-nc-4.0", "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "speech", "foreground-background-speech", "spe...
2026-03-03T13:04:33
null
null
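An eval set like dawn_chorus_en is typically scored by transcribing the `mix` audio and comparing the output against the ground-truth `transcript` with word error rate. A self-contained WER sketch (plain word-level edit distance; the example strings are made up, not dataset rows):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the dawn chorus begins early", "the dawn chorus began early"))  # 0.2
```

One substitution over five reference words gives WER 0.2; production evaluations usually normalize casing and punctuation first.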
69a52fb3ff95a38fe27d886f
TianHongZXY/CHIMERA
TianHongZXY
{"language": ["en"], "pretty_name": "CHIMERA", "tags": ["reasoning", "chain-of-thought", "synthetic-data", "llm", "stem", "post-training"], "license": "apache-2.0", "task_categories": ["text-generation", "question-answering"], "size_categories": ["1K<n<10K"], "annotations_creators": ["machine-generated"], "configs": [{"config_name": "Qwen3-235B-2507", "default": true, "data_files": [{"split": "train", "path": "Qwen3-235B-2507/train-*.parquet"}]}, {"config_name": "Qwen3.5-397B", "data_files": [{"split": "train", "path": "Qwen3.5-397B/train-*.parquet"}]}]}
false
False
2026-03-11T04:38:56
17
9
false
d6a22de2d5a51eb8f1ac1edd6ffde4d791bd0f65
CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning CHIMERA is a compact but high-difficulty synthetic reasoning dataset with long Chain-of-Thought (CoT) trajectories and broad STEM coverage, designed for reasoning post-training. All examples are fully LLM-generated and automatically verified without human annotation. Total: 9,225 problems Subjects: 8 Topics: 1,179 🔥 Why CHIMERA? Recent reasoning advances rely heavily on high-quality… See the full description on the dataset page: https://huggingface.co/datasets/TianHongZXY/CHIMERA.
874
874
[ "task_categories:text-generation", "task_categories:question-answering", "annotations_creators:machine-generated", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:dask", "library:pola...
2026-03-02T06:35:31
null
null
678bd1db320331c7e0499ec7
nomic-ai/nomic-embed-unsupervised-data
nomic-ai
{"language": ["en"], "dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "shard", "dtype": "int64"}], "splits": [{"name": "reddit_title_body", "num_bytes": 133556530576.56786, "num_examples": 66204599}, {"name": "amazon_reviews", "num_bytes": 79397795801.44087, "num_examples": 39357860}, {"name": "paq", "num_bytes": 108682741460.16927, "num_examples": 53874545}, {"name": "s2orc_citation_titles", "num_bytes": 15578276961.267248, "num_examples": 7722225}, {"name": "s2orc_title_abstract", "num_bytes": 72727941660.31642, "num_examples": 36051582}, {"name": "s2orc_abstract_citation", "num_bytes": 15412180087.166075, "num_examples": 7639890}, {"name": "s2orc_abstract_body", "num_bytes": 13214381649.546701, "num_examples": 6550431}, {"name": "wikianswers", "num_bytes": 20349823474.661026, "num_examples": 10087503}, {"name": "wikipedia", "num_bytes": 12503510832.888903, "num_examples": 6198049}, {"name": "gooaq", "num_bytes": 2584478254.5968294, "num_examples": 1281138}, {"name": "codesearch", "num_bytes": 1743019608.3259697, "num_examples": 864023}, {"name": "yahoo_title_answer", "num_bytes": 558247690.3202951, "num_examples": 276726}, {"name": "agnews", "num_bytes": 847859634.6904019, "num_examples": 420288}, {"name": "amazonqa", "num_bytes": 456192977.6962069, "num_examples": 226137}, {"name": "yahoo_qa", "num_bytes": 289440471.31127894, "num_examples": 143477}, {"name": "yahoo_title_question", "num_bytes": 430336857.75505495, "num_examples": 213320}, {"name": "ccnews", "num_bytes": 713469137.831569, "num_examples": 353670}, {"name": "npr", "num_bytes": 736476787.666073, "num_examples": 365075}, {"name": "eli5", "num_bytes": 215412525.82009435, "num_examples": 106781}, {"name": "cnn", "num_bytes": 592128749.4145954, "num_examples": 293521}, {"name": "stackexchange_duplicate_questions", "num_bytes": 147688736.90346697, "num_examples": 73210}, {"name": 
"stackexchange_title_body", "num_bytes": 162788452.73084643, "num_examples": 80695}, {"name": "stackexchange_body_body", "num_bytes": 132516397.19234861, "num_examples": 65689}, {"name": "sentence_compression", "num_bytes": 350216575.3502183, "num_examples": 173604}, {"name": "wikihow", "num_bytes": 193722192.5434098, "num_examples": 96029}, {"name": "altlex", "num_bytes": 223334581.13794592, "num_examples": 110708}, {"name": "quora", "num_bytes": 90547861.71168031, "num_examples": 44885}, {"name": "simplewiki", "num_bytes": 197127445.7587226, "num_examples": 97717}, {"name": "squad", "num_bytes": 50669280.21860921, "num_examples": 25117}], "download_size": 261162378852, "dataset_size": 482138856722.99994}, "configs": [{"config_name": "default", "data_files": [{"split": "reddit_title_body", "path": "data/reddit_title_body-*"}, {"split": "amazon_reviews", "path": "data/amazon_reviews-*"}, {"split": "paq", "path": "data/paq-*"}, {"split": "s2orc_citation_titles", "path": "data/s2orc_citation_titles-*"}, {"split": "s2orc_title_abstract", "path": "data/s2orc_title_abstract-*"}, {"split": "s2orc_abstract_citation", "path": "data/s2orc_abstract_citation-*"}, {"split": "s2orc_abstract_body", "path": "data/s2orc_abstract_body-*"}, {"split": "wikianswers", "path": "data/wikianswers-*"}, {"split": "wikipedia", "path": "data/wikipedia-*"}, {"split": "gooaq", "path": "data/gooaq-*"}, {"split": "codesearch", "path": "data/codesearch-*"}, {"split": "yahoo_title_answer", "path": "data/yahoo_title_answer-*"}, {"split": "agnews", "path": "data/agnews-*"}, {"split": "amazonqa", "path": "data/amazonqa-*"}, {"split": "yahoo_qa", "path": "data/yahoo_qa-*"}, {"split": "yahoo_title_question", "path": "data/yahoo_title_question-*"}, {"split": "ccnews", "path": "data/ccnews-*"}, {"split": "npr", "path": "data/npr-*"}, {"split": "eli5", "path": "data/eli5-*"}, {"split": "cnn", "path": "data/cnn-*"}, {"split": "stackexchange_duplicate_questions", "path": 
"data/stackexchange_duplicate_questions-*"}, {"split": "stackexchange_title_body", "path": "data/stackexchange_title_body-*"}, {"split": "stackexchange_body_body", "path": "data/stackexchange_body_body-*"}, {"split": "sentence_compression", "path": "data/sentence_compression-*"}, {"split": "wikihow", "path": "data/wikihow-*"}, {"split": "altlex", "path": "data/altlex-*"}, {"split": "quora", "path": "data/quora-*"}, {"split": "simplewiki", "path": "data/simplewiki-*"}, {"split": "squad", "path": "data/squad-*"}]}]}
false
False
2025-01-24T22:02:10
15
8
false
917bae6ed30ebc80fc8c81ba8e3e34558205d6bb
Weakly Supervised Contrastive Training data for Text Embedding models used in Nomic Embed models Training Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data! We train our embedder using a multi-stage training pipeline. Starting from a long-context BERT model, the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora… See the full description on the dataset page: https://huggingface.co/datasets/nomic-ai/nomic-embed-unsupervised-data.
1,445
45,144
[ "language:en", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2402.01613", "region:us" ]
2025-01-18T16:07:55
null
null
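The split metadata in the nomic-embed record above carries `num_bytes` and `num_examples` per source, which is enough to estimate average pair size before downloading anything. A small sketch with values copied (rounded) from two of the splits above:

```python
# Estimate average bytes per example for a split from card metadata alone.
# Values rounded from the nomic-embed-unsupervised-data split info above.
splits = {
    "squad": {"num_bytes": 50_669_280.2, "num_examples": 25_117},
    "reddit_title_body": {"num_bytes": 133_556_530_576.6, "num_examples": 66_204_599},
}

def avg_bytes(split):
    s = splits[split]
    return s["num_bytes"] / s["num_examples"]

for name in splits:
    print(f"{name}: ~{avg_bytes(name):.0f} bytes/example")  # ~2017 for both
```

Both splits land near 2 KB per pair, a quick sanity check on storage and streaming cost.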
698dd2570db46090757245bc
markov-ai/computer-use
markov-ai
{"license": "apache-2.0", "task_categories": ["robotics", "image-to-text"], "tags": ["computer-use", "gui-agent", "osworld", "trajectories", "reinforcement-learning"], "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*.parquet"}]}]}
false
False
2026-02-13T15:11:21
59
8
false
de58c88b4b33dd03fa4d5d0f490748f576bd37b3
Computer Use Trajectories Successful computer-use agent trajectories collected on OSWorld tasks. Dataset Details Rows: 160 (one per task trajectory) Steps: 1,378 total across all trajectories (avg ~8.6 steps/task) Agent: Gemini 3 Flash Preview with linearized accessibility-tree grounding Score filter: Only trajectories with score = 1.0 (fully successful) Domains Domain Tasks Description chrome 21 Web browsing tasks in Google Chrome gimp 15 Image… See the full description on the dataset page: https://huggingface.co/datasets/markov-ai/computer-use.
892
921
[ "task_categories:robotics", "task_categories:image-to-text", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "format:optimized-parquet", "modality:image", "modality:text", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:polars", "librar...
2026-02-12T13:15:03
null
null
6996a0f665f352f44ec11a37
Roman1111111/gemini-3-pro-10000x-hard-high-reasoning
Roman1111111
{"license": "mit", "task_categories": ["question-answering", "text-generation", "reasoning"], "tags": ["code", "finance", "legal", "agent", "chemistry", "art", "synthetic", "gemini-3-pro", "hard-reasoning", "mathematics", "physics"], "size_categories": ["10K<n<100K"], "language": ["en"]}
false
False
2026-02-20T03:49:27
42
8
false
5feedf31aaa6ff0ae0ee1bc8a169bc6bfaccbd5a
Dataset Card for Gemini-3-Pro-Reasoning-10000x-high-reasoning Dataset Details Dataset Description Suggestion: I would use it to fine-tune glm-4.7-flash or other 30B MoE models, though 2–20B LLMs also work well; you can fine-tune Nanbeige 4.1-3B, gpt-oss:20b, or qwen3 4B/8B (note: prefer the newest versions, e.g. qwen3-2507 4B or qwen3-vl 8B, for maximum improvement). This dataset is a high-complexity synthetic reasoning corpus containing… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/gemini-3-pro-10000x-hard-high-reasoning.
972
972
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:mit", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "code", "finance", "legal", "...
2026-02-19T05:34:46
null
null
69a0413735be92a8b511584c
AweAI-Team/Scale-SWE
AweAI-Team
null
false
False
2026-03-05T04:50:13
34
8
false
d8db20390a936bbda9c96d88b97cc4778dff1481
Immersion in the GitHub Universe: Scaling Coding Agents to Mastery 🔥 Highlights Sourced from 6M+ pull requests and 23,000+ repositories. Covers 5,200 repositories. 100k high-quality instances. 71k trajectories from DeepSeek v3.2 with 3.5B tokens. Strong performance: 64% on SWE-bench-Verified, trained from Qwen3-30A3B-Instruct. 📣 News 2026-02-26 🚀 We released a portion of our data on Hugging Face. This release includes 20,000 SWE task… See the full description on the dataset page: https://huggingface.co/datasets/AweAI-Team/Scale-SWE.
2,259
2,259
[ "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2602.09892", "region:us" ]
2026-02-26T12:48:55
null
null
69ab52d621553879afae74f5
nvidia/Retrieval-Synthetic-NVDocs-v1
nvidia
{"license": "cc-by-4.0", "task_categories": ["question-answering", "text-retrieval", "text-ranking", "text-classification"], "language": ["en"], "size_categories": ["100K<n<1M"]}
false
False
2026-03-11T15:23:09
12
8
false
bacc762bd9c80de32302f472e414b1aa9547507c
Dataset Description: Retrieval-Synthetic-NVDocs-v1 is a synthetic retrieval dataset with question–answer supervision designed to train and evaluate embedding and RAG systems. The dataset was generated on top of NVIDIA's publicly available content using NeMo Data Designer, NVIDIA's open-source framework for generating high-quality synthetic data from scratch or based on seed data. The dataset contains document chunks paired with semantically rich question-answer pairs across multiple… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Retrieval-Synthetic-NVDocs-v1.
398
398
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_categories:text-ranking", "task_categories:text-classification", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "l...
2026-03-06T22:19:02
null
null
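Question–chunk pairs like those in Retrieval-Synthetic-NVDocs-v1 are typically scored with recall@k: embed queries and chunks, rank chunks by similarity, and check whether each query's paired chunk lands in the top k. A self-contained sketch with toy 2-D vectors standing in for real embeddings (the function names are illustrative, not from any library):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recall_at_k(query_vecs, chunk_vecs, gold, k):
    """gold[i] is the index of the chunk paired with query i."""
    hits = 0
    for i, q in enumerate(query_vecs):
        ranked = sorted(range(len(chunk_vecs)),
                        key=lambda j: cosine(q, chunk_vecs[j]),
                        reverse=True)
        if gold[i] in ranked[:k]:
            hits += 1
    return hits / len(query_vecs)

# Toy "embeddings": query 0 sits closest to chunk 0, query 1 to chunk 2.
chunks = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
queries = [[0.9, 0.1], [0.6, 0.8]]
print(recall_at_k(queries, chunks, gold=[0, 2], k=1))  # 1.0
```

Real evaluations swap the toy vectors for model embeddings and use a vector index instead of brute-force ranking.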
63990f21cc50af73d29ecfa3
fka/prompts.chat
fka
{"license": "cc0-1.0", "tags": ["ChatGPT", "prompts", "AI", "GPT", "Claude", "Gemini", "Llama", "Mistral", "LLM", "prompt-engineering", "conversational-ai", "text-generation", "chatbot", "awesome-list"], "task_categories": ["question-answering", "text-generation"], "size_categories": ["100K<n<1M"]}
false
False
2026-03-14T03:49:57
9,614
7
false
9223977493bbbb576ed57473cfa7e192ff58e074
a.k.a. Awesome ChatGPT Prompts This is a Dataset Repository mirror of prompts.chat — a social platform for AI prompts. 📢 Notice This Hugging Face dataset is a mirror. For the latest prompts, features, and community contributions, please visit: 🌐 Website: prompts.chat 📦 GitHub: github.com/f/awesome-chatgpt-prompts About prompts.chat is an open-source platform where users can share, discover, and collect AI prompts from the community. The project can be… See the full description on the dataset page: https://huggingface.co/datasets/fka/prompts.chat.
23,229
471,177
[ "task_categories:question-answering", "task_categories:text-generation", "license:cc0-1.0", "size_categories:1K<n<10K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "ChatGPT", "prompts", "AI", "GPT", "Claude"...
2022-12-13T23:47:45
null
null
663b7fd5a4152b77b637ba11
TIGER-Lab/MMLU-Pro
TIGER-Lab
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "MMLU-Pro", "tags": ["evaluation"], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_index", "dtype": "int64"}, {"name": "cot_content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 61242, "num_examples": 70}, {"name": "test", "num_bytes": 8714663, "num_examples": 12032}], "download_size": 121157475, "dataset_size": 8775905}}
false
False
2026-03-11T10:56:33
449
7
false
54611cde22c74cca43dd78732198de6abe971398
MMLU-Pro Dataset MMLU-Pro is a more robust and challenging massive multi-task understanding dataset, tailored to more rigorously benchmark large language models' capabilities. It contains 12K complex questions across various disciplines. |Github | 🏆Leaderboard | 📖Paper | 🚀 What's New [2026.03.11] Added more cutting-edge frontier models to the leaderboard, including the Claude-4.6 series, Seed2.0 series, Qwen3.5 series, and Gemini-3.1-Pro, among… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro.
123,588
1,300,586
[ "benchmark:official", "task_categories:question-answering", "language:en", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2406.01574", "doi:10.57967/hf...
2024-05-08T13:36:21
null
null
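MMLU-Pro rows carry both a letter `answer` and a numeric `answer_index` into the `options` sequence, so a scorer can grade letter predictions directly against the index. A minimal sketch (the rows below are toy examples shaped like the schema, not real MMLU-Pro questions):

```python
import string

# Toy rows shaped like the MMLU-Pro schema: options plus answer_index.
rows = [
    {"question_id": 1, "options": ["2", "3", "4", "5"], "answer_index": 2},
    {"question_id": 2, "options": ["yes", "no"], "answer_index": 0},
]
predictions = {1: "C", 2: "B"}  # model outputs as option letters

def letter_to_index(letter):
    """Map an option letter ('A', 'b', ...) to its 0-based index."""
    return string.ascii_uppercase.index(letter.strip().upper())

def accuracy(rows, predictions):
    correct = sum(
        letter_to_index(predictions[r["question_id"]]) == r["answer_index"]
        for r in rows
    )
    return correct / len(rows)

print(accuracy(rows, predictions))  # 0.5: Q1 correct (C -> 2), Q2 wrong (B -> 1)
```

Since MMLU-Pro allows up to ten options per question, indexing by letter position rather than a fixed A–D table avoids silent truncation.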
6835e8703de5738a2e9af4ae
nvidia/PhysicalAI-Autonomous-Vehicles
nvidia
{"extra_gated_heading": "You must agree to the NVIDIA Autonomous Vehicle Dataset License Agreement to access this dataset.", "extra_gated_prompt": "### NVIDIA Autonomous Vehicle Dataset License Agreement\n\nThis NVIDIA Autonomous Vehicle Dataset License Agreement (\"Agreement\") is a legal agreement between you, whether an individual or entity (\"you\") and NVIDIA Corporation with address 2788 San Tomas Expressway, Santa Clara, California 95051 (\"NVIDIA\") and governs the use of certain datasets, including any annotations and metadata attached to the datasets, provided by NVIDIA (\"Dataset\").\n\nThis Agreement can be accepted only by an adult of legal age of majority in the country in which the Dataset are used.\n\nIf you don't have the required age or authority to accept this Agreement or if you don't accept all the terms and conditions of this Agreement, do not use the Dataset.\n\nYou agree to use the Dataset only for purposes expressly permitted by this Agreement and in accordance with any applicable law or regulation in the relevant jurisdictions.\n\n1. License Grant. Subject to the terms of this Agreement, NVIDIA grants you a non-exclusive, revocable, non-transferable, non-sublicensable (except as expressly granted in Sections 1 and 2 of this Agreement, license to download, use, modify, and reproduce the Dataset, in each case solely for your internal development of autonomous vehicles and automated driving assisted systems using NVIDIA technology (\"Purpose\"). NVIDIA may from time to time update the Dataset. If requested by NVIDIA, you will use the updated version of any such Dataset and delete any prior versions upon NVIDIA's written request.\n\n2. Authorized Users. You may allow your Affiliates' employees and contractors (all such users collectively \"Authorized Users\") to access and use the Dataset from your secure network for the Purpose on your behalf. You are responsible for the compliance with the terms of this Agreement by your authorized users. 
Any act or omission by your authorized users that if committed by you would constitute a breach of this Agreement will be deemed to constitute a breach of this Agreement. \"Affiliates\" means an entity that owns or controls, is owned or controlled by, or is under common ownership or control with you, where \"control\" is the possession, directly or indirectly, of the power to direct or cause the direction of the management and policies of an entity, whether through ownership of voting securities, by contract or otherwise.\n\n3. Confidentiality. You agree that you will not use, nor authorize others to use, NVIDIA Confidential Information, other than for the Purpose, and that you will not disclose NVIDIA Confidential Information to any third party, except to Authorized Users under this Agreement that have a need to know such Confidential Information for the Purpose, provided that each such recipient is subject to a written agreement that includes confidentiality obligations consistent with the terms. You will protect the NVIDIA Confidential Information with at least the same degree of care that you use to protect your own similar confidential and proprietary information, but no less than a reasonable degree of care, including any appropriate technical, organizational and contractual measures. \"Confidential Information\" means the Dataset including its features and functionality, output, and any results of benchmarking or other competitive analysis or regression or performance data relating to the Dataset.\n\n4. Limitations. Your license to use the Dataset is restricted as follows:\n\n4.1 You will not use the Dataset for the purpose of any surveillance program, service and/or product of public authorities, corporations and/or citizens that monitors the behavior of an individual person or groups of persons in any unethical manner. 
You will not use the Dataset to directly or indirectly enable law enforcement or any public authority to enforce any rules or regulations including any road traffic laws.\n\n4.2 You may not change or remove copyright or other proprietary notices in the Dataset.\n\n4.3 The rights granted to you in Section 1 and 2 are for the Purpose only. You may not use the Dataset for any other purpose.\n\n4.4 You may not identify or attempt to identify or profile any individual (including by way of license plate numbers) in the Dataset or de-anonymize or attempt to de-anonymize any Dataset. This includes prohibition against processing of license plate numbers for purpose of tracking or collecting data about a vehicle over time and across different frames.\n\n4.5 You may not: (a) infer, measure, detect or otherwise label the race, ethnicity, gender, age or health (or any other sensitive attributes) of individuals in the Dataset, (b) perform biometric processing of the Dataset, (c) analyze faces, gazes, eye movements, gait, or body movements to uniquely identify persons, or (d) use the Dataset to develop or evaluate any identity, emotion recognition technology or social scoring technology.\n\n4.6 You may not create derivative works of the Dataset, sell, rent, sublicense, transfer, distribute, embed, or host the Dataset (in whole or in part), or otherwise make the Dataset (in whole or in part) available to others.\n\n4.7 You may not bypass, disable or circumvent any technical limitation, encryption, security, digital rights management or authentication mechanism relating to the Dataset.\n\n4.8 You must keep track of any copies of the Dataset. You will keep track of where the Dataset or portions of it are stored to ensure these restrictions follow such Dataset.\n\n4.9 While NVIDIA has exercised reasonable efforts to anonymize the Dataset, you must cooperate with NVIDIA to honor any data subject rights where applicable. 
You will delete the Dataset upon written notice by NVIDIA and you will promptly notify NVIDIA at https://www.nvidia.com/en-us/support/submit-security-vulnerability/ if you notice that any portion of the Dataset is not sufficiently anonymized.\n\n5. AI Ethics.\n\n5.1 Ethical Use. NVIDIA is committed to safety, trust and transparency in AI development. NVIDIA encourages you to (a) ensure that the product or service you develop, use, offer as a service or distribute meets the legal and ethical requirements of the relevant industry or use case, (b) take reasonable measures to address unintended bias and to mitigate harm to others, including underrepresented or vulnerable groups, and (c) inform users of the nature and limitations of the product or service.\n\n5.2 Prohibited Uses. NVIDIA expressly prohibits the use of its products or services for any purpose in violation of applicable law or regulation, including but not limited to (a) illegal surveillance, (b) illegal collection or processing of biometric information without the consent of the subject where required under applicable law, or (c) illegal harassment, abuse, threatening or bullying of individuals or groups of individuals or intentionally misleading or deceiving others.\n\n6. Ownership. The Dataset, including all intellectual property rights, is and will remain the sole and exclusive property of NVIDIA or its licensors. Except as expressly granted in this Agreement, (i) NVIDIA reserves all rights, interests and remedies in connection with the Dataset, and (ii) no other license or right is granted to you by implication, estoppel or otherwise.\n\n7. Feedback. You may, but are not obligated to, provide suggestions, requests, fixes, modifications, enhancements, or other feedback regarding or in connection with your use of the Dataset (collectively, \"Feedback\"). Feedback, even if designated as confidential by you, will not create any confidentiality obligation for NVIDIA or its affiliates. 
If you provide Feedback, you hereby grant NVIDIA, its affiliates and its designees a nonexclusive, perpetual, irrevocable, sublicensable, worldwide, royalty-free, fully paid-up and transferable license, under your intellectual property rights, to publicly perform, publicly display, reproduce, use, make, have made, sell, offer for sale, distribute (through multiple tiers of distribution), import, create derivative works of and otherwise commercialize and exploit the Feedback at NVIDIA's discretion.\n\n8. Term and Termination. This Agreement expires twelve (12) months after the date of initial delivery or download of the Dataset. This Agreement will automatically terminate (a) if you fail to comply with any of the terms in this Agreement or (b) if you commence or participate in any legal proceeding against NVIDIA with respect to the Dataset. Upon termination, you must stop using and destroy all copies of the Dataset. Upon written request, you will certify in writing that you have complied with your commitments under this section. All provisions will survive termination, except for the licenses granted to you.\n\n9. Disclaimer of Warranties. THE DATASET IS PROVIDED BY NVIDIA AS-IS AND WITH ALL FAULTS. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, NVIDIA DISCLAIMS ALL WARRANTIES AND REPRESENTATIONS OF ANY KIND, WHETHER EXPRESS, IMPLIED OR STATUTORY, RELATING TO OR ARISING UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, THE WARRANTIES OF TITLE, NONINFRINGEMENT, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, USAGE OF TRADE AND COURSE OF DEALING.\n\n10. Limitations of Liability. 
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, WILL NVIDIA BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY TYPE ARISING OUT OF OR AS A RESULT OF THIS AGREEMENT OR THE USE OR INABILITY TO USE THE SOFTWARE (INCLUDING BUT NOT LIMITED TO DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER DAMAGES OR LOSSES), EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.\n\n11. Governing Law and Jurisdiction. This Agreement will be governed in all respects by the laws of the United States and the laws of the State of Delaware, without regard to conflict of laws principles or the United Nations Convention on Contracts for the International Sale of Goods. The state and federal courts residing in Santa Clara County, California will have exclusive jurisdiction over any dispute or claim arising out of or related to this Agreement, and the parties irrevocably consent to personal jurisdiction and venue in those courts; except that either party may apply for injunctive remedies or an equivalent type of urgent legal relief in any jurisdiction.\n\n12. Indemnity. You agree to defend, indemnify and hold harmless NVIDIA and its affiliates, and their respective employees, contractors, agents, officers and directors, from and against any and all claims, damages, obligations, losses, liabilities, costs or debt, fines, restitutions and expenses (including but not limited to attorney's fees and costs incident to establishing the right of indemnification) arising out of or related to your use of the Dataset outside of the scope of this Agreement, or not in compliance with its terms.\n\n13. General.\n13.1 No Assignment. NVIDIA may assign, delegate or transfer its rights or obligations under this Agreement by any means or operation of law. 
You may not, without NVIDIA's prior written consent, assign, delegate or transfer any of your rights or obligations under this Agreement by any means or operation of law, and any attempt to do so is null and void.\n\n13.2 No Waiver. No waiver of any term of the Agreement will be deemed a further or continuing waiver of such term or any other term, and NVIDIA's failure to assert any right or provision under the Agreement will not constitute a waiver of such right or provision.\n\n13.3 Trade and Compliance. You agree to comply with all applicable export, import, trade and economic sanctions laws and regulations, as amended, including without limitation U.S. Export Administration Regulations and Office of Foreign Assets Control regulations. Any violation of such laws by you will void any warranty for the associated products and technologies. You confirm (a) your understanding that export or reexport of certain NVIDIA products or technologies may require a license or other approval from appropriate authorities and (b) that you will not export or reexport any products or technology, directly or indirectly, without first obtaining any required license or other approval from appropriate authorities, (i) to any countries that are subject to any U.S. or local export restrictions (currently including, but not necessarily limited to, Belarus, Cuba, Iran, North Korea, Russia, Syria, the Region of Crimea, Donetsk People's Republic Region and Luhansk People's Republic Region); (ii) to any end-user who it knows or has reason to know will utilize them in the design, development or production of nuclear, chemical or biological weapons, missiles, rocket systems, unmanned air vehicles capable of a maximum range of at least 300 kilometers, regardless of payload, or intended for military end-use, or any weapons of mass destruction; (iii) to any end-user who has been prohibited from participating in the U.S. 
or local export transactions by any governing authority; or (iv) to any known military or military-intelligence end-user or for any known military or military-intelligence end-use in accordance with U.S. trade compliance laws and regulations..\n\n13.4 Notices. Please direct your legal notices or other correspondence to NVIDIA Corporation, 2788 San Tomas Expressway, Santa Clara, California 95051, United States of America, Attention: Legal Department, with a copy emailed to legalnotices@nvidia.com. If NVIDIA needs to contact you about the Dataset, you consent to receive the notices by email and agree that such notices will satisfy any legal communication requirements.\n\n13.5 Force Majeure. Neither party will be liable during any period where an event or circumstance prevents or delays that party from performing its obligations under this Agreement and that event or circumstance: (i) is not within the reasonable control of that party and is not the result of that party's negligence, and (ii) cannot be overcome or avoided by that party using reasonably diligent efforts.\n\n13.6 Severability and Amendment. If a court of competent jurisdiction rules that a provision of this Agreement is unenforceable, that provision will be deemed modified to the extent necessary to make it enforceable and the remainder of this Agreement will continue in full force and effect. Any amendment to this Agreement must be in writing and signed by authorized representatives of both parties.\n\n13.7 Independent Contractors. The parties are independent contractors, and this Agreement does not create a joint venture, partnership, agency or other form of business association between the parties. Neither party will have the power to bind the other party or incur any obligation on its behalf without the other party's prior written consent.\n\n13.8 Construction. 
The headings in the Agreement are included solely for convenience and are not intended to affect the meaning or interpretation of the Agreement. As required by the context of the Agreement, the singular of a term includes the plural and vice versa.\n\n13.9 Entire Agreement. Regarding the subject matter of this Agreement, the parties agree that (i) this Agreement constitutes the entire and exclusive agreement between the parties and supersedes all prior and contemporaneous communications and (ii) any additional or different terms or conditions, whether contained in purchase orders, order acknowledgments, invoices or otherwise, will not be binding and are null and void.", "extra_gated_button_content": "I accept the terms of the NVIDIA Autonomous Vehicle Dataset License Agreement", "license": "other", "license_name": "nvidia-av-dataset", "license_link": "https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles/blob/main/LICENSE.pdf", "viewer": false}
false
auto
2026-03-13T22:04:06
775
7
false
37a7cc2c868d684d0456b5412a7ec5d18597a96a
PHYSICAL AI AUTONOMOUS VEHICLES The PhysicalAI-Autonomous-Vehicles dataset provides one of the largest, geographically diverse collections of multi-sensor data empowering AV researchers to build the next generation of Physical AI based end-to-end driving systems. This dataset is ready for commercial/non-commercial AV use per the license agreement. Data Collection Method Automatic/Sensor Labeling Method Automatic/Sensor This dataset has a total of 1700 hours of driving… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles.
231,353
1,005,295
[ "license:other", "region:us" ]
2025-05-27T16:29:36
null
null
689fcad5cd5c456095ebedae
Brianferrell787/financial-news-multisource
Brianferrell787
{"pretty_name": "Multi-Source Financial & General News", "language": ["en"], "license": "other", "extra_gated_heading": "Request access to Multi-Source Financial & General News", "extra_gated_description": "This corpus aggregates third-party news for research use. Please acknowledge the terms below.", "extra_gated_prompt": "By requesting access you confirm that:\n\u2022 You will use the data **for non-commercial research/education** only.\n\u2022 You will **respect each original source\u2019s terms** and remove items upon rightsholder request.\n\u2022 You will **cite** this corpus and the original sources in any publication.\n", "extra_gated_fields": {"Name / Email": "text", "How did you hear about this dataset?": {"type": "select", "options": ["Google search", "Hugging Face search", "Word of mouth", "Citation in a paper", "Twitter / X", "Reddit", "Other"]}, "If \"Other\", please specify (optional)": {"type": "text", "required": false}, "Intended use": {"type": "select", "options": ["Research", "Education", {"label": "Other", "value": "other"}]}, "I agree to non-commercial, research-only use": "checkbox", "I will not redistribute article text": "checkbox", "I will cite this corpus and original sources": "checkbox"}, "extra_gated_button_content": "Agree & Request Access", "tags": ["finance", "markets", "trading", "backtesting", "time-series", "news", "headlines", "parquet", "multisource", "llm", "reinforcement-learning", "retrieval", "dataset-card"], "size_categories": ["10M<n<100M"], "task_categories": ["text-classification", "text-retrieval", "other"], "task_ids": ["language-modeling", "document-retrieval", "topic-classification", "news-articles-summarization", "document-question-answering"], "configs": [{"config_name": "data", "data_files": ["data/*/*.parquet"]}]}
false
auto
2025-11-29T21:31:56
66
7
false
c25780f336280adb57c64bda7aed605d065c672d
Multi-Source Financial & General News 🚀 57.1 MILLION ROWS OF NEWS CONTENT — one unified corpus for market-aware AI/ML I combined 24 public news datasets (many small on their own) into one consistent, ready-to-use layer so you don’t have to wrangle them yourself. Everything is normalized to a minimal schema (date, text, extra_fields) and shipped as Parquet shards per subset—streamable, DuckDB-friendly, and built with a trading date policy (this can be edited if folks see other use… See the full description on the dataset page: https://huggingface.co/datasets/Brianferrell787/financial-news-multisource.
2,715
13,483
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_categories:other", "task_ids:language-modeling", "task_ids:document-retrieval", "task_ids:topic-classification", "task_ids:news-articles-summarization", "task_ids:document-question-answering", "language:en", "license:ot...
2025-08-16T00:03:33
null
null
693e2682c9d7af74f71b3e5f
nvidia/Nemotron-Agentic-v1
nvidia
{"license": "cc-by-4.0", "language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "interactive_agent", "path": "data/interactive_agent.jsonl"}, {"split": "tool_calling", "path": "data/tool_calling.jsonl"}]}]}
false
False
2025-12-15T13:48:35
155
7
false
650d590978ca35c8f1ecea2faf136e5fac421b62
Dataset Description: The Nemotron-Agentic-Tool-Use-v1 dataset is designed to strengthen models’ capabilities as interactive, tool-using agents. It focuses on multi-turn conversations where language models decompose user goals, decide when to call tools, and reason over tool outputs to complete tasks reliably and safely. This dataset is ready for commercial use. The Nemotron-Agentic-Tool-Use-v1 dataset contains the following subsets: Interactive Agent This dataset… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Agentic-v1.
877
4,579
[ "language:en", "license:cc-by-4.0", "region:us" ]
2025-12-14T02:52:50
null
null
696552c844f950f64be9b539
openai/ih-challenge
openai
{"license": "apache-2.0"}
false
False
2026-01-12T21:17:39
7
7
false
056b7d94345dd4f8049da75bd70617d8928ac586
IH-Challenge Training dataset from our paper Large-Scale RLVR Improves Instruction Hierarchy on Frontier LLMs. Warning About Company Names To avoid legal and reputational risk, we replaced all company names in the original dataset for either COMPETITOR_i or BRAND_i, with i ∈ ℕ. We recommend that you replace these placeholders with real company names before training on the dataset. Data Schema Field Type Description attacker_meta_problem str General… See the full description on the dataset page: https://huggingface.co/datasets/openai/ih-challenge.
94
94
[ "license:apache-2.0", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-01-12T20:00:08
null
null
69856ff91857b54ca5ef9047
artillerywu/DeepResearch-9K
artillerywu
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "difficulty", "dtype": "int64"}, {"name": "search trajectory", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "final answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 286413691, "num_examples": 3974}], "download_size": 64975087, "dataset_size": 286413691}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "DeepResearch-Hard/train-*"}]}]}
false
False
2026-02-21T07:07:56
9
7
false
946870822ebf09b08b4c13a951d7b64da5dfd554
null
278
283
[ "size_categories:1K<n<10K", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-02-06T04:37:13
null
null
6993497cb265036892229930
OmniLottie/MMLottie-2M
OmniLottie
{"license": "cc-by-nc-sa-4.0", "language": ["en"], "tags": ["lottie", "animation", "vector-graphics", "motion-graphics", "multi-modal"], "size_categories": ["1M<n<10M"], "configs": [{"config_name": "Lottie", "data_files": "data/Lottie/*.parquet"}, {"config_name": "Lottie_SVG", "data_files": "data/Lottie_SVG/*.parquet"}]}
false
False
2026-03-07T07:45:11
28
7
false
b53c3972343cabe94d4f4b1a86433a9c3dc8b298
MMLottie-2M Dataset The first large-scale Lottie animation dataset for multi-modal vector animation generation, containing ~2M samples with diverse motion patterns and visual styles. Dataset Overview MMLottie-2M consists of two complementary subsets designed to support comprehensive training for Lottie animation generation: 1. Lottie Subset Native Lottie animations collected from major online platforms including LottieFiles, IconScout, Flaticon, Iconfont, and… See the full description on the dataset page: https://huggingface.co/datasets/OmniLottie/MMLottie-2M.
497
521
[ "language:en", "license:cc-by-nc-sa-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2603.02138", "region:us", "lottie", "animation", "vector-graphics", "motion-gra...
2026-02-16T16:44:44
null
null
69ae7132f939066a47e28bb8
humanlaya-data-lab/OneMillion-Bench
humanlaya-data-lab
{"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["economics_and_finance", "healthcare_and_medicine", "industry", "law", "natural_science"], "pretty_name": "$OneMillion-Bench", "size_categories": ["n<1K"]}
false
False
2026-03-11T06:34:22
7
7
false
5cf9d5005e2e1f20b4481ed50846161697e82a73
$OneMillion-Bench A bilingual (Global/Chinese) realistic expert-level benchmark for evaluating language agents across 5 professional domains. The benchmark contains 400 entries with detailed, weighted rubric-based grading criteria designed for fine-grained evaluation of domain expertise, analytical reasoning, and instruction following. Dataset Structure Each subdirectory is a Hugging Face subset (configuration), and all data is in the test split. $OneMillion-Bench/ ├──… See the full description on the dataset page: https://huggingface.co/datasets/humanlaya-data-lab/OneMillion-Bench.
197
197
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "language:zh", "license:apache-2.0", "size_categories:n<1K", "modality:text", "arxiv:2603.07980", "region:us", "economics_and_finance", "healthcare_and_medicine", "industry", "law", "natural_science" ]
2026-03-09T07:05:22
null
null
69b186f91cde8c71bb8f76b0
Roman1111111/claude-opus-4.6-10000x
Roman1111111
{"license": "mit"}
false
False
2026-03-11T16:00:39
7
7
false
3fedde0a6ac508eb255151c9d00e5a37e2f3f16a
This is a high-fidelity reasoning dataset synthesized using Claude Opus 4.6. The dataset is designed to capture the model's internal "Chain of Thought" and reasoning traces, specifically focusing on mathematical accuracy and structured logical deduction. The dataset is intended for Supervised Fine-Tuning (SFT) and Distillation, allowing smaller open-source models to inherit the sophisticated reasoning patterns of Claude Opus 4.6. Dataset Description This collection combines high-difficulty… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/claude-opus-4.6-10000x.
319
319
[ "license:mit", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-03-11T15:15:05
null
null
621ffdd236468d709f18202d
EdinburghNLP/xsum
EdinburghNLP
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "pretty_name": "Extreme Summarization (XSum)", "paperswithcode_id": "xsum", "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 479206363, "num_examples": 204045}, {"name": "validation", "num_bytes": 26292877, "num_examples": 11332}, {"name": "test", "num_bytes": 26756141, "num_examples": 11334}], "download_size": 332791351, "dataset_size": 532255381}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
false
False
2026-01-12T14:28:39
137
6
false
7d4d486c2f8ef850b1a11aead99b894ff3dd7da9
Dataset Card for "xsum" Dataset Summary Extreme Summarization (XSum) Dataset. There are three features: document: Input news article. summary: One sentence summary of the article. id: BBC ID of the article. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances default Size of downloaded dataset files: 257.30 MB Size of the generated dataset:… See the full description on the dataset page: https://huggingface.co/datasets/EdinburghNLP/xsum.
15,259
1,605,129
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "librar...
2022-03-02T23:29:22
xsum
null
639244f571c51c43091df168
Anthropic/hh-rlhf
Anthropic
{"license": "mit", "tags": ["human-feedback"]}
false
False
2023-05-26T18:47:34
1,674
6
false
09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
Dataset Card for HH-RLHF Dataset Summary This repository provides access to two different kinds of data: Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf.
20,200
1,806,826
[ "license:mit", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2204.05862", "region:us", "human-feedback" ]
2022-12-08T20:11:33
null
null
6662f7cd2b8a3cd48ea74f41
lmms-lab/Video-MME
lmms-lab
{"dataset_info": {"config_name": "videomme", "features": [{"name": "video_id", "dtype": "string"}, {"name": "duration", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "sub_category", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "videoID", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "task_type", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1003241, "num_examples": 2700}], "download_size": 405167, "dataset_size": 1003241}, "configs": [{"config_name": "videomme", "data_files": [{"split": "test", "path": "videomme/test-*"}]}]}
false
False
2024-07-04T08:14:20
73
6
false
ead1408f75b618502df9a1d8e0950166bf0a2a0b
null
64,211
540,950
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2024-06-07T12:06:37
null
null

Changelog

NEW Changes March 11th 2026

  • Added new split: arxiv_papers, sourced from the Hugging Face /api/papers endpoint
  • papers continues to point to daily_papers.parquet, which is the Daily Papers feed

NEW Changes July 25th

  • Added a baseModels field to the models split, which lists the models the user tagged as base models for that model

Example:

{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
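A minimal sketch of consuming the baseModels field shown in the example above: group base-model ids by their relation. The field shape (a list of objects with "models" and "relation" keys) follows the example entry; the grouping helper is illustrative, not part of the dataset's tooling.

```python
import json
from collections import defaultdict

# One record's baseModels field, shaped like the example above.
record = json.loads("""
{
  "baseModels": [
    {
      "models": [
        {"_id": "687de260234339fed21e768a",
         "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"}
      ],
      "relation": "quantized"
    }
  ]
}
""")

# Collect base-model ids keyed by their relation (quantized, finetune, ...).
by_relation = defaultdict(list)
for entry in record.get("baseModels", []):
    for model in entry.get("models", []):
        by_relation[entry["relation"]].append(model["id"])

print(dict(by_relation))
# → {'quantized': ['Qwen/Qwen3-235B-A22B-Instruct-2507']}
```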

NEW Changes July 9th

  • Fixed an integer overflow in the gguf column that had broken the import pipeline for a few weeks ✅

NEW Changes Feb 27th

  • Added new fields on the models split: downloadsAllTime, safetensors, gguf

  • Added new field on the datasets split: downloadsAllTime

  • Added new split: papers, which contains all of the Daily Papers
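The downloadsAllTime field added above behaves like any other numeric column; a quick sketch of ranking rows by it might look like the following. The two sample rows are invented for illustration and are not taken from the dataset.

```python
# Hypothetical rows mimicking the models split after the Feb 27th change.
rows = [
    {"id": "org/model-a", "downloads": 120, "downloadsAllTime": 5400},
    {"id": "org/model-b", "downloads": 300, "downloadsAllTime": 900},
]

# Rank by lifetime downloads rather than the 30-day downloads counter.
ranked = sorted(rows, key=lambda r: r["downloadsAllTime"], reverse=True)
print([r["id"] for r in ranked])
# → ['org/model-a', 'org/model-b']
```

Note that the monthly downloads column would give the opposite ordering here, which is exactly why the lifetime counter was added as a separate field.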

Updated Daily
