# BenchGecko AI Model Benchmarks 2026

A comprehensive dataset of 413 AI models with benchmark scores across 40 evaluations, plus pricing data and provider information. Sourced from BenchGecko, an independent AI model tracking platform.
## Dataset Summary
| Metric | Count |
|---|---|
| Models tracked | 413 |
| Benchmarks | 40 |
| Providers | 57 |
| Individual scores | 1,577 |
| Models with pricing | 346 |
| Open source models | 235 |
Last updated: March 2026
## Files
| File | Description | Rows | Columns |
|---|---|---|---|
| `models.csv` | All models with metadata and benchmark scores | 413 | 51 |
| `benchmarks.csv` | All 40 benchmarks with categories and top performers | 40 | 12 |
| `providers.csv` | 57 AI providers with model counts and avg pricing | 57 | 7 |
| `scores.csv` | Normalized model-benchmark score pairs (long format) | 1,577 | 7 |
## Column Descriptions
### `models.csv`

| Column | Type | Description |
|---|---|---|
| `slug` | string | Unique model identifier |
| `name` | string | Model display name |
| `provider` | string | Company/org that created the model |
| `provider_slug` | string | Provider identifier |
| `avg_score` | float | Average score across all benchmarks (0-100) |
| `input_price_per_mtok` | float | Input pricing in USD per million tokens |
| `output_price_per_mtok` | float | Output pricing in USD per million tokens |
| `context_window` | int | Maximum context window in tokens |
| `is_open_source` | bool | Whether model weights are publicly available |
| `release_date` | date | Model release date (YYYY-MM-DD) |
| `model_type` | string | Model modality (text, multimodal, etc.) |
| `score_*` | float | Individual benchmark scores (40 columns) |
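Because each benchmark occupies its own `score_*` column, per-model aggregates can be computed by selecting columns on that shared prefix. The sketch below uses a tiny hypothetical two-row sample (the slugs and values are made up, only the column naming follows the schema above); NaN cells are skipped, which matters since most models are evaluated on only a subset of the 40 benchmarks:

```python
import pandas as pd

# Hypothetical sample mimicking the models.csv schema (values are illustrative)
models = pd.DataFrame({
    "slug": ["model-a", "model-b"],
    "name": ["Model A", "Model B"],
    "score_mmlu": [82.5, None],
    "score_gsm8k": [91.0, 88.0],
})

# Select all benchmark columns by their shared prefix
score_cols = [c for c in models.columns if c.startswith("score_")]

# Mean over the benchmarks each model actually has scores for
# (mean() skips NaN by default, so sparse rows are handled correctly)
models["mean_of_reported"] = models[score_cols].mean(axis=1)
print(models[["slug", "mean_of_reported"]])
```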
### `benchmarks.csv`

| Column | Type | Description |
|---|---|---|
| `slug` | string | Benchmark identifier |
| `name` | string | Benchmark display name |
| `category` | string | Category: knowledge, reasoning, math, coding, agentic, multimodal |
| `max_score` | int | Maximum possible score |
| `unit` | string | Score unit (typically %) |
| `models_tested` | int | Number of models evaluated |
| `top_model_1/2/3` | string | Top 3 performing models |
| `top_score_1/2/3` | float | Corresponding top scores |
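The `top_model_1/2/3` and `top_score_1/2/3` columns are a denormalized leaderboard summary. One way such a summary could be derived from the long-format `scores.csv` is a sort-then-group pass; the data below is a small hypothetical sample, not real rows from the dataset:

```python
import pandas as pd

# Hypothetical long-format sample in the shape of scores.csv
scores = pd.DataFrame({
    "benchmark_slug": ["mmlu", "mmlu", "mmlu", "gsm8k", "gsm8k"],
    "model_name": ["A", "B", "C", "A", "B"],
    "score": [84.1, 82.9, 82.5, 91.3, 91.1],
})

# Top three models per benchmark, mirroring top_model_1/2/3
top3 = (
    scores.sort_values("score", ascending=False)
    .groupby("benchmark_slug")
    .head(3)
)
print(top3)
```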
### `providers.csv`

| Column | Type | Description |
|---|---|---|
| `slug` | string | Provider identifier |
| `name` | string | Provider display name |
| `model_count` | int | Total models from this provider |
| `models_with_pricing` | int | Models with published pricing |
| `avg_input_price_per_mtok` | float | Average input price (USD/Mtok) |
| `avg_output_price_per_mtok` | float | Average output price (USD/Mtok) |
| `open_source_count` | int | Number of open source models |
### `scores.csv`

Long-format table for easy filtering and analysis:

| Column | Type | Description |
|---|---|---|
| `model_slug` | string | Model identifier |
| `model_name` | string | Model name |
| `provider` | string | Provider name |
| `benchmark_slug` | string | Benchmark identifier |
| `benchmark_name` | string | Benchmark display name |
| `score` | float | Score value |
| `benchmark_category` | string | Benchmark category |
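The long format pivots back into a wide model-by-benchmark matrix in one call, which is often the most convenient shape for side-by-side comparison. A minimal sketch on hypothetical rows (the slugs and scores are invented for illustration):

```python
import pandas as pd

# Hypothetical sample using the scores.csv columns needed for a pivot
scores = pd.DataFrame({
    "model_slug": ["a", "a", "b"],
    "benchmark_slug": ["mmlu", "gsm8k", "mmlu"],
    "score": [80.0, 90.0, 75.0],
})

# One row per model, one column per benchmark; missing pairs become NaN
wide = scores.pivot(index="model_slug", columns="benchmark_slug", values="score")
print(wide)
```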
## Benchmark Categories
| Category | Benchmarks | Description |
|---|---|---|
| Knowledge | 20 | Factual knowledge (MMLU, ARC, TriviaQA, etc.) |
| Math | 5 | Mathematical reasoning (GSM8K, MATH, FrontierMath) |
| Coding | 7 | Code generation/understanding (SWE-Bench, Aider, WeirdML) |
| Reasoning | 4 | Logical reasoning (BBH, SimpleBench, ARC-AGI-2) |
| Agentic | 3 | Agent capabilities (APEX-Agents, OSWorld, The Agent Company) |
| Multimodal | 1 | Video understanding (VideoMME) |
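The `benchmark_category` column in `scores.csv` makes category-level comparisons a one-line groupby. A small sketch on a hypothetical sample (model names, categories, and values are illustrative only):

```python
import pandas as pd

# Hypothetical sample using the benchmark_category column of scores.csv
scores = pd.DataFrame({
    "model_name": ["A", "A", "B", "B"],
    "benchmark_category": ["math", "coding", "math", "coding"],
    "score": [90.0, 70.0, 80.0, 60.0],
})

# Average score across all models within each category
by_cat = scores.groupby("benchmark_category")["score"].mean()
print(by_cat)
```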
## Usage Examples
### Load with pandas

```python
import pandas as pd

# Load all models
models = pd.read_csv("hf://datasets/DropTheHQ/benchgecko-ai-models/models.csv")

# Top 10 models by average score
print(models.nlargest(10, "avg_score")[["name", "provider", "avg_score"]])

# Compare open source vs proprietary
print(models.groupby("is_open_source")["avg_score"].describe())
```
### Filter by provider

```python
# All OpenAI models sorted by avg score
openai = models[models["provider"] == "OpenAI"].sort_values("avg_score", ascending=False)
print(openai[["name", "avg_score", "input_price_per_mtok"]].head(10))
```
### Benchmark analysis

```python
scores = pd.read_csv("hf://datasets/DropTheHQ/benchgecko-ai-models/scores.csv")

# Best model per benchmark
best = scores.loc[scores.groupby("benchmark_slug")["score"].idxmax()]
print(best[["benchmark_name", "model_name", "score", "benchmark_category"]])
```
### Price-performance analysis

```python
# Models with pricing data
priced = models[models["input_price_per_mtok"].notna()].copy()
priced["cost_efficiency"] = priced["avg_score"] / priced["input_price_per_mtok"]
print(priced.nlargest(10, "cost_efficiency")[["name", "avg_score", "input_price_per_mtok", "cost_efficiency"]])
```
### Load with Hugging Face datasets

Because the four CSV files have different columns, load each file explicitly via `data_files` rather than the whole repository as a single configuration:

```python
from datasets import load_dataset

ds = load_dataset("DropTheHQ/benchgecko-ai-models", data_files="models.csv")
print(ds)
```
## Source

Data sourced from BenchGecko, an independent platform that tracks AI models, agents, benchmarks, and pricing. BenchGecko aggregates benchmark results from official papers, evaluation suites, and reproducible third-party tests.

Explore the full interactive leaderboard at [benchgecko.ai](https://benchgecko.ai).
## License

This dataset is released under CC BY 4.0. You are free to share and adapt this data for any purpose, including commercial use, as long as you give appropriate credit.
## Citation

```bibtex
@dataset{benchgecko_ai_models_2026,
  title={BenchGecko AI Model Benchmarks 2026},
  author={BenchGecko},
  year={2026},
  url={https://benchgecko.ai},
  publisher={Hugging Face},
  license={CC BY 4.0},
  note={413 AI models, 40 benchmarks, 57 providers, 1577 scores}
}
```