---
license: cc-by-4.0
tags:
  - llm
  - benchmarks
  - ai-evaluation
  - model-comparison
  - features
  - rate-limits
size_categories:
  - n<1K
---

# LLM Benchmark & Feature Matrix 2026

Which LLM is best at what? This dataset maps the capabilities, benchmark performance, and rate limits of 22 major models.

Unlike pricing datasets, this focuses on what models can do — not just what they cost.

## Files

| File | Description |
|------|-------------|
| `llm-benchmarks-2026.csv` | MMLU, HumanEval, MATH, Arena ELO, coding/reasoning/multilingual rankings, tier (S+ to B) |
| `llm-features-2026.csv` | 15 binary capabilities: vision, function calling, JSON mode, fine-tuning, tool use, web search, embeddings... |
| `llm-rate-limits-2026.csv` | Free tier availability, RPM/TPM limits, batch discounts, cached input discounts |
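The files are plain CSVs, so they load with Python's built-in `csv` module. A minimal sketch, using an illustrative inline sample in place of `llm-features-2026.csv` — the column names and rows below are assumptions for demonstration, so check the actual CSV header before filtering:

```python
import csv
import io

# Illustrative stand-in for llm-features-2026.csv; the real column
# names may differ -- inspect the CSV header before relying on these.
sample = """model,vision,function_calling,json_mode
model-a,1,1,1
model-b,0,1,1
model-c,1,0,1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# csv.DictReader yields strings, so cast the binary flags to int
vision_models = [r["model"] for r in rows if int(r["vision"]) == 1]
print(vision_models)  # -> ['model-a', 'model-c']
```

For the real files, replace `io.StringIO(sample)` with `open("llm-features-2026.csv", newline="")`.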

## Models Covered

22 models from OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, and Cohere.

## Use Cases

- **Model selection** — Find models that support your required features (e.g., vision + function calling + fine-tuning)
- **Performance comparison** — Which model scores highest on coding vs reasoning vs multilingual?
- **Rate limit planning** — Can you stay within the free tier? What are the paid RPM limits?
- **Tier analysis** — S+ tier models vs A tier: is the premium worth it?
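The model-selection use case above reduces to an all-features filter over the feature matrix. A sketch with hypothetical rows and column names standing in for `llm-features-2026.csv`:

```python
# Hypothetical feature rows (1 = supported, 0 = not); the real data
# lives in llm-features-2026.csv with 15 capability columns.
matrix = [
    {"model": "model-a", "vision": 1, "function_calling": 1, "fine_tuning": 1},
    {"model": "model-b", "vision": 1, "function_calling": 1, "fine_tuning": 0},
    {"model": "model-c", "vision": 0, "function_calling": 1, "fine_tuning": 1},
]

required = ["vision", "function_calling", "fine_tuning"]
# Keep only models that support every required capability
matches = [m["model"] for m in matrix if all(m[f] == 1 for f in required)]
print(matches)  # -> ['model-a']
```

Swapping `required` for any subset of the 15 capability columns gives the same filter for other feature combinations.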

## Key Insights

- Only Google Gemini supports all 15 features (vision, search, embeddings, fine-tuning, code execution)
- DeepSeek offers a 90% cached input discount — massive savings for repetitive workloads
- Groq has the highest free tier RPM (30) with the lowest latency
- S+ tier models (o1, Claude Opus 4, Gemini 2.5 Pro, DeepSeek R1) all score >89 MMLU


## License

CC BY 4.0 — ComparEdge