---
license: mit
task_categories:
  - text-generation
  - question-answering
tags:
  - mcp
  - ai-agents
  - benchmark
  - evaluation
  - tool-use
  - function-calling
size_categories:
  - n<1K
---

# Agent Evaluation Benchmark

A benchmark dataset for evaluating AI agent tool-use capabilities across 55 test cases spanning 16 categories.

## Overview

This benchmark tests whether AI agents can correctly select and use the right MCP tools for real-world tasks. It covers data retrieval, blockchain queries, security analysis, academic research, and more.

## Categories

| Category | Test Cases | Description |
|----------|-----------:|-------------|
| Weather | 5 | Forecasts, UV index, climate history |
| Blockchain | 7 | Token prices, wallet analysis, DeFi, whale tracking |
| Security | 5 | CVE search, vulnerability analysis, CVSS scores |
| Academic | 4 | Paper search, citations, author lookup |
| Company | 3 | EU company registry search |
| Agriculture | 3 | Crop data, food prices, yield comparison |
| Space | 4 | NASA APOD, asteroids, Mars rover, ISS |
| Aviation | 3 | Flight tracking, airport info |
| Medical | 3 | WHO data, disease outbreaks, health stats |
| Political | 2 | Campaign finance, FEC data |
| Supply Chain | 2 | UN trade data, import/export stats |
| LLM Benchmark | 3 | Model comparison, pricing, benchmarks |
| Energy | 3 | CO2 intensity, energy mix, electricity prices |
| Legal | 3 | Court decisions, case search |
| Agent Infrastructure | 3 | Directory, memory, workflows |
| Compliance | 2 | PII detection, GDPR checks |

## Schema

- `task_description`: Natural language description of the task
- `expected_tool`: The MCP tool that should be selected
- `difficulty`: `easy`, `medium`, or `hard`
- `category`: Task category
- `test_input`: Example input parameters
- `expected_output_contains`: Key string that should appear in the output
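A record following this schema might look like the sketch below; the field values and the tool name are illustrative placeholders, not actual entries from the dataset:

```python
# Illustrative record matching the schema above.
# All values (including the tool name) are hypothetical examples.
example = {
    "task_description": "What is the UV index forecast for Berlin tomorrow?",
    "expected_tool": "get_uv_forecast",  # hypothetical MCP tool name
    "difficulty": "easy",
    "category": "Weather",
    "test_input": {"location": "Berlin", "days": 1},
    "expected_output_contains": "UV index",
}
```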

## Difficulty Distribution

- **Easy**: 25 tasks (basic single-tool queries)
- **Medium**: 25 tasks (parameter selection, filtering, comparison)
- **Hard**: 5 tasks (multi-step reasoning, complex analysis)
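Once the split is loaded, this distribution can be verified by counting the `difficulty` field. A minimal sketch with stand-in records (in practice the records would come from the loaded dataset):

```python
from collections import Counter

# Stand-in records for illustration; replace with the loaded dataset rows.
records = [
    {"difficulty": "easy"},
    {"difficulty": "easy"},
    {"difficulty": "medium"},
    {"difficulty": "hard"},
]

# Tally how many tasks fall into each difficulty bucket.
dist = Counter(r["difficulty"] for r in records)
print(dist)  # → Counter({'easy': 2, 'medium': 1, 'hard': 1})
```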

## Usage

Use this dataset to evaluate:

1. **Tool Selection Accuracy**: Does the agent pick the right tool?
2. **Parameter Extraction**: Does the agent correctly parse inputs?
3. **Output Validation**: Does the response contain expected information?

```python
from datasets import load_dataset

ds = load_dataset("aiagentkarl/agent-evaluation-benchmark")
```
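A minimal scoring loop over criteria 1 and 3 might look like the following. The sample record, tool name, and `dummy_agent` are placeholders, not parts of the dataset or of any prescribed harness:

```python
# Sketch of an evaluation loop: score tool selection accuracy and
# output validation against the expected_* fields of each record.

def evaluate(records, agent):
    """Run the agent on each record and score it on two criteria."""
    tool_hits = 0
    output_hits = 0
    for rec in records:
        # The agent is assumed to return (chosen_tool_name, output_text).
        chosen_tool, output = agent(rec["task_description"], rec["test_input"])
        if chosen_tool == rec["expected_tool"]:
            tool_hits += 1
        if rec["expected_output_contains"].lower() in output.lower():
            output_hits += 1
    n = len(records)
    return {"tool_accuracy": tool_hits / n, "output_accuracy": output_hits / n}

# Dummy agent for demonstration only; a real agent would call its MCP tools.
def dummy_agent(task, test_input):
    return "search_cve", "Found CVE-2024-0001 with CVSS score 9.8"

# Hypothetical record in the dataset's schema (values are illustrative).
sample = [{
    "task_description": "Find recent critical CVEs for OpenSSL",
    "expected_tool": "search_cve",  # hypothetical tool name
    "difficulty": "medium",
    "category": "Security",
    "test_input": {"product": "openssl"},
    "expected_output_contains": "CVSS",
}]

print(evaluate(sample, dummy_agent))
# → {'tool_accuracy': 1.0, 'output_accuracy': 1.0}
```

Parameter extraction (criterion 2) could be scored the same way by comparing the arguments the agent passes to the tool against `test_input`.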

## License

MIT

## Author

AiAgentKarl