# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview

A benchmark for evaluating vision-capable LLMs on Indian competitive exam questions (JEE Main, JEE Advanced, NEET). Questions are images sent to models via the OpenRouter API; responses are parsed from `<answer>...</answer>` tags and scored using exam-specific marking schemes.
## Running the Benchmark

```bash
# Setup
uv sync
echo "OPENROUTER_API_KEY=your_key" > .env

# Must run from project root (paths are resolved relative to cwd)
uv run python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --exam_name JEE_ADVANCED --exam_year 2025

# Filter by question IDs
uv run python src/benchmark_runner.py --model "openai/o3" --question_ids "N24T3001,N24T3002"
```
CLI args: `--model` (required), `--exam_name` (`all`/`NEET`/`JEE_ADVANCED`/`JEE_MAIN`), `--exam_year` (`all`/`2024`/`2025`), `--question_ids`, `--output_dir`, `--config`, `--resume`, `--temperature` (overrides config), `--num_runs` (default 1; use 3+ for variance analysis).
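The flags above map naturally onto `argparse`; a rough sketch of how the parser might be wired (defaults and help strings are assumptions — the actual handling lives in `src/benchmark_runner.py`):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the CLI described above; names match the docs,
    # but defaults/choices here are illustrative, not the real code.
    p = argparse.ArgumentParser(description="Run the exam benchmark")
    p.add_argument("--model", required=True, help="OpenRouter model ID")
    p.add_argument("--exam_name", default="all",
                   choices=["all", "NEET", "JEE_ADVANCED", "JEE_MAIN"])
    p.add_argument("--exam_year", default="all",
                   choices=["all", "2024", "2025"])
    p.add_argument("--question_ids", default=None,
                   help="comma-separated question IDs")
    p.add_argument("--output_dir", default="results")
    p.add_argument("--config", default="configs/benchmark_config.yaml")
    p.add_argument("--resume", action="store_true")
    p.add_argument("--temperature", type=float, default=None)
    p.add_argument("--num_runs", type=int, default=1)
    return p
```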
## Analysis Scripts

```bash
# Generate cross-model leaderboard from all results
uv run python scripts/generate_leaderboard.py

# Aggregate multiple runs of the same model for variance analysis
uv run python scripts/aggregate_runs.py --pattern "openai_o3_JEE_ADVANCED_2025"
```
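At its core, aggregating runs means computing the mean and spread of per-run totals; a minimal sketch of that statistic (the real logic is in `scripts/aggregate_runs.py` — this function name is illustrative):

```python
from statistics import mean, stdev

def aggregate_scores(run_totals: list[float]) -> dict[str, float]:
    # One total score per run of the same model/exam/year.
    # stdev needs at least 2 runs, which is why the docs
    # recommend --num_runs 3+ for variance analysis.
    return {
        "mean": mean(run_totals),
        "stdev": stdev(run_totals) if len(run_totals) > 1 else 0.0,
        "runs": len(run_totals),
    }
```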
## Testing

```bash
# Run the full pytest suite (67 tests)
uv run python -m pytest tests/ -v

# Run individual module self-tests
uv run python src/utils.py          # answer parsing logic
uv run python src/evaluation.py     # scoring logic
uv run python src/llm_interface.py  # API calls (requires .env and network)
```
## Architecture

```
benchmark_runner.py ─── orchestrator / entry point
├── loads config from configs/benchmark_config.yaml
├── loads dataset directly from metadata.jsonl (JSONL → HuggingFace Dataset)
│   ├── metadata.jsonl (question metadata, 578 questions)
│   └── images/ (question PNGs, stored in Git LFS)
├── calls llm_interface.py for each question
│   ├── prompts.py (prompt templates)
│   └── utils.py (parse_llm_answer extracts from <answer> tags)
├── scores via evaluation.py (exam-specific marking schemes)
└── writes results incrementally to results/{model}_{exam}_{year}_{timestamp}/
    ├── predictions.jsonl (raw API responses)
    ├── summary.jsonl (scored per-question results)
    └── summary.md (human-readable report)
```
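"Writes results incrementally" amounts to appending one JSON object per line as each question finishes; a sketch under that assumption (the helper name is hypothetical, not the repo's actual API):

```python
import json
from pathlib import Path

def append_jsonl(path: Path, record: dict) -> None:
    # Append-mode writes mean an interrupted run keeps every
    # completed question on disk — the property --resume relies on.
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```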
## Key data flow

- Dataset loaded directly from `metadata.jsonl` into a HuggingFace `Dataset` object, filtered by exam/year
- Each question image is base64-encoded and sent to OpenRouter with a structured prompt
- If the response can't be parsed, a re-prompt is sent (text-only, with the bad response)
- If the API call fails, the question is queued for retry (up to 3 attempts, exponential backoff via `tenacity`)
- Answers are parsed from `<answer>...</answer>` tags by `utils.parse_llm_answer()`
- `evaluation.py` scores using JEE/NEET marking schemes (partial credit for MCQ_MULTIPLE_CORRECT in JEE Advanced)
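The image-to-request step can be sketched as below, assuming the OpenAI-style chat format that OpenRouter accepts for vision input (the function name and prompt are placeholders, not the repo's actual code):

```python
import base64

def build_vision_message(image_bytes: bytes, prompt: str) -> dict:
    # Base64-encode the question PNG and embed it as a data URL
    # alongside the text prompt in a single user message.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }
```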
## Answer Format Conventions

- MCQ_SINGLE_CORRECT: `<answer>A</answer>` → `["A"]`
- MCQ_MULTIPLE_CORRECT: `<answer>A,C</answer>` → `["A", "C"]` (sorted, deduplicated)
- INTEGER: `<answer>42</answer>` → `["42"]`
- SKIP: `<answer>SKIP</answer>` → no penalty
## Important Notes

- Git LFS: Images and `metadata.jsonl` are in LFS. Run `git lfs pull` after cloning.
- Working directory: Scripts must be run from project root — config, data, and image paths are resolved relative to cwd.
- Python 3.10+: Uses union type syntax (`list[str] | str | None`).
- Models: Configured in `configs/benchmark_config.yaml` under `openrouter_models`. All must support vision input.
- Result directory naming: `results/{provider}_{model}_{exam}_{year}_{YYYYMMDD_HHMMSS}/` (slashes in model IDs replaced with underscores).