---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: mcp_config
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    dtype: string
  - name: setup_tool
    dtype: string
  - name: evaluate_tool
    dtype: string
  - name: system_prompt
    dtype: string
  - name: agent_config
    dtype: string
  splits:
  - name: train
    num_bytes: 130705
    num_examples: 50
  download_size: 45847
  dataset_size: 130705
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

## Task Categories

### 1. Data Preparation and Hygiene (29 tasks)
- De-duplication, type normalization, time parsing, joins/FX conversions, pivot tables
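
The hygiene steps above can be sketched outside a spreadsheet as well. The snippet below runs the same pipeline on a hypothetical mini-ledger (the rows and column names are invented for illustration): de-duplicate, normalize string amounts to floats, parse text dates, and pivot totals by region.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical mini-ledger: duplicate rows, string-typed amounts, text dates.
raw = [
    {"date": "2024-01-02", "region": "East", "amount": "1,200.50"},
    {"date": "2024-01-02", "region": "East", "amount": "1,200.50"},  # duplicate
    {"date": "2024-01-03", "region": "West", "amount": "980.00"},
]

# De-duplication: keep the first occurrence of each identical row.
seen, rows = set(), []
for r in raw:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        rows.append(dict(r))

# Type normalization and time parsing.
for r in rows:
    r["amount"] = float(r["amount"].replace(",", ""))
    r["date"] = datetime.strptime(r["date"], "%Y-%m-%d").date()

# Pivot: total amount per region.
pivot = defaultdict(float)
for r in rows:
    pivot[r["region"]] += r["amount"]
```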

### 2. Derivations & Extraction (16 tasks)
- Correlations, z-scores, grouping logic, compliance filters (e.g., 1099)
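
As a minimal sketch of the per-group derivations in this category (the tickers and return values are hypothetical), here is a z-score computed within each ticker group using only the standard library:

```python
from statistics import mean, pstdev

# Hypothetical daily returns grouped by ticker.
returns = {
    "AAA": [0.01, 0.03, -0.02, 0.02],
    "BBB": [0.05, -0.01, 0.00, 0.04],
}

def zscores(values):
    """Standardize values against their own group's mean and population stdev."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Grouping logic: compute z-scores separately per ticker.
by_ticker = {t: zscores(v) for t, v in returns.items()}
```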

### 3. Modeling & Forecasts (5 tasks)
- Revenue/breakeven projections, amortization schedules, depreciation calculations, scenario tables
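
A fixed-rate amortization schedule, one of the artifacts these tasks ask for, can be sketched as follows (the loan terms here — $10,000 at 6% annual over 12 monthly payments — are assumed for illustration):

```python
# Assumed terms: $10,000 principal, 6% annual rate, 12 monthly payments.
principal, annual_rate, n = 10_000.0, 0.06, 12
r = annual_rate / 12

# Standard annuity payment formula: P * r / (1 - (1 + r)^-n).
payment = principal * r / (1 - (1 + r) ** -n)

schedule, balance = [], principal
for period in range(1, n + 1):
    interest = balance * r
    principal_paid = payment - interest
    balance -= principal_paid
    schedule.append((period, round(payment, 2), round(interest, 2),
                     round(principal_paid, 2), round(balance, 2)))
```

The ending balance should be zero (to the cent) after the final payment.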

## Example Task

```
For the ticker that has the greatest correlation between volume and next day price change %
find the day with the greatest volume and the next day's price change %

- put the ticker in ANSWER A1
- put the volume in ANSWER B1
- put the next day price change in ANSWER C1

NOTE:
- use CORREL to determine correlation for each ticker group
- be sure to first sort the data by ticker Z to A and then date ascending before calculating nextdaypricechange %
```

## System Prompt

```
All solutions should be put in the sheet called 'ANSWER'.
In the answer sheet, all dates should use the American standard format MM/DD/YYYY with no leading zero.
All numbers should use the format and decimal-place precision given in the input sheets (e.g., whether to include a thousands separator should match the inputs), unless specified otherwise.
```
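
The "no leading zero" requirement is easy to get wrong with `strftime`, since `%m` and `%d` always zero-pad (and the unpadded `%-m`/`%-d` codes are platform-specific). A small sketch of one portable way to emit the required format:

```python
from datetime import date

def answer_date(d: date) -> str:
    """Format a date as M/D/YYYY with no leading zeros, per the system prompt."""
    return f"{d.month}/{d.day}/{d.year}"
```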

## Quick Start

### Prerequisites
1. HUD API key: https://www.hud.so/project/api-keys
2. Anthropic API key: https://console.anthropic.com/settings/keys
|
| | ### Installation & Run |
| | ```bash |
| | # Install HUD SDK |
| | uv tool install hud-python |
| | |
| | # Configure API keys |
| | hud set HUD_API_KEY=... ANTHROPIC_API_KEY=... |
| | |
| | # Run evaluation with Claude |
| | hud eval hud-evals/SheetBench-50 claude |
| | |
| | # View full dataset |
| | hud get hud-evals/SheetBench-50 |
| | ``` |

## Key Features
- **Production-grade**: Tasks validated by finance professionals from PwC, Cisco, Charles Schwab, and Fannie Mae
- **Blind validation**: Each task has a single reproducible solution confirmed by expert consensus
- **Full telemetry**: Records actions, reasoning traces, and screenshots
- **Tool dexterity**: Tests real spreadsheet operations (pivots, formatting, formulas)
|
| |
|
| | ## Results |
| | - View example scorecard: https://www.hud.so/leaderboards/hud-evals/SheetBench-50?scorecard=19c2f4b7-ea8a-4c2b-866f-20ae57976d13 |
| | - Replay trajectories: https://www.hud.so/jobs/7c06c24e-22c7-4c9a-a667-1de4bb05b080 |
| |
|
| | ## Contact |
| | For enterprise evaluations or custom benchmarks: founders@hud.so |
| |
|