---
license: cc-by-3.0
tags:
  - agent
  - workflow
  - multimodal
  - spreadsheet
  - pdf
  - image
  - code
  - finance
  - accounting
modalities:
  - text
  - spreadsheet
  - pdf
  - image
  - code
configs:
  - config_name: Finch_Dataset_All
    data_files:
      - split: test
        path:
          - finch_workflows_test.jsonl
---

*Finch cover figure*

# Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows

This repository contains the dataset for Finch, an enterprise-grade benchmark for evaluating an agent's ability to work like a skilled finance & accounting expert (work IQ) on real-world professional workflows.


## Dataset Description

Finch focuses on messy, long-horizon finance & accounting workflows spanning data entry/import, structuring/formatting, web search, cross-sheet/file retrieval, calculation, financial modeling, validation, translation, visualization, and reporting.

The workflows are derived from real-world enterprise workspaces (primarily Enron, as well as corporations in the EUSES Corpus, investment and securities companies, the World Bank, Canadian and British government agencies, and more), including:

- Enterprise email threads in which collaborators naturally describe, discuss, and track workflows
- Large, messy spreadsheets with multimodal artifacts including text, tables, formulas, charts, pivot tables, and images
- Interlinked PDFs and documents that provide additional business context

We adopt a three-step workflow labeling process:

1. Inducing workflow types and instances from real collaborative context in enterprise email threads (Enron Corpus: 500,000 emails from 150 executives and employees).
2. Deriving concrete workflow instances by analyzing changes across spreadsheet versions (15,000 versioned spreadsheets from Enron and EUSES) and designing workflows based on high-quality artifacts from investment and securities companies, the World Bank, Canadian and British government agencies, WideSearch, DABstep, and more.
3. Conducting meticulous expert annotation of task instructions, input files, and reference outputs, involving hundreds of hours of expert work.

This process yields 172 enterprise-grade workflows (most of them multi-task composites) involving 1,710 spreadsheets and 27 million cells, capturing the intrinsically compositional, messy, multimodal, and collaborative nature of real-world finance & accounting work. In this release, we provide full annotations for the first 72 workflows, with the remaining 100 to be released in a subsequent update.

Experimental results show that even frontier agents (GPT 5.1 Pro and Claude Sonnet 4.5 Pro) solve fewer than 40% of the workflows, revealing a substantial performance gap on real-world enterprise scenarios.


## 📁 Dataset Structure

The benchmark is released in JSONL format; each line corresponds to one workflow-centric example:

```json
{
  "id": "<workflow identifier>",
  "instruction_en": "<English task instruction for a finance & accounting workflow>",
  "source_files": ["<input file name>", "..."],
  "source_files_urls": ["<input file download URL>", "..."],
  "reference_outputs": {
    "files": ["<reference output file name>"],
    "text": "<textual reference output>"
  },
  "reference_file_urls": ["<reference output file download URL>"],
  "task_type": "<task category (e.g., reporting, modeling)>",
  "business_type": "<business domain (e.g., budgeting, trading)>"
}
```
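
To make the format concrete, here is a minimal Python sketch for reading the JSONL file and inspecting one workflow. It assumes `finch_workflows_test.jsonl` (the file named in the dataset config) has been downloaded locally and that the fields match the schema above.

```python
import json

# A minimal sketch, assuming finch_workflows_test.jsonl has been
# downloaded into the working directory.
with open("finch_workflows_test.jsonl", encoding="utf-8") as f:
    workflows = [json.loads(line) for line in f if line.strip()]

print(f"Loaded {len(workflows)} workflows")

# Field names follow the schema shown above.
example = workflows[0]
print(example["id"], "|", example["task_type"], "|", example["business_type"])
print(example["instruction_en"][:200])

# Each record lists its input files alongside download URLs; fetching them
# could look like this (network access assumed, URLs taken from the record):
# import urllib.request
# for name, url in zip(example["source_files"], example["source_files_urls"]):
#     urllib.request.urlretrieve(url, name)
```

Since the metadata defines a `Finch_Dataset_All` config with a `test` split, the same data should also be loadable through the 🤗 `datasets` library, e.g. `load_dataset("<repo_id>", "Finch_Dataset_All", split="test")`, with the repository ID filled in as appropriate.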

## 📣 Feedback & Issues

If you find any issues with the dataset or have suggestions, please open a discussion in the Community tab — we value your feedback!

📧 Contact: finworkbench@gmail.com