---
dataset_info:
  features:
    - name: protein_id
      dtype: string
    - name: sequence
      dtype: string
    - name: choice_A
      dtype: string
    - name: choice_B
      dtype: string
    - name: choice_C
      dtype: string
    - name: choice_D
      dtype: string
    - name: answer
      dtype: string
    - name: task
      dtype: string
    - name: question_text
      dtype: string
  splits:
    - name: test
      num_bytes: 1267780
      num_examples: 1797
  download_size: 430411
  dataset_size: 1267780
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: mit
task_categories:
  - multiple-choice
  - question-answering
language:
  - en
tags:
  - protein
  - biology
  - benchmark
  - llm-evaluation
pretty_name: LiveProteinBench (Processed with Answer Labels)
size_categories:
  - 1K<n<10K

---

# LiveProteinBench (Processed with Answer Labels)

A processed mirror of `Rongdingyi/LiveProteinBench` with ground-truth answer labels resolved from the upstream `origin_data/*.csv` ground-truth files.

This dataset is used by SiEval (a model delivery quality verification system) to evaluate LLM capabilities on protein property and function prediction.

**Paper:** *LiveProteinBench: A Contamination-Free Benchmark for LLM Protein Understanding*

## What's in this dataset

12 multiple-choice QA tasks covering protein functional annotation, structure/location, and physicochemical properties. Each sample has:

| Field | Type | Description |
|---|---|---|
| `protein_id` | str | UniProt accession (e.g., `A0A080WMA9`) |
| `sequence` | str | Protein amino acid sequence |
| `choice_A`..`choice_D` | str | Four multiple-choice options |
| `answer` | str | Correct choice label (A / B / C / D) |
| `task` | str | Task identifier (see table below) |
| `question_text` | str | Extra question text (non-empty only for GO tasks) |

## Tasks (1,797 samples total)

| Task | Samples | Description |
|---|---:|---|
| `cofactor` | 186 | Predict required cofactor |
| `EC_number` | 200 | Predict Enzyme Commission number |
| `active_site` | 146 | Predict active site residues |
| `catalytic_activity` | 200 | Predict catalyzed reaction |
| `motif_position` | 52 | Predict conserved motif positions |
| `pathway` | 200 | Predict metabolic/signaling pathway |
| `ph` | 54 | Predict optimal pH |
| `temperature` | 41 | Predict thermal adaptation category |
| `transmembrane` | 134 | Predict transmembrane region positions |
| `GO_molecular_function` | 195 | GO molecular function prediction |
| `GO_cellular_component` | 196 | GO cellular component prediction |
| `GO_biological_process` | 193 | GO biological process prediction |
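The per-task counts above can be sanity-checked against a downloaded copy of the split. A minimal sketch (the `check_split` helper is illustrative, not part of the build script):

```python
from collections import Counter

# Expected per-task sample counts, copied from the table above
EXPECTED_COUNTS = {
    "cofactor": 186, "EC_number": 200, "active_site": 146,
    "catalytic_activity": 200, "motif_position": 52, "pathway": 200,
    "ph": 54, "temperature": 41, "transmembrane": 134,
    "GO_molecular_function": 195, "GO_cellular_component": 196,
    "GO_biological_process": 193,
}

def check_split(task_labels):
    """Return True iff observed task counts match the expected table."""
    return Counter(task_labels) == Counter(EXPECTED_COUNTS)

# The expected counts sum to the advertised split size
assert sum(EXPECTED_COUNTS.values()) == 1797
```

With a loaded dataset, `check_split(ds["task"])` verifies the download is complete.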

## Usage

```python
from datasets import load_dataset

ds = load_dataset("Hauser7733/live_protein_bench", split="test")

# Filter to a specific task
active_site = ds.filter(lambda x: x["task"] == "active_site")
```

Or via SiEval's Dataset wrapper:

```python
from sieval.datasets import LiveProteinBenchDataset

ds = LiveProteinBenchDataset(task="active_site")
```
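Since every row carries a resolved `answer` label, scoring a model reduces to a simple accuracy loop. A sketch, assuming a hypothetical `predict` callable that maps a dataset row to one of `"A"`..`"D"`:

```python
def score(samples, predict):
    """Return overall accuracy of `predict` over an iterable of rows.

    Each row is a dict with at least an "answer" key holding the
    ground-truth choice label ("A"/"B"/"C"/"D").
    """
    correct = total = 0
    for row in samples:
        total += 1
        if predict(row) == row["answer"]:
            correct += 1
    return correct / total if total else 0.0
```

For example, `score(ds.filter(lambda x: x["task"] == "ph"), my_model)` would give per-task accuracy for the `ph` task.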

## Data Provenance (Important — Please Read)

### Why this mirror exists

The upstream `Rongdingyi/LiveProteinBench` QA files under `dataset/QA/*.json` contain only the question, protein sequence, and four multiple-choice options — they do not include a ground-truth answer key:

```json
{
  "Protein Sequence": "MATPS...",
  "Protein Image": ["pictures/A0A080WMA9/..."],
  "Multiple Choices": {
    "A": "Position 256 (Proton acceptor)",
    "B": "Position 206 (For beta-ketoacyl synthase activity)",
    "C": "Position 154 (Acyl-thioester intermediate)",
    "D": "Position 209 (Proton donor)"
  }
}
```

Upstream's evaluation scripts (`chat.py` / `chat_nopic.py`) only query an LLM and save responses — they do not compute accuracy, because the ground-truth answers live elsewhere: in the `dataset/origin_data/*.csv` files, which contain raw UniProt-style annotations (not in A/B/C/D form).

### How we resolved the answer labels

`scripts/build_live_protein_bench.py` reads both the upstream QA JSON and the corresponding `origin_data/*.csv`, then applies task-specific matchers to determine which choice matches the CSV ground truth:

| Task | Matcher strategy |
|---|---|
| `cofactor` | Extract `Name=...` from CSV, exact string match against choices |
| `EC_number` | Split CSV by `;`, test set membership against choices |
| `active_site` | Regex-extract `ACT_SITE <pos> /note="..."`, vote on choice overlap |
| `catalytic_activity`, motif, GO | Bidirectional substring containment |
| `pathway` | Regex-extract `PATHWAY` block, case-insensitive hierarchical match |
| `ph` | Regex-extract single value or `X-Y` range, float-compare to choices |
| `temperature` | Regex-extract single value or range, compare against numeric choices |
| `transmembrane` | Regex-extract `TRANSMEM <range>`, exact string match |

**Match rate:** 1797/1797 samples (100%) — every sample was successfully resolved to a single A/B/C/D label.
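For illustration, two of the matcher strategies above could look roughly like this. These are hypothetical reconstructions sketching the idea, not the actual code in `scripts/build_live_protein_bench.py`:

```python
import re

def match_ec_number(csv_field, choices):
    """EC_number matcher sketch: split the CSV annotation on ';' and
    return the label of the choice whose EC numbers form the same set.

    `choices` maps labels "A".."D" to option strings (assumed format).
    Returns None when no choice matches.
    """
    truth = {ec.strip() for ec in csv_field.split(";") if ec.strip()}
    for label, option in choices.items():
        if {ec.strip() for ec in option.split(";") if ec.strip()} == truth:
            return label
    return None

def match_ph(csv_field, choices, tol=1e-6):
    """pH matcher sketch: regex-extract a single value or an X-Y range
    from the CSV annotation and float-compare against each choice."""
    nums = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", csv_field)]
    for label, option in choices.items():
        opt = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", option)]
        if opt and len(opt) == len(nums) and all(
            abs(a - b) <= tol for a, b in zip(nums, opt)
        ):
            return label
    return None
```

A build run would apply one such matcher per task and fail loudly on any sample that resolves to zero or multiple labels, which is what makes the 100% match rate a meaningful statistic.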

### Validation — why we trust these labels

The 100% match rate alone does not prove the labels are correct (a loose substring matcher could select the wrong choice). The labels were validated end-to-end via SiEval's performance alignment pipeline, which runs the same LLMs the upstream paper uses and checks that per-task accuracy distributions match the paper's reported numbers. All 12 sub-tasks passed the alignment check (relative error < 5%), which gives strong statistical evidence that our resolved labels match what the upstream paper's authors intended.
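The alignment criterion amounts to a simple relative-error test per sub-task. A sketch (the function name and zero-handling are assumptions, not SiEval's actual API):

```python
def passes_alignment(measured, reported, rel_tol=0.05):
    """Return True when our measured accuracy is within `rel_tol`
    relative error of the paper's reported accuracy for a sub-task."""
    if reported == 0:
        return measured == 0
    return abs(measured - reported) / abs(reported) <= rel_tol

# e.g. a measured accuracy of 0.62 vs a reported 0.60 is ~3.3%
# relative error, which is inside the 5% tolerance.
```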

### What is not in this mirror

Upstream's `dataset/QA/` contains 3 additional task files — `Ki.json`, `EC50.json`, `Kd.json` (protein-ligand binding affinity prediction). These are deliberately not included here because:

  1. They are not listed in `prompt.json` (upstream's official task registry)
  2. They require an additional input modality (`Molecule SMILES`, not just protein sequence)
  3. Upstream's own paper reports 12 tasks, matching the 12 in this mirror

Upstream also defines a 13th task, `motif`, in `prompt.json`, but the corresponding `motif.json` QA file does not exist in the upstream repository — this is an upstream bug, not an omission in this mirror.

## Rebuilding from source

If upstream `Rongdingyi/LiveProteinBench` updates its QA files, this mirror can be regenerated:

```bash
# 1. Clone upstream at a specific commit (for reproducibility)
git clone https://github.com/Rongdingyi/LiveProteinBench.git ./LiveProteinBench
git -C ./LiveProteinBench checkout f99f59bbead5

# 2. Run the build script
python scripts/build_live_protein_bench.py \
    --source ./LiveProteinBench \
    --output ./data/live_protein_bench

# 3. The output JSON files can then be pushed to HF Hub
```

The build script has zero runtime dependencies beyond the Python standard library.

## Citation

```bibtex
@article{liveproteinbench2025,
  title={LiveProteinBench: A Contamination-Free Benchmark for Large Language Models on Protein Understanding},
  author={Rong, Dingyi and others},
  journal={arXiv preprint arXiv:2512.22257},
  year={2025}
}
```

## License

This processed mirror follows the license of the upstream LiveProteinBench repository. Please refer to upstream's repository for the authoritative license terms.