---
language:
  - en
  - zh
license:
  - mit
task_categories:
  - question-answering
  - image-text-to-text
tags:
  - physics
  - olympiad
  - benchmark
  - multimodal
  - llm-evaluation
  - science
---

# 🥇 HiPhO: High School Physics Olympiad Benchmark

[πŸ† Leaderboard] [πŸ“Š Dataset] [✨ GitHub] [πŸ“„ Paper]

License: MIT

πŸ† New (Sep. 16): We launched "PhyArena", a physics reasoning leaderboard incorporating the HiPhO benchmark.

## 🌐 Introduction

HiPhO (High School Physics Olympiad Benchmark) is the first benchmark specifically designed to evaluate the physical reasoning abilities of (M)LLMs on real-world Physics Olympiads from 2024–2025.

*Figure: HiPhO overview (five rings).*

## ✨ Key Features

1. **Up-to-date Coverage:** Includes 13 Olympiad exam papers from 2024–2025 across international and regional competitions.
2. **Mixed-modal Content:** Supports four modality types, spanning from text-only to diagram-based problems.
3. **Professional Evaluation:** Uses official marking schemes for answer-level and step-level grading.
4. **Human-level Comparison:** Maps model scores to medal levels (Gold/Silver/Bronze) and compares with human performance.

πŸ† IPhO 2025 (Theory) Results

*Figure: IPhO 2025 (Theory) results.*

- **Top-1 Human Score:** 29.2 / 30.0
- **Top-1 Model Score:** 22.7 / 29.4 (Gemini-2.5-Pro)
- **Gold Threshold:** 19.7
- **Silver Threshold:** 12.1
- **Bronze Threshold:** 7.2

Although models like Gemini-2.5-Pro and GPT-5 achieved gold-level scores, they still fall noticeably short of the very best human contestants.
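
As a concrete illustration of the medal mapping, here is a minimal sketch that assigns a medal from an exam score using the IPhO 2025 thresholds quoted above. The function and its name are our own, not part of the official evaluation code:

```python
# Minimal sketch of score-to-medal mapping (illustrative only, not the
# official HiPhO evaluation code). Thresholds are the IPhO 2025 (Theory)
# cut-offs listed above.
IPHO_2025_THRESHOLDS = {"gold": 19.7, "silver": 12.1, "bronze": 7.2}

def score_to_medal(score: float, thresholds: dict = IPHO_2025_THRESHOLDS) -> str:
    """Map an exam score to a medal level using official cut-offs."""
    for medal in ("gold", "silver", "bronze"):
        if score >= thresholds[medal]:
            return medal
    return "none"

print(score_to_medal(22.7))  # Gemini-2.5-Pro's score -> 'gold'
print(score_to_medal(10.5))  # -> 'bronze'
```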

## 📊 Dataset Overview

*Figure: HiPhO framework and dataset statistics.*

HiPhO contains:

  • 13 Physics Olympiads
  • 360 Problems
  • Categorized across:
    • 5 Physics Fields: Mechanics, Electromagnetism, Thermodynamics, Optics, Modern Physics
    • 4 Modality Types: Text-Only, Text+Illustration Figure, Text+Variable Figure, Text+Data Figure
    • 6 Answer Types: Expression, Numerical Value, Multiple Choice, Equation, Open-Ended, Inequality

Evaluation is conducted using:

  • Answer-level and step-level scoring, aligned with official marking schemes
  • Exam score as the evaluation metric
  • Medal-based comparison, using official thresholds for gold, silver, and bronze

## 🖼️ Modality Categorization

*Figure: Examples of the four modality types.*

- 📝 **Text-Only (TO):** Pure text, no figures
- 🎯 **Text+Illustration Figure (TI):** Figures illustrate physical setups
- 📐 **Text+Variable Figure (TV):** Figures define key variables or geometry
- 📊 **Text+Data Figure (TD):** Figures show plots, data, or functions absent from the text

As models move from TO → TD, performance drops sharply, highlighting the challenges of visual reasoning.
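
To make this breakdown concrete, here is a minimal sketch of aggregating scores per modality type. The record layout (`modality` and `score` fields) and the numbers are purely illustrative, not the dataset's actual schema or results:

```python
# Sketch: per-modality average scores over a list of graded records.
# Field names and values are illustrative placeholders.
from collections import defaultdict

records = [
    {"modality": "TO", "score": 0.82},
    {"modality": "TI", "score": 0.61},
    {"modality": "TD", "score": 0.34},
    {"modality": "TD", "score": 0.41},
]

by_modality = defaultdict(list)
for r in records:
    by_modality[r["modality"]].append(r["score"])

for modality in ("TO", "TI", "TV", "TD"):
    scores = by_modality.get(modality)
    if scores:
        print(f"{modality}: mean score {sum(scores) / len(scores):.2f}")
```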

## 📈 Main Results

*Figure: Main results and medal table.*

- Closed-source reasoning MLLMs lead the benchmark, earning 6–12 gold medals (Top 5: Gemini-2.5-Pro, Gemini-2.5-Flash-Thinking, GPT-5, o3, Grok-4)
- Open-source MLLMs mostly score at or below the bronze level
- Open-source LLMs demonstrate stronger reasoning and generally outperform open-source MLLMs

## 🚀 Quick Start

### Install Python Packages

First, create a conda environment and install the relevant Python packages:

```bash
conda create -n pae python==3.10
conda activate pae

git clone https://github.com/amazon-science/PAE
cd PAE

# Install PAE
pip install -e .

# Install LLaVA
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
pip install -e .
pip install -e ".[train]"
pip install flash-attn==2.5.9.post1 --no-build-isolation
```

### Install Chrome

To reproduce our results, install Google Chrome and ChromeDriver version 125.0.6422.141:

```bash
sudo apt-get update
wget --no-verbose -O /tmp/chrome.deb https://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_125.0.6422.141-1_amd64.deb \
  && sudo apt install -y /tmp/chrome.deb \
  && rm /tmp/chrome.deb

wget -O /tmp/chromedriver.zip https://storage.googleapis.com/chrome-for-testing-public/125.0.6422.141/linux64/chromedriver-linux64.zip
cd /tmp
unzip /tmp/chromedriver.zip
sudo mv chromedriver-linux64/chromedriver /usr/local/bin
rm /tmp/chromedriver.zip
rm -r chromedriver-linux64
export PATH=$PATH:/usr/local/bin
```

Then verify that Google Chrome and ChromeDriver were installed successfully:

```bash
google-chrome --version
# Google Chrome 125.0.6422.141
chromedriver --version
# ChromeDriver 125.0.6422.141
```

### Play with the Model Yourself

```python
import os
from types import SimpleNamespace

import torch
from accelerate import Accelerator
from tqdm import tqdm

import pae
from pae.models import LlavaAgent, ClaudeAgent
from pae.environment.webgym import BatchedWebEnv
from llava.model.language_model.llava_mistral import LlavaMistralForCausalLM

# ============= Instantiate the agent =============
config_dict = {
    "use_lora": False,
    "use_q4": False,  # our 34B model is quantized to 4-bit; set to True if you are using the 34B model
    "use_anyres": False,
    "temperature": 1.0,
    "max_new_tokens": 512,
    "train_vision": False,
    "num_beams": 1,
}
config = SimpleNamespace(**config_dict)

accelerator = Accelerator()
agent = LlavaAgent(
    policy_lm="yifeizhou/pae-llava-7b",  # alternative models: "yifeizhou/pae-llava-7b-webarena", "yifeizhou/pae-llava-34b"
    device=accelerator.device,
    accelerator=accelerator,
    config=config,
)

# ============= Instantiate the environment =============
test_tasks = [{
    "web_name": "Google Map",
    "id": "0",
    "ques": "Locate a parking lot near the Brooklyn Bridge that is open 24 hours. Review the user comments about it.",
    "web": "https://www.google.com/maps/",
}]
save_path = "xxx"  # replace with your output directory

test_env = BatchedWebEnv(
    tasks=test_tasks,
    do_eval=False,
    download_dir=os.path.join(save_path, "test_driver", "download"),
    output_dir=os.path.join(save_path, "test_driver", "output"),
    batch_size=1,
    max_iter=10,
)

# For you to check the images and actions
image_histories = []   # stores the history of the paths of screenshots
action_histories = []  # stores the history of actions

results = test_env.reset()
image_histories.append(results[0][0]["image"])

observations = [r[0] for r in results]
actions = agent.get_action(observations)
action_histories.append(actions[0])
dones = None

# Roll out up to three environment steps, stopping early once all tasks finish
for _ in tqdm(range(3)):
    if dones is not None and all(dones):
        break
    results = test_env.step(actions)
    image_histories.append(results[0][0]["image"])
    observations = [r[0] for r in results]
    actions = agent.get_action(observations)
    action_histories.append(actions[0])
    dones = [r[2] for r in results]

print("Done!")
print("image_histories: ", image_histories)
print("action_histories: ", action_histories)
```

## 📥 Download
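
A typical way to load a Hugging Face dataset is shown below. The repository ID is a placeholder, so substitute the actual ID from the Dataset link above:

```python
# Minimal sketch of loading HiPhO with the `datasets` library.
# "<org>/HiPhO" is a placeholder; use the repository ID from the Dataset link above.
from datasets import load_dataset

dataset = load_dataset("<org>/HiPhO")
print(dataset)  # shows the available splits and fields
```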

## 🔖 Citation

```bibtex
@article{hipho2025,
  title={HiPhO: How Far Are (M)LLMs from Humans in the Latest High School Physics Olympiad Benchmark?},
  author={Yu, Fangchen and Wan, Haiyuan and Cheng, Qianjia and Zhang, Yuchen and Chen, Jiacheng and Han, Fujun and Wu, Yulun and Yao, Junchi and Hu, Ruilizhen and Ding, Ning and Cheng, Yu and Chen, Tao and Bai, Lei and Zhou, Dongzhan and Luo, Yun and Cui, Ganqu and Ye, Peng},
  journal={arXiv preprint arXiv:2509.07894},
  year={2025}
}
```