---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: TheoremQA
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: Answer_type
    dtype: string
  - name: Picture
    dtype: image
  splits:
  - name: test
    num_bytes: 5025005
    num_examples: 800
  download_size: 4949475
  dataset_size: 5025005
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
tags:
- science
- geometry
- mathematical-reasoning
---
# Dataset Card for "TheoremQA"
## Introduction
We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics, and Finance. The dataset was collected and annotated by human experts to ensure high quality. We provide it as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. We also provide a pipeline (see the code repository below) to prompt LLMs and evaluate their outputs with WolframAlpha.
## How to use TheoremQA
```python
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/TheoremQA")
for d in dataset["test"]:
    print(d)
```
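Each example exposes the `Question`, `Answer`, `Answer_type`, and `Picture` fields declared in the metadata above. As a minimal sketch of working with these fields (using a few hypothetical records in place of the real 800-example split), you might tally questions by their answer type like this:

```python
from collections import Counter

# Hypothetical records mirroring the card's declared features
# (Question, Answer, Answer_type, Picture); the real test split has 800 examples.
records = [
    {"Question": "What is 2 + 2?", "Answer": "4", "Answer_type": "integer", "Picture": None},
    {"Question": "Is e irrational?", "Answer": "True", "Answer_type": "bool", "Picture": None},
    {"Question": "Compute 1/3 + 1/6.", "Answer": "0.5", "Answer_type": "float", "Picture": None},
]

# Tally how many questions fall under each answer type
counts = Counter(r["Answer_type"] for r in records)
print(dict(counts))
```

The same `Counter` pattern applies unchanged when iterating over `dataset["test"]` loaded with `load_dataset` above.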
## Arxiv Paper
https://arxiv.org/abs/2305.12524
## Related Survey Paper
This dataset is mentioned in the survey paper [A Survey of Deep Learning for Geometry Problem Solving](https://huggingface.co/papers/2507.11936).
## Code
https://github.com/wenhuchen/TheoremQA/tree/main
## Related Code (Survey Reading List)
https://github.com/majianz/gps-survey