---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: TheoremQA
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: Answer_type
    dtype: string
  - name: Picture
    dtype: image
  splits:
  - name: test
    num_bytes: 5025005
    num_examples: 800
  download_size: 4949475
  dataset_size: 5025005
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
tags:
- science
- geometry
- mathematical-reasoning
---
# Dataset Card for "TheoremQA"
## Introduction
We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics, and Finance. The dataset was collected by human experts with careful quality control. We provide it as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. The repository linked below provides a pipeline to prompt LLMs and evaluate their outputs with WolframAlpha.
## How to use TheoremQA
```python
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/TheoremQA")
for d in dataset["test"]:
    print(d)
```
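Each record exposes the four fields listed in the schema above (`Question`, `Answer`, `Answer_type`, `Picture`). A minimal sketch of type-aware answer handling follows; the record and the type labels (`"integer"`, `"float"`) are illustrative assumptions for this example, not values drawn from the actual dataset:

```python
# Illustrative record with the four TheoremQA fields from the schema above.
# The values are made up for this sketch, not taken from the dataset.
example = {
    "Question": "What is the determinant of the 2x2 identity matrix?",
    "Answer": "1",
    "Answer_type": "integer",
    "Picture": None,  # an image is attached only for figure-based questions
}

def is_numeric(record):
    """Rough check for records whose answer should be compared numerically.

    The type labels checked here are assumptions; inspect the dataset's
    actual Answer_type values before relying on them.
    """
    return record["Answer_type"] in {"integer", "float"}

print(is_numeric(example))
```

In an evaluation loop, a check like this would decide whether to compare a model's output to `Answer` numerically (with a tolerance) or as a string.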
## Arxiv Paper
https://arxiv.org/abs/2305.12524
## Related Survey Paper
This dataset is mentioned in the survey paper [A Survey of Deep Learning for Geometry Problem Solving](https://huggingface.co/papers/2507.11936).
## Code
https://github.com/wenhuchen/TheoremQA/tree/main
## Related Code (Survey Reading List)
https://github.com/majianz/gps-survey