Commit c4a9f22 by Ming-sheng Li
Parent: ff5b521

Files changed:
- .DS_Store (+0, -0)
- README.md (+38, -0)
- images.zip (+3, -0)
- pretrain-data.zip (+3, -0)
- unified_formal_annotations.json (+0, -0)
.DS_Store ADDED
Binary file (6.15 kB)
README.md ADDED
## Introduction to GeoX
**GeoX** is a multi-modal large model designed for automatic geometric problem solving, utilizing three progressive training stages to enhance diagram understanding and reasoning. In this paper, we validate that the **formal vision-language training** paradigm is a simple-yet-effective solution for complex mathematical diagram learning.
## Data Preparation for GeoX
### Step 1. Data for Unimodal Pre-training
**Update 2024-07-16:** You can download our collected diagram images from [this link](https://huggingface.co/datasets/U4R/GeoX-data/pretrain-data.zip).
Additionally, we used existing geometric text to build a corpus, detailed in [our paper]().
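Once `pretrain-data.zip` is downloaded, a minimal sketch for unpacking it and taking inventory of the diagram images might look as follows (the archive's internal folder layout is an assumption; adjust the paths to match the actual contents):

```python
import zipfile
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

def extract_diagrams(zip_path: str, out_dir: str) -> list:
    """Extract an image archive and return the image files it contained."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    # Collect the extracted image files, sorted for reproducibility.
    return sorted(p for p in out.rglob("*") if p.suffix.lower() in IMAGE_EXTS)
```

Filtering by extension keeps stray non-image files (e.g. metadata) out of the pre-training image list.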
### Step 2. Data for Geometry-Language Alignment
To train the GS-Former, please prepare the [unified formal annotations](https://huggingface.co/datasets/U4R/GeoX-data/unified_formal_annotations.json) and paired [images](https://huggingface.co/datasets/U4R/GeoX-data/images.zip).
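A small sketch of how the annotations and images could be paired up before alignment training. The JSON schema here (a list of entries with `"image"` and `"formal"` keys) is an assumption, not the documented format of `unified_formal_annotations.json`; rename the keys to match the real file:

```python
import json
from pathlib import Path

def load_alignment_pairs(annotation_file: str, image_dir: str) -> list:
    """Load annotations and keep only entries whose image file exists on disk.

    NOTE: the keys "image" and "formal" are hypothetical; adjust them to the
    actual schema of unified_formal_annotations.json.
    """
    with open(annotation_file, encoding="utf-8") as f:
        annotations = json.load(f)
    pairs = []
    for entry in annotations:
        img = Path(image_dir) / entry["image"]
        if img.is_file():
            pairs.append((img, entry["formal"]))
    return pairs
```

Dropping entries with missing images up front avoids surprises partway through a training epoch.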
### Step 3. Data for End-to-End Visual Instruction Tuning
We use the GeoQA, UniGeo, Geometry3K, and PGPS9K datasets for fine-tuning and evaluation:
1. **GeoQA**: Follow the instructions [here](https://github.com/chen-judge/GeoQA) to download the `GeoQA` dataset.
2. **UniGeo**: Follow the instructions [here](https://github.com/chen-judge/UniGeo) to download the `UniGeo` dataset.
3. **Geometry3K and PGPS9K**: Follow the instructions [here](https://github.com/mingliangzhang2018/PGPS) to download the `PGPS9K` dataset. `Geometry3K` is also provided in the same release.
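After downloading, a quick sanity check that all four benchmarks are in place can save a failed run later. The layout assumed here (one folder per benchmark under a common data root) is hypothetical; adapt the names to however you organize the downloads:

```python
from pathlib import Path

# Hypothetical layout: one folder per benchmark under a shared data root.
EXPECTED_DATASETS = ["GeoQA", "UniGeo", "Geometry3K", "PGPS9K"]

def missing_datasets(data_root: str) -> list:
    """Return the benchmark folders that have not been downloaded yet."""
    root = Path(data_root)
    return [name for name in EXPECTED_DATASETS if not (root / name).is_dir()]
```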
<font color="#dd0000">Note:</font> Due to copyright restrictions, we currently provide only links to the source datasets. Our organized versions of the full tuning and evaluation data can be shared via email; if you need them, please contact us by [email](mailto:limc22@m.fudan.edu.cn).
For more details, please refer to [our paper]() and our [GitHub repository](https://github.com/UniModal4Reasoning/GeoX). If you find our work helpful, please consider starring ⭐ this repository and citing us:
```bibtex
```
images.zip ADDED (Git LFS pointer)
version https://git-lfs.github.com/spec/v1
oid sha256:cac4b5d96f1fd48497f10adfb1f4f538a54df3d9db9a6089da9917e4f7d10720
size 68077756
pretrain-data.zip ADDED (Git LFS pointer)
version https://git-lfs.github.com/spec/v1
oid sha256:9c34c921bc4161b042613e2de5fd5424ee97fc450042ae9896fe5af55b2459f4
size 1908999185
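The Git LFS pointers above record SHA-256 digests for the two archives, so a downloaded copy can be verified locally. A small stdlib sketch that streams the file (the zips are large, so avoid reading them into memory at once):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file by streaming it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests, copied from the Git LFS pointer files above.
EXPECTED = {
    "images.zip": "cac4b5d96f1fd48497f10adfb1f4f538a54df3d9db9a6089da9917e4f7d10720",
    "pretrain-data.zip": "9c34c921bc4161b042613e2de5fd5424ee97fc450042ae9896fe5af55b2459f4",
}
```

Compare `sha256_of("pretrain-data.zip")` against `EXPECTED["pretrain-data.zip"]` to confirm the 1.9 GB download completed intact.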
unified_formal_annotations.json ADDED
(Diff too large to render.)