Upload folder using huggingface_hub

Files changed:
- README.md +137 -0
- assets/pipeline.png +3 -0
- data/test.parquet +3 -0
- evaluation/README.md +3 -0
- evaluation/tasks/MolParse/MolParse.yaml +30 -0
- evaluation/tasks/MolParse/utils.py +95 -0
README.md
ADDED
@@ -0,0 +1,137 @@
---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
pretty_name: MolParse
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.parquet
---
# MolParse Bench

<p align="center">
  <img src="./assets/pipeline.png" alt="MolParse" style="display: block; margin: auto; max-width: 70%;">
</p>

<p align="center">
  | <a href="https://huggingface.co/datasets/InnovatorLab/MolParse"><b>HuggingFace</b></a> | <a href="https://github.com/InnovatorLab/MolParse"><b>Code</b></a> |
</p>

---

## 🔥 Latest News

- **[2026/01]** MolParse v1.0 is officially released.

---

## Overview

**MolParse** is a large-scale multimodal dataset for **optical chemical structure parsing**, designed to evaluate and train models that convert **molecular structure images** into **structured chemical representations**.

The dataset focuses on realistic chemical diagrams commonly found in scientific literature and patents, emphasizing robustness to visual noise, diverse drawing styles, and complex molecular layouts. MolParse supports tasks that require precise visual perception and structured chemical understanding, rather than simple text recognition.

---
## Benchmark Scope

MolParse evaluates models across the following core capability dimensions:

### 1. Molecular Structure Perception

Assesses the ability to accurately recognize:
- Atoms and bonds
- Ring systems and fused structures
- Substituents and functional groups
- Variable attachment points and abstract structures

### 2. Structured Chemical Representation

Evaluates the capacity to translate molecular images into:
- Linearized chemical strings
- Structured symbolic representations
- Machine-readable formats suitable for downstream reasoning

### 3. Robustness in Real-World Documents

Tests model stability under:
- Noisy or low-quality scans
- Diverse drawing conventions
- Crowded layouts and overlapping annotations
- Variations in resolution and aspect ratio

---
## Dataset Characteristics

- **Task Format**: Image-to-Structure Parsing
- **Modalities**: Image + Text
- **Domain**: Chemistry
- **Languages**: English
- **Annotation**: Expert-verified
- **Data Scale**: Large-scale (millions of image–structure pairs; this repository hosts the test split)

---
## Task Types

Each MolParse sample supports one or more of the following task types:

1. **Molecular Image Captioning**
   Convert molecular diagrams into structured chemical strings.

2. **Symbol and Topology Recognition**
   Identify atoms, bonds, rings, and connection patterns.

3. **Complex Structure Parsing**
   Handle abstract rings, variable groups, and non-canonical layouts.

4. **Noise-Robust Recognition**
   Maintain parsing accuracy under visual distortion or interference.

---
## Data Usage

MolParse is suitable for:
- Training end-to-end optical chemical structure recognition models
- Evaluating vision-language and vision-only chemical parsers
- Building scientific document understanding pipelines
- Supporting downstream chemical reasoning and information extraction

---
## Download MolParse Dataset

You can load the MolParse dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("InnovatorLab/MolParse")
```
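
Each test record pairs the molecular image(s) with a question and a reference answer; the field names below (`images`, `question`, `answer`) are the ones read by the evaluation code in `evaluation/tasks/MolParse/utils.py`:

```python
# Peek at one test sample; fields follow evaluation/tasks/MolParse/utils.py.
sample = dataset["test"][0]
print(sample["question"])  # prompt shown to the model
print(sample["answer"])    # reference answer used by the judge
```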

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation.
Please refer to the files under [`./evaluation`](./evaluation/README.md) for the detailed evaluation configuration and scripts. The API judge used for scoring is configured through environment variables, as shown below.
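
The judge settings are read in `evaluation/tasks/MolParse/utils.py`; a minimal sketch, assuming an OpenAI-compatible server is already running:

```python
import os

# Variable names match evaluation/tasks/MolParse/utils.py; the values shown
# are its defaults and stand in for your actual endpoint and judge model.
os.environ["OPENAI_API_KEY"] = "EMPTY"                      # placeholder key
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"  # judge endpoint
os.environ["OPENAI_MODEL_NAME"] = "default-model"           # judge model name
```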

---

## License

MolParse is released under the **MIT License**.
See [LICENSE](./LICENSE) for more details.

---
assets/pipeline.png
ADDED (stored via Git LFS)
data/test.parquet
ADDED
@@ -0,0 +1,3 @@

```
version https://git-lfs.github.com/spec/v1
oid sha256:e679f2538f961b50efe59822b98ae315c930a6dcb1561ec3fc25da1d011f150b
size 34544964
```
evaluation/README.md
ADDED
@@ -0,0 +1,3 @@

# Evaluations of MolParse

We evaluate the MolParse dataset using [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). The evaluation code is listed in this folder.
evaluation/tasks/MolParse/MolParse.yaml
ADDED
@@ -0,0 +1,30 @@

```yaml
dataset_path: "InnovatorLab/MolParse"
task: "MolParse"
test_split: "test"
output_type: "generate_until"

doc_to_visual: !function utils.doc_to_visual
doc_to_text: !function utils.doc_to_text
doc_to_target: !function utils.doc_to_target

generation_kwargs:
  max_new_tokens: 256
  temperature: 0.0
  top_p: 1.0
  num_beams: 1
  do_sample: false

process_results: !function utils.process_results

metric_list:
  - metric: api_judge_accuracy
    aggregation: !function utils.aggregation
    higher_is_better: true

lmms_eval_specific_kwargs:
  default:
    pre_prompt: ""
    post_prompt: ""

metadata:
  - version: 1.0
```
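
For reference, the reported score is simply the mean of the per-sample judge outcomes: `process_results` in `utils.py` emits one `api_judge_accuracy` value (0.0 or 1.0) per document, and `utils.aggregation` averages them. A minimal sketch with hypothetical outcomes:

```python
# Hypothetical per-sample judge outcomes (1.0 = judged correct).
per_sample = [1.0, 0.0, 1.0, 1.0]

# utils.aggregation reduces these to the mean, i.e. the reported accuracy.
accuracy = sum(per_sample) / len(per_sample) if per_sample else 0.0
print(f"api_judge_accuracy = {accuracy:.2f}")  # api_judge_accuracy = 0.75
```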
evaluation/tasks/MolParse/utils.py
ADDED
@@ -0,0 +1,95 @@

```python
import os
from typing import Dict, Any, List

from PIL import Image
from openai import OpenAI

# Judge endpoint configuration; defaults target a local OpenAI-compatible server.
API_KEY = os.environ.get("OPENAI_API_KEY", "EMPTY")
API_BASE_URL = os.environ.get("OPENAI_BASE_URL", "http://localhost:8000/v1")
MODEL_NAME = os.environ.get("OPENAI_MODEL_NAME", "default-model")

client = OpenAI(
    base_url=API_BASE_URL,
    api_key=API_KEY,
)


def api_judge_answer(question: str, ground_truth: str, model_prediction: str) -> bool:
    """Use the judge model API to decide whether the model prediction is correct."""
    system_prompt = """You are a professional evaluation assistant. Please carefully compare whether the model's predicted answer matches the standard answer.

Evaluation criteria:
1. For chemical formulas/E-SMILES: Consider correct if structures are identical
2. For numerical answers: Consider correct if values are the same (allow minor differences in decimal places)
3. For text answers: Consider correct if semantics are the same
4. For Yes/No questions: Consider correct if the answer direction is consistent

Please only answer "correct" or "incorrect", do not explain the reasons."""

    user_prompt = f"""Question: {question}

Standard Answer: {ground_truth}
Model Prediction: {model_prediction}

Is the model prediction correct? Only answer "correct" or "incorrect":"""

    try:
        completion = client.chat.completions.create(
            model=MODEL_NAME,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            temperature=0.0,
            max_tokens=10,
        )

        judgment = completion.choices[0].message.content.strip().lower()

        if judgment == "correct":
            return True
        elif judgment == "incorrect":
            return False
        else:
            print(f"Warning: Model returned unexpected judgment: '{judgment}'")
            return False

    except Exception as e:
        print(f"API judgment error: {e}")
        return False


def doc_to_visual(doc: Dict[str, Any]) -> List[Image.Image]:
    """Return the document's images as a list of RGB PIL images."""
    images = doc.get("images", [])
    if not images:
        return []
    if isinstance(images, Image.Image):
        images = [images]
    return [img.convert("RGB") for img in images]


def doc_to_text(doc, lmms_eval_specific_kwargs=None):
    """Build the prompt: optional pre/post prompts wrapped around the question."""
    pre_prompt = lmms_eval_specific_kwargs.get("pre_prompt", "") if lmms_eval_specific_kwargs else ""
    post_prompt = lmms_eval_specific_kwargs.get("post_prompt", "") if lmms_eval_specific_kwargs else ""
    content = doc.get("question", "")
    return f"{pre_prompt}{content}{post_prompt}"


def doc_to_target(doc):
    """Return the reference answer for a document."""
    return doc.get("answer", "")


def process_results(doc: Dict[str, Any], results: List[str]) -> Dict[str, Any]:
    """Score a single prediction with the API judge, falling back to exact match."""
    prediction = results[0] if isinstance(results, list) else results
    target = doc_to_target(doc)
    question = doc_to_text(doc)
    api_judge_correct = False
    try:
        api_judge_correct = api_judge_answer(question, target, prediction)
    except Exception as e:
        # Fall back to basic exact-string matching if the judge call fails.
        print(f"API judgment failed during process_results, using basic matching: {e}")
        api_judge_correct = str(prediction).strip() == str(target).strip()
    return {
        "api_judge_accuracy": float(api_judge_correct),
        "question": question,
        "raw_output": prediction,
        "ground_truth": target,
    }


def aggregation(results: List[float]) -> float:
    """Average per-sample scores into the final accuracy."""
    return sum(results) / len(results) if results else 0.0
```
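
A quick way to sanity-check the wiring, assuming an OpenAI-compatible judge endpoint is reachable at `OPENAI_BASE_URL` (the document below is a hypothetical sample, not taken from the dataset):

```python
# Hypothetical smoke test for the judging pipeline.
doc = {"question": "What is the SMILES for benzene?", "answer": "c1ccccc1"}

result = process_results(doc, ["c1ccccc1"])
print(result["api_judge_accuracy"])  # 1.0 if the judge answers "correct"
```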