Modalities: Image, Text · Formats: parquet · Libraries: Datasets, pandas
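The card lists parquet files readable through the Datasets and pandas libraries. Below is a minimal loading sketch under that assumption; the repo id and split name are hypothetical placeholders, not the dataset's actual path:

```python
# Minimal sketch: load a parquet-backed Hugging Face dataset with the
# Datasets library. The repo id is a hypothetical placeholder; substitute
# the actual "AI4Industry/<name>" path from the dataset page.
from datasets import load_dataset

repo_id = "AI4Industry/dataset-name"  # hypothetical placeholder
ds = load_dataset(repo_id)            # downloads and reads the parquet shards

df = ds["train"].to_pandas()          # split name "train" is an assumption
print(df.head())
```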
Commit 37d2f30 · verified
AI4Industry committed · 1 parent: 4cfd78e

Update README.md

Files changed (1):
  1. README.md +13 -0
README.md CHANGED
@@ -64,6 +64,19 @@ This benchmark challenges visual-language models on their foundational knowledge
 The benchmark is released in both English and Chinese versions.
 
 
+
+## 📑 Task Types
+
+We categorize chemical reaction visual question answering tasks into six types:
+
+- **Type 0 – Fact Extraction**: Direct retrieval of textual or numerical information from reaction schemes.
+- **Type 1 – Reagent Role and Function Identification**: Identification of reagents and their functional roles, requiring chemical knowledge and reaction-type awareness.
+- **Type 2 – Reaction Mechanism and Process Understanding**: Interpretation of reaction progression, including intermediates, catalytic cycles, and mechanistic steps.
+- **Type 3 – Comparative Analysis and Reasoning**: Comparative evaluation, causal explanation, or outcome prediction under varying conditions.
+- **Type 4 – Multi-step Synthesis and Global Understanding**: Comprehension of multi-step pathways, step-to-step coherence, and overall synthetic design.
+- **Type 5 – Chemical Structure Recognition**: Extraction and reasoning-based parsing of chemical structures in SMILES or E-SMILES (as defined in the [MolParser](https://arxiv.org/abs/2411.11098) paper).
+
+
 ## 🎯 Benchmark Evaluation
 
 This benchmark evaluates model performance on multiple-choice question answering (MCQ) tasks.
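Since evaluation is multiple-choice, accuracy over option letters is the natural metric. A minimal scoring sketch follows; the field names `answer` and `prediction` are assumptions, not the benchmark's documented schema:

```python
# Minimal MCQ accuracy sketch. Field names are assumptions.
def mcq_accuracy(records: list[dict]) -> float:
    """Fraction of records whose predicted option letter matches the gold one."""
    if not records:
        return 0.0
    correct = sum(
        r["prediction"].strip().upper() == r["answer"].strip().upper()
        for r in records
    )
    return correct / len(records)

# Usage: two toy records, one correct -> prints 0.5
print(mcq_accuracy([
    {"answer": "B", "prediction": "b"},
    {"answer": "C", "prediction": "A"},
]))
```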
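For Type 5 in the diff above, predicted structures in SMILES are typically compared after canonicalization rather than as raw strings. A sketch using RDKit, which covers standard SMILES only; the E-SMILES extensions from the MolParser paper would need dedicated handling not shown here:

```python
# Sketch: compare a predicted SMILES against a reference by canonical form.
# Uses RDKit (pip install rdkit); handles standard SMILES only, not the
# E-SMILES extensions described in the MolParser paper.
from rdkit import Chem

def canonical_smiles(smiles: str) -> str | None:
    """Return RDKit's canonical SMILES, or None if the string fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def smiles_match(pred: str, ref: str) -> bool:
    """True when both strings parse and denote the same molecule."""
    cp, cr = canonical_smiles(pred), canonical_smiles(ref)
    return cp is not None and cp == cr

# Usage: two spellings of ethanol match; propanol does not.
print(smiles_match("OCC", "CCO"))   # True
print(smiles_match("CCCO", "CCO"))  # False
```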