DP committed
Commit 29ad320 · 1 parent: c9fbe51

update readme

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ size_categories:
 
 ![SpecVQA example image](example/SpecVQA.jpg)
 
-## 1. Background
+## 1. Introduction
 
 Multimodal Large Language Models (MLLMs) have achieved notable progress in visual–language understanding and cross-modal reasoning, yet their capabilities remain limited when applied to the highly specialized task of spectral understanding. These limitations are further obscured by existing benchmarks, which either emphasize general object recognition or focus on simple chart-based data retrieval, lacking the scientific grounding needed to accurately assess or diagnose model performance in this domain.