AprioriLv committed · verified
Commit 90a2f83 · Parent(s): 043b0cc

Update README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED

@@ -14,6 +14,8 @@ size_categories:
 
 ![SpecVQA example image](example/SpecVQA.jpg)
 
+$$ E = mc^2 $$
+
 ## 1. Background
 
 Multimodal Large Language Models (MLLMs) have achieved notable progress in visual–language understanding and cross-modal reasoning, yet their capabilities remain limited when applied to the highly specialized task of spectral understanding. These limitations are further obscured by existing benchmarks, which either emphasize general object recognition or focus on simple chart-based data retrieval, lacking the scientific grounding needed to accurately assess or diagnose model performance in this domain.