junzhin committed · verified
Commit a0126ff · Parent(s): 7e9c37e

Update README.md

Files changed (1): README.md +26 -5
README.md CHANGED
@@ -1,7 +1,27 @@
----
-license: apache-2.0
-pipeline_tag: any-to-any
----
+---
+license: apache-2.0
+pipeline_tag: any-to-any
+language:
+- en
+- zh
+metrics:
+- accuracy
+base_model:
+- ByteDance-Seed/BAGEL-7B-MoT
+- Qwen/Qwen2.5-VL-7B-Instruct
+tags:
+- medical
+- vision-language
+- multimodal
+- unified-model
+- medical-vqa
+- text-to-image
+- image-to-text
+- medical-understanding
+- report-generation
+- interleaved-multimodal
+- modality-transfer
+---
 # Model Card for UniMedVL
 
 UniMedVL is the first unified medical foundation model for seamless multimodal understanding and generation, following a clinically-inspired Observation-Knowledge-Analysis framework.
@@ -41,5 +61,6 @@ The model can be directly used for:
 
 - **Clinical Decision Making**: This model is for research purposes only and should NOT be used for actual clinical diagnosis or treatment decisions
 
+## Acknowledgments
 
-
+We sincerely thank the [Bagel](https://github.com/jmistral/bagel) project for providing the foundational framework upon which our code and model training are built.
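The block this commit adds is YAML front matter: the metadata the Hugging Face Hub reads from between the two leading `---` fences at the top of README.md. As a rough illustration of that structure, here is a minimal stdlib-only sketch that pulls the fenced block out of a README string; the `extract_front_matter` helper and the shortened sample text are hypothetical, not the Hub's own parser.

```python
# Sketch: extract the YAML front-matter body that sits between the two
# leading "---" fences of a model-card README. Hypothetical helper for
# illustration only; the Hugging Face Hub uses its own metadata parser.

SAMPLE_README = """---
license: apache-2.0
pipeline_tag: any-to-any
language:
- en
- zh
---
# Model Card for UniMedVL
"""


def extract_front_matter(text: str) -> str:
    """Return the text between the opening and closing '---' fences,
    or an empty string if no well-formed front matter is present."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ""  # README does not start with a front-matter fence
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return "\n".join(lines[1:i])  # body between the fences
    return ""  # opening fence never closed


print(extract_front_matter(SAMPLE_README))
```

Anything outside the fences (the `# Model Card for UniMedVL` heading and the prose below it) is left untouched, which is why the commit can extend the metadata without altering the card body.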