Add `library_name` tag and paper link to model card

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +8 -7
README.md CHANGED
@@ -1,22 +1,23 @@
 ---
-license: apache-2.0
+base_model:
+- microsoft/Phi-3-mini-128k-instruct
+- GoodBaiBai88/M3D-CLIP
+- google/siglip-large-patch16-256
 datasets:
 - GoodBaiBai88/M3D-Cap
 - GoodBaiBai88/M3D-VQA
 language:
 - en
-base_model:
-- microsoft/Phi-3-mini-128k-instruct
-- GoodBaiBai88/M3D-CLIP
-- google/siglip-large-patch16-256
+license: apache-2.0
 pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
 # Med-2E3-M3D
 
 ## Introduction
 
-A 3D medical LVLM, Med-2E3, trained on **3D** CT volumes and English medical texts ([M3D-Cap](https://huggingface.co/datasets/GoodBaiBai88/M3D-Cap) & [M3D-VQA](https://huggingface.co/datasets/GoodBaiBai88/M3D-VQA)), enabling tasks such as **report generation** and medical **VQA**.
+A 3D medical LVLM, Med-2E3, trained on **3D** CT volumes and English medical texts ([M3D-Cap](https://huggingface.co/datasets/GoodBaiBai88/M3D-Cap) & [M3D-VQA](https://huggingface.co/datasets/GoodBaiBai88/M3D-VQA)), enabling tasks such as **report generation** and medical **VQA**. This model is presented in the paper [Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model](https://huggingface.co/papers/2411.12783).
 
 | | Config |
 | :--- | :---: |
@@ -40,4 +41,4 @@ Please refer to [Med-2E3](https://github.com/MSIIP/Med-2E3).
 journal={arXiv preprint arXiv:2411.12783},
 year={2024}
 }
 ```
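Since the patch reorders and extends the YAML front matter (adding `base_model` entries and `library_name: transformers`, which the Hub uses to attribute the model to a library), a quick local check can confirm the patched card carries the expected top-level keys. A minimal sketch, standard library only — the `CARD` string below mirrors the front matter after this PR, and `front_matter_keys` is a hypothetical helper, not part of any Hub tooling:

```python
# Sanity-check sketch (not part of the PR): extract top-level keys from a
# model card's YAML front matter and confirm the tags this change adds.
CARD = """\
---
base_model:
- microsoft/Phi-3-mini-128k-instruct
- GoodBaiBai88/M3D-CLIP
- google/siglip-large-patch16-256
datasets:
- GoodBaiBai88/M3D-Cap
- GoodBaiBai88/M3D-VQA
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
"""

def front_matter_keys(card: str) -> list[str]:
    """Return top-level keys between the opening and closing '---' fences."""
    lines = card.splitlines()
    end = lines.index("---", 1)  # closing fence of the front matter
    return [ln.split(":", 1)[0] for ln in lines[1:end]
            if ":" in ln and not ln.startswith("- ")]

keys = front_matter_keys(CARD)
print(keys)
```

For a real workflow, `huggingface_hub`'s `ModelCard` utilities parse and validate card metadata more robustly; the snippet above only illustrates what the diff changes.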