abdurrahimyilmaz committed on
Commit 4c461ce · verified · 1 Parent(s): bfc209f

Update README.md

Files changed (1): README.md +33 -7

README.md CHANGED
@@ -1,10 +1,36 @@
  ---
- title: README
- emoji: 🦀
- colorFrom: purple
- colorTo: indigo
- sdk: static
- pinned: false
  ---

- Edit this `README.md` markdown file to author your organization card.
+ # 🧠 DermaVLM Organization
+
+ **DermaVLM** is an open-source research organization focused on **resource-efficient medical Vision–Language Models (VLMs)**, with dermatology as our primary use case.
+ Our work combines **synthetic data generation**, **LLM/VLM training**, **model inference**, and **clinical image annotation tools** into a unified research ecosystem.
+
+ This organization is built around the **SCALEMED framework**, enabling scalable and cost-effective dermatology-focused AI research.
+
  ---
+
+ ## 🧪 Research Foundation
+
+ > **📄 Research Paper**
+ > This organization is part of the SCALEMED framework.
+ > **Read our paper on medRxiv**:
+ > **Resource-efficient medical vision language model for dermatology via a synthetic data generation framework**
+ > https://www.medrxiv.org/content/10.1101/2025.05.17.25327785v2
+ >
+ > If you use any of our repositories in your research, please cite our paper (see the [Citation](#-citation) section).
+
+ The main code is available at https://github.com/DermaVLM.
+
  ---

+
+ ## 📖 Citation
+
+ If you use this organization or any associated repositories in your research, please cite:
+
+ ```bibtex
+ @article{yilmaz2025resource,
+   title={Resource-efficient medical vision language model for dermatology via a synthetic data generation framework},
+   author={Yilmaz, A and Yuceyalcin, F and Varol, R and Gokyayla, E and Erdem, O and Choi, D and Demircali, AA and Gencoglan, G and Posma, JM and Temelkuran, B},
+   journal={medRxiv},
+   year={2025}
+ }
+ ```
+
+ Feel free to explore the repositories, open issues, or contribute to advancing small, effective vision-language models for medicine.