mr02 committed
Commit 2a93eb2 · verified · 1 Parent(s): 9a09082

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -55,8 +55,9 @@ configs:
 
 [Paper](https://huggingface.co/papers/2506.21355) | [Project page](https://smmile-benchmark.github.io) | [Code](https://github.com/eth-medical-ai-lab/smmile)
 <div align="center">
-  <img src="./logo_final.png" alt="SMMILE Logo" width="350">
+  <img src="./logo_final.png" alt="SMMILE Logo" width="400">
 </div>
+
 ## Introduction
 
 Multimodal in-context learning (ICL) remains underexplored despite the profound potential it could have in complex application domains such as medicine. Clinicians routinely face a long tail of tasks which they need to learn to solve from few examples, such as considering few relevant previous cases or few differential diagnoses. While MLLMs have shown impressive advances in medical visual question answering (VQA) or multi-turn chatting, their ability to learn multimodal tasks from context is largely unknown.