# 💧 Dimple-7B

<p align="center">
🤗 <a href="https://huggingface.co/rp-yu/Dimple-7B">Model</a>&nbsp;&nbsp; | &nbsp;&nbsp; 💬 <a href="https://huggingface.co/spaces/rp-yu/Dimple-7B">Demo: Chat with Dimple</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://arxiv.org/abs/">Paper</a>&nbsp;&nbsp; | &nbsp;&nbsp; ✨ <a href="https://github.com/yu-rp/Dimple">Code</a>
</p>

**Dimple** is the first Discrete Diffusion Multimodal Large Language Model (DMLLM), trained with a hybrid paradigm that combines autoregressive and diffusion-based instruction tuning. Its architecture is similar to that of Qwen and LLaVA, but it introduces an **autoregressive-then-diffusion** training strategy: