---

<img src="https://cdn-uploads.huggingface.co/production/uploads/635364b3c41f548fe39db945/T6ffjtAkFkI76QjXmN6iR.png" alt="Dimple" style="width:100%;"/>

<p align="center">
🤗 <a href="https://huggingface.co/rp-yu/Dimple-7B">Model</a>   |    💬 <a href="https://huggingface.co/spaces/rp-yu/Dimple-7B">Demo: Chat with Dimple</a>   |   📑 <a href="https://arxiv.org/abs/">Paper</a>   |    ✨ <a href="https://github.com/yu-rp/Dimple">Code</a>
</p>

# 💧 Dimple-7B

**Dimple** is the first Discrete Diffusion Multimodal Large Language Model (DMLLM). It is trained with a hybrid paradigm that combines autoregressive and diffusion-based instruction tuning. The model architecture is similar to Qwen and LLaVA, but Dimple introduces an **autoregressive-then-diffusion** training strategy:
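As rough intuition for what discrete diffusion decoding buys over autoregressive decoding, here is a toy sketch — illustrative only, not Dimple's actual implementation. `toy_predictor` is a hypothetical stand-in for the model: it assigns each masked position a candidate token and a confidence. Autoregressive decoding commits one token per step; diffusion-style decoding starts from an all-`[MASK]` sequence and unmasks several confident positions per step.

```python
MASK = "[MASK]"


def toy_predictor(seq, target):
    # Hypothetical stand-in for the model: for each masked position, propose
    # the target token with a fixed, position-dependent confidence.
    return {
        i: (tok, 1.0 / (i + 1))
        for i, (cur, tok) in enumerate(zip(seq, target))
        if cur == MASK
    }


def autoregressive_decode(target):
    # One token is committed per decoding step, left to right.
    seq, steps = [], 0
    for tok in target:
        seq.append(tok)
        steps += 1
    return seq, steps


def diffusion_decode(target, tokens_per_step=2):
    # Start fully masked; each step unmasks the most confident positions
    # in parallel, so the step count shrinks roughly by `tokens_per_step`.
    seq = [MASK] * len(target)
    steps = 0
    while MASK in seq:
        preds = toy_predictor(seq, target)
        ranked = sorted(preds.items(), key=lambda kv: -kv[1][1])
        for i, (tok, _conf) in ranked[:tokens_per_step]:
            seq[i] = tok
        steps += 1
    return seq, steps
```

On a 5-token sequence, the autoregressive loop needs 5 steps while the diffusion loop with `tokens_per_step=2` finishes in 3; real DMLLMs trade this parallelism against prediction quality per step.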