---
license: mit
license_name: mit
license_link: LICENSE
language:
- en
- ko
pipeline_tag: text-to-image
tags:
- text-to-image
- diffusion
- flow-matching
- motif
---

*Last update: 21st July 2025*

![image/png](https://cdn-uploads.huggingface.co/production/uploads/67eccc30771cc9fc058dc2a5/Tfqp_kHraoiNuvem-SfEu.png)

# News
**Motif Vision 6B-preview** marks the first step in our "beyond LLM" strategy. This preview version, developed in January 2025, is identical to the model currently deployed in our service: https://model-hub.motiftech.io.

We are actively working on improving the model, and the latest version—along with all accompanying artifacts—will be released in the near future.

---


## Introduction

We are excited to introduce **Motif Vision 6B Preview**, a powerful text-to-image model trained entirely from scratch. 🖼️✨

This model leverages a state-of-the-art **MMDiT** (Multi-modal Diffusion Transformer) architecture and utilizes **Flow Matching** for efficient and high-quality image generation. Motif Vision 6B Preview is our latest step in pushing the boundaries of generative AI.
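To give a rough intuition for the Flow Matching objective mentioned above, here is a minimal, illustrative sketch of a conditional flow matching loss with a straight-line (rectified-flow style) probability path. This is a generic textbook formulation, not Motif's actual training code (which has not yet been released); the callable `velocity_model` stands in for the real text-conditioned MMDiT.

```python
import numpy as np

def flow_matching_loss(velocity_model, x1, rng):
    """Illustrative conditional flow matching loss with a linear path.

    x1: batch of clean data samples (e.g. image latents), shape (B, D).
    velocity_model(x_t, t): any callable predicting the velocity field;
    in a real system this would be the text-conditioned diffusion transformer.
    """
    x0 = rng.standard_normal(x1.shape)        # Gaussian noise endpoint
    t = rng.uniform(size=(x1.shape[0], 1))    # per-sample timestep in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1             # point on the straight-line path
    v_target = x1 - x0                        # constant target velocity along the path
    v_pred = velocity_model(x_t, t)
    return float(np.mean((v_pred - v_target) ** 2))
```

At inference time, a model trained this way generates an image by integrating the learned velocity field from noise (`t = 0`) to data (`t = 1`) with an ODE solver, which is what makes the approach efficient compared with many-step stochastic samplers.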

---

## Training Information

The model was trained on a large-scale GPU cluster, demonstrating our commitment to developing cutting-edge models.

* **GPUs**: 96 AMD Instinct™ MI250 GPUs (24 nodes × 4 GPUs)
* **Training Time**: 90 days

*Notice: A detailed technical report will be released at a later time.*

---

## Availability

### Checkpoints
The model checkpoints are shared directly in this repository and are ready for use.

### Live Demo
You can try an interactive demo of Motif Vision 6B Preview right now on the **[Motif Model Hub](https://model-hub.motiftech.io/)**.

### Code Release
The source code for inference and training will be made publicly available soon. Stay tuned for updates!