GSAI-ML/LLaDA-o

#2080
by VaLtEc-BoY - opened

# LLaDA-o
We introduce LLaDA-o, an effective and length-adaptive omni diffusion model for unified multimodal understanding and generation.

LLaDA-o extends diffusion language modeling to a broader multimodal setting, supporting both visual understanding and visual generation within a single framework. The released codebase provides a practical inference pipeline for interleaved text-image processing and a notebook-based workflow for reproducible experiments.
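To make the "diffusion language modeling" idea above concrete, here is a minimal, self-contained toy of iterative unmasking: start from a fully masked sequence and reveal the highest-confidence positions over a fixed number of steps. The `predict` stand-in and all names here are illustrative assumptions, not the LLaDA-o implementation (a real model would score candidates with network logits).

```python
# Toy sketch of masked-diffusion decoding (hypothetical; NOT the LLaDA-o code).
# A real model replaces `predict` with a network that returns token logits.
import random

MASK = "<mask>"

def predict(seq, target):
    """Stand-in predictor: propose the target token at each masked position
    with a random confidence score (a real model would use logits)."""
    return {i: (target[i], random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def diffusion_decode(length, target, steps=4, seed=0):
    """Iteratively unmask roughly 1/steps of the remaining masked positions
    per step, revealing the most confident proposals first."""
    random.seed(seed)
    seq = [MASK] * length
    for step in range(steps):
        proposals = predict(seq, target)
        if not proposals:
            break
        # Reveal enough positions to finish within the remaining steps.
        k = max(1, len(proposals) // (steps - step))
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:k]
        for i, (tok, _) in best:
            seq[i] = tok
    return seq

target = ["a", "cat", "sits", "on", "the", "mat"]
print(diffusion_decode(len(target), target))
```

After `steps` rounds every position has been committed, so the sequence is fully unmasked; the per-step schedule is what a length-adaptive sampler would tune.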

It was presented in the paper *LLaDA-o: An Effective and Length-Adaptive Omni Diffusion Model*.

Code: https://github.com/ML-GSAI/LLaDA-o

## Highlights

- Unified multimodal modeling for both understanding and generation
- Support for text-to-image generation
- Support for image understanding
- Support for instruction-based image editing
- Reproducible inference workflow through `multimodal_demo.ipynb`
## Supported Tasks

The current release is designed for the following multimodal inference settings:

- Text-to-image: generate images from natural language prompts
- Image understanding: produce textual responses conditioned on an input image
- Image editing: edit an image according to a textual instruction
- Interleaved multimodal inference: process text and image context within a shared diffusion-based framework
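Since the four settings above differ only in which inputs are present, a single entry point can route between them. The sketch below is a hypothetical dispatcher: the task labels and the `run_inference` signature are assumptions for illustration, not the actual LLaDA-o API (see `multimodal_demo.ipynb` in the repository for the real workflow).

```python
# Hypothetical routing sketch (illustrative only; not the LLaDA-o API).
# Returns the task label that the provided inputs would select.
from typing import Optional

def run_inference(prompt: str, image: Optional[bytes] = None,
                  instruction: Optional[str] = None) -> str:
    """Pick an inference setting based on which inputs are present."""
    if image is None:
        return "text-to-image"          # prompt only -> generate an image
    if instruction is not None:
        return "image-editing"          # image + edit instruction
    if prompt:
        return "image-understanding"    # image + question -> textual answer
    return "interleaved"                # mixed text/image context
```

A unified model makes this kind of dispatch trivial: all four settings share one diffusion backbone, so only the conditioning inputs change.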

Hey valtec, I see a lot of text, but where's the model lol ?
