4-step NextDiT

A 0.5B-parameter model for few-step image generation. It uses the SD3 VAE and Qwen 0.5B as the text encoder.

The model is trained on two signals: the denoised output of a larger pretrained teacher model and an adversarial loss from a discriminator.
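The combined objective can be sketched as follows. This is a minimal illustration, not the repo's actual training code; the function name, the L2 regression term, the non-saturating adversarial term, and the `adv_weight` balance are all assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, disc_logits, adv_weight=0.5):
    """Hedged sketch of a distillation + adversarial objective.

    student_out : the student's denoised prediction
    teacher_out : the frozen teacher's denoised output (regression target)
    disc_logits : discriminator scores for the student's output
    adv_weight  : hypothetical weight balancing the two terms
    """
    # Regress onto the teacher's denoised output.
    distill = F.mse_loss(student_out, teacher_out)
    # Non-saturating GAN loss: push the discriminator to score
    # the student's output as "real".
    adv = F.softplus(-disc_logits).mean()
    return distill + adv_weight * adv
```

In practice the two terms would be computed on matching noise levels and the discriminator trained in alternation; those details are omitted here.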

In its current state, the model is evaluated as a refiner in a second pass over a base model's output.
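A second-pass refinement step could look roughly like this. The function and parameter names (`refine`, `strength`) are hypothetical, and the Euler update assumes a flow-matching/velocity parameterization in the style of Lumina-family models, which is an assumption here.

```python
import torch

def refine(latents, denoiser, strength=0.4, steps=4):
    """Hedged img2img-style second pass: re-noise the base model's
    latents at `strength`, then denoise in a few steps.

    latents  : latent image from the first-pass (base) model
    denoiser : callable (x, t) -> predicted velocity (assumed interface)
    """
    t = strength
    # Partially re-noise the input latents.
    noisy = (1 - t) * latents + t * torch.randn_like(latents)
    dt = t / steps
    x = noisy
    for i in range(steps):
        # Euler step on the predicted velocity (flow-matching assumption).
        x = x - dt * denoiser(x, t - i * dt)
    return x
```

With only four steps, a moderate `strength` keeps the refinement close to the base model's composition while sharpening detail.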

Setup

Clone the inference script to the current working directory.

```python
from onediffusion.models.denoiser.nextdit.modeling_nextdit import NextDiT

# Load the distilled transformer weights from the Hugging Face Hub.
transformer = NextDiT.from_pretrained('twodgirl/oneplus', subfolder='transformer')
```

Training

This repo was born out of the lack of visible progress on, and adoption of, the Lumina T2I and OneDiffusion models.

OneDiffusion has shown a better understanding of art styles and human poses than similarly sized DiT models.

The architecture has proven its worth. The training script builds on it and follows the Flash Diffusion training method.

It was also possible to finetune such a model with limited VRAM.

Datasets

  • Zuntan/Animagine_XL_3.0-Character