---
license: mit
tags:
- diffusion
- llm
- conversational
- difference-labs
datasets:
- smangrul/ultrachat-10k-chatml
base_model:
- darwinkernelpanic/DiffReaper-5L
---

# DiffReaper-6

**DiffReaper-6** is a large-scale diffusion-based large language model (Diffusion-LLM) developed by **DifferenceLabs**.

It represents a significant architectural leap over the previous 5L version, moving to a more robust denoiser and a deeper transformer-based backbone to achieve genuine conversational coherence.
## Model Details

- **Architecture**: Diffusion Transformer (DiT) with adaptive layer norm (adaLN-Single) modulation; see the sketch after this list.
- **Backbone**: 24 layers, 24 attention heads, 1536 hidden dimension.
- **Tokenizer**: BERT-base-uncased.
- **Training Objective**: MSE denoising loss in latent space (the model predicts the original embeddings from the noisy input).
- **Conditioning**: prompt-concatenated latents with a timestep embedding.
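
For concreteness, here is a minimal PyTorch sketch of what one backbone block with adaLN-Single modulation can look like. Only the dimensions (1536 hidden, 24 heads) come from this card; the block structure, names, and the shared timestep MLP are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

HIDDEN, HEADS = 1536, 24  # backbone dimensions from this card

class DiTBlock(nn.Module):
    """One transformer block with adaLN-Single modulation (illustrative)."""

    def __init__(self, dim=HIDDEN, heads=HEADS):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # adaLN-Single: each block keeps only a small learnable table that is
        # added to a modulation signal shared across all blocks.
        self.scale_shift_table = nn.Parameter(torch.zeros(6, dim))

    def forward(self, x, t_mod):
        # x:     (batch, seq, dim) prompt + noisy response latents
        # t_mod: (batch, 6, dim)   shared modulation from the timestep MLP
        shift1, scale1, gate1, shift2, scale2, gate2 = (
            self.scale_shift_table[None] + t_mod
        ).chunk(6, dim=1)
        h = self.norm1(x) * (1 + scale1) + shift1
        x = x + gate1 * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale2) + shift2
        return x + gate2 * self.mlp(h)

# The shared timestep MLP is computed once per forward pass and reused by
# all 24 blocks (hypothetical wiring):
t_mlp = nn.Sequential(nn.SiLU(), nn.Linear(HIDDEN, 6 * HIDDEN))
t_mod = t_mlp(torch.randn(2, HIDDEN)).view(2, 6, HIDDEN)
out = DiTBlock()(torch.randn(2, 128, HIDDEN), t_mod)
```

The single shared modulation MLP is what distinguishes adaLN-Single from per-block adaLN: it trades some per-layer flexibility for a large reduction in parameter count.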

## Training

The model is being trained on an RTX 5090 using the `ultrachat-10k` dataset, focusing on conversational flow and instruction following.
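
As a rough illustration of the objective described above, a training step might look like the following. Everything here (function names, the `alphas_cumprod` schedule, the model signature) is an assumption for illustration; only the predict-the-original-embeddings MSE and the prompt-concatenation conditioning come from this card.

```python
import torch
import torch.nn.functional as F

def denoising_step(model, clean_emb, prompt_emb, alphas_cumprod):
    """One x0-prediction training step (illustrative, not the repo's code).

    clean_emb:      (B, T, D) embeddings of the target response
    prompt_emb:     (B, P, D) embeddings of the prompt (conditioning)
    alphas_cumprod: (num_steps,) cumulative noise schedule (assumed)
    """
    B = clean_emb.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (B,), device=clean_emb.device)
    a = alphas_cumprod[t].view(B, 1, 1)

    # Forward diffusion: corrupt the clean embeddings at timestep t.
    noise = torch.randn_like(clean_emb)
    noisy = a.sqrt() * clean_emb + (1 - a).sqrt() * noise

    # Conditioning per the card: prompt latents concatenated with the noisy
    # response latents, plus the timestep fed to the model.
    pred = model(torch.cat([prompt_emb, noisy], dim=1), t)
    pred = pred[:, prompt_emb.size(1):]  # keep only response positions

    # MSE against the *original* embeddings (predict-x0 objective).
    return F.mse_loss(pred, clean_emb)
```

At inference, this objective lets every response position be refined in parallel across denoising steps, which is the parallel-generation benefit mentioned below.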

## Goal

To prove that diffusion models can reach (and eventually exceed) the coherence of auto-regressive models while maintaining the creative "soul" and parallel-generation benefits of diffusion.