darwinkernelpanic committed on
Commit e12a518 · verified · 1 Parent(s): 43547e0

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +30 -0
README.md ADDED
@@ -0,0 +1,30 @@
+ ---
+ license: mit
+ library_name: diffusers
+ tags:
+ - diffusion
+ - llm
+ - conversational
+ - difference-labs
+ datasets:
+ - smangrul/ultrachat-10k-chatml
+ ---
+
+ # DiffReaper-6
+
+ **DiffReaper-6** is a large-scale diffusion-based large language model (Diffusion-LLM) developed by **DifferenceLabs**.
+
+ It represents a significant architectural leap over the previous 5L version, transitioning to a more robust denoiser and a deeper transformer-based backbone to achieve genuine conversational coherence.
+
+ ## Model Details
+ - **Architecture**: Diffusion Transformer (DiT) with Adaptive Layer Norm (adaLN-Single) modulation.
+ - **Backbone**: 24 layers, 24 attention heads, hidden dimension of 1536.
+ - **Tokenizer**: BERT-base-uncased.
+ - **Training Objective**: MSE denoising loss, predicting the original embeddings from the noisy input (sketched after this list).
+ - **Conditioning**: Prompt-concatenated latents with a timestep embedding.
+
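+ A minimal, illustrative sketch of the denoiser and objective described above is shown below. It is not the released code: the module names (`DiffReaperBlock`, `DiffReaperDenoiser`), the simplified per-block modulation (standing in for the adaLN-Single scheme), and the noise-schedule handling are all assumptions.
+
+ ```python
+ # Illustrative sketch only -- names and details are assumptions, not the released code.
+ import torch
+ import torch.nn as nn
+
+ HIDDEN_DIM, NUM_HEADS, NUM_LAYERS = 1536, 24, 24  # backbone sizes listed in the card
+
+
+ def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
+     """Standard sinusoidal embedding of a batch of integer diffusion timesteps."""
+     half = dim // 2
+     freqs = torch.exp(-torch.arange(half, device=t.device) * torch.log(torch.tensor(10000.0)) / half)
+     args = t[:, None].float() * freqs[None]
+     return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
+
+
+ class DiffReaperBlock(nn.Module):
+     """Transformer block with adaLN-style modulation: the timestep embedding
+     produces shift/scale/gate terms applied around attention and the MLP."""
+
+     def __init__(self, dim: int, heads: int):
+         super().__init__()
+         self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
+         self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
+         self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
+         self.ada_ln = nn.Linear(dim, 6 * dim)  # shift/scale/gate for attention and MLP
+
+     def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
+         s_a, sc_a, g_a, s_m, sc_m, g_m = self.ada_ln(t_emb).unsqueeze(1).chunk(6, dim=-1)
+         h = self.norm1(x) * (1 + sc_a) + s_a
+         x = x + g_a * self.attn(h, h, h, need_weights=False)[0]
+         h = self.norm2(x) * (1 + sc_m) + s_m
+         return x + g_m * self.mlp(h)
+
+
+ class DiffReaperDenoiser(nn.Module):
+     """Stack of modulated transformer blocks operating on token embeddings."""
+
+     def __init__(self, dim=HIDDEN_DIM, heads=NUM_HEADS, layers=NUM_LAYERS):
+         super().__init__()
+         self.blocks = nn.ModuleList([DiffReaperBlock(dim, heads) for _ in range(layers)])
+
+     def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
+         for block in self.blocks:
+             x = block(x, t_emb)
+         return x
+
+
+ def mse_denoising_loss(model, prompt_lat, response_lat, t, alpha_bars):
+     """MSE objective: predict the clean response embeddings from noisy ones,
+     conditioned by concatenating the (un-noised) prompt latents in front."""
+     noise = torch.randn_like(response_lat)
+     a = alpha_bars[t].view(-1, 1, 1)                      # cumulative alphas for each timestep
+     noisy = a.sqrt() * response_lat + (1 - a).sqrt() * noise
+     x = torch.cat([prompt_lat, noisy], dim=1)             # prompt-concatenated latents
+     pred = model(x, timestep_embedding(t, HIDDEN_DIM))    # predict original embeddings
+     return nn.functional.mse_loss(pred[:, prompt_lat.size(1):], response_lat)
+ ```
+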
+ ## Training
+ The model is being trained on an RTX 5090 using the `ultrachat-10k` dataset, focusing on conversational flow and instruction following.
+
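+ For reference, the data side of that setup can be sketched as follows. The split name, the `"text"` column, and the fixed sequence length are assumptions about the dataset layout rather than the actual training script:
+
+ ```python
+ # Rough sketch of the data pipeline -- split, column name, and max_length are assumptions.
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ dataset = load_dataset("smangrul/ultrachat-10k-chatml", split="train")
+
+
+ def tokenize(batch):
+     # Tokenize the ChatML-formatted conversations with the BERT tokenizer
+     # listed in the model details above.
+     return tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
+
+
+ tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
+ tokenized.set_format("torch")
+ ```
+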
+ ## Goal
+ To prove that diffusion models can reach (and eventually exceed) the coherence of auto-regressive models while maintaining the creative "soul" and parallel generation benefits of diffusion.