DiffReaper Collection
DiffReaper is a family of Conditioned Diffusion Large Language Models (DLLMs) built for fast, parallel text generation at scale.
DiffReaper-5 is a Conditioned Diffusion Large Language Model (DLLM) designed for high-throughput, parallel conversational text generation. Unlike standard autoregressive models (GPT-style), which emit one token at a time, DiffReaper-5 operates in the continuous latent embedding space, denoising an entire response sequence in parallel through iterative refinement. Use the following logic to run inference:
```python
import torch
import torch.nn.functional as F

# Assuming DiffReaperModel is defined as per train_autogrow.py
def generate(model, tokenizer, prompt, steps=10):
    model.eval()
    with torch.no_grad():
        p_tokens = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
        p_emb = model.token_embedding(p_tokens[:, :32])  # Hard conditioning on the prompt (first 32 tokens)

        # Start from pure noise
        r_noise = torch.randn(1, 32, 1024).to("cuda")

        for i in range(steps):
            # Walk the timestep down from 999 toward 0
            t = torch.tensor([1000 - (i * (1000 // steps)) - 1], device="cuda").long()
            pred = model(torch.cat([p_emb, r_noise], dim=1), t)
            r_0_pred = pred[:, 32:, :]  # Extract the response half of the sequence
            r_noise = 0.4 * r_noise + 0.6 * r_0_pred  # Iterative refinement toward the predicted clean embedding

        # Map the denoised embeddings to vocab tokens using cosine similarity
        norm_weights = F.normalize(model.token_embedding.weight, dim=-1)
        norm_r = F.normalize(r_noise, dim=-1)
        logits = torch.matmul(norm_r, norm_weights.T)
        return tokenizer.decode(torch.argmax(logits, dim=-1)[0])

# --- Loading Example ---
# model = DiffReaperModel(vocab_size=50257, n_embd=1024, n_head=16, n_layer=12).to("cuda")
# model.load_state_dict(torch.load("cropmark_latest.pt"))
```
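The final decoding step above maps each denoised embedding to the vocabulary row with the highest cosine similarity. The sketch below illustrates just that step in isolation with NumPy on toy data; the embedding table and sequence are placeholders, not the real model weights:

```python
import numpy as np

def cosine_decode(embeddings, vocab_weights):
    """Map each embedding to the vocab row with the highest cosine similarity."""
    # Normalize rows so the dot product equals cosine similarity
    e = embeddings / np.linalg.norm(embeddings, axis=-1, keepdims=True)
    w = vocab_weights / np.linalg.norm(vocab_weights, axis=-1, keepdims=True)
    logits = e @ w.T                 # (seq, vocab) similarity scores
    return logits.argmax(axis=-1)    # one token id per position

# Toy check: embeddings pointing along vocab rows decode to those rows
vocab = np.eye(4)                    # 4 "tokens", one-hot embedding table
seq = np.array([[0.0, 9.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 2.0]])
print(cosine_decode(seq, vocab))     # [1 3]
```

Because both sides are normalized, the result depends only on direction, not magnitude, which is why the denoised embeddings need not land exactly on a vocabulary vector.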
To fine-tune DiffReaper-5 on a custom dataset, train against the loss `1 - F.cosine_similarity` between the predicted and target embeddings. Training progress is monitored via the Cropmark Diagnostic; metrics are written to checkpoint_log.txt and uploaded periodically.
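The fine-tuning objective above can be sketched on toy arrays. This is a minimal NumPy illustration of the same math, not the training code itself (which would use `F.cosine_similarity` on batched CUDA tensors):

```python
import numpy as np

def cosine_embedding_loss(pred, target, eps=1e-8):
    """1 minus the mean cosine similarity between predicted and target rows."""
    p = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    t = target / (np.linalg.norm(target, axis=-1, keepdims=True) + eps)
    cos = (p * t).sum(axis=-1)       # per-position cosine similarity
    return 1.0 - cos.mean()          # 0 when directions match exactly

target = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cosine_embedding_loss(2.0 * target, target))  # scale-invariant -> ~0.0
print(cosine_embedding_loss(-target, target))       # opposite directions -> ~2.0
```

Note the loss depends only on embedding direction, matching the cosine-similarity decoding used at inference time.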