---
language:
  - en
license: openrail
library_name: diffusers
tags:
  - diffusion-llm
  - parallel-generation
  - custom-transformer
  - cropmark
datasets:
  - OpenAssistant/oasst1
metrics:
  - cosine_similarity
base_model:
  - darwinkernelpanic/DiffReaper-Talk
---

# DiffReaper-5

DiffReaper-5 is a Conditioned Diffusion Large Language Model (DLLM) designed for high-throughput, parallel conversational text generation. Unlike standard autoregressive models (GPT-style), DiffReaper-5 operates in the continuous latent embedding space, denoising an entire response sequence in parallel.

## Model Details

- **Architecture:** Custom 12-layer Mercury-inspired Transformer.
- **Task:** Conditioned Text Diffusion (Prompt-Response).
- **Latent Space:** 1024-dimensional continuous embeddings.
- **Training Objective:** Cosine Similarity Regression (Directional Loss).
- **Sampling:** 10-step iterative parallel denoising.
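A minimal skeleton consistent with these details might look like the following. This is a sketch only: the real `DiffReaperModel` is defined in `train_autogrow.py`, and the internal layer layout shown here (a standard `nn.TransformerEncoder` plus a learned timestep embedding) is an assumption, not the actual implementation.

```python
import torch
import torch.nn as nn

class DiffReaperModel(nn.Module):
    """Hypothetical skeleton matching the constructor arguments used in
    the loading example below; the real class lives in train_autogrow.py."""

    def __init__(self, vocab_size=50257, n_embd=1024, n_head=16, n_layer=12):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, n_embd)
        # One embedding per diffusion timestep (assumed 1000-step schedule).
        self.time_embedding = nn.Embedding(1000, n_embd)
        layer = nn.TransformerEncoderLayer(
            d_model=n_embd, nhead=n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layer)
        self.out = nn.Linear(n_embd, n_embd)  # predicts denoised embeddings

    def forward(self, x, t):
        # Inject the noise level so the network knows which step it is at.
        x = x + self.time_embedding(t).unsqueeze(1)
        return self.out(self.blocks(x))
```

Because the model regresses embeddings rather than token logits, the output dimension equals the latent width (1024), not the vocabulary size.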

## Usage (Inference)

Unlike autoregressive models, DiffReaper-5 generates the entire response in parallel through iterative denoising. Use the following logic to run inference:

```python
import torch
import torch.nn.functional as F
# Assuming DiffReaperModel is defined as per train_autogrow.py

def generate(model, tokenizer, prompt, steps=10):
    model.eval()
    with torch.no_grad():
        # Hard conditioning: embed up to the first 32 prompt tokens.
        p_tokens = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
        p_emb = model.token_embedding(p_tokens[:, :32])
        p_len = p_emb.shape[1]  # actual prompt length (may be < 32)

        # Start the response from pure Gaussian noise.
        r_noise = torch.randn(1, 32, 1024, device="cuda")

        for i in range(steps):
            # Walk the timestep down from ~999 toward 0.
            t = torch.tensor([1000 - (i * (1000 // steps)) - 1],
                             device="cuda").long()
            pred = model(torch.cat([p_emb, r_noise], dim=1), t)
            r_0_pred = pred[:, p_len:, :]  # extract the response portion
            r_noise = 0.4 * r_noise + 0.6 * r_0_pred  # iterative refinement

        # Map latents to the vocabulary via cosine similarity with the
        # embedding matrix (matches the directional training objective).
        norm_weights = F.normalize(model.token_embedding.weight, dim=-1)
        norm_r = F.normalize(r_noise, dim=-1)
        logits = torch.matmul(norm_r, norm_weights.T)
        return tokenizer.decode(torch.argmax(logits, dim=-1)[0])

# --- Loading Example ---
# model = DiffReaperModel(vocab_size=50257, n_embd=1024,
#                         n_head=16, n_layer=12).to("cuda")
# model.load_state_dict(torch.load("cropmark_latest.pt"))
```

## Fine-tuning

To fine-tune DiffReaper-5 on a custom dataset:

1. **Objective:** Use `1 - F.cosine_similarity` between predicted and target embeddings.
2. **Conditioning:** Ensure your data loader provides a fixed-length prompt prefix followed by the target response.
3. **Architecture:** Maintain the 1024-dimensional latent space to stay compatible with the weights.
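The directional objective in step 1 can be sketched as follows. The function name and tensor shapes are illustrative (not taken from the training script); the core is simply one minus the per-position cosine similarity, averaged over the batch and sequence.

```python
import torch
import torch.nn.functional as F

def directional_loss(pred_emb, target_emb):
    """1 - cosine similarity between predicted and target embeddings.

    pred_emb, target_emb: (batch, seq_len, latent_dim) tensors.
    Returns a scalar: 0 when directions match, up to 2 when opposite.
    """
    cos = F.cosine_similarity(pred_emb, target_emb, dim=-1)  # (batch, seq_len)
    return (1.0 - cos).mean()
```

Because the loss only compares directions, it pairs naturally with the cosine-similarity decoding step used at inference time.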

## 📈 Diagnostic: Cropmark

The model's progress is monitored via the Cropmark Diagnostic.

- Cropmark tests the model's ability to manifest a response (e.g., "I am good, how are you?") from pure Gaussian noise given a fixed prompt.
- Results are logged in `checkpoint_log.txt` and uploaded periodically.
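A check of this shape could be sketched as below. The target sentence comes from the description above, but the scoring function (crude token overlap between decoded output and target) is purely illustrative and is not the actual Cropmark harness.

```python
def cropmark_check(generated_text, target="I am good, how are you?"):
    """Token-overlap score between the denoised output and the fixed
    target response: 1.0 means every target token appeared, 0.0 none.
    Illustrative only; the real diagnostic logging may differ."""
    gen = set(generated_text.lower().split())
    tgt = set(target.lower().split())
    return len(gen & tgt) / max(len(tgt), 1)
```

In practice such a score would be computed after each checkpoint, with the result appended to the log file mentioned above.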