minseo25, nielsr (HF Staff) committed

Commit 5580119 · verified · 1 parent: f4f6c72

Improve model card: Add metadata and update paper/GitHub links (#1)


- Improve model card: Add metadata and update paper/GitHub links (25638fcd212e5609a54ddf55fb152f0b59b1799f)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1): README.md (+8 −2)
README.md CHANGED
@@ -1,9 +1,15 @@
+---
+license: mit
+pipeline_tag: text-generation
+library_name: peft
+---
+
 # CDLM-LLaDA LoRA adapter for LLaDA-8B-Instruct
 
 This repository hosts the LoRA adapter for the LLaDA-8B-Instruct diffusion LLM (dLLM), produced with the CDLM (Consistency Diffusion Language Models) method. CDLM integrates consistency modeling and a block-wise causal attention mask so the student model becomes fully KV-cache compatible while retaining the strong local bidirectional modeling within each block. In practice, the adapter enables significantly faster inference with competitive quality.
 
-- GitHub: https://github.com/minseo25/CDLM
-- Paper: TBA
+- GitHub: https://github.com/SqueezeAILab/CDLM
+- Paper: [CDLM: Consistency Diffusion Language Models For Faster Sampling](https://huggingface.co/papers/2511.19269)
 
 
 ## Model details
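
The model card describes a LoRA adapter, which the added `library_name: peft` metadata indicates is meant to be applied to the base model via the PEFT library (e.g. `PeftModel.from_pretrained` on top of LLaDA-8B-Instruct). As a quick, dependency-free sketch of what such an adapter contributes, the standard LoRA update is W_eff = W + (α/r)·B·A; the shapes and values below are toy illustrations, not the actual LLaDA weights.

```python
# Minimal sketch of the LoRA weight update: W_eff = W + (alpha / r) * B @ A,
# where A (r x d_in) and B (d_out x r) are the small trainable matrices an
# adapter repository stores. Toy values only; not the actual LLaDA weights.

def matmul(X, Y):
    """Plain-Python matrix multiply of nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def apply_lora(W, A, B, alpha, r):
    """Return the effective weight W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)  # d_out x d_in, same shape as W
    return [[w + scale * d for w, d in zip(w_row, d_row)] for w_row, d_row in zip(W, BA)]

# Toy example: 2x2 frozen base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]      # r x d_in  = 1 x 2
B = [[0.5], [0.25]]   # d_out x r = 2 x 1
W_eff = apply_lora(W, A, B, alpha=2.0, r=1)
print(W_eff)  # [[2.0, 2.0], [0.5, 2.0]]
```

Because the update is rank-r, the adapter files are tiny relative to the 8B base model, which is why only the LoRA weights need to be hosted in this repository.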