nielsr (HF Staff) committed on
Commit 25638fc · verified · 1 Parent(s): f4f6c72

Improve model card: Add metadata and update paper/GitHub links


This PR enhances the model card for the `CDLM-LLaDA LoRA adapter` by:

* Adding `license: mit`, `pipeline_tag: text-generation`, and `library_name: peft` to the YAML metadata block for better discoverability and user experience. The `library_name: peft` is added based on the `peft_type: LORA` found in `adapter_config.json`, indicating compatibility with the PEFT library.
* Updating the paper link to the official Hugging Face paper page: [CDLM: Consistency Diffusion Language Models For Faster Sampling](https://huggingface.co/papers/2511.19269).
* Correcting the GitHub repository link to its canonical source: `https://github.com/SqueezeAILab/CDLM`.
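
As a sanity check, the added metadata block is plain YAML front matter delimited by `---` lines, which the Hub parses from the top of `README.md`. A minimal stdlib-only sketch (the `parse_front_matter` helper is hypothetical, written here just for illustration; it handles only flat scalar fields) confirms the three new keys are well-formed:

```python
def parse_front_matter(readme_text: str) -> dict:
    """Extract a leading YAML front-matter block (--- ... ---) as a
    flat key/value dict. Minimal parser for simple scalar fields only."""
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return meta  # closing delimiter reached
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {}  # no closing delimiter: treat as no front matter

readme = """---
license: mit
pipeline_tag: text-generation
library_name: peft
---

# CDLM-LLaDA LoRA adapter for LLaDA-8B-Instruct
"""

print(parse_front_matter(readme))
# → {'license': 'mit', 'pipeline_tag': 'text-generation', 'library_name': 'peft'}
```

With these fields present, the Hub can surface the repo under the text-generation task filter and associate it with the PEFT library.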

Please review and merge if these improvements look good!

Files changed (1)
  1. README.md +8 -2
README.md CHANGED
@@ -1,9 +1,15 @@
+---
+license: mit
+pipeline_tag: text-generation
+library_name: peft
+---
+
 # CDLM-LLaDA LoRA adapter for LLaDA-8B-Instruct
 
 This repository hosts the LoRA adapter for the LLaDA-8B-Instruct diffusion LLM (dLLM), produced with the CDLM (Consistency Diffusion Language Models) method. CDLM integrates consistency modeling and a block-wise causal attention mask so the student model becomes fully KV-cache compatible while retaining the strong local bidirectional modeling within each block. In practice, the adapter enables significantly faster inference with competitive quality.
 
-- GitHub: https://github.com/minseo25/CDLM
-- Paper: TBA
+- GitHub: https://github.com/SqueezeAILab/CDLM
+- Paper: [CDLM: Consistency Diffusion Language Models For Faster Sampling](https://huggingface.co/papers/2511.19269)
 
 
 ## Model details