---
license: apache-2.0
language:
- en
- zh
base_model: tencent/WeDLM-7B
pipeline_tag: text-generation
tags:
- language model
- parallel-decoding
- chat
- instruct
---

# WeDLM-7B-Instruct

**WeDLM-7B-Instruct** is an instruction-tuned diffusion language model, fine-tuned from [WeDLM-7B](https://huggingface.co/tencent/WeDLM-7B), that performs parallel decoding under standard causal attention.

For the base (pretrained) version, see [WeDLM-7B](https://huggingface.co/tencent/WeDLM-7B).

📄 Paper (Coming Soon) | 🌐 [Project Page](https://wedlm.github.io) | 💻 [GitHub](https://github.com/tencent/WeDLM)

## Model Details

| Attribute | Value |
|:----------|:------|
| Base Model | [WeDLM-7B](https://huggingface.co/tencent/WeDLM-7B) |
| Parameters | 7B |
| Context Length | 32,768 |

## Quick Start (Recommended)

For **fast inference**, use the `wedlm` engine:

```bash
pip install git+https://github.com/tencent/WeDLM.git
```

```python
from transformers import AutoTokenizer
from wedlm import LLM, SamplingParams

llm = LLM(model="tencent/WeDLM-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("tencent/WeDLM-7B-Instruct", trust_remote_code=True)

prompt = "Explain the difference between machine learning and deep learning."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([text], SamplingParams(temperature=0.3, max_tokens=512))
print(outputs[0]["text"])
```
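
### Batched Generation

Since `llm.generate` accepts a list of prompts, several requests can be batched into one call. A minimal sketch based on the interface above (the prompt texts are illustrative):

```python
# Batch several prompts in a single generate() call;
# outputs come back in the same order as the inputs.
prompts = [
    "Summarize the theory of relativity in one sentence.",
    "Write a haiku about the ocean.",
]
texts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": p}], tokenize=False, add_generation_prompt=True
    )
    for p in prompts
]
outputs = llm.generate(texts, SamplingParams(temperature=0.3, max_tokens=256))
for out in outputs:
    print(out["text"])
```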

### Multi-turn Conversation

```python
messages = [
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a high-level programming language known for its simplicity and readability."},
    {"role": "user", "content": "Show me a hello world example."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([text], SamplingParams(temperature=0.3, max_tokens=256))
```
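
To extend the dialogue, append the generated reply as an `assistant` turn and re-apply the chat template. A minimal continuation reusing `llm` and `tokenizer` from above (the follow-up question is illustrative):

```python
# Feed the model's reply back as an assistant turn, then ask a follow-up.
messages.append({"role": "assistant", "content": outputs[0]["text"]})
messages.append({"role": "user", "content": "Now show the same in one line."})

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([text], SamplingParams(temperature=0.3, max_tokens=256))
print(outputs[0]["text"])
```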

## HuggingFace Transformers

For **training** or simple forward passes:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tencent/WeDLM-7B-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "tencent/WeDLM-7B-Instruct", 
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model(**inputs)
```

> ⚠️ **Note:** The HuggingFace interface is for training/forward pass convenience. For optimized inference throughput, use the `wedlm` engine above.
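
As a sketch only: if the remote code follows the usual `AutoModelForCausalLM` convention of accepting `labels` and returning a `.loss`, a single fine-tuning step might look like the following. This is an assumption rather than a documented interface; a diffusion LM may define its training objective differently, so check the training code in the WeDLM repository first.

```python
import torch

# Hypothetical single training step. Assumes the remote code accepts
# `labels` and returns a standard `.loss`; verify against the WeDLM repo.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```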

## Performance

| Benchmark | Qwen2.5-7B-Instruct | WeDLM-7B-Instruct |
|:----------|:-------------------:|:-----------------:|
| ARC-C (0-shot) | 86.09 | 89.59 |
| GSM8K (3-shot) | 89.91 | 87.57 |
| MATH (4-shot) | 45.00 | 55.40 |
| HumanEval (4-shot) | 76.22 | 75.00 |
| MMLU (5-shot) | 71.98 | 70.52 |

## Citation (Coming Soon)


## License

Apache 2.0