---
license: apache-2.0
datasets:
- songff/UltraPrompt
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
---
# P-Aligner

P-Aligner pre-aligns language models by rewriting raw user instructions into better-formed ones through principled instruction synthesis; see the paper cited below for details.
## Quick Start
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

raw_instruction = "What is the capital of France?"

# Path to the P-Aligner checkpoint.
model_path = "P-Aligner"

# trust_remote_code=True is needed because the tokenizer ships the
# custom parse_output helper used below.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = LLM(
    model=model_path,
    gpu_memory_utilization=0.9,
    enable_prefix_caching=True,
    dtype="bfloat16",
)

# Greedy decoding (temperature=0.0) gives a deterministic rewrite.
outputs = model.generate(
    [raw_instruction],
    sampling_params=SamplingParams(
        temperature=0.0,
        max_tokens=2048,
    ),
)

# Extract the improved instruction from the raw model output.
better_instruction = tokenizer.parse_output(
    outputs[0].outputs[0].text,
    raw_instruction,
)
print(better_instruction)
```
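P-Aligner is intended as a drop-in preprocessing step: the rewritten instruction, not the raw one, is what you send to your target model. A minimal sketch of that composition, where `pre_align`, `rewrite`, and `generate` are illustrative names standing in for the Quick Start call above and any downstream chat model, not functions shipped with this repo:

```python
from typing import Callable


def pre_align(rewrite: Callable[[str], str],
              generate: Callable[[str], str],
              instruction: str) -> str:
    """Rewrite the raw instruction first, then query the target model."""
    better_instruction = rewrite(instruction)  # e.g. the P-Aligner call above
    return generate(better_instruction)        # e.g. any downstream LLM


# Stub demo: an uppercasing "rewriter" and an echoing "model".
result = pre_align(
    str.upper,
    lambda prompt: f"Answer to: {prompt}",
    "what is the capital of france?",
)
print(result)  # Answer to: WHAT IS THE CAPITAL OF FRANCE?
```

Because the rewriter is a plain string-to-string step, it can sit in front of any serving stack without changing the target model or its sampling configuration.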
## Citation

If you find this work useful, please consider citing:
```bibtex
@misc{song2025paligner,
title={P-Aligner: Enabling Pre-Alignment of Language Models via Principled Instruction Synthesis},
author={Song, Feifan and Gao, Bofei and Song, Yifan and Liu, Yi and Xiong, Weimin and Song, Yuyang and Liu, Tianyu and Wang, Guoyin and Wang, Houfeng},
year={2025},
eprint={2508.04626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```