---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
pipeline_tag: text-generation
---
# PingVortexLM1-0.5B
A fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) trained on custom English conversational data.
This model is not intended for coding or multilingual use; it targets solid general English conversation.
Built by [PingVortex Labs](https://github.com/PingVortexLabs).
---
## Model Details
+ **Base model:** Qwen/Qwen2.5-0.5B
+ **Parameters:** 0.5B
+ **Context length:** 8192 tokens
+ **Language:** English only
+ **Format:** ChatML
+ **License:** Apache 2.0
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "pvlabs/PingVortexLM1-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def chat(user_message):
    # Build a single-turn ChatML prompt, leaving the assistant tag open
    prompt = (
        "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=512,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, skipping the prompt
    response = tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    return response

print(chat("Hello"))
```
---
## Prompt Format (ChatML)
The model uses the standard ChatML format:
```
<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Your message here<|im_end|>
<|im_start|>assistant
```
Including the system prompt in every request is recommended.
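For multi-turn conversations, the layout above can be assembled programmatically. The sketch below uses a hypothetical `build_chatml_prompt` helper (not part of the model's API) that turns a list of role/content dicts into a ChatML string, leaving the assistant tag open so the model continues from there:

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML prompt from [{"role": ..., "content": ...}] dicts.

    Ends with an opening assistant tag so generation continues as the assistant.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Your message here"},
])
print(prompt)
```

If the tokenizer ships a chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should produce an equivalent string.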
---
*Made by [PingVortex](https://pingvortex.com).*