Training dataset: HuggingFaceH4/ultrachat_200k
How to use Felladrin/TinyMistral-248M-Chat-v4 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Felladrin/TinyMistral-248M-Chat-v4")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Felladrin/TinyMistral-248M-Chat-v4")
model = AutoModelForCausalLM.from_pretrained("Felladrin/TinyMistral-248M-Chat-v4")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
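The pipeline also accepts the usual generation keyword arguments. A minimal sketch with sampled output (the parameter values here are illustrative; the settings used in this card's full usage example appear further below):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Felladrin/TinyMistral-248M-Chat-v4")
messages = [{"role": "user", "content": "Who are you?"}]

# For chat-style input, the pipeline returns the full message list,
# with the assistant's reply appended as the last message.
result = pipe(messages, max_new_tokens=64, do_sample=True, temperature=0.6, top_p=0.8)
print(result[0]["generated_text"][-1]["content"])
```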
How to use Felladrin/TinyMistral-248M-Chat-v4 with vLLM:

```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Felladrin/TinyMistral-248M-Chat-v4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Felladrin/TinyMistral-248M-Chat-v4",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
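Since the endpoint is OpenAI-compatible, it can also be called from Python. A minimal sketch, assuming the `openai` package is installed (vLLM ignores the API key, so any placeholder works):

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Felladrin/TinyMistral-248M-Chat-v4",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```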
How to use Felladrin/TinyMistral-248M-Chat-v4 with SGLang:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Felladrin/TinyMistral-248M-Chat-v4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Felladrin/TinyMistral-248M-Chat-v4",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, start the SGLang server with Docker:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Felladrin/TinyMistral-248M-Chat-v4" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Felladrin/TinyMistral-248M-Chat-v4",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
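The same OpenAI-compatible client pattern works here; only the port changes. A minimal streaming sketch, again assuming the `openai` package:

```python
from openai import OpenAI

# SGLang listens on port 30000 in the commands above; the key is a placeholder
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="Felladrin/TinyMistral-248M-Chat-v4",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental piece of the assistant's reply
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```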
How to use Felladrin/TinyMistral-248M-Chat-v4 with Docker Model Runner:

```sh
docker model run hf.co/Felladrin/TinyMistral-248M-Chat-v4
```
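By default, `docker model run` opens an interactive chat session; it can also take a one-shot prompt as an argument. A minimal sketch, assuming a Docker Desktop version with Model Runner enabled:

```sh
# Ask a single question and exit instead of entering interactive chat
docker model run hf.co/Felladrin/TinyMistral-248M-Chat-v4 "What is the capital of France?"
```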
The recommended prompt format is ChatML (with <|im_start|> and <|im_end|> tokens):

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
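The tokenizer's chat template produces exactly this layout, so the prompt rarely needs to be built by hand. A minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Felladrin/TinyMistral-248M-Chat-v4")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# tokenize=False returns the rendered ChatML string instead of token IDs
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```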
A complete usage example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

model_path = "Felladrin/TinyMistral-248M-Chat-v4"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)

messages = [
    {
        "role": "system",
        "content": "You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.",
    },
    {
        "role": "user",
        "content": "Hey! Got a question for you!",
    },
    {
        "role": "assistant",
        "content": "Sure! What's it?",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the conversation into the ChatML format shown above
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(device)

model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=tokenizer.model_max_length,
    streamer=streamer,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    top_k=0,
    min_p=0.1,
    typical_p=0.2,
    repetition_penalty=1.176,
)
```
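If these settings are reused in several places, it may be cleaner to bundle them into a `GenerationConfig`. A minimal sketch (the parameter names match the `generate()` call above):

```python
from transformers import GenerationConfig

# Sampling settings from the example above, bundled for reuse
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    top_k=0,
    min_p=0.1,
    typical_p=0.2,
    repetition_penalty=1.176,
)
# Then: model.generate(**inputs, generation_config=generation_config, streamer=streamer)
```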
This model was trained with SFTTrainer using the following settings:
| Hyperparameter | Value |
|---|---|
| Learning rate | 2e-5 |
| Total train batch size | 32 |
| Max. sequence length | 2048 |
| Weight decay | 0.01 |
| Warmup ratio | 0.1 |
| NEFTune Noise Alpha | 5 |
| Optimizer | Adam with betas=(0.9,0.999) and epsilon=1e-08 |
| Scheduler | cosine |
| Seed | 42 |
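For orientation, here is how these settings roughly map onto TRL's `SFTConfig`/`SFTTrainer`. This is a sketch, not the original training script: the dataset split, per-device batch layout, and output path are assumptions, and argument names follow TRL's `SFTConfig` (some, like `max_seq_length`, have changed across versions):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: the ultrachat_200k SFT split referenced at the top of this card
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

config = SFTConfig(
    output_dir="TinyMistral-248M-Chat-v4-sft",  # assumed path
    learning_rate=2e-5,
    per_device_train_batch_size=8,  # assumption: 8 * 4 accumulation steps = 32 total
    gradient_accumulation_steps=4,
    max_seq_length=2048,
    weight_decay=0.01,
    warmup_ratio=0.1,
    neftune_noise_alpha=5,
    lr_scheduler_type="cosine",
    seed=42,
)

trainer = SFTTrainer(
    model="Locutusque/TinyMistral-248M",  # base model listed at the end of this card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

The Adam betas and epsilon in the table are the `transformers` defaults, so they need no explicit arguments here.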
Then, the model was preference-tuned through LLaMA-Factory's DPO stage (configured with the SimPO loss, per `--pref_loss simpo` in the command below) using the following hyperparameters and command:
| Parameter | Value |
|---|---|
| Dataset | HuggingFaceH4/ultrafeedback_binarized |
| Learning rate | 1e-06 |
| Train batch size | 4 |
| Eval batch size | 8 |
| Seed | 42 |
| Distributed type | multi-GPU |
| Number of devices | 8 |
| Gradient accumulation steps | 4 |
| Total train batch size | 128 |
| Total eval batch size | 64 |
| Optimizer | adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 |
| LR scheduler type | cosine |
| LR scheduler warmup ratio | 0.1 |
| Number of epochs | 2.0 |
```sh
llamafactory-cli train \
  --stage dpo \
  --do_train True \
  --model_name_or_path ~/TinyMistral-248M-Chat \
  --preprocessing_num_workers $(python -c "import os; print(max(1, os.cpu_count() - 2))") \
  --dataloader_num_workers $(python -c "import os; print(max(1, os.cpu_count() - 2))") \
  --finetuning_type full \
  --flash_attn auto \
  --enable_liger_kernel True \
  --dataset_dir data \
  --dataset ultrafeedback \
  --cutoff_len 1024 \
  --learning_rate 1e-6 \
  --num_train_epochs 2.0 \
  --per_device_train_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --lr_scheduler_type linear \
  --max_grad_norm 1.0 \
  --logging_steps 10 \
  --save_steps 50 \
  --save_total_limit 1 \
  --warmup_ratio 0.1 \
  --packing False \
  --report_to tensorboard \
  --output_dir ~/TinyMistral-248M-Chat-v4 \
  --pure_bf16 True \
  --plot_loss True \
  --trust_remote_code True \
  --ddp_timeout 180000000 \
  --include_tokens_per_second True \
  --include_num_input_tokens_seen True \
  --optim adamw_8bit \
  --pref_beta 0.5 \
  --pref_ftx 0 \
  --pref_loss simpo \
  --gradient_checkpointing True
```
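For reference, `--pref_loss simpo` swaps DPO's reference-model-based objective for SimPO's reference-free, length-normalized one. A sketch of the loss as given in the SimPO paper, where `--pref_beta 0.5` sets $\beta$ and $\gamma$ is the target reward margin (a separate LLaMA-Factory option, left at its default in the command above):

$$
\mathcal{L}_{\text{SimPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\right)\right]
$$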
Base model: Locutusque/TinyMistral-248M