Paper: UltraFeedback: Boosting Language Models with High-quality Feedback (arXiv:2310.01377)
How to use activeDap/Llama-3.2-3B_hh_helpful with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="activeDap/Llama-3.2-3B_hh_helpful")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("activeDap/Llama-3.2-3B_hh_helpful")
model = AutoModelForCausalLM.from_pretrained("activeDap/Llama-3.2-3B_hh_helpful")
```
How to use activeDap/Llama-3.2-3B_hh_helpful with vLLM:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "activeDap/Llama-3.2-3B_hh_helpful"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "activeDap/Llama-3.2-3B_hh_helpful",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
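Because the server speaks the OpenAI-compatible API, the same request can also be made from Python with the openai client. A minimal sketch; the api_key is a dummy value, since the local server does not check it:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="activeDap/Llama-3.2-3B_hh_helpful",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```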
How to use activeDap/Llama-3.2-3B_hh_helpful with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "activeDap/Llama-3.2-3B_hh_helpful" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "activeDap/Llama-3.2-3B_hh_helpful",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

# Or run the server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "activeDap/Llama-3.2-3B_hh_helpful" \
--host 0.0.0.0 \
--port 30000

# Call the server using the same curl command shown above.
```
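The running SGLang server can likewise be queried from Python. A minimal sketch using the requests library against the same completions endpoint:

```python
import requests

# Same OpenAI-compatible completions call as the curl example above
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "activeDap/Llama-3.2-3B_hh_helpful",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```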
How to use activeDap/Llama-3.2-3B_hh_helpful with Docker Model Runner:

```shell
docker model run hf.co/activeDap/Llama-3.2-3B_hh_helpful
```
This model is a fine-tuned version of meta-llama/Llama-3.2-3B on the activeDap/sft-hh-data dataset.

Training metrics:

| Metric | Value |
|---|---|
| Total Steps | 19 |
| Final Training Loss | 2.0897 |
| Min Training Loss | 2.0897 |
| Training Runtime | 11.40 seconds |
| Samples/Second | 105.45 |

Training configuration:

| Parameter | Value |
|---|---|
| Base Model | meta-llama/Llama-3.2-3B |
| Dataset | activeDap/sft-hh-data |
| Number of Epochs | 1.0 |
| Per Device Batch Size | 16 |
| Gradient Accumulation Steps | 1 |
| Total Batch Size | 64 (4 GPUs) |
| Learning Rate | 2e-05 |
| LR Scheduler | cosine |
| Warmup Ratio | 0.1 |
| Max Sequence Length | 512 |
| Optimizer | adamw_torch_fused |
| Mixed Precision | BF16 |
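These hyperparameters map onto Hugging Face TrainingArguments roughly as follows. A minimal sketch for reference only; the actual training script is not part of this card, output_dir is a placeholder, and the 512-token max sequence length is enforced by the SFT trainer or tokenizer rather than by TrainingArguments:

```python
from transformers import TrainingArguments

# Reconstructed from the configuration table above; values not in the table
# (such as output_dir) are placeholders.
training_args = TrainingArguments(
    output_dir="Llama-3.2-3B_hh_helpful",
    num_train_epochs=1.0,
    per_device_train_batch_size=16,  # 16 x 4 GPUs x 1 accumulation step = 64 total
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,                       # BF16 mixed precision
    optim="adamw_torch_fused",
)
```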
Example inference with the fine-tuned model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "activeDap/Llama-3.2-3B_hh_helpful"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Format the input with an Anthropic HH-style prompt template
prompt = "Human: What is machine learning?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
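Note that generate returns the prompt tokens followed by the completion, so the decoded string above includes the prompt. To print only the newly generated text, slice off the prompt tokens before decoding:

```python
# Decode only the tokens generated after the prompt
prompt_len = inputs["input_ids"].shape[1]
completion = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(completion)
```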
If you use this model, please cite the original base model and dataset:
```bibtex
@misc{ultrafeedback2023,
  title={UltraFeedback: Boosting Language Models with High-quality Feedback},
  author={Ganqu Cui and Lifan Yuan and Ning Ding and others},
  year={2023},
  eprint={2310.01377},
  archivePrefix={arXiv}
}
```