Text Generation
Transformers
PyTorch
TensorBoard
Safetensors
gpt_bigcode
Generated from Trainer
text-generation-inference
Instructions to use HuggingFaceH4/starchat-beta with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HuggingFaceH4/starchat-beta with Transformers (a fuller end-to-end generation sketch appears after the Local Apps section below):
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-beta")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/starchat-beta")

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use HuggingFaceH4/starchat-beta with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceH4/starchat-beta"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "HuggingFaceH4/starchat-beta",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker
docker model run hf.co/HuggingFaceH4/starchat-beta
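The vLLM server above exposes an OpenAI-compatible API, so it can also be queried from Python instead of curl. A minimal sketch, assuming the `openai` client package is installed; the SGLang server shown below serves the same API, so the only change needed there is the port (30000 instead of 8000):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
# Local servers generally ignore the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="HuggingFaceH4/starchat-beta",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```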
- SGLang
How to use HuggingFaceH4/starchat-beta with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "HuggingFaceH4/starchat-beta" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "HuggingFaceH4/starchat-beta",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "HuggingFaceH4/starchat-beta" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "HuggingFaceH4/starchat-beta",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

- Docker Model Runner
How to use HuggingFaceH4/starchat-beta with Docker Model Runner:
docker model run hf.co/HuggingFaceH4/starchat-beta
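Combining the Transformers pipeline snippet from the Libraries section with the StarChat prompt format used in the discussion below, an end-to-end generation sketch could look like the following; the sampling values are illustrative and not prescribed by the model card:

```python
from transformers import pipeline

# Same pipeline as in the Transformers snippet above
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta")

# StarChat prompt format: system / user / assistant turns delimited by special tokens
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")

outputs = pipe(
    prompt,
    max_new_tokens=256,   # illustrative generation budget
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
    eos_token_id=49155,   # stop token id used in the discussion below
)
print(outputs[0]["generated_text"])
```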
The Inference API returns an incomplete response
#8
by aidan377 - opened
When I use the Inference API, it returns a very short answer. Can you help me figure out the reason? Am I using the wrong code?
The code to request:
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# print(prompt)
output = query({
    "inputs": prompt,
    "parameters": {
        "max_new_token": 256,
        "temperature": 0.2,
        "do_sample": True,
        "top_k": 50,
        "top_p": 0.95,
        "eos_token_id": 49155,
        "return_full_text": False
    }
})
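The `query` helper is not shown in the post; presumably it is the standard requests-based snippet from the Hugging Face Inference API docs, roughly as follows (the token is a placeholder):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/HuggingFaceH4/starchat-beta"
headers = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

def query(payload):
    # POST the JSON payload to the hosted Inference API and return the parsed response
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
```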
The response:
[{'generated_text': '\nThere are multiple ways to sort a list in Python. One of the most common ways is to'}]
Try to use "return_full_text": True.
I have tried that; the results are the same. Is there any limitation on API usage?