How to use itsliupeng/openllama-7b-base with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="itsliupeng/openllama-7b-base")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("itsliupeng/openllama-7b-base")
model = AutoModelForCausalLM.from_pretrained("itsliupeng/openllama-7b-base")
```
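The snippets above only load the model. As a usage sketch, here is one way to generate a continuation with the directly loaded model; the prompt and sampling settings are illustrative, not prescribed by the card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("itsliupeng/openllama-7b-base")
model = AutoModelForCausalLM.from_pretrained("itsliupeng/openllama-7b-base")

# Tokenize an illustrative prompt and sample a continuation.
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```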
How to use itsliupeng/openllama-7b-base with vLLM:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "itsliupeng/openllama-7b-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "itsliupeng/openllama-7b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
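vLLM can also run offline batch inference directly in Python, without starting a server. A minimal sketch, mirroring the sampling settings of the curl example above (the prompt is illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="itsliupeng/openllama-7b-base")
params = SamplingParams(temperature=0.5, max_tokens=512)

# Generate completions for a batch of prompts.
outputs = llm.generate(["Once upon a time,"], params)
for out in outputs:
    print(out.outputs[0].text)
```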
How to use itsliupeng/openllama-7b-base with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "itsliupeng/openllama-7b-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "itsliupeng/openllama-7b-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

# Alternatively, start the server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "itsliupeng/openllama-7b-base" \
--host 0.0.0.0 \
--port 30000
# Then call the server with the same curl command shown above.
```
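Because the SGLang server exposes an OpenAI-compatible API, it can also be called from the official openai Python client instead of curl. A minimal sketch; the api_key value is a placeholder, since the local server does not check it by default:

```python
from openai import OpenAI

# Point the client at the local SGLang server started above.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.completions.create(
    model="itsliupeng/openllama-7b-base",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```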
How to use itsliupeng/openllama-7b-base with Docker Model Runner:

```shell
docker model run hf.co/itsliupeng/openllama-7b-base
```
A reproduction of OpenLLaMA trained on 128 H100 GPUs in bfloat16.
The pretraining data consists of Falcon, StarCoder, and the Wikipedia, arXiv, books, and StackExchange subsets of RedPajama; in total, nearly 1 trillion tokens.
The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning-rate schedule starting at 3e-5, using a batch size of 4M tokens.
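For concreteness, a minimal sketch of the warm-up + cosine schedule described above. Only the peak learning rate (3e-5) and warm-up length (2,000 steps) come from the card; linear warm-up, decay to zero, and the total step count (~1T tokens / 4M tokens per batch ≈ 250k steps) are illustrative assumptions:

```python
import math

def lr_at_step(step: int,
               peak_lr: float = 3e-5,
               warmup_steps: int = 2000,
               total_steps: int = 250_000) -> float:
    """Warm-up + cosine learning-rate schedule.

    peak_lr and warmup_steps come from the card; total_steps and the
    decay-to-zero floor are assumptions for illustration.
    """
    if step < warmup_steps:
        # Linear warm-up from 0 to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine decay from the peak learning rate toward zero.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```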
Detailed results can be found here (Open LLM Leaderboard).
| Metric | Value |
|---|---|
| Avg. | 47.09 |
| AI2 Reasoning Challenge (25-Shot) | 46.16 |
| HellaSwag (10-Shot) | 76.40 |
| MMLU (5-Shot) | 42.82 |
| TruthfulQA (0-shot) | 36.65 |
| Winogrande (5-shot) | 70.88 |
| GSM8k (5-shot) | 9.63 |