How to use from vLLM
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Undi95/CodeEngine"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Undi95/CodeEngine",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
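
The same endpoint can also be queried from Python with the official openai client. This is a minimal sketch, assuming openai>=1.0 is installed; vLLM does not check the API key by default, so any placeholder value works:

# pip install openai
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the api_key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Undi95/CodeEngine",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)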
Use Docker
docker model run hf.co/Undi95/CodeEngine
Quick Links

This model is https://huggingface.co/jondurbin/airoboros-l2-13b-2.1 merged with the code LoRA adapter from https://huggingface.co/jondurbin/airoboros-lmoe-13b-2.1/tree/main/adapters/code.
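
For reference, a merge like this can be reproduced with the peft library. The sketch below is an assumption about how the base model and adapter could be combined, not the author's exact procedure, and the output directory name is hypothetical:

# pip install transformers peft accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model.
base = AutoModelForCausalLM.from_pretrained(
    "jondurbin/airoboros-l2-13b-2.1", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("jondurbin/airoboros-l2-13b-2.1")

# Apply the code LoRA adapter from the lmoe repo.
model = PeftModel.from_pretrained(
    base, "jondurbin/airoboros-lmoe-13b-2.1", subfolder="adapters/code"
)

# Merge the adapter weights into the base model and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("CodeEngine-merged")   # hypothetical output path
tokenizer.save_pretrained("CodeEngine-merged")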

For Dampf.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric               Value
-------------------  -----
Avg.                 50.96
ARC (25-shot)        58.36
HellaSwag (10-shot)  82.27
MMLU (5-shot)        54.18
TruthfulQA (0-shot)  45.18
Winogrande (5-shot)  74.59
GSM8K (5-shot)        1.52
DROP (3-shot)        40.59