How to use with vLLM
Install vLLM via pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
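For programmatic access, the same server can be queried with the official openai Python client, since vLLM exposes an OpenAI-compatible API. A minimal sketch, assuming the server started above is listening on localhost:8000 (the api_key value is a placeholder; vLLM does not check it by default):

# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; no real key is needed for a local server
)

response = client.chat.completions.create(
    model="QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)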
Use Docker
docker model run hf.co/QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF
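Docker Model Runner also exposes an OpenAI-compatible endpoint, so the same client pattern works here. A minimal sketch, assuming host-side TCP access to Docker Model Runner is enabled on its default port 12434; the base_url below depends on your local configuration, so adjust it accordingly:

# pip install openai
from openai import OpenAI

# Assumed endpoint: Docker Model Runner's host TCP port (12434 by default
# when enabled). Adjust base_url to match your setup.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed",  # placeholder; no key is required locally
)

response = client.chat.completions.create(
    model="hf.co/QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF",
    messages=[{"role": "user", "content": "Write a hello-world program in Python."}],
)
print(response.choices[0].message.content)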
QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF

This is a quantized version of Replete-AI/Replete-Coder-Instruct-8b-Merged, created using llama.cpp.
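Since the repository ships GGUF files, the model can also be run locally with llama-cpp-python. A minimal sketch; the filename glob below is an assumption, so check the repository's file list for the exact quantization you want:

# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# from_pretrained downloads a GGUF matching the glob from the Hub; the
# pattern here is an assumed filename, not a confirmed one.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(out["choices"][0]["message"]["content"])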

Model Description

This is a TIES merge of two base models.

The coding and overall performance of this model seem better than those of both base models used in the merge. Benchmarks are coming in the future.

Format: GGUF
Model size: 8B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
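To map each quantization level to a concrete file, the repository contents can be listed programmatically with huggingface_hub; the quant level (e.g. Q4_K_M) is encoded in each filename. A small sketch:

# pip install huggingface_hub
from huggingface_hub import list_repo_files

# Print only the GGUF files in the repo; the suffix indicates the quant level.
for name in list_repo_files("QuantFactory/Replete-Coder-Instruct-8b-Merged-GGUF"):
    if name.endswith(".gguf"):
        print(name)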
