How to use with vLLM

Install vLLM from pip, serve the model, and query the OpenAI-compatible API:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Rupesh2/testing3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Rupesh2/testing3",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
```
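
The server can also be queried from any OpenAI-compatible client. Below is a minimal Python sketch, assuming the `openai` package is installed and the server started above is running on `localhost:8000`; the API key is a placeholder, since vLLM only checks it when the server is launched with `--api-key`.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and that `vllm serve "Rupesh2/testing3"` is running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # placeholder; ignored unless --api-key is set
)

response = client.chat.completions.create(
    model="Rupesh2/testing3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```
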
Use Docker

```bash
docker model run hf.co/Rupesh2/testing3
```

Testing

Testing is a merge of the following models, created using mergekit with the DARE-TIES merge method and Orenguteng/Llama-3-8B-Lexi-Uncensored as the base model:

- Orenguteng/Llama-3-8B-Lexi-Uncensored (base)
- stephenlzc/dolphin-llama3-zh-cn-uncensored
- georgesung/llama3_8b_chat_uncensored
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K

🧩 Configuration

```yaml
models:
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: stephenlzc/dolphin-llama3-zh-cn-uncensored
    parameters:
      density: 0.53
      weight: 0.4
  - model: georgesung/llama3_8b_chat_uncensored
    parameters:
      density: 0.53
      weight: 0.3
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
  int8_mask: true
dtype: bfloat16
```
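
For local inference outside vLLM, the merged weights can be loaded with Hugging Face Transformers. Below is a minimal sketch, assuming a CUDA GPU with enough memory for the 8B bfloat16 weights and that the tokenizer ships a Llama-3 chat template:

```python
# Minimal sketch: load the merged model with Hugging Face Transformers.
# Assumes a GPU with roughly 16 GB+ of memory for the 8B bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rupesh2/testing3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat prompt; assumes the tokenizer provides a chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```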