## How to use with vLLM

Install vLLM from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "NLPark/Test0_SLIDE"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "NLPark/Test0_SLIDE",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
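The same chat-completions request can be sent from Python with only the standard library. This is a minimal sketch, assuming the default vLLM port 8000 and the standard OpenAI-compatible response shape (`choices[0].message.content`); the helper names are our own:

```python
import json
import urllib.request

# Default vLLM endpoint; adjust host/port if you changed them when serving.
VLLM_URL = "http://localhost:8000/v1/chat/completions"


def build_payload(model: str, user_message: str) -> dict:
    """Build the same OpenAI-compatible request body as the curl example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def ask(question: str, model: str = "NLPark/Test0_SLIDE") -> str:
    """POST the payload to the running vLLM server and return the reply text."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_payload(model, question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server from the previous step running, `ask("What is the capital of France?")` returns the model's reply as a string.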
## Use with Docker

```shell
docker model run hf.co/NLPark/Test0_SLIDE
```
**Shi-Ci Language Identify & Decode Expositor**

8B, Ruozhiba...

A Chinese model released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng", and "Cecilia" LLM products. Sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel".

Safetensors: 8B params, BF16