How to use with vLLM
Install vLLM with pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "NLPark/Test1_SLIDE"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "NLPark/Test1_SLIDE",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
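The same request can be made from Python with only the standard library, since the vLLM server exposes an OpenAI-compatible chat-completions endpoint. This is a minimal sketch assuming the server from the command above is running on `localhost:8000`; the `chat` helper name is ours, not part of any API.

```python
import json
import urllib.request

# Chat-completions payload, mirroring the curl example above.
payload = {
    "model": "NLPark/Test1_SLIDE",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

def chat(url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to a running vLLM server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        out = json.loads(resp.read())
    # Standard OpenAI-style response shape: first choice, assistant message.
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat())
```

The guard at the bottom means importing the module only builds the payload; the network call happens only when the script is run directly against a live server.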
Use Docker
docker model run hf.co/NLPark/Test1_SLIDE
Shi-Ci Language Identify & Decode Expositor

8B, Ruozhiba...

Chinese and English. Test 1 of the series, released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products. The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin" and "Kestrel".

Downloads last month: 8,145
Model size: 8B params (Safetensors)
Tensor type: BF16

Model tree for NLPark/Test1_SLIDE
Merges: 1 model

Spaces using NLPark/Test1_SLIDE: 9