Instructions to use Tiiny/SmallThinker-21BA3B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Tiiny/SmallThinker-21BA3B-Instruct with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Tiiny/SmallThinker-21BA3B-Instruct")
```

Load the model directly:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("Tiiny/SmallThinker-21BA3B-Instruct", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Tiiny/SmallThinker-21BA3B-Instruct with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Tiiny/SmallThinker-21BA3B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-21BA3B-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:
```shell
docker model run hf.co/Tiiny/SmallThinker-21BA3B-Instruct
```
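The curl call above can also be issued from Python using only the standard library. This is a minimal sketch, assuming the vLLM server from the previous step is already running on its default port 8000:

```python
# A minimal sketch of the same completion request as the curl call above,
# built with only the Python standard library. Assumes a vLLM server is
# already running locally on port 8000 (the default of `vllm serve`).
import json
from urllib import request

payload = {
    "model": "Tiiny/SmallThinker-21BA3B-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "http://localhost:8000/v1/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up; left commented so the snippet
# runs without a live server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

Because the endpoint is OpenAI-compatible, the same request body works with any OpenAI-style client library by pointing its base URL at the local server.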
- SGLang
How to use Tiiny/SmallThinker-21BA3B-Instruct with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Tiiny/SmallThinker-21BA3B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-21BA3B-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Tiiny/SmallThinker-21BA3B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-21BA3B-Instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Tiiny/SmallThinker-21BA3B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/Tiiny/SmallThinker-21BA3B-Instruct
```
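The servers above all return the OpenAI-compatible completions schema, so client code can stay the same regardless of which backend serves the model. As a sketch of extracting the generated text (the JSON here is a hand-written illustration of the response shape, not real model output):

```python
import json

# Hand-written illustration of an OpenAI-compatible completions response;
# the id and text values are made up, not real server output.
raw = """
{
  "id": "cmpl-example",
  "object": "text_completion",
  "model": "Tiiny/SmallThinker-21BA3B-Instruct",
  "choices": [
    {"index": 0, "text": " there was a tiny model.", "finish_reason": "length"}
  ]
}
"""

resp = json.loads(raw)
# The generated continuation lives in choices[0].text:
completion = resp["choices"][0]["text"]
print(completion)
```

The `finish_reason` field tells you whether generation stopped at `max_tokens` (`"length"`) or at a natural stopping point (`"stop"`).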
Are there any other frameworks tested besides transformers that can be deployed?
For example, vLLM, llama.cpp, SGLang, etc. Since the released models include both safetensors and GGUF weights, I think llama.cpp should be able to support them. Also, does your own framework, PowerInfer, have adaptations for this model family?
Yes, you can try our model with the latest llama.cpp and PowerInfer. For other frameworks such as vLLM, we have already submitted a PR, available at https://github.com/vllm-project/vllm/pull/21670.
The paper's abstract suggests this co-designed system largely removes the requirement for costly GPUs, implying CPU optimization. I'm curious if GPUs still see performance gains from its novel network architecture.
Thank you for your reply. It's very helpful.