How to use with SGLang

Install from pip and serve the model

Install SGLang from pip:

```shell
pip install sglang
```

Start the SGLang server:

```shell
python3 -m sglang.launch_server \
    --model-path "beyoru/BronCode-Thinker" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "beyoru/BronCode-Thinker",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "beyoru/BronCode-Thinker" \
        --host 0.0.0.0 \
        --port 30000
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "beyoru/BronCode-Thinker",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
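Because the endpoint is OpenAI-compatible, it can also be called from Python. The sketch below uses only the standard library and assumes the SGLang server started above is running on localhost:30000; the `build_chat_request` helper is our own illustrative name, not part of SGLang.

```python
import json
import urllib.request

# Illustrative helper: package a chat request for the OpenAI-compatible
# /v1/chat/completions endpoint exposed by the SGLang server.
def build_chat_request(model, messages, base_url="http://localhost:30000"):
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "beyoru/BronCode-Thinker",
    [{"role": "user", "content": "What is the capital of France?"}],
)

# With the server running, send it:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

The same request body works unchanged with the official `openai` client by pointing its `base_url` at the server.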
Overview
This model is optimized for concise and structured reasoning, delivering high-quality outputs with minimal verbosity. By prioritizing efficient internal reasoning over long, explicit explanations, the model provides more practical and focused responses.
This approach results in:
- Improved response quality
- Faster inference
- Lower token usage
- Better suitability for real-world and production use cases
Key Differences from Base Model
- The `<think>` token has been removed from the chat template (see Qwen3-4B-Thinking-2507, Discussion #5).
- Token generation has been reduced compared to the base model, leading to more concise outputs while maintaining reasoning quality.
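For downstream use it can help to separate any reasoning block from the final answer. Below is a minimal sketch, assuming the raw completion may contain a closing `</think>` tag (as in the Qwen3-Thinking family, where the chat template itself opens the reasoning block, so no opening `<think>` appears in the output); the `split_reasoning` helper name is ours.

```python
# Minimal sketch: separate the reasoning block from the final answer.
# Assumption: the raw completion may end its reasoning with "</think>",
# with no opening "<think>" tag present in the generated text.
def split_reasoning(completion: str) -> tuple[str, str]:
    reasoning, sep, answer = completion.partition("</think>")
    if not sep:  # no reasoning block found; everything is the answer
        return "", completion.strip()
    return reasoning.strip(), answer.strip()

raw = "The capital of France is well known.</think>Paris."
thoughts, answer = split_reasoning(raw)
```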
Intended Use
This model is well-suited for applications that require:
- Clear and direct answers
- Efficient reasoning without excessive verbosity
- Lower inference costs and faster response times
Model tree for beyoru/BronCode-Thinker
Base model: Qwen/Qwen3-4B-Thinking-2507
Gated model: log in with a HF token that has gated-access permission:

```shell
hf auth login
```