## How to use with SGLang

```shell
# Gated model: log in with an HF token that has gated-access permission
hf auth login
```
### Install from pip and serve the model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "beyoru/BronCode-Thinker" \
    --host 0.0.0.0 \
    --port 30000
```
```shell
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "beyoru/BronCode-Thinker",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
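The same request can be issued from Python. The sketch below uses only the standard library and assumes the SGLang server started above is listening on `localhost:30000`; the actual network call is left commented out so the snippet stands on its own.

```python
import json
import urllib.request

# Assumed local SGLang endpoint from the launch command above.
SERVER = "http://localhost:30000/v1/chat/completions"

def build_chat_request(messages, model="beyoru/BronCode-Thinker", max_tokens=512):
    """Build an OpenAI-compatible chat-completion payload."""
    return {"model": model, "messages": messages, "max_tokens": max_tokens}

def send(payload):
    """POST the payload to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        SERVER,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    [{"role": "user", "content": "What is the capital of France?"}]
)
# reply = send(payload)  # requires the server to be running
# print(reply["choices"][0]["message"]["content"])
```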
### Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "beyoru/BronCode-Thinker" \
        --host 0.0.0.0 \
        --port 30000
```
The containerized server exposes the same OpenAI-compatible API on port 30000, so the curl request shown above works unchanged.
## Overview

This model is optimized for concise, structured reasoning, delivering high-quality outputs with minimal verbosity. By prioritizing efficient internal reasoning over long, explicit explanations, it produces more practical and focused responses.

This approach results in:

- Improved response quality
- Faster inference
- Lower token usage
- Better suitability for real-world and production use cases
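One way to check the lower token usage in practice is to read the `usage` block that OpenAI-compatible servers such as SGLang include in each chat-completion response. A minimal sketch; the sample response below is illustrative, not real model output:

```python
def completion_tokens(response: dict) -> int:
    """Return the number of generated tokens reported by an
    OpenAI-compatible chat-completion response."""
    return response.get("usage", {}).get("completion_tokens", 0)

# Illustrative response shape; real values come from the server.
sample = {
    "choices": [{"message": {"role": "assistant", "content": "Paris."}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 87, "total_tokens": 99},
}
print(completion_tokens(sample))
```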

## Key Differences from the Base Model

- The `<think>` token has been removed from the chat template (see Qwen3-4B-Thinking-2507, Discussion #5).
- Output length has been reduced relative to the base model, yielding more concise responses while preserving reasoning quality.
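When comparing outputs against the base model, whose chat template still emits `<think>...</think>` reasoning spans, a small hypothetical helper like the one below can strip those spans so that only the final answers are compared:

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning spans, as emitted by the
    base model's chat template, leaving only the final answer."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

base_output = "<think>France's capital is Paris.</think>The capital of France is Paris."
print(strip_think(base_output))
```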

## Intended Use

This model is well-suited for applications that require:

- Clear and direct answers
- Efficient reasoning without excessive verbosity
- Lower inference costs and faster response times
Model details: Safetensors · 4B params · BF16