Instructions for using Wladastic/Mini-Think-Base-1B with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Wladastic/Mini-Think-Base-1B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Wladastic/Mini-Think-Base-1B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Wladastic/Mini-Think-Base-1B")
model = AutoModelForCausalLM.from_pretrained("Wladastic/Mini-Think-Base-1B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Wladastic/Mini-Think-Base-1B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Wladastic/Mini-Think-Base-1B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Wladastic/Mini-Think-Base-1B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/Wladastic/Mini-Think-Base-1B
```
- SGLang
How to use Wladastic/Mini-Think-Base-1B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Wladastic/Mini-Think-Base-1B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Wladastic/Mini-Think-Base-1B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Wladastic/Mini-Think-Base-1B" \
    --host 0.0.0.0 \
    --port 30000
```

- Docker Model Runner
How to use Wladastic/Mini-Think-Base-1B with Docker Model Runner:
```shell
docker model run hf.co/Wladastic/Mini-Think-Base-1B
```
MiniThink-1B-base
MiniThink-1B is an experiment to reproduce the "Aha!" moment in AI. It is trained using a modified version of the method from Unsloth's R1 training blog post and the accompanying notebook for training Llama 3.1 8B to learn R1-style reasoning.
MiniThink is a fine-tuned version of the unsloth/Llama-3.2-1B-Instruct model.
Model Details
- Base Model: unsloth/Llama-3.2-1B-Instruct
- Training: Fine-tuned using progressive LoRA (ranks: 16 → 32 → 64) with Unsloth's optimization framework
- Task: Mathematical and logical reasoning with explicit, step-by-step thought processes
- Training Data: GSM8K dataset enhanced with think-aloud prompting
- Input Format: Questions requiring detailed, structured reasoning
- Output Format: A comprehensive thinking process enclosed in `<think>` tags, followed by the final answer
Dataset used
The model was trained on a modified version of OpenAI's GSM8K dataset, which contains roughly 8K grade-school math word problems with single-number answers. To improve training results, the dataset was filtered to exclude answers containing comma- or period-separated numbers (e.g. 1,000 or 3.5).
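The exact preprocessing script is not published here, but the filtering step described above can be sketched roughly as follows. The regex patterns and record shapes are assumptions; GSM8K marks the final answer with a `#### <number>` line, which is what the helper below relies on:

```python
import re

# GSM8K-style solutions end with a "#### <answer>" marker.
ANSWER_RE = re.compile(r"####\s*(.+)\s*$")

def extract_answer(example: dict) -> str:
    """Pull the final answer string from a GSM8K-style record."""
    match = ANSWER_RE.search(example["answer"])
    return match.group(1).strip() if match else ""

def is_plain_integer(answer: str) -> bool:
    """Keep only answers that are a bare run of digits (no '1,000' or '3.5')."""
    return bool(re.fullmatch(r"-?\d+", answer))

# Hypothetical records in GSM8K format:
records = [
    {"question": "...", "answer": "3 + 2 = 5\n#### 5"},
    {"question": "...", "answer": "500 * 2 = 1,000\n#### 1,000"},
    {"question": "...", "answer": "7 / 2 = 3.5\n#### 3.5"},
]

kept = [r for r in records if is_plain_integer(extract_answer(r))]
print(len(kept))  # -> 1 (only the "5" record survives)
```

This keeps answers that tokenize cleanly as a single integer, which simplifies exact-match reward checks during training.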
System Prompt
The model is trained with the following system prompt to guide its reasoning process:
```python
# Define special tokens for the thinking process
THINK_START = "<think>"
THINK_END = "</think>"

SYSTEM_PROMPT = f"""Show your reasoning process using <think> tags, then provide your answer. For example:
Question: "Janet has 3 apples. She buys 2 more. How many apples does she have?"
{THINK_START}
Let me solve this step by step:
- Janet starts with 3 apples
- She buys 2 more apples
- I need to add: 3 + 2 = 5
Wait, let me verify:
- Initial apples: 3
- Added apples: 2
Yes, the total is 5 apples
{THINK_END}
5"""
```
Usage
The model expects a chat-like input and responds with a structured breakdown of its reasoning. For example:
Input:
Question: “Janet has 3 apples. She buys 2 more. How many apples does she have?”
Output:
<think>
Let me solve this step by step:
- Janet starts with 3 apples
- She buys 2 more apples
- I need to add: 3 + 2 = 5
Wait, let me verify:
- Initial apples: 3
- Added apples: 2
Yes, the total is 5 apples
</think>
5
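When consuming the model's output programmatically, the `<think>` block and the final answer need to be split apart. The helper below is not part of the model, just one plausible way to parse responses that follow the trained format:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes the trained format: a <think>...</think> block
    followed by the final answer.
    """
    match = re.search(r"<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match is None:
        return "", text.strip()  # fallback: no think block emitted
    return match.group(1).strip(), match.group(2).strip()

response = """<think>
Let me solve this step by step:
- Janet starts with 3 apples
- She buys 2 more apples
- I need to add: 3 + 2 = 5
</think>
5"""

reasoning, answer = split_reasoning(response)
print(answer)  # -> 5
```

The fallback branch matters in practice: small models occasionally skip the tags, and treating the whole response as the answer degrades gracefully.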
Limitations
- Being a 1B-parameter model, its performance is naturally more limited compared to larger models.
- Optimized for mathematical and logical tasks; however, complex computations may occasionally yield errors.
- Always verify critical outputs.
Training
The model was trained using:
- Progressive LoRA: Gradually increasing ranks from 16 to 32 and finally 64
- Mixed Precision Training: Utilizing bf16 where supported for optimal performance
- GRPO (Group Relative Policy Optimization): Implemented via the Unsloth framework for reward-guided training
- Data: GSM8K dataset enriched with explicit think-aloud examples
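GRPO scores several sampled completions per prompt and nudges the policy toward the higher-reward samples, so rule-based reward functions do the guiding. The exact rewards used for this model are not listed here; as an illustrative sketch (function names and weights are assumptions, not the actual training code), a format reward and a correctness reward might look like:

```python
import re

# A well-formed completion: a <think> block, then a non-empty answer.
FORMAT_RE = re.compile(r"^<think>.*?</think>\s*\S+", re.DOTALL)

def format_reward(completion: str) -> float:
    """Small reward for emitting the <think>...</think> structure at all."""
    return 0.5 if FORMAT_RE.match(completion.strip()) else 0.0

def correctness_reward(completion: str, gold: str) -> float:
    """Larger reward when the text after </think> exactly matches the gold answer."""
    tail = completion.split("</think>")[-1].strip()
    return 2.0 if tail == gold else 0.0

completion = "<think>3 + 2 = 5</think>\n5"
score = format_reward(completion) + correctness_reward(completion, "5")
print(score)  # -> 2.5
```

Weighting correctness above format pushes the model to get answers right, while the format reward keeps the reasoning structure from collapsing early in training.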
License
This model adheres to the licensing terms of the base Llama-3.2 1B model. Please refer to Meta's Llama-3.2 1B license for details on usage terms and conditions.
Framework
Developed using the Unsloth Framework, this model leverages advanced techniques like GRPO and progressive LoRA optimization for efficient training and fine-tuning of large language models.