Instructions to use alpha-ai/Reason-With-Choice-3B with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.

Transformers
How to use alpha-ai/Reason-With-Choice-3B with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="alpha-ai/Reason-With-Choice-3B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("alpha-ai/Reason-With-Choice-3B")
model = AutoModelForCausalLM.from_pretrained("alpha-ai/Reason-With-Choice-3B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
vLLM
How to use alpha-ai/Reason-With-Choice-3B with vLLM:

Install from pip and serve the model:

# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "alpha-ai/Reason-With-Choice-3B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alpha-ai/Reason-With-Choice-3B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Or use Docker:

docker model run hf.co/alpha-ai/Reason-With-Choice-3B
SGLang
How to use alpha-ai/Reason-With-Choice-3B with SGLang:

Install from pip and serve the model:

# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "alpha-ai/Reason-With-Choice-3B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alpha-ai/Reason-With-Choice-3B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Or use the Docker image:

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "alpha-ai/Reason-With-Choice-3B" \
  --host 0.0.0.0 \
  --port 30000
Unsloth Studio
How to use alpha-ai/Reason-With-Choice-3B with Unsloth Studio:

Install Unsloth Studio (macOS, Linux, WSL):

curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for alpha-ai/Reason-With-Choice-3B to start chatting

Install Unsloth Studio (Windows):

irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for alpha-ai/Reason-With-Choice-3B to start chatting

Use Hugging Face Spaces for Unsloth:

# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for alpha-ai/Reason-With-Choice-3B to start chatting

Load the model with FastModel:

pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="alpha-ai/Reason-With-Choice-3B",
    max_seq_length=2048,
)

Docker Model Runner
How to use alpha-ai/Reason-With-Choice-3B with Docker Model Runner:

docker model run hf.co/alpha-ai/Reason-With-Choice-3B
Website - https://www.alphaai.biz
Uploaded model
- Developed by: alphaaico
- License: apache-2.0
- Finetuned from model: meta-llama/Llama-3.2-3B-Instruct
- Training Framework: Unsloth + Hugging Face TRL
- Finetuning Techniques: GRPO + Reward Modelling
Overview
Reason-With-Choice-3B is not just another fine-tuned model: it does not merely generate reasoning, it decides whether reasoning is even necessary before delivering an answer. This self-reflective capability allows it to introspect, analyze, and adapt to the complexity of each question, producing an efficient, focused response.
Most models generate reasoning even when it is unnecessary, leading to bloated, redundant responses. With its built-in decision step, Reason-With-Choice-3B determines whether deep reasoning is needed or a direct answer will suffice, bringing efficiency and adaptability to AI-driven applications.
Key Highlights
- Reasoning & Self-Reflection: The model first decides if reasoning is necessary and then either provides step-by-step logic or directly answers the question.
- Structured Output: Responses follow a strict format with <think>, <reflection>, and <answer> sections, ensuring clarity and interpretability.
- Optimized Training: Trained using GRPO (Group Relative Policy Optimization) to enforce structured responses and improve decision-making.
- Efficient Inference: Fine-tuned with Unsloth & Hugging Face's TRL, ensuring faster inference speeds and optimized resource utilization.
Prompt Structure
The model generates responses in the following structured format:
<think>
[Detailed reasoning, if required. Otherwise, this section remains empty.]
</think>
<reflection>
[Internal thought process explaining whether reasoning was needed.]
</reflection>
<answer>
[Final response.]
</answer>
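Because the tag layout is fixed, downstream code can split a response into its three sections with a few regular expressions. A minimal sketch (the `parse_response` helper below is our own illustration, not part of the model's tooling):

```python
import re

def parse_response(text: str) -> dict:
    """Split a model response into its <think>, <reflection>, and <answer> parts."""
    sections = {}
    for tag in ("think", "reflection", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else ""
    return sections

reply = """<think></think>
<reflection>No reasoning needed; this is a simple factual question.</reflection>
<answer>Paris</answer>"""

parsed = parse_response(reply)
print(parsed["answer"])        # Paris
print(parsed["think"] == "")   # True: the model skipped reasoning
```

An empty `think` section is a valid output here, since the model may decide reasoning is unnecessary.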
Key Features
- Decision-Making Capability: The model intelligently determines whether reasoning is necessary before answering.
- Improved Accuracy: Training with reward functions ensures adherence to logical response structure.
- Structured Outputs: Guarantees that each response follows a predictable and interpretable format.
- Enhanced Efficiency: Optimized inference with vLLM for fast token generation and low memory footprint.
- Multi-Use Case Compatibility: Can be used for Q&A systems, logical reasoning tasks, and AI-assisted decision-making.
Quantization Levels Available
- q4_k_m
- q5_k_m
- q8_0
- 16-bit (Full Precision)
GGUF Versions - https://huggingface.co/alpha-ai/Reason-With-Choice-3B-GGUF
Ideal Configuration for Usage
- Temperature: 0.8
- Top-p: 0.95
- Max Tokens: 1024
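Assuming the Transformers pipeline from the usage section above, these settings translate into generation keyword arguments roughly as follows (the constant name and the mapping of "Max Tokens" to `max_new_tokens` are our own):

```python
# Recommended sampling settings from this model card, expressed as
# keyword arguments for Transformers' generate()/pipeline() calls.
RECOMMENDED_GENERATION_KWARGS = {
    "do_sample": True,       # temperature/top_p only apply when sampling is on
    "temperature": 0.8,
    "top_p": 0.95,
    "max_new_tokens": 1024,  # "Max Tokens" interpreted as newly generated tokens
}

# Usage with a pipeline, e.g.:
# pipe(messages, **RECOMMENDED_GENERATION_KWARGS)
```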
Use Cases
Reason-With-Choice-3B is ideal for:
- AI Research: Investigating decision-making and reasoning processes in AI.
- Conversational AI: Enhancing chatbot intelligence with structured reasoning.
- Automated Decision Support: Assisting in structured, step-by-step problem-solving.
- Educational Tools: Providing logical explanations for learning and problem-solving.
- Business Intelligence: AI-assisted decision-making for operational and strategic planning.
Limitations & Considerations
- Domain Adaptation: May require further fine-tuning for domain-specific tasks.
- Inference Time: Increased processing time when reasoning is necessary.
- Potential Biases: Outputs depend on training data and may require verification for critical applications.
License
This model is released under the Apache-2.0 license.
Acknowledgments
Special thanks to the Unsloth team for optimizing the fine-tuning pipeline and to Hugging Face's TRL for enabling advanced fine-tuning techniques.
Security & Format Considerations
This model has been saved in .bin format due to Unsloth's default serialization method. If security is a concern, we recommend converting to .safetensors using:
from transformers import AutoModelForCausalLM

# Reloading and re-saving with save_pretrained handles tied/shared
# weights, which a raw state_dict dump via safetensors can reject.
model = AutoModelForCausalLM.from_pretrained("path/to/model")
model.save_pretrained("path/to/model", safe_serialization=True)
print("Model converted to safetensors successfully.")
Alternatively, GGUF models are available for optimized inference with llama.cpp, exllama, and other runtime frameworks.
Choose the format best suited to your security, performance, and deployment requirements.
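Before loading an unfamiliar checkpoint, it can help to verify which weight formats a local directory actually contains. A small hypothetical helper (not part of Transformers or any other library):

```python
from pathlib import Path

def weight_formats(checkpoint_dir: str) -> set:
    """Return the weight-file extensions (.bin, .safetensors, .gguf)
    present in a local checkpoint directory."""
    known = {".bin", ".safetensors", ".gguf"}
    return {p.suffix for p in Path(checkpoint_dir).iterdir()
            if p.is_file() and p.suffix in known}
```

If this returns only `{'.bin'}`, the conversion snippet above applies.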