Instructions for using ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
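For a reasoning model, the default generation budget can truncate long chains of thought; `generate` in mlx-lm accepts a `max_tokens` argument to raise the cap. A minimal sketch (512 is an arbitrary choice):

```python
# Raise the generation cap so long chain-of-thought output is not cut off.
text = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```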
- llama-cpp-python
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    filename="gemma-2b-reasoning.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
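`create_chat_completion` returns an OpenAI-style response dict; a minimal sketch of capturing it and printing just the reply text:

```python
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The reply text lives under the OpenAI-style choices/message keys.
print(response["choices"][0]["message"]["content"])
```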
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning

# Run inference directly in the terminal:
llama-cli -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
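Once llama-server is running, any OpenAI-compatible client can talk to it. A minimal sketch with the openai Python package, assuming the server is on its default port 8080; the API key is a placeholder, and llama-server largely ignores the model field:

```python
# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default and does not check the key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gemma-2-2b-it-R1-Reasoning",  # largely informational for llama-server
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```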
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning

# Run inference directly in the terminal:
llama-cli -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
Use pre-built binary
```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning

# Run inference directly in the terminal:
./llama-cli -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
Use Docker
```bash
docker model run hf.co/ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
- LM Studio
- Jan
- vLLM
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
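Because the vLLM server is OpenAI-compatible, the same request can also be made with the official openai Python client. A minimal sketch, assuming the server above is running on localhost:8000; the API key is a placeholder unless the server was started with one:

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```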
Use Docker
```bash
docker model run hf.co/ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
- Ollama
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with Ollama:
```bash
ollama run hf.co/ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
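Ollama also ships an official Python client. A minimal sketch, assuming the model has already been pulled with the command above:

```python
# pip install ollama
import ollama

# Chat with the locally pulled model via the Ollama Python client.
response = ollama.chat(
    model="hf.co/ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
# Dict-style access; newer client versions also allow response.message.content.
print(response["message"]["content"])
```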
- Unsloth Studio
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning to start chatting
```
Use Hugging Face Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning to start chatting
```
- MLX LM
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning"
```
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
- Docker Model Runner
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with Docker Model Runner:
```bash
docker model run hf.co/ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
- Lemonade
How to use ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning
```
Run and chat with the model
```bash
lemonade run user.gemma-2-2b-it-R1-Reasoning-{{QUANT_TAG}}
```
List all available models
```bash
lemonade list
```
Model Summary
This model is a fine-tuned version of gemma-2-2b-it, optimized for instruction-following and reasoning tasks. It was trained using MLX and LoRA on the sequelbox/Raiden-DeepSeek-R1 dataset, which consists of 62.9k examples generated by DeepSeek R1. The fine-tuning process ran for 600 iterations to enhance the model's ability to reason through complex problems.
Model Details
- Base Model: gemma-2-2b-it
- Fine-tuning Method: MLX + LoRA
- Dataset: sequelbox/Raiden-DeepSeek-R1
- Iterations: 600
Capabilities
This model improves upon gemma-2-2b-it with additional instruction-following and reasoning capabilities derived from DeepSeek R1-generated examples. It answers simple questions directly and generates long chain-of-thought reasoning for more complex problems (a short usage sketch follows the list below). It is well-suited for:
- Question answering
- Reasoning-based tasks
- Coding
- Running on consumer hardware
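A minimal sketch of the two behaviors, reusing the mlx-lm setup from earlier; the prompts are illustrative, and `max_tokens` is raised so a long reasoning chain is not truncated:

```python
from mlx_lm import load, generate

model, tokenizer = load("ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning")

# A simple factual question should get a direct answer; a trickier
# problem should trigger a long chain-of-thought response.
for question in [
    "What is the capital of France?",                        # direct answer expected
    "A bat and a ball cost $1.10 in total; the bat costs $1 "
    "more than the ball. What does the ball cost?",          # reasoning expected
]:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        add_generation_prompt=True,
    )
    print(generate(model, tokenizer, prompt=prompt, max_tokens=1024))
```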
Limitations
- Chain-of-thought reasoning is sometimes not triggered for complex problems when it probably should be. If needed, you can nudge the model by asking it to show its thoughts; it will then generate think tags and begin reasoning (see the sketch after this list).
- On harder-than-average reasoning problems, the model can get stuck in long "thinking" loops without ever reaching a conclusive answer.
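A minimal sketch of both workarounds, reusing the llama-cpp-python setup from earlier; the nudge phrasing is illustrative, and `max_tokens` only caps a runaway thinking loop rather than fixing it:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning",
    filename="gemma-2b-reasoning.gguf",
)

question = (
    "If I have 3 boxes, each holding 4 bags of 5 marbles, "
    "how many marbles do I have in total?"
)

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        # Appending an explicit request for reasoning nudges the model
        # into emitting think tags when it would otherwise answer directly.
        "content": question + " Please show your thoughts step by step.",
    }],
    # Cap generation so a runaway "thinking" loop cannot run forever.
    max_tokens=2048,
)
print(response["choices"][0]["message"]["content"])
```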