Instructions to use lmstudio-community/OpenThinker3-7B-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use lmstudio-community/OpenThinker3-7B-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/OpenThinker3-7B-GGUF",
    filename="OpenThinker3-7B-Q3_K_L.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
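Reasoning models like this one produce long chains of thought, so streaming the output is often more pleasant than waiting for the full completion. A minimal sketch using llama-cpp-python's stream=True option (the chunks follow the OpenAI delta convention):

# Stream the response token by token instead of waiting for the full completion
stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)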
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use lmstudio-community/OpenThinker3-7B-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
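However you install it, llama-server exposes an OpenAI-compatible API, by default on port 8080. A minimal Python client sketch using the openai package; the port and the model id below are assumptions, so check your server's startup log and its /v1/models endpoint:

# pip install openai
from openai import OpenAI

# Assumption: llama-server is running locally on its default port 8080
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M",  # assumed id; see /v1/models
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)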
Use Docker
docker model run hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use lmstudio-community/OpenThinker3-7B-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lmstudio-community/OpenThinker3-7B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "lmstudio-community/OpenThinker3-7B-GGUF",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
- Ollama
How to use lmstudio-community/OpenThinker3-7B-GGUF with Ollama:
ollama run hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
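Beyond the interactive CLI, Ollama serves a local API that the official Python package wraps. A minimal sketch, assuming the model has already been pulled with the command above:

# pip install ollama
import ollama

# Chat against the locally pulled model via Ollama's API
response = ollama.chat(
    model="hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])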
- Unsloth Studio
How to use lmstudio-community/OpenThinker3-7B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lmstudio-community/OpenThinker3-7B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lmstudio-community/OpenThinker3-7B-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for lmstudio-community/OpenThinker3-7B-GGUF to start chatting
- Pi
How to use lmstudio-community/OpenThinker3-7B-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory: pi
- Hermes Agent
How to use lmstudio-community/OpenThinker3-7B-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use lmstudio-community/OpenThinker3-7B-GGUF with Docker Model Runner:
docker model run hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
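Docker Model Runner also exposes an OpenAI-compatible API. The base URL below uses its documented default TCP port (12434), but treat both the port and the path as assumptions and check your Docker Desktop Model Runner settings:

# pip install openai
from openai import OpenAI

# Assumption: Model Runner's host TCP access is enabled on the default port 12434
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")

response = client.chat.completions.create(
    model="hf.co/lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)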
- Lemonade
How to use lmstudio-community/OpenThinker3-7B-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull lmstudio-community/OpenThinker3-7B-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.OpenThinker3-7B-GGUF-Q4_K_M
List all available models
lemonade list
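Lemonade also runs as a local server with an OpenAI-compatible API. A sketch against a commonly documented base URL; the port, path, and model id are assumptions, so verify them against your installation and the output of lemonade list:

# pip install openai
from openai import OpenAI

# Assumption: Lemonade Server's default OpenAI-compatible base URL
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="none")

response = client.chat.completions.create(
    model="user.OpenThinker3-7B-GGUF-Q4_K_M",  # name as shown by `lemonade list`
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)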
💫 Community Model> OpenThinker3 7B by Open-Thoughts
👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord.
Model creator: open-thoughts
Original model: OpenThinker3-7B
GGUF quantization: provided by bartowski based on llama.cpp release b5596
Technical Details
- Supports a context length of 32k tokens
- Trained on the new OpenThoughts3-1.2M dataset, consisting of 850k math questions, 200k code questions, and 100k science questions
- More details are available in their paper and blog post
Special thanks
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
Available quantizations: 3-bit, 4-bit, 6-bit, 8-bit
Model tree for lmstudio-community/OpenThinker3-7B-GGUF
Base model: Qwen/Qwen2.5-7B