Instructions to use QuantFactory/Marco-o1-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QuantFactory/Marco-o1-GGUF with Transformers:
```python
# Load model directly
from transformers import AutoModel

# This repo ships GGUF weights, so pass a concrete GGUF file to dequantize
# (filename taken from the llama-cpp-python example below)
model = AutoModel.from_pretrained("QuantFactory/Marco-o1-GGUF", gguf_file="Marco-o1.Q2_K.gguf", dtype="auto")
```
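Once loaded this way, the checkpoint is dequantized and behaves like any other Transformers causal LM. A minimal generation sketch, using AutoModelForCausalLM so that .generate() is available (the prompt and generation settings are illustrative assumptions):

```python
# Minimal generation sketch; the GGUF file is dequantized on load
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "QuantFactory/Marco-o1-GGUF"
gguf = "Marco-o1.Q2_K.gguf"

tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf, dtype="auto")

inputs = tokenizer("How many 'r's are in 'strawberry'?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```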
- llama-cpp-python
How to use QuantFactory/Marco-o1-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Marco-o1-GGUF",
    filename="Marco-o1.Q2_K.gguf",
)
```
```python
llm.create_chat_completion(
    messages=[
        # The card defines no input example for this task; this prompt is illustrative
        {"role": "user", "content": "Hello, who are you?"}
    ]
)
```
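The call returns an OpenAI-style response dict. A minimal sketch of reading the reply text (the prompt is an illustrative assumption):

```python
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}]
)
# The reply text sits in the OpenAI-style "choices" array
print(response["choices"][0]["message"]["content"])
```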
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Marco-o1-GGUF with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Marco-o1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Marco-o1-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Marco-o1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Marco-o1-GGUF:Q4_K_M
```
Use pre-built binary
```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Marco-o1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Marco-o1-GGUF:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Marco-o1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Marco-o1-GGUF:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/QuantFactory/Marco-o1-GGUF:Q4_K_M
```
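Whichever install route you use, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default) alongside the web UI. A minimal curl sketch (the prompt is illustrative):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, who are you?"}]}'
```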
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Marco-o1-GGUF with Ollama:
```bash
ollama run hf.co/QuantFactory/Marco-o1-GGUF:Q4_K_M
```
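After the first run, Ollama keeps the model locally and also serves it through its REST API (port 11434 by default). A minimal sketch (the prompt is illustrative):

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/QuantFactory/Marco-o1-GGUF:Q4_K_M",
  "messages": [{"role": "user", "content": "Hello, who are you?"}]
}'
```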
- Unsloth Studio
How to use QuantFactory/Marco-o1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Marco-o1-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Marco-o1-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Marco-o1-GGUF to start chatting
```
- Docker Model Runner
How to use QuantFactory/Marco-o1-GGUF with Docker Model Runner:
```bash
docker model run hf.co/QuantFactory/Marco-o1-GGUF:Q4_K_M
```
- Lemonade
How to use QuantFactory/Marco-o1-GGUF with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Marco-o1-GGUF:Q4_K_M
```
Run and chat with the model
```bash
lemonade run user.Marco-o1-GGUF-Q4_K_M
```
List all available models
```bash
lemonade list
```
QuantFactory/Marco-o1-GGUF
This is a quantized version of AIDC-AI/Marco-o1, created using llama.cpp.
Original Model Card
🍓 Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions
⭐ MarcoPolo Team ⭐
🎯 Marco-o1 not only focuses on disciplines with standard answers, such as mathematics, physics, and coding—which are well-suited for reinforcement learning (RL)—but also places greater emphasis on open-ended resolutions. We aim to address the question: "Can the o1 model effectively generalize to broader domains where clear standards are absent and rewards are challenging to quantify?"
Currently, the Marco-o1 Large Language Model (LLM) is powered by Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and _innovative reasoning strategies_—optimized for complex real-world problem-solving tasks.
🚀 Highlights
Currently, our work is distinguished by the following highlights:
- 🍀 Fine-Tuning with CoT Data: We develop Marco-o1-CoT by performing full-parameter fine-tuning on the base model using an open-source CoT dataset combined with our self-developed synthetic data.
- 🍀 Solution Space Expansion via MCTS: We integrate LLMs with MCTS (Marco-o1-MCTS), using the model's output confidence to guide the search and expand the solution space.
- 🍀 Reasoning Action Strategy: We implement novel reasoning action strategies and a reflection mechanism (Marco-o1-MCTS Mini-Step), including exploring different action granularities within the MCTS framework and prompting the model to self-reflect, thereby significantly enhancing the model's ability to solve complex problems.
- 🍀 Application in Translation Tasks: We are the first to apply Large Reasoning Models (LRM) to machine translation tasks, exploring inference-time scaling laws in the multilingual and translation domain.
OpenAI recently introduced the groundbreaking o1 model, renowned for its exceptional reasoning capabilities. This model has demonstrated outstanding performance on platforms such as AIME and CodeForces, surpassing other leading models. Inspired by this success, we aimed to push the boundaries of LLMs even further, enhancing their reasoning abilities to tackle complex, real-world challenges.
🌍 Marco-o1 leverages advanced techniques like CoT fine-tuning, MCTS, and Reasoning Action Strategies to enhance its reasoning power. As shown in Figure 2, by fine-tuning Qwen2-7B-Instruct with a combination of the filtered Open-O1 CoT dataset, Marco-o1 CoT dataset, and Marco-o1 Instruction dataset, Marco-o1 improved its handling of complex tasks. MCTS allows exploration of multiple reasoning paths using confidence scores derived from softmax-applied log probabilities of the top-k alternative tokens, guiding the model to optimal solutions. Moreover, our reasoning action strategy involves varying the granularity of actions within steps and mini-steps to optimize search efficiency and accuracy.
Figure 2: The overview of Marco-o1.
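To make the confidence-guided search concrete, the sketch below computes the per-step value described above: a softmax over the log-probabilities of the top-k alternative tokens, averaging the chosen token's share across the step. The helper name, the k=5 default, and the input layout are illustrative assumptions, not the authors' implementation:

```python
import math

def step_confidence(token_logprobs, k=5):
    """Hypothetical per-step confidence: for each generated token, take the
    chosen token's share of a softmax over the top-k alternative log-probs,
    then average over the step's tokens (the value that guides MCTS)."""
    scores = []
    for alternatives in token_logprobs:
        top_k = alternatives[:k]  # chosen token's log-prob listed first
        denom = sum(math.exp(lp) for lp in top_k)  # softmax normalizer
        scores.append(math.exp(top_k[0]) / denom)
    return sum(scores) / len(scores)
```

For example, step_confidence([[-0.1, -2.5, -3.0]]) ≈ 0.87: a token the model was nearly certain about contributes a confidence close to 1, steering the search toward that reasoning path.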
🌏 As shown in Figure 3, Marco-o1 achieved accuracy improvements of +6.17% on the MGSM (English) dataset and +5.60% on the MGSM (Chinese) dataset, showcasing enhanced reasoning capabilities.
Figure 3: The main results of Marco-o1.
🌎 Additionally, in translation tasks, we demonstrate that Marco-o1 excels at translating slang expressions, such as rendering "这个鞋拥有踩屎感" (literal translation: "This shoe offers a stepping-on-poop sensation.") as "This shoe has a comfortable sole," showcasing its superior grasp of colloquial nuances.
Figure 4: The demonstration of the translation task using Marco-o1.
For more information, please visit our GitHub.
Usage
Load Marco-o1-CoT model:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AIDC-AI/Marco-o1")
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Marco-o1")
```
Inference:
Execute the inference script (you can edit it to supply your own inputs):
```bash
./src/talk_with_model.py

# Use vLLM
./src/talk_with_model_vllm.py
```
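For a direct call without the bundled scripts, a minimal vLLM sketch might look like this (the prompt and sampling settings are illustrative assumptions, not taken from talk_with_model_vllm.py):

```python
# Minimal vLLM sketch; prompt and sampling settings are illustrative
from vllm import LLM, SamplingParams

llm = LLM(model="AIDC-AI/Marco-o1")
sampling = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["How many 'r's are in 'strawberry'?"], sampling)
print(outputs[0].outputs[0].text)
```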
👨🏻‍💻 Acknowledgement
Main Contributors
From MarcoPolo Team, AI Business, Alibaba International Digital Commerce:
- Yu Zhao
- Huifeng Yin
- Hao Wang
- Longyue Wang
Citation
If you find Marco-o1 useful for your research and applications, please cite:
```bibtex
@misc{zhao2024marcoo1openreasoningmodels,
      title={Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions},
      author={Yu Zhao and Huifeng Yin and Bo Zeng and Hao Wang and Tianqi Shi and Chenyang Lyu and Longyue Wang and Weihua Luo and Kaifu Zhang},
      year={2024},
      eprint={2411.14405},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.14405},
}
```
LICENSE
This project is licensed under the Apache License, Version 2.0 (SPDX-License-Identifier: Apache-2.0).
DISCLAIMER
We used compliance-checking algorithms during the training process to ensure the compliance of the trained model and dataset to the best of our ability. Due to the complexity of the data and the diversity of language-model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.