How to use with llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/olmo-2:
# Run inference directly in the terminal:
llama-cli -hf cortexso/olmo-2:
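
Once llama-server is running, you can query its OpenAI-compatible HTTP API from any client. A minimal sketch with curl, assuming the server's default address of http://localhost:8080 (adjust the host and port if you launched it with different settings):

# Send a chat completion request to the local server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ]
  }'
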
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/olmo-2:
# Run inference directly in the terminal:
llama-cli -hf cortexso/olmo-2:
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/olmo-2:
# Run inference directly in the terminal:
./llama-cli -hf cortexso/olmo-2:
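
llama-cli can also answer a single prompt non-interactively instead of opening a chat session. A minimal sketch using the standard -p (prompt) and -n (number of tokens to generate) flags; on newer builds that default to conversation mode, the -no-cnv flag switches back to plain one-shot completion:

# Generate up to 128 tokens for one prompt, then exit:
./llama-cli -hf cortexso/olmo-2: -p "Why is the sky blue?" -n 128 -no-cnv
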
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/olmo-2:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/olmo-2:
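
The default CMake configuration builds for CPU only. Recent llama.cpp versions also offer optional GPU backends; as one example (an optional variation, not a required step), machines with the CUDA toolkit installed can enable the CUDA backend at configure time:

# Optional: rebuild with CUDA support for NVIDIA GPUs:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
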
Use Docker
docker model run hf.co/cortexso/olmo-2:
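
The docker model command is provided by Docker Model Runner, which ships with recent Docker Desktop releases. Run it with no prompt argument for an interactive chat, or pass a one-shot prompt as sketched below (this assumes your Docker installation includes Model Runner):

# Pull the model once, then answer a single prompt and exit:
docker model pull hf.co/cortexso/olmo-2:
docker model run hf.co/cortexso/olmo-2: "Why is the sky blue?"
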

Overview

OLMo-2 is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset, with all code, checkpoints, logs (coming soon), and associated training details made openly available.

OLMo-2 13B Instruct (November 2024) is a post-trained variant of the OLMo-2 13B model. It was first supervised fine-tuned on an OLMo-specific variant of the Tülu 3 dataset, then further trained with Direct Preference Optimization (DPO) and Reinforcement Learning with Verifiable Rewards (RLVR), optimizing it for state-of-the-art performance on chat and on benchmarks such as MATH, GSM8K, and IFEval.

Variants

| No | Variant    | Cortex CLI command     |
|----|------------|------------------------|
| 1  | Olmo-2-7b  | cortex run olmo-2:7b   |
| 2  | Olmo-2-13b | cortex run olmo-2:13b  |
| 3  | Olmo-2-32b | cortex run olmo-2:32b  |

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. In the Jan Model Hub, use:
    cortexhub/olmo-2
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with the command:
    cortex run olmo-2
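
Cortex also exposes a local OpenAI-compatible API once the server is running. A minimal sketch with curl, assuming the default port 39281 and the model id olmo-2 (both are assumptions here; adjust them to match your installation):

# Query Cortex's OpenAI-compatible chat endpoint:
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "olmo-2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'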
    

Credits

The original OLMo-2 models were developed by the Allen Institute for AI (Ai2); this repository provides GGUF builds of them.

Model details

Format: GGUF
Model size: 32B params
Architecture: olmo2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
