TÜLU 3: Pushing Frontiers in Open Language Model Post-Training
Paper: arXiv:2411.15124
Install from winget

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI
# (replace <tag> with one of the quantizations listed below):
llama-server -hf cortexso/tulu3:<tag>

# Run inference directly in the terminal:
llama-cli -hf cortexso/tulu3:<tag>
```
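Once llama-server is up (via whichever install path you use), any OpenAI-compatible client can talk to it. A minimal sketch with curl, assuming the default port 8080 (configurable with `--port`):

```sh
# Query the local OpenAI-compatible chat endpoint served by llama-server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is 12 * 17?"}
    ],
    "temperature": 0.7
  }'
```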
Use a pre-built binary

Download a pre-built binary from https://github.com/ggerganov/llama.cpp/releases, then:

```sh
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/tulu3:<tag>

# Run inference directly in the terminal:
./llama-cli -hf cortexso/tulu3:<tag>
```

Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
```
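The configure step above produces a CPU-only build. llama.cpp also supports hardware backends through cmake options; as one hedged example, an NVIDIA GPU build (requires the CUDA toolkit installed) swaps the configure step for:

```sh
# Enable the CUDA backend at configure time, then build as before:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
```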
Then run the binaries from the build directory:

```sh
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/tulu3:<tag>

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/tulu3:<tag>
```

Run with Docker

```sh
# Requires Docker Model Runner; replace <tag> as above:
docker model run hf.co/cortexso/tulu3:<tag>
```

Tülu 3 is a state-of-the-art instruction-following model family developed by the Allen Institute for AI. It is designed to excel at a wide range of tasks beyond standard chat, including complex problem-solving, as measured by benchmarks such as MATH, GSM8K, and IFEval. The Tülu 3 series provides a fully open ecosystem: its datasets, training code, and fine-tuning recipes are all public, supporting advanced model customization and experimentation.
Tülu 3 is also available through the Cortex CLI:

| No | Variant | Cortex CLI command |
|---|---|---|
| 1 | Tulu3-8b | `cortex run tulu3:8b` |
```sh
# Pull and run via Cortex (registry entry: cortexhub/tulu3):
cortex run tulu3
```
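Cortex also exposes an OpenAI-compatible HTTP API once the engine is running. A minimal sketch, assuming the default API port used by recent Cortex releases (39281); check your install's docs if it differs:

```sh
# Chat with the locally served model over Cortex's OpenAI-compatible API:
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tulu3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```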
Available quantizations:

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
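As a rough guide for choosing a variant: file size and memory footprint scale roughly with bits per weight. A back-of-the-envelope sketch for an 8B-parameter model, ignoring KV cache and runtime overhead:

```sh
# Approximate model size: 8e9 params * bits-per-weight / 8 bits-per-byte
for bits in 2 3 4 5 6 8; do
  awk -v b="$bits" 'BEGIN { printf "%s-bit: ~%.0f GB\n", b, 8e9 * b / 8 / 1e9 }'
done
```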
Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/tulu3:<tag>

# Run inference directly in the terminal:
llama-cli -hf cortexso/tulu3:<tag>
```
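Beyond the interactive default, llama-cli can also run a single prompt non-interactively. A small sketch, where `<tag>` is whichever quantization you pulled:

```sh
# -p supplies the prompt, -n caps the number of generated tokens:
llama-cli -hf cortexso/tulu3:<tag> -p "Summarize the Tülu 3 recipe in two sentences." -n 128
```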