## How to use from llama.cpp

### Install from WinGet (Windows)

```bash
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/athene:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf cortexso/athene:Q4_K_M
```

### Use pre-built binary

```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/athene:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf cortexso/athene:Q4_K_M
```

### Build from source code

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/athene:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/athene:Q4_K_M
```
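For a quick non-interactive test, `llama-cli` can also take the prompt on the command line. A minimal sketch, assuming a recent build that supports the `-no-cnv` flag (which skips the interactive conversation mode); `-p` supplies the prompt and `-n` caps the number of generated tokens:

```bash
# One-shot generation: -p supplies the prompt, -n caps the number of
# generated tokens, and -no-cnv (recent builds) disables interactive chat.
./build/bin/llama-cli -hf cortexso/athene:Q4_K_M \
  -p "Explain the difference between a mutex and a semaphore." \
  -n 256 -no-cnv
```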
### Use Docker

```bash
docker model run hf.co/cortexso/athene:Q4_K_M
```

### Install from brew

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/athene:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf cortexso/athene:Q4_K_M
```
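Whichever install method you use, a running `llama-server` exposes an OpenAI-compatible HTTP API. A minimal `curl` sketch, assuming the server's default address of `http://localhost:8080` (configurable with `--host` and `--port`):

```bash
# Send a chat completion request to the local OpenAI-compatible endpoint.
# http://localhost:8080 is llama-server's default address; adjust it if
# you changed --host or --port when starting the server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a haiku about local inference."}
    ],
    "temperature": 0.7
  }'
```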
## Overview
Athene-V2-Chat-72B is an open-weight LLM that performs on par with GPT-4o across various benchmarks. It is currently ranked as the best open model on Chatbot Arena, where it outperforms GPT-4o-0513 (the highest-ranked GPT-4o model on the Arena) in the hard and math categories, and matches it in coding, instruction following, longer queries, and multi-turn conversations.
Trained with RLHF on top of Qwen-2.5-72B-Instruct as the base model, Athene-V2-Chat-72B excels at chat, math, and coding. Its sister model, Athene-V2-Agent-72B, surpasses GPT-4o in complex function calling and agentic applications.
## Variants

| No | Variant | Cortex CLI command |
|---|---|---|
| 1 | Athene-72b | `cortex run athene:72b` |
## Use it with Jan (UI)
- Install Jan using Quickstart
- Use it in the Jan model Hub: `cortexhub/athene`
## Use it with Cortex (CLI)
- Install Cortex using Quickstart
- Run the model with the command: `cortex run athene` (see the example below)
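For example, to fetch and chat with the 72b variant from the table above. This is a sketch assuming the standard `cortex pull` subcommand; `cortex run` will also download the model on demand:

```bash
# Pre-download the weights, then start an interactive chat session
# with the 72b variant (cortex run alone would also fetch it on demand).
cortex pull athene:72b
cortex run athene:72b
```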
## Credits
- Author: Nexusflow
- Converter: Homebrew
- Original License: License
- Paper: Athene V2 Blog