
How to use with llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/gemma2:
# Run inference directly in the terminal:
llama-cli -hf cortexso/gemma2:
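By default, llama-server listens on http://localhost:8080 and exposes OpenAI-compatible endpoints. A minimal sanity check against a running server, assuming the default port (the prompt text is illustrative):
# Query the OpenAI-compatible chat endpoint (assumes the default port 8080):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about llamas."}]}'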
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/gemma2:
# Run inference directly in the terminal:
llama-cli -hf cortexso/gemma2:
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/gemma2:
# Run inference directly in the terminal:
./llama-cli -hf cortexso/gemma2:
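Instead of an interactive session, llama-cli can also run a single prompt and exit; a minimal sketch using its -p (prompt) and -n (max tokens to generate) flags, with an illustrative prompt:
# One-shot generation instead of interactive chat (prompt text is illustrative):
./llama-cli -hf cortexso/gemma2: -p "Explain GGUF in one sentence." -n 128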
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/gemma2:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/gemma2:
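The configuration above builds CPU-only binaries. llama.cpp supports optional GPU backends selected at configure time; for example, an NVIDIA build might look like this (assumes the CUDA toolkit is installed):
# Configure with the CUDA backend enabled (requires the CUDA toolkit):
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli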
Use Docker
docker model run hf.co/cortexso/gemma2:
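Run with no extra arguments, this starts an interactive chat. Docker Model Runner also accepts a prompt as a trailing argument for a one-shot response (the prompt text is illustrative):
# One-shot prompt instead of an interactive session (prompt text is illustrative):
docker model run hf.co/cortexso/gemma2: "Write a haiku about llamas."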

Overview

Gemma 2 is a family of state-of-the-art open models from Google, trained on a diverse mix of web documents, code, and mathematical text. This repository provides GGUF builds of the 2B, 9B, and 27B parameter versions, each supporting a context length of 8K tokens.

Variants

No.  Variant      Cortex CLI command
1    Gemma2-2b    cortex run gemma2:2b
2    Gemma2-9b    cortex run gemma2:9b
3    Gemma2-27b   cortex run gemma2:27b

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. Search for the model in the Jan Model Hub:
    cortexso/gemma2
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with the command:
    cortex run gemma2
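To run a specific size rather than the default, append a variant tag from the table above:
    cortex run gemma2:27b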
    

Credits
