EVE-Instruct GGUF

GGUF quantizations of eve-esa/EVE-Instruct, produced with llama.cpp using an importance matrix (imatrix) for improved quantization quality.
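For reference, an imatrix quantization pipeline with llama.cpp looks roughly like the sketch below. The file names, calibration corpus, and paths are illustrative, not the exact ones used for these files.

```shell
# Convert the HF checkpoint to an FP16 GGUF (paths are illustrative)
python convert_hf_to_gguf.py ./EVE-Instruct --outfile eve-f16.gguf --outtype f16

# Compute an importance matrix from a calibration corpus
llama-imatrix -m eve-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize, weighting the rounding error by the importance matrix
llama-quantize --imatrix imatrix.dat eve-f16.gguf eve_v05-Q4_K_M.gguf Q4_K_M
```

The importance matrix records per-weight activation statistics over the calibration text, so the quantizer can spend its error budget on the weights that matter least.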

EVE-Instruct is a 24B-parameter fine-tune of Mistral Small 3.2, specialised for Earth Observation and Earth Science, created by ESA Phi-lab, Pi School, and Mistral AI.

Available Quantizations

File                 Quant   Size
eve_v05-Q4_K_M.gguf  Q4_K_M  14.3 GB
eve_v05-Q5_K_M.gguf  Q5_K_M  16.8 GB
eve_v05-Q6_K.gguf    Q6_K    19.3 GB
eve_v05-Q8_0.gguf    Q8_0    25.1 GB
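As a rough sanity check on the table, file size divided by parameter count gives effective bits per weight. This sketch assumes a flat 24e9 parameters and treats the whole file as weights (ignoring GGUF metadata and the tensors llama.cpp keeps at higher precision), so the numbers are approximate.

```python
# Approximate bits-per-weight implied by each file size in the table.
PARAMS = 24e9  # assumed nominal parameter count

files = {
    "Q4_K_M": 14.3e9,  # sizes from the table, taken as decimal GB
    "Q5_K_M": 16.8e9,
    "Q6_K": 19.3e9,
    "Q8_0": 25.1e9,
}

for quant, size_bytes in files.items():
    bpw = size_bytes * 8 / PARAMS
    print(f"{quant}: ~{bpw:.2f} bits/weight")
```

The results land close to the nominal bit widths of each scheme (e.g. roughly 4.8 bits/weight for Q4_K_M), which is expected since the K-quants store per-block scales on top of the base quantized values.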

Usage

llama.cpp

llama-cli -m eve_v05-Q5_K_M.gguf -ngl 99 -c 8192 -p "Your prompt"

Here -ngl 99 offloads all model layers to the GPU (reduce it if you run out of VRAM) and -c 8192 sets the context window.

Ollama

# Create a Modelfile
echo 'FROM ./eve_v05-Q5_K_M.gguf' > Modelfile
ollama create eve -f Modelfile
ollama run eve
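The Modelfile can also pin sampling parameters and the context length. A minimal extended example (the values are illustrative defaults, not settings tuned for this model):

```
FROM ./eve_v05-Q5_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
```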

LM Studio

Download any of the GGUF files above and load directly in LM Studio.


