Instructions for using QuantFactory/EXAONE-Deep-2.4B-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantFactory/EXAONE-Deep-2.4B-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("QuantFactory/EXAONE-Deep-2.4B-GGUF", dtype="auto")
- llama-cpp-python
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/EXAONE-Deep-2.4B-GGUF",
    filename="EXAONE-Deep-2.4B.Q2_K.gguf",
)
llm.create_chat_completion(
    messages = [
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
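Whichever install path you choose above (brew, WinGet, pre-built binary, or source build), llama-server exposes an OpenAI-compatible HTTP API that you can call from your own code. A minimal Python sketch, assuming the server is running at its default address http://localhost:8080 and that the `requests` package is installed; the "model" field is informational here, since llama-server answers with the model it has loaded:

import requests

# Call llama-server's OpenAI-compatible chat endpoint (default port 8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "QuantFactory/EXAONE-Deep-2.4B-GGUF",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])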
Use Docker
docker model run hf.co/QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/EXAONE-Deep-2.4B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/EXAONE-Deep-2.4B-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
- SGLang
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "QuantFactory/EXAONE-Deep-2.4B-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/EXAONE-Deep-2.4B-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "QuantFactory/EXAONE-Deep-2.4B-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/EXAONE-Deep-2.4B-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Ollama
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with Ollama:
ollama run hf.co/QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
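Once pulled, the model can also be queried programmatically through Ollama's local HTTP API (port 11434 by default). A minimal Python sketch; it assumes Ollama registers HF-pulled models under the same hf.co/... identifier used in the command above:

import requests

# Chat with the model through Ollama's local API (default port 11434).
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        # Assumption: the model keeps the hf.co/... name from `ollama run` above.
        "model": "hf.co/QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])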
- Unsloth Studio
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/EXAONE-Deep-2.4B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/EXAONE-Deep-2.4B-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/EXAONE-Deep-2.4B-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/EXAONE-Deep-2.4B-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/EXAONE-Deep-2.4B-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.EXAONE-Deep-2.4B-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/EXAONE-Deep-2.4B-GGUF
This is a quantized version of LGAI-EXAONE/EXAONE-Deep-2.4B created using llama.cpp.
Original Model Card
EXAONE-Deep-2.4B
Introduction
We introduce EXAONE Deep, a series of models ranging from 2.4B to 32B parameters developed and released by LG AI Research, which exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks. Evaluation results show that 1) EXAONE Deep 2.4B outperforms other models of comparable size, 2) EXAONE Deep 7.8B outperforms not only open-weight models of comparable scale but also the proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep 32B demonstrates competitive performance against leading open-weight models.
For more details, please refer to our documentation, blog and GitHub.
This repository contains the reasoning 2.4B language model with the following features:
- Number of Parameters (without embeddings): 2.14B
- Number of Layers: 30
- Number of Attention Heads: GQA with 32 Q-heads and 8 KV-heads
- Vocab Size: 102,400
- Context Length: 32,768 tokens
- Tie Word Embeddings: True (unlike 7.8B and 32B models)
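For reference, these figures can be checked programmatically against the original LGAI-EXAONE/EXAONE-Deep-2.4B checkpoint (the same one loaded in the Quickstart below). A minimal sketch using the standard transformers AutoConfig/AutoTokenizer interfaces; since the exact attribute names live in EXAONE's custom config class, printing the full config avoids guessing them:

from transformers import AutoConfig, AutoTokenizer

# Load and print the configuration of the original (non-GGUF) checkpoint.
config = AutoConfig.from_pretrained("LGAI-EXAONE/EXAONE-Deep-2.4B", trust_remote_code=True)
print(config)  # layers, attention/KV heads, context length, tie_word_embeddings, ...

tokenizer = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-Deep-2.4B")
print(len(tokenizer))  # vocabulary size, expected to be 102,400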
Quickstart
We recommend using transformers v4.43.1 or later.
Here is the code snippet to run conversational inference with the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from threading import Thread
model_name = "LGAI-EXAONE/EXAONE-Deep-2.4B"
streaming = True # choose the streaming option
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Choose your prompt:
# Math example (AIME 2024)
prompt = r"""Let $x,y$ and $z$ be positive real numbers that satisfy the following system of equations:
\[\log_2\left({x \over yz}\right) = {1 \over 2}\]\[\log_2\left({y \over xz}\right) = {1 \over 3}\]\[\log_2\left({z \over xy}\right) = {1 \over 4}\]
Then the value of $\left|\log_2(x^4y^3z^2)\right|$ is $\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
Please reason step by step, and put your final answer within \boxed{}."""
# Korean MCQA example (CSAT Math 2025)
prompt = r"""Question : $a_1 = 2$인 수열 $\{a_n\}$과 $b_1 = 2$인 등차수열 $\{b_n\}$이 모든 자연수 $n$에 대하여\[\sum_{k=1}^{n} \frac{a_k}{b_{k+1}} = \frac{1}{2} n^2\]을 만족시킬 때, $\sum_{k=1}^{5} a_k$의 값을 구하여라.
Options :
A) 120
B) 125
C) 130
D) 135
E) 140
Please reason step by step, and you should write the correct option alphabet (A, B, C, D or E) within \\boxed{}."""
messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

if streaming:
    streamer = TextIteratorStreamer(tokenizer)
    thread = Thread(target=model.generate, kwargs=dict(
        input_ids=input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
        streamer=streamer
    ))
    thread.start()

    for text in streamer:
        print(text, end="", flush=True)
else:
    output = model.generate(
        input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
    )
    print(tokenizer.decode(output[0]))
Note
The EXAONE Deep models are trained with an optimized configuration, so we recommend following the Usage Guideline section to achieve optimal performance.
Evaluation
The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the documentation.
| Models | MATH-500 (pass@1) | AIME 2024 (pass@1 / cons@64) | AIME 2025 (pass@1 / cons@64) | CSAT Math 2025 (pass@1) | GPQA Diamond (pass@1) | Live Code Bench (pass@1) |
|---|---|---|---|---|---|---|
| EXAONE Deep 32B | 95.7 | 72.1 / 90.0 | 65.8 / 80.0 | 94.5 | 66.1 | 59.5 |
| DeepSeek-R1-Distill-Qwen-32B | 94.3 | 72.6 / 83.3 | 55.2 / 73.3 | 84.1 | 62.1 | 57.2 |
| QwQ-32B | 95.5 | 79.5 / 86.7 | 67.1 / 76.7 | 94.4 | 63.3 | 63.4 |
| DeepSeek-R1-Distill-Llama-70B | 94.5 | 70.0 / 86.7 | 53.9 / 66.7 | 88.8 | 65.2 | 57.5 |
| DeepSeek-R1 (671B) | 97.3 | 79.8 / 86.7 | 66.8 / 80.0 | 89.9 | 71.5 | 65.9 |
| EXAONE Deep 7.8B | 94.8 | 70.0 / 83.3 | 59.6 / 76.7 | 89.9 | 62.6 | 55.2 |
| DeepSeek-R1-Distill-Qwen-7B | 92.8 | 55.5 / 83.3 | 38.5 / 56.7 | 79.7 | 49.1 | 37.6 |
| DeepSeek-R1-Distill-Llama-8B | 89.1 | 50.4 / 80.0 | 33.6 / 53.3 | 74.1 | 49.0 | 39.6 |
| OpenAI o1-mini | 90.0 | 63.6 / 80.0 | 54.8 / 66.7 | 84.4 | 60.0 | 53.8 |
| EXAONE Deep 2.4B | 92.3 | 52.5 / 76.7 | 47.9 / 73.3 | 79.2 | 54.3 | 46.6 |
| DeepSeek-R1-Distill-Qwen-1.5B | 83.9 | 28.9 / 52.7 | 23.9 / 36.7 | 65.6 | 33.8 | 16.9 |
Deployment
EXAONE Deep models can be run with various inference frameworks, such as TensorRT-LLM, vLLM, SGLang, llama.cpp, Ollama, and LM-Studio.
Please refer to our EXAONE Deep GitHub for more details about the inference frameworks.
Quantization
We provide the pre-quantized EXAONE Deep models with AWQ and several quantization types in GGUF format. Please refer to our EXAONE Deep collection to find corresponding quantized models.
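For the GGUF files hosted in this QuantFactory repository specifically, a single quantized file can be fetched and loaded locally. A minimal sketch using `huggingface_hub` and llama-cpp-python; the Q2_K filename matches the llama-cpp-python example earlier on this page, and the other quantization types in the repo are assumed to follow the same naming pattern:

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized GGUF file from this repository.
gguf_path = hf_hub_download(
    repo_id="QuantFactory/EXAONE-Deep-2.4B-GGUF",
    filename="EXAONE-Deep-2.4B.Q2_K.gguf",
)

# Load it with llama-cpp-python and run a short chat completion using the
# sampling settings recommended in the Usage Guideline section.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 7?"}],
    temperature=0.6,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])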
Usage Guideline
To achieve the expected performance, we recommend using the following configurations:
- Ensure the model starts its output with `<thought>\n` for the reasoning steps. Output quality may degrade if you omit it. You can apply this automatically by calling `tokenizer.apply_chat_template()` with `add_generation_prompt=True`; see the example code in the Quickstart section and the sketch after this list.
- The reasoning steps enclosed by `<thought>\n...\n</thought>` usually contain many tokens, so previous reasoning steps may need to be removed in multi-turn conversations. The provided tokenizer handles this automatically.
- Avoid using a system prompt; build the instruction into the user prompt.
- Additional instructions help the models reason more deeply and generate better output.
  - For math problems, the instruction "Please reason step by step, and put your final answer within \boxed{}." is helpful.
  - For more information on our evaluation setting, including prompts, please refer to our Documentation.
- In our evaluation, we use `temperature=0.6` and `top_p=0.95` for generation.
- When evaluating the models, it is recommended to run multiple trials to assess the expected performance accurately.
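As a concrete illustration of these guidelines, the following sketch reuses `model` and `tokenizer` from the Quickstart above: no system prompt, `add_generation_prompt=True`, and the recommended sampling settings. The `strip_reasoning` helper is hypothetical and shown only to make the multi-turn handling explicit; as noted above, the provided tokenizer already removes previous reasoning steps automatically.

import re

def strip_reasoning(reply: str) -> str:
    # Illustrative helper: keep only the text after the closing </thought> tag, if present.
    return re.split(r"</thought>\s*", reply, maxsplit=1)[-1].strip()

# Turn 1: user prompt only, no system prompt.
messages = [{"role": "user", "content": "What is 3^5?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(
    input_ids.to(model.device),
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,   # recommended generation settings
    top_p=0.95,
)
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Turn 2: append only the final answer to the history, then continue the conversation.
messages.append({"role": "assistant", "content": strip_reasoning(reply)})
messages.append({"role": "user", "content": "Now divide that by 9."})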
Limitation
The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The model generates responses based on the output probabilities of tokens, which are determined during training from the training data. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.
- Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
- Biased responses may be generated, which are associated with age, gender, race, and so on.
- The generated responses rely heavily on statistics from the training data, which can result in the generation of semantically or syntactically incorrect sentences.
- Since the model does not reflect the latest information, the responses may be false or contradictory.
LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users are not allowed to engage in any malicious activities (e.g., keying in illegal information) that may induce the creation of inappropriate outputs violating LG AI’s ethical principles when using EXAONE language models.
License
The model is licensed under the EXAONE AI Model License Agreement 1.1 - NC.
Citation
@article{exaone-deep,
title={EXAONE Deep: Reasoning Enhanced Language Models},
author={{LG AI Research}},
journal={arXiv preprint arXiv:2503.12524},
year={2025}
}
Contact
LG AI Research Technical Support: contact_us@lgresearch.ai
Available quantization types: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model tree for QuantFactory/EXAONE-Deep-2.4B-GGUF
Base model: LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct