Instructions for using QuantFactory/YugoGPT-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/YugoGPT-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/YugoGPT-GGUF",
    filename="YugoGPT.Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
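The call above returns an OpenAI-style completion dict rather than a plain string. A minimal sketch of pulling the generated text out of it — the `output` below is a hand-written placeholder with the same shape, not real model output:

```python
# Illustrative shape of the dict returned by llm(...) in llama-cpp-python;
# the "text" value is a made-up placeholder, not actual YugoGPT output.
output = {
    "id": "cmpl-xyz",
    "object": "text_completion",
    "choices": [
        {"text": "Once upon a time, ...", "index": 0, "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 6, "completion_tokens": 512, "total_tokens": 518},
}

# The generated continuation lives in choices[0]["text"].
generated = output["choices"][0]["text"]
print(generated)
```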
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/YugoGPT-GGUF with llama.cpp:
Install via Homebrew (macOS/Linux)
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/YugoGPT-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/YugoGPT-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/YugoGPT-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/YugoGPT-GGUF:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/YugoGPT-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/YugoGPT-GGUF:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/YugoGPT-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/YugoGPT-GGUF:Q4_K_M
```
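Once `llama-server` is running, any OpenAI-compatible client can talk to it over HTTP. A minimal sketch of building a `/v1/completions` request, assuming the server's default port 8080 (change `base_url` if you started it with a different `--port`); the actual send is left commented out so the snippet does not require a live server:

```python
def build_completion_request(prompt, base_url="http://localhost:8080", max_tokens=128):
    """Build an OpenAI-style /v1/completions request for a running llama-server.

    8080 is llama-server's default port; adjust base_url if you passed --port.
    """
    url = f"{base_url}/v1/completions"
    payload = {"prompt": prompt, "max_tokens": max_tokens, "temperature": 0.7}
    return url, payload

url, payload = build_completion_request("Once upon a time,")
print(url)

# To actually send the request (needs a running server and `pip install requests`):
# import requests
# print(requests.post(url, json=payload).json()["choices"][0]["text"])
```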
Use Docker
```shell
docker model run hf.co/QuantFactory/YugoGPT-GGUF:Q4_K_M
```
- LM Studio
- Jan
- Ollama
How to use QuantFactory/YugoGPT-GGUF with Ollama:
```shell
ollama run hf.co/QuantFactory/YugoGPT-GGUF:Q4_K_M
```
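Ollama also exposes a local HTTP API (default port 11434), so the pulled model can be queried programmatically. A sketch of building a request for its `/api/generate` endpoint; the send itself is commented out since it needs a running Ollama daemon:

```python
def build_ollama_request(prompt,
                         model="hf.co/QuantFactory/YugoGPT-GGUF:Q4_K_M",
                         host="http://localhost:11434"):
    """Build a request for Ollama's /api/generate endpoint.

    11434 is Ollama's default port. stream=False asks for a single JSON
    response instead of a stream of chunks.
    """
    url = f"{host}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload

url, payload = build_ollama_request("Once upon a time,")
print(url)

# To actually send the request (needs a running Ollama daemon and `pip install requests`):
# import requests
# print(requests.post(url, json=payload).json()["response"])
```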
- Unsloth Studio
How to use QuantFactory/YugoGPT-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for QuantFactory/YugoGPT-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for QuantFactory/YugoGPT-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for QuantFactory/YugoGPT-GGUF to start chatting.
- Docker Model Runner
How to use QuantFactory/YugoGPT-GGUF with Docker Model Runner:
```shell
docker model run hf.co/QuantFactory/YugoGPT-GGUF:Q4_K_M
```
- Lemonade
How to use QuantFactory/YugoGPT-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/YugoGPT-GGUF:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.YugoGPT-GGUF-Q4_K_M
```
List all available models
```shell
lemonade list
```
QuantFactory/YugoGPT-GGUF
This is a quantized version of gordicaleksa/YugoGPT, created using llama.cpp.
Original Model Card
This repo contains YugoGPT - the best open-source base 7B LLM for BCS (Bosnian, Croatian, Serbian) languages, developed by Aleksa Gordić.
More powerful iterations of YugoGPT are already available through the recently announced RunaAI API platform!
Serbian LLM eval results compared to Mistral 7B, LLaMA 2 7B, and GPT2-orao (also see this LinkedIn post):

The eval was computed using https://github.com/gordicaleksa/serbian-llm-eval
It was trained on tens of billions of BCS tokens and is based on Mistral 7B.
Notes
YugoGPT is a base model and therefore does not have any moderation mechanisms.
Since it is a base model, it won't follow your instructions; it is essentially a powerful autocomplete engine.
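In practice, you steer a base model with a few-shot prompt rather than an instruction: show it a few input/output pairs and let it continue the pattern. A minimal, hypothetical sketch (the `Input:`/`Output:` labels are an arbitrary choice, not anything the model was trained on):

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs into a plain-text few-shot prompt.

    A base model like YugoGPT tends to continue the established pattern,
    so its completion after the final 'Output:' usually behaves like an
    answer to the query.
    """
    parts = []
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # left open for the model to complete
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [("Dobar dan", "Good day"), ("Hvala", "Thank you")],
    "Laku noć",
)
print(prompt)
```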
If you want access to much more powerful BCS LLMs (some of which power yugochat), you can reach the models through RunaAI's API.
Credits
The data for the project was obtained with the help of Nikola Ljubešić, CLARIN.SI, and CLASSLA. Thank you!
Project Sponsors
A big thank you to the project sponsors!
Platinum sponsors 🌟
- Ivan (anon)
- Things Solver
Gold sponsors 🟡
- qq (anon)
- Adam Sofronijevic
- Yanado
- Mitar Perovic
- Nikola Ivancevic
- Rational Development DOO
- Ivan i Natalija Kokić
Silver sponsors ⚪
- psk.rs
- OmniStreak
- Luka Važić
- Miloš Durković
- Marjan Radeski
- Marjan Stankovic
- Nikola Stojiljkovic
- Mihailo Tomić
- Bojan Jevtic
- Jelena Jovanović
- Nenad Davidović
- Mika Tasich
- TRENCH-NS
- Nemanja Grujičić
- tim011
Also a big thank you to the following individuals:
- Slobodan Marković - for spreading the word! :)
- Aleksander Segedi - for help with bookkeeping!
Citation
```bibtex
@article{YugoGPT,
  author = "Gordić Aleksa",
  title = "YugoGPT - an open-source LLM for Serbian, Bosnian, and Croatian languages",
  year = "2024",
  howpublished = {\url{https://huggingface.co/gordicaleksa/YugoGPT}},
}
```
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.