Model Card for Villanova-2B-2603-GGUF
Villanova-2B-2603 is a fully open, multilingual, instruction-tuned large language model developed by Villanova.AI. As part of the Villanova project, it is designed to advance open European language technology, with native support for five European languages. All model weights, training data sources, and training details are publicly released.
This repo contains GGUF format model files for the VillanovaAI/Villanova-2B-2603 model.
Model Family
Villanova-2B-Base-2603 – Base model (4.4T)
├── Villanova-2B-2603 – SFT / Instruct
│   └── Villanova-2B-2603-GGUF – Quantized ← This model
└── Villanova-2B-VL-2603 – Vision-Language Instruct
    └── Villanova-2B-VL-2603-GGUF – Quantized

Villanova-2B-Base-2512-Preview – Base model (2.2T) (previous version, not recommended)
└── Villanova-2B-2512-Preview – SFT / Instruct (previous version, not recommended)
About GGUF
GGUF is a file format introduced by the llama.cpp project for storing and distributing LLMs. It is designed for portability and for efficient inference on edge and consumer hardware.
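As a quick sanity check after downloading, you can verify that a file really is in GGUF format: every GGUF file begins with the 4-byte magic `GGUF`, followed by a little-endian 32-bit version number. A minimal sketch (the helper name is our own, not part of any library):

```python
import struct

GGUF_MAGIC = b"GGUF"  # the 4-byte magic at the start of every GGUF file


def is_gguf(data: bytes) -> bool:
    """Return True if the buffer starts with the GGUF magic bytes."""
    return data[:4] == GGUF_MAGIC


# Minimal illustrative header: magic followed by a little-endian uint32 version.
header = GGUF_MAGIC + struct.pack("<I", 3)
print(is_gguf(header))          # True
print(is_gguf(b"not a gguf"))   # False
```

In practice you would read only the first few bytes of the file (`open(path, "rb").read(8)`) rather than loading the whole multi-gigabyte model into memory.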
Quick Usage with llama.cpp
You can run this model directly using the llama-cli tool (part of llama.cpp).
To run the Q8_0 quantization, letting llama.cpp fetch the file directly from Hugging Face:
llama-cli -hf VillanovaAI/Villanova-2B-2603-GGUF:Q8_0
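If you prefer to download the GGUF file yourself (for example, to place it on a specific disk), the Hugging Face Hub exposes files through its standard `resolve` URL scheme. A minimal sketch of building such a URL; note that the exact `.gguf` filename inside the repository is an assumption here:

```python
def gguf_download_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct-download URL using the Hub's standard resolve scheme."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


url = gguf_download_url(
    "VillanovaAI/Villanova-2B-2603-GGUF",
    "villanova-2b-2603-q8_0.gguf",  # hypothetical filename; check the repo's file list
)
print(url)
```

For routine use, the `huggingface_hub` library's download helpers or the `-hf` flag shown above are simpler, since they handle caching and revision pinning for you.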
Model tree for VillanovaAI/Villanova-2B-2603-GGUF
Base model
VillanovaAI/Villanova-2B-Base-2603