MAJESTIC-FIN-R1 GGUF

MAJESTIC-FIN-R1 is a fine-tuned LiquidAI/LFM2-2.6B model exported to GGUF for Ollama, llama.cpp, and lightweight CPU deployment.

Available files

  • MAJESTIC-FIN-R1-F16.gguf: highest-fidelity GGUF export.
  • MAJESTIC-FIN-R1-Q8_0.gguf: smaller GGUF export for Ollama and free CPU hosting.
  • template: Ollama chat template for this model family.
  • params: default Ollama runtime parameters.
  • Modelfile: local Ollama import file.

Run with Ollama from Hugging Face

ollama run hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0
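Once the model is pulled, it can also be called programmatically through Ollama's local REST API (default port 11434). The sketch below is a minimal example using the `requests` library and Ollama's `/api/chat` endpoint; the example prompt is only illustrative.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def build_chat_body(prompt: str,
                    model: str = "hf.co/EREN121232/MAJESTIC-FIN-R1-gguf:Q8_0") -> dict:
    # Ollama's /api/chat expects a model name and an OpenAI-style message list.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a token stream
    }

def chat(prompt: str) -> str:
    resp = requests.post(f"{OLLAMA_URL}/api/chat",
                         json=build_chat_body(prompt), timeout=300)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("Give a one-sentence summary of what a GGUF file is."))
```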

Run with Ollama locally

  1. Download MAJESTIC-FIN-R1-Q8_0.gguf and Modelfile.
  2. Keep them in the same folder.
  3. Run:
ollama create majestic-fin-r1 -f Modelfile
ollama run majestic-fin-r1
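For reference, a minimal Modelfile for a Q8_0 GGUF import typically looks like the sketch below. The parameter values here are placeholders; use the Modelfile shipped in this repo, which also carries the correct chat template for this model family.

```
FROM ./MAJESTIC-FIN-R1-Q8_0.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```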

Free hosted demo and API

A public Hugging Face Space can serve the Q8_0 build on free CPU hardware. The companion Space for this repo is:

  • https://huggingface.co/spaces/EREN121232/MAJESTIC-FIN-R1-Free-API

Once the Space is live, use the "Use via API" link in the Space footer to inspect the available endpoints, or call the /chat endpoint directly from Python, JavaScript, or curl.
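A minimal Python sketch of such a call is shown below. The Space host URL and the payload schema are assumptions here (Gradio Spaces commonly wrap endpoint arguments in a `"data"` list); confirm both against the "Use via API" page before relying on them.

```python
import requests

# Assumed Space host; verify the exact URL on the Space's "Use via API" page.
SPACE_URL = "https://EREN121232-MAJESTIC-FIN-R1-Free-API.hf.space"

def build_payload(message: str) -> dict:
    # Assumed Gradio-style request body: arguments wrapped in a "data" list.
    return {"data": [message]}

def chat(message: str) -> str:
    resp = requests.post(f"{SPACE_URL}/chat",
                         json=build_payload(message), timeout=120)
    resp.raise_for_status()
    # Assumed Gradio-style response body: results returned in a "data" list.
    return resp.json()["data"][0]

if __name__ == "__main__":
    print(chat("Hello!"))
```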
