New Years 1.5B (GGUF)

License: MIT · Format: GGUF · Runtime: llama.cpp · Base: Qwen2.5-1.5B-Instruct

New Years 1.5B is a compact seasonal model fine-tuned for festive, upbeat, and cozy text generation.
This repository provides GGUF builds optimized for local inference using the llama.cpp ecosystem and compatible runtimes.

Overview

This is a tone/personality-focused model. It emphasizes celebration, warmth, and a “new beginnings” vibe rather than deep reasoning or strict technical accuracy.

Recommended for:

  • Creative writing and short stories
  • New Year / winter-themed roleplay
  • Light conversational assistants
  • Local demos and low-resource systems

Not optimized for:

  • Complex reasoning
  • Factual retrieval
  • Long-horizon planning

Model Details

  • Model name: New Years 1.5B
  • Base model: Qwen2.5-1.5B-Instruct
  • Fine-tuning: LoRA (merged)
  • Parameters: ~1.5B
  • Format: GGUF (llama.cpp compatible)
  • Language: English
  • License: MIT (base model license applies)
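
Since the base model is Qwen2.5-1.5B-Instruct, prompts follow its ChatML template. A minimal sketch of building a single-turn prompt by hand (the system message is an arbitrary example, not a template shipped with this model); most runtimes read the template from the GGUF metadata automatically, so this is only needed for raw-prompt use:

```python
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a cheerful, festive assistant.") -> str:
    """Format a single-turn prompt in the ChatML style used by Qwen2.5-Instruct."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Write a toast for midnight on New Year's Eve.")
print(prompt)
```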

Quantized Files

All files are produced from the same merged model and differ only in quantization.

File                       Quantization   Approx. size
newyears1-5b.TQ1_0.gguf    TQ1_0          ~0.35 GB
newyears1-5b.Q2_K.gguf     Q2_K           ~0.52 GB
newyears1-5b.Q3_K_S.gguf   Q3_K_S         ~0.60 GB
newyears1-5b.Q3_K_M.gguf   Q3_K_M         ~0.65 GB
newyears1-5b.Q4_K_S.gguf   Q4_K_S         ~0.75 GB
newyears1-5b.Q4_K_M.gguf   Q4_K_M         ~0.80 GB
newyears1-5b.Q5_K_S.gguf   Q5_K_S         ~0.90 GB
newyears1-5b.Q5_K_M.gguf   Q5_K_M         ~0.94 GB
newyears1-5b.Q6_K.gguf     Q6_K           ~1.05 GB
newyears1-5b.Q8_0.gguf     Q8_0           ~1.35 GB

Recommendations

  • Default (balanced): Q4_K_M
  • Higher quality: Q5_K_M, Q6_K, Q8_0
  • Low RAM systems: Q3_K_M, Q2_K
  • Ultra-low memory (experimental): TQ1_0

Usage (llama.cpp)

CPU-only

./llama-cli \
  -m newyears1-5b.Q4_K_M.gguf \
  -ngl 0 \
  -c 4096 \
  -p "Write a cozy New Year's Eve story set in a snowy small town, full of hope and new beginnings."
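
Server mode

The same GGUF files can also be served with llama.cpp's built-in HTTP server, which exposes an OpenAI-compatible API. A minimal sketch (the port and quant file are arbitrary choices, not requirements of this model):

```shell
# Serve the model locally (llama-server ships with llama.cpp).
./llama-server -m newyears1-5b.Q4_K_M.gguf -c 4096 --port 8080

# In another terminal, query the OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a short New Year toast."}]}'
```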