---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- qwen
- qwen2.5
- instruct
- lora
- roleplay
- holiday
- storytelling
- local-inference
- quantization
- ggml
- llm
---
# New Years 1.5B (GGUF)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
[![Format: GGUF](https://img.shields.io/badge/Format-GGUF-orange)](#quantized-files)
[![Runtime: llama.cpp](https://img.shields.io/badge/Runtime-llama.cpp-purple)](https://github.com/ggerganov/llama.cpp)
[![Base: Qwen2.5-1.5B-Instruct](https://img.shields.io/badge/Base-Qwen2.5--1.5B--Instruct-lightgrey)](#model-details)
[![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Model-yellow)](https://huggingface.co/ghostai1/NewYears-1_5b)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-ccengineering-0A66C2?logo=linkedin&logoColor=white)](https://www.linkedin.com/in/ccengineering/)
**New Years 1.5B** is a compact seasonal model fine-tuned for **festive, upbeat, and cozy** text generation.
This repository provides **GGUF** builds optimized for **local inference** using the **llama.cpp** ecosystem and compatible runtimes.
**Quick links**
- Model: https://huggingface.co/ghostai1/NewYears-1_5b
- Runtime: https://github.com/ggerganov/llama.cpp
- Author (LinkedIn): https://www.linkedin.com/in/ccengineering/
---
## Overview
This is a **tone/personality-focused** model. It emphasizes celebration, warmth, and a “new beginnings” vibe rather than deep reasoning or strict technical accuracy.
Recommended for:
- Creative writing and short stories
- New Year / winter-themed roleplay
- Light conversational assistants
- Local demos and low-resource systems
Not optimized for:
- Complex reasoning
- Factual retrieval
- Long-horizon planning
---
## Model Details
- **Model name:** New Years 1.5B
- **Base model:** Qwen2.5-1.5B-Instruct
- **Fine-tuning:** LoRA (merged)
- **Parameters:** ~1.5B
- **Format:** GGUF (llama.cpp compatible)
- **Language:** English
- **License:** MIT for this fine-tune; the Qwen2.5 base model's own license terms also apply
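
Because the base is Qwen2.5-1.5B-Instruct, the GGUF builds should carry its ChatML chat template, which llama.cpp applies automatically in conversation mode. For runtimes that expect a raw prompt string, the layout looks roughly like this (a sketch of the standard Qwen ChatML format, not verified against this specific build; the system message is an example):

```text
<|im_start|>system
You are a festive, upbeat assistant.<|im_end|>
<|im_start|>user
Write a toast for New Year's Eve.<|im_end|>
<|im_start|>assistant
```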
---
## Quantized Files
All files are produced from the same merged model and differ only in quantization.
| File | Quantization | Approx. Size |
|---|---|---:|
| `newyears1-5b.TQ1_0.gguf` | TQ1_0 | ~0.35 GB |
| `newyears1-5b.Q2_K.gguf` | Q2_K | ~0.52 GB |
| `newyears1-5b.Q3_K_S.gguf` | Q3_K_S | ~0.60 GB |
| `newyears1-5b.Q3_K_M.gguf` | Q3_K_M | ~0.65 GB |
| `newyears1-5b.Q4_K_S.gguf` | Q4_K_S | ~0.75 GB |
| `newyears1-5b.Q4_K_M.gguf` | Q4_K_M | ~0.80 GB |
| `newyears1-5b.Q5_K_S.gguf` | Q5_K_S | ~0.90 GB |
| `newyears1-5b.Q5_K_M.gguf` | Q5_K_M | ~0.94 GB |
| `newyears1-5b.Q6_K.gguf` | Q6_K | ~1.05 GB |
| `newyears1-5b.Q8_0.gguf` | Q8_0 | ~1.35 GB |
### Recommendations
- **Default (balanced):** `Q4_K_M`
- **Higher quality:** `Q5_K_M`, `Q6_K`, `Q8_0`
- **Low RAM systems:** `Q3_K_M`, `Q2_K`
- **Ultra-low memory (experimental):** `TQ1_0`
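
As a rough sanity check, the approximate sizes above track the effective bits per weight of each quantization. A minimal sketch (assuming ~1.5B parameters and the table's approximate sizes, so treat the results as ballpark figures):

```python
# Rough bits-per-weight estimate from the approximate file sizes above.
# Assumes ~1.5e9 parameters; table sizes are approximate, so results are ballpark.
PARAMS = 1.5e9

sizes_gb = {
    "Q2_K": 0.52,
    "Q4_K_M": 0.80,
    "Q8_0": 1.35,
}

for quant, gb in sizes_gb.items():
    bits_per_weight = gb * 1e9 * 8 / PARAMS
    print(f"{quant}: ~{bits_per_weight:.1f} bits/weight")
```

Lower bits per weight means smaller files and lower RAM use, at the cost of some quality; `Q4_K_M` sits near the usual sweet spot.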
---
## Usage (llama.cpp)
### CPU-only
```bash
./llama-cli \
-m newyears1-5b.Q4_K_M.gguf \
-ngl 0 \
-c 4096 \
  -p "Write a cozy New Year's Eve story set in a snowy small town, full of hope and new beginnings."
```