---
license: mpl-2.0
library_name: transformers
tags:
- gemma-3
- synthetic-data
- textbooks
- distillation
- utility
- summarization
- lightning
- conversational
base_model: google/gemma-3-270m
datasets:
- TitleOS/Spark-Lightning-Synthetic-Textbooks
language:
- en
pipeline_tag: text-generation
---
# Spark-270M
**Spark-270M** is a highly compact, utility-focused language model with **270 million parameters**. It is a fine-tune of Google's [Gemma 3 270M](https://huggingface.co/google/gemma-3-270m), designed to punch significantly above its weight class by leveraging high-quality synthetic data distillation.
The model functions as a "dense information engine"—specializing in generating concise title summaries, search engine queries, and logical follow-up questioning—while retaining the creative conversational flair inherited from its teacher model's lineage.
## ⚡ Model Details
- **Model Name:** Spark-270M
- **Base Architecture:** [Google Gemma 3 270M](https://huggingface.co/google/gemma-3-270m)
- **Parameters:** 270M active parameters
- **Context Window:** 32k tokens
- **Teacher Model:** Lightning-1.7B (Custom model fine-tuned on Hermes 3)
- **Training Type:** Synthetic "Textbook" Distillation (SFT)
## 📚 Training Methodology: "Textbooks Are All You Need"
Spark-270M was trained using a distinct data pipeline inspired by the *Textbooks Are All You Need* (Microsoft Phi) research paper.
Instead of training on raw web scrapes, Spark-270M was fine-tuned exclusively on a series of **synthetic textbooks** generated by a larger parent model, **Lightning-1.7B**.
### The Teacher: Lightning-1.7B
The data generator, Lightning-1.7B, was itself fine-tuned on the [Hermes 3 dataset](https://huggingface.co/nousresearch/hermes-3-llama-3.1-8b). This lineage allows Spark-270M to inherit specific behavioral traits from Hermes 3—namely creativity, steerability, and a refusal to be "boring"—despite being distilled into a rigid textbook format.
The synthetic data focused on:
1. **High-density reasoning chains:** Explaining complex topics in compressed formats.
2. **Utility Tasks:** Converting conversational fluff into actionable queries.
3. **Socratic Dialogue:** Modeling inquisitive follow-up questioning.
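The exact training configuration is not published here, but for illustration, a minimal sketch of what this SFT step might look like using the dataset linked in this card (the `"text"` column name and all hyperparameters below are assumptions, not the settings actually used):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load the synthetic textbook corpus and the Gemma 3 270M base model.
dataset = load_dataset("TitleOS/Spark-Lightning-Synthetic-Textbooks", split="train")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m")
model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")

# Assumes the dataset exposes a plain "text" column (hypothetical).
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Hyperparameters are illustrative placeholders only.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="spark-270m-sft",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```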
## 🛠️ Intended Use & Capabilities
Spark-270M is designed to be a lightweight **Utility Model**. It is ideal for edge devices, rapid prototyping, or functioning as a specific "node" in a larger agentic system (e.g., the summarizer node or the query-generator node).
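As a rough sketch of the "node" idea, the model can be wrapped in a small function that other pipeline components call (the function name and prompt wording here are illustrative; the prompt pattern mirrors the Example Usage section below):
```python
from transformers import pipeline

# Load Spark-270M once and reuse it across pipeline calls.
spark = pipeline("text-generation", model="TitleOS/Spark-270M")

def summarizer_node(thread: str) -> str:
    """Condense a conversation thread into a short, dense title."""
    prompt = f"{thread}\nTask: Summarize this conversation as a short title.\nResponse:\n"
    out = spark(prompt, max_new_tokens=32, return_full_text=False)
    return out[0]["generated_text"].strip()

# e.g. title = summarizer_node(chat_history_text)
```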
### Primary Capabilities
* **Dense Title Summarization:** Converting long conversation threads into information-dense, short titles or abstracts.
* **Search Query Generation:** Formulating precise, keyword-rich search queries based on vague user input.
* **Proactive Questioning:** Generating relevant follow-up questions to clarify user intent or deepen a topic.
## 💻 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "TitleOS/Spark-270M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# Example: Generating a search query from a user problem
input_text = """
User: I need to fix my sink, it's leaking from the bottom pipe where the U-shape thing is.
Task: Generate 3 search engine queries for this problem.
Response:
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
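The same pattern covers the other capabilities. A hedged example for proactive follow-up questioning, reusing the `tokenizer` and `model` loaded above (the prompt wording is illustrative):
```python
# Example: generating follow-up questions to clarify user intent
input_text = """
User: I want to get into machine learning but I don't know where to start.
Task: Generate 3 follow-up questions to clarify this user's goals.
Response:
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```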
## 📦 Quants & Variants
* **Q4_K_M:** https://huggingface.co/TitleOS/Spark-270M-FP16-Q4_K_M-GGUF
* **Q8_0:** https://huggingface.co/TitleOS/Spark-270M-FP16-Q8_0-GGUF
* **FP16:** https://huggingface.co/TitleOS/Spark-270M-FP16
* **LoRA Adapter:** https://huggingface.co/TitleOS/Spark-270M-LoRA