Update README.md

README.md (CHANGED)

---
license: mpl-2.0
library_name: transformers
tags:
- gemma-3
- synthetic-data
- textbooks
- distillation
- utility
- summarization
- lightning
- conversational
base_model: google/gemma-3-270m
datasets:
- TitleOS/Spark-Lightning-Synthetic-Textbooks
language:
- en
pipeline_tag: text-generation
---

# Spark-270M

**Spark-270M** is a highly compact, utility-focused language model with **270 million parameters**. It is a fine-tune of Google's [Gemma 3 270M](https://huggingface.co/google/gemma-3-270m), designed to punch significantly above its weight class by leveraging high-quality synthetic data distillation.

The model functions as a "dense information engine"—specializing in generating concise title summaries, search engine queries, and logical follow-up questioning—while retaining the creative conversational flair inherited from its teacher model's lineage.

## ⚡ Model Details

- **Model Name:** Spark-270M
- **Base Architecture:** [Google Gemma 3 270M](https://huggingface.co/google/gemma-3-270m)
- **Parameters:** 270M active parameters
- **Context Window:** 32k tokens
- **Teacher Model:** Lightning-1.7B (Custom model fine-tuned on Hermes 3)
- **Training Type:** Synthetic "Textbook" Distillation (SFT)
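
These details can be checked against the configuration that ships with the checkpoint. The snippet below is a minimal sketch using the standard `AutoConfig` API; the exact attribute names depend on the Gemma 3 config class, so it reads them defensively.

```python
from transformers import AutoConfig

# Download only the configuration, not the weights.
config = AutoConfig.from_pretrained("TitleOS/Spark-270M")

# Attribute names vary by architecture; read them defensively.
print("context window:", getattr(config, "max_position_embeddings", "n/a"))
print("hidden size:   ", getattr(config, "hidden_size", "n/a"))
print("layers:        ", getattr(config, "num_hidden_layers", "n/a"))
```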

## 📚 Training Methodology: "Textbooks Are All You Need"

Spark-270M was trained using a distinct data pipeline inspired by the *Textbooks Are All You Need* (Microsoft Phi) research paper.

Instead of training on raw web scrapes, Spark-270M was fine-tuned exclusively on a series of **synthetic textbooks** generated by a larger parent model, **Lightning-1.7B**.

### The Teacher: Lightning-1.7B

The data generator, Lightning-1.7B, was itself fine-tuned on the [Hermes 3 dataset](https://huggingface.co/nousresearch/hermes-3-llama-3.1-8b). This lineage allows Spark-270M to inherit specific behavioral traits from Hermes 3—namely creativity, steerability, and a refusal to be "boring"—despite being distilled into a rigid textbook format.

The synthetic data focused on:

1. **High-density reasoning chains:** Explaining complex topics in compressed formats.
2. **Utility Tasks:** Converting conversational fluff into actionable queries.
3. **Socratic Dialogue:** Modeling inquisitive follow-up questioning.
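
For readers who want a feel for what this SFT stage looks like in code, here is a minimal sketch using TRL's `SFTTrainer` over the synthetic textbook dataset. It is not the actual training script: the dataset split, text formatting, and every hyperparameter below are assumptions.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Synthetic "textbook" corpus generated by the Lightning-1.7B teacher.
# The split name is an assumption; inspect the dataset card first.
dataset = load_dataset("TitleOS/Spark-Lightning-Synthetic-Textbooks", split="train")

# Illustrative hyperparameters only.
args = SFTConfig(
    output_dir="spark-270m-sft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = SFTTrainer(
    model="google/gemma-3-270m",  # the base model Spark-270M starts from
    args=args,
    train_dataset=dataset,
)
trainer.train()
```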

## 🛠️ Intended Use & Capabilities

Spark-270M is designed to be a lightweight **Utility Model**. It is ideal for edge devices, rapid prototyping, or functioning as a specific "node" in a larger agentic system (e.g., the summarizer node or the query-generator node).

### Primary Capabilities

* **Dense Title Summarization:** Converting long conversation threads into information-dense, short titles or abstracts.
* **Search Query Generation:** Formulating precise, keyword-rich search queries based on vague user input.
* **Proactive Questioning:** Generating relevant follow-up questions to clarify user intent or deepen a topic.
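
As one concrete way to use it as such a node, the sketch below wraps the title-summarization capability in a small helper built on the `transformers` `pipeline` API. The prompt wording is an assumption, not a documented template.

```python
from transformers import pipeline

# A lightweight "summarizer node" built around Spark-270M.
generator = pipeline("text-generation", model="TitleOS/Spark-270M", device_map="auto")

def summarize_title(conversation: str) -> str:
    """Compress a conversation thread into a short, information-dense title."""
    messages = [{
        "role": "user",
        "content": f"Summarize the following conversation as a short title:\n\n{conversation}",
    }]
    output = generator(messages, max_new_tokens=32, return_full_text=False)[0]
    return output["generated_text"].strip()

print(summarize_title("User: My laptop fan spins up loudly whenever I open the browser..."))
```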

## 💻 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TitleOS/Spark-270M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example: Generating a search query from a user problem
input_text = """
User: I need to fix my sink, it's leaking from the bottom pipe where the U-shape thing is.
Task: Generate 3 search engine queries for this problem.
Response:
"""

# Keep the inputs on the same device the model was loaded onto.
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
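
Because the model is tagged as conversational, the same request can also go through the tokenizer's chat template instead of a raw prompt string. A minimal sketch, assuming the checkpoint ships with a Gemma-style chat template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TitleOS/Spark-270M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{
    "role": "user",
    "content": "My sink is leaking from the U-shaped pipe underneath. Give me 3 search engine queries for fixing this.",
}]

# apply_chat_template formats the turn the way the model saw it during training.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens so only the model's reply is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```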

## 📦 Quants

- **Q4_K_M (GGUF):** https://huggingface.co/TitleOS/Spark-270M-FP16-Q4_K_M-GGUF
- **Q8_0 (GGUF):** https://huggingface.co/TitleOS/Spark-270M-FP16-Q8_0-GGUF
- **FP16:** https://huggingface.co/TitleOS/Spark-270M-FP16
- **LoRA Adapter:** https://huggingface.co/TitleOS/Spark-270M-LoRA
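
The GGUF builds can also be run without `transformers`, for example through `llama-cpp-python`. A minimal sketch; the `.gguf` filename below is a placeholder, so check the repo's file listing and adjust it:

```python
from llama_cpp import Llama

# Download the Q4_K_M build from the Hub and load it.
# "spark-270m-q4_k_m.gguf" is a placeholder filename; use the actual file
# listed in the TitleOS/Spark-270M-FP16-Q4_K_M-GGUF repository.
llm = Llama.from_pretrained(
    repo_id="TitleOS/Spark-270M-FP16-Q4_K_M-GGUF",
    filename="spark-270m-q4_k_m.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short title for a thread about fixing a leaking P-trap."}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```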