feat: Add instruction-tuned dataset generation for LLM fine-tuning
Add a script to convert raw NEON articles into instruction-tuned formats
suitable for LoRA and other fine-tuning methods.
Features:
- Support for 4 output formats: alpaca, chat, sharegpt, completion
- Varied German instruction templates for better model generalization
- Length filtering options (--min-length, --max-length)
- Category translation to human-readable German names
- Metadata preservation for traceability
Generated datasets are gitignored as they can be easily regenerated
from the source data (~18k articles processed in seconds).
- .gitignore +8 -0
- README.md +139 -35
- scripts/generate_instructions.py +370 -0
- scripts/requirements.txt +5 -0
.gitignore
CHANGED

@@ -1 +1,9 @@
**/*.env
+
+# Python
+__pycache__/
+*.py[cod]
+.venv/
+
+*.jsonl
+!stern_neon_user_poetry.jsonl
README.md
CHANGED
@@ -1,22 +1,25 @@
---
license: wtfpl
task_categories:
- text-classification
- summarization
- text-generation
- sentence-similarity
language:
- de
- en
tags:
- art
- poetry
- literature
- articles
- opinion
+- fine-tuning
+- lora
+- instruction-tuning
pretty_name: Stern NEON Articles
size_categories:
- 10K<n<100K
---

# Structured Stern NEON Community Articles
@@ -47,6 +50,7 @@ This dataset can be used for text classification, question answering, text gener
- **Question Answering**: The dataset can be used to train a model to answer questions about the articles.
- **Text Generation**: The dataset can be used to train a model to generate new articles.
- **Text-to-Text Generation**: The dataset can be used to train a model to generate summaries of the articles.
+- **Fine-Tuning / LoRA**: The dataset can be converted to instruction format for fine-tuning LLMs (see [Instruction-Tuned Dataset Generation](#instruction-tuned-dataset-generation) below).

### Out-of-Scope Use

@@ -78,17 +82,17 @@ The main structure of the dataset is as follows:

```json
{
+  "created": 1190362560,
+  "author": "Wortfechter",
+  "profile_url": "http://www.neon.de/user/Wortfechter",
+  "title": "Kopftuch im Flughafen",
+  "subtitle": "In Teheran ist das Kopftuch gesetzlich vorgeschrieben. Auch im Flughafen.",
+  "text": "Auf unserem Weg nach Kambodscha führte uns unser Flug über Teheran nach Bangkok. Das erste, was wir auf unserer Reise also zu sehen bekommen sollten, war der Sicherheitsbereich im Imam Khomeini Airport in Teheran. Alle Mädels waren bestens ausgerüstet mit langen Klamotten und Tüchern, die sie mehr oder weniger professionell um ihren Kopf zu wickeln versuchten. Da in Teheran das Kopftuch für Frauen gesetzlich vorgeschrieben ist, mussten auch wir auf dem Flughafen alles außer unserer Hände und Gesichter verdecken.\n \n\n Als ich nun also mehr schlecht als recht eingewickelt aus dem Flugzeug stolperte, wurde ich vom Passkontrolleur erstmal gefragt, ob ich denn Muslimin sei. Auf meine Verneinung hin wollte er wissen, warum ich denn dann ein Kopftuch trage. Sehr witzig, der Mann.\n \n Im Flughafen war es natürlich warm, unter den Kopftüchern war es natürlich noch wärmer und in langen Klamotten erst recht. Zunächst wurden wir mit riesigen Muffins und Kaffee/Cola/Wasser for free doch recht nett begrüßt, doch nach einigen Stunden wurde es verdammt nervig. Es würde mich nicht wundern, wenn einige der Mädels eine richtige Abneigung entwickeln würden gegen dieses Land, in dem man als Frau selbst auf dem Flughafen alles außer der Hände und dem Gesicht verdecken muss. Nicht, dass wir halbnackt durch den Flughafen hüpfen wollten, aber bei den Temperaturen wäre ein T-Shirt schon was Feines gewesen. Im Flugzeug wurden vorsorglich an alle Unwissenden Kittel verteilt, auf dem Rückflug gab es sogar Kopftücher, dafür aber auch deutsche Touris, die aussahen, als würden sie in Thailand einen ganz bestimmten Tourismus pflegen und unsere Mädels anpöbelten, weil sie \"zu lange\" am öffentlichen internetfähigen Rechner standen.\n \n Als wir die Kopftücher nach einiger Zeit lüfteten, wurden wir gleich dezent darauf hingewiesen, dass es Vorschrift sei, sie zu tragen. Keine Begründung. Einfach nur \"That's a rule\" - and that's it.\n \n\n Bei allem Verständnis für andere Kulturen und Sitten fühlte ich mich - unter meinem Kopftuch schwitzend - ein wenig in meiner persönlichen Freiheit eingeschränkt.",
+  "url": "http://www.neon.de:80/artikel/kaufen/reise/kopftuch-im-flughafen/652640",
+  "archive_url": "https://web.archive.org/web/20140815233601/http://www.neon.de:80/artikel/kaufen/reise/kopftuch-im-flughafen/652640",
+  "main_category": "kaufen",
+  "sub_category": "reise",
+  "id": 652640
}
```

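For a quick look at this structure outside of the diff, the raw file can be loaded with the optional `datasets` package (the repository's own scripts use only the standard library); a minimal sketch:

```python
# Sketch: load the raw JSONL with the optional `datasets` package and
# inspect the fields shown in the example above.
from datasets import load_dataset

raw = load_dataset("json", data_files="stern_neon_user_poetry.jsonl", split="train")
print(raw)  # row count and column names

example = raw[0]
print(example["title"], "-", example["main_category"], "/", example["sub_category"])
```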
@@ -166,11 +170,108 @@ wird doch alles besser und gut und dann werde ich auch gesünder aussehen.
Morgen fängt das schon an.
```

+## Instruction-Tuned Dataset Generation
+
+The repository includes a script to convert the raw articles into instruction-tuned formats suitable for fine-tuning LLMs with LoRA or other methods.
+
+### Supported Output Formats
+
+| Format       | Description                     | Use With                     |
+| ------------ | ------------------------------- | ---------------------------- |
+| `alpaca`     | Instruction/Input/Output format | Most trainers, LLaMA-Factory |
+| `chat`       | ChatML messages format          | Chat model fine-tuning       |
+| `sharegpt`   | ShareGPT conversations format   | Axolotl, Unsloth             |
+| `completion` | Simple text completion format   | Basic text generation        |
+
+### Usage
+
+```bash
+# Install dependencies (uses only Python stdlib)
+pip install -r scripts/requirements.txt
+
+# Generate Alpaca format (default)
+python scripts/generate_instructions.py stern_neon_user_poetry.jsonl -o neon_alpaca.jsonl
+
+# Generate ShareGPT format for Axolotl/Unsloth
+python scripts/generate_instructions.py stern_neon_user_poetry.jsonl -o neon_sharegpt.jsonl -f sharegpt
+
+# Filter by text length (500-5000 characters)
+python scripts/generate_instructions.py stern_neon_user_poetry.jsonl -o neon_filtered.jsonl --min-length 500 --max-length 5000
+```
+
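A quick way to sanity-check the generated file before training, using only the standard library (assuming the `neon_alpaca.jsonl` output from the commands above):

```python
# Sketch: count the generated Alpaca examples and peek at the first one.
import json

with open("neon_alpaca.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(f"{len(records)} training examples")
print(records[0]["instruction"])
print(records[0]["output"][:200], "...")
```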
+### Command Line Options
+
+| Option         | Description                                               | Default    |
+| -------------- | --------------------------------------------------------- | ---------- |
+| `input`        | Path to input JSONL file                                   | (required) |
+| `-o, --output` | Path to output JSONL file                                  | (required) |
+| `-f, --format` | Output format: `alpaca`, `chat`, `sharegpt`, `completion`  | `alpaca`   |
+| `--min-length` | Minimum text length in characters                          | `100`      |
+| `--max-length` | Maximum text length in characters                          | No limit   |
+| `--seed`       | Random seed for reproducibility                            | `42`       |
+
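The same options map directly onto `process_dataset()` in `scripts/generate_instructions.py` (shown further down), so the conversion can also be driven from Python; a rough sketch, with paths adjusted to taste:

```python
# Sketch: drive the conversion from Python instead of the CLI.
# process_dataset() is defined in scripts/generate_instructions.py;
# the paths here are examples.
from pathlib import Path

from generate_instructions import process_dataset

stats = process_dataset(
    input_path=Path("stern_neon_user_poetry.jsonl"),
    output_path=Path("neon_alpaca.jsonl"),
    output_format="alpaca",
    min_length=500,
    max_length=5000,
    seed=42,
)
print(stats["total_processed"], "examples written")
```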
+### Example Output (Alpaca Format)
+
+```json
+{
+  "instruction": "Schreibe einen Artikel mit dem Titel: \"Morgen.\"",
+  "input": "",
+  "output": "Ich frage mich, ob das wirklich ich bin...",
+  "metadata": {
+    "title": "Morgen.",
+    "author": "entfesselt",
+    "category": "Gefühle / Erwachsenwerden",
+    "id": 988270
+  }
+}
+```
+
+### Example Output (ShareGPT Format)
+
+```json
+{
+  "conversations": [
+    {
+      "from": "system",
+      "value": "Du bist ein kreativer Autor im Stil der Stern NEON Community..."
+    },
+    {
+      "from": "human",
+      "value": "Schreibe einen Artikel mit dem Titel: \"Morgen.\""
+    },
+    { "from": "gpt", "value": "Ich frage mich, ob das wirklich ich bin..." }
+  ],
+  "id": "neon_988270"
+}
+```
+
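Trainers that expect standard `role`/`content` messages instead of ShareGPT keys only need a mechanical remapping. The sketch below assumes a recent `transformers` release with `apply_chat_template` and uses `Qwen/Qwen2.5-7B-Instruct` purely as an example of a chat model whose template accepts a system message:

```python
# Sketch: convert a ShareGPT record to role/content messages and render it
# with a tokenizer chat template. The model id is only an example.
import json
from transformers import AutoTokenizer

ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

with open("neon_sharegpt.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

messages = [
    {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
    for turn in record["conversations"]
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```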
+### Instruction Templates
+
+The script uses varied instruction templates to improve model generalization:
+
+- Title-based: _"Schreibe einen Artikel mit dem Titel: {title}"_
+- Category-based: _"Schreibe einen persönlichen Artikel über {category}."_
+- Combined: _"Schreibe einen {category}-Artikel mit dem Titel {title}."_
+- With context: _"Schreibe einen Artikel mit dem Titel {title}. Thema: {subtitle}"_
+
+### Training Recommendations
+
+For fine-tuning with LoRA, we recommend:
+
+1. **Base Models**: Llama 3 8B, Mistral 7B, or German models like LeoLM
+2. **Training Frameworks**:
+   - [Unsloth](https://github.com/unslothai/unsloth) — Fastest, 2x speed, use `sharegpt` format
+   - [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) — Most flexible, use `sharegpt` format
+   - [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) — Use `alpaca` format
+
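None of the following is part of this commit, but as a rough starting point a LoRA configuration for the base models above might look like this (assumes the `peft` package; values are common defaults, not tuned for this dataset):

```python
# Illustrative LoRA hyperparameters; not part of this repository.
# Assumes `pip install peft`; values are common starting points, not tuned.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # rank of the LoRA update matrices
    lora_alpha=32,           # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama/Mistral attention projections
    bias="none",
    task_type="CAUSAL_LM",
)
```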
+---
+
## JSONL Normalization Script

A Python script (`normalize_jsonl.py`) is included in this repository to help clean and prepare the dataset for LLM fine-tuning. This script uses an OpenAI-compatible API to normalize the `text` field of each entry, ensuring high-quality, consistent data for model training.

### Features
+
- **Filters entries**: Skips entries with empty or missing `text` fields
- **State tracking**: Skips already processed entries (marked as normalized or failed) to avoid duplicate work and API calls
- **AI-powered normalization**: Uses OpenAI or compatible APIs to clean, standardize, and preserve the literary quality of the text

@@ -182,22 +283,23 @@ A Python script (`normalize_jsonl.py`) is included in this repository to help cl
### Usage

1. **Install dependencies**
+   ```bash
+   pip install -r requirements.txt
+   ```
2. **Set your API key**
+   - Export as environment variable: `export OPENAI_API_KEY="your-api-key"`
+   - Or use the `--api-key` flag
3. **Run the script**
+   ```bash
+   python normalize_jsonl.py stern_neon_user_poetry.jsonl
+   ```
+   This will create:
+   - `normalized_entries.jsonl` — Successfully normalized entries
+   - `failed_normalizations.jsonl` — Entries that failed normalization
+   - `normalize_log.txt` — Detailed log of the process

#### Command Line Options
+
- `input_file` — Path to input JSONL file (required)
- `-o, --output` — Output file for normalized entries (default: normalized_entries.jsonl)
- `-f, --failed` — Output file for failed entries (default: failed_normalizations.jsonl)

@@ -209,10 +311,12 @@ A Python script (`normalize_jsonl.py`) is included in this repository to help cl
- `--force-reprocess` — Force reprocessing of already normalized entries

#### State Handling & Resume
+
- The script automatically skips already processed entries (normalized or failed), allowing you to resume processing if interrupted.
- To reprocess all entries, use the `--force-reprocess` flag.

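Conceptually, the resume behavior boils down to remembering which entries already appear in the output files. The snippet below is an illustration of that idea, not the actual `normalize_jsonl.py` implementation:

```python
# Illustration of resume-by-id state tracking (not the script's actual code).
import json

def load_processed_ids(*paths: str) -> set:
    """Collect entry ids already present in the output/failed files."""
    seen = set()
    for path in paths:
        try:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    if line.strip():
                        seen.add(json.loads(line).get("id"))
        except FileNotFoundError:
            continue
    return seen

processed = load_processed_ids("normalized_entries.jsonl", "failed_normalizations.jsonl")
# Entries whose id is in `processed` would be skipped unless --force-reprocess is set.
```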
#### Example
+
```bash
python normalize_jsonl.py stern_neon_user_poetry.jsonl --max-entries 10
```
scripts/generate_instructions.py
ADDED
@@ -0,0 +1,370 @@

```python
#!/usr/bin/env python3
"""
Generate instruction-tuned dataset from Stern NEON articles.

This script converts the raw JSONL articles into various formats suitable
for fine-tuning LLMs with LoRA or other methods.

Output formats:
- Alpaca format (instruction, input, output)
- ChatML/Messages format (for chat models)
- Completion format (simple text format)
"""

import json
import random
import argparse
from pathlib import Path
from typing import Generator
from dataclasses import dataclass

# Category translations for German instructions
CATEGORY_TRANSLATIONS = {
    "fuehlen": "Gefühle",
    "kaufen": "Konsum & Lifestyle",
    "freie-zeit": "Freizeit",
    "sehen": "Beobachtungen",
    "machen": "Aktivitäten",
    "wissen": "Wissen",
    "erwachsen-werden": "Erwachsenwerden",
    "familie": "Familie",
    "liebe": "Liebe",
    "sex": "Sexualität",
    "freundschaft": "Freundschaft",
    "reise": "Reisen",
    "computer-internet": "Computer & Internet",
    "musik": "Musik",
    "film-fernsehen": "Film & Fernsehen",
    "buecher": "Bücher",
    "sport": "Sport",
    "essen-trinken": "Essen & Trinken",
    "mode": "Mode",
    "wohnen": "Wohnen",
    "arbeit": "Arbeit",
    "studium": "Studium",
    "schule": "Schule",
    "politik": "Politik",
    "gesellschaft": "Gesellschaft",
}

# Instruction templates - varied to improve model generalization
INSTRUCTION_TEMPLATES = [
    # Title-based
    "Schreibe einen Artikel mit dem Titel: \"{title}\"",
    "Verfasse einen persönlichen Text zum Thema: \"{title}\"",
    "Erstelle einen NEON-Artikel mit der Überschrift: \"{title}\"",
    # Category-based
    "Schreibe einen persönlichen Artikel über {category}.",
    "Verfasse einen emotionalen Text zum Thema {category}.",
    # Title + Category
    "Schreibe einen {category}-Artikel mit dem Titel \"{title}\".",
    "Erstelle einen persönlichen Text über {category}. Der Titel soll sein: \"{title}\"",
    # With subtitle context
    "Schreibe einen Artikel mit dem Titel \"{title}\". Thema: {subtitle}",
    "Verfasse einen Text zum Thema: {subtitle}",
]

# System prompts for chat format
SYSTEM_PROMPTS = [
    "Du bist ein kreativer Autor im Stil der Stern NEON Community. Du schreibst persönliche, emotionale und authentische Texte über das Leben junger Erwachsener in Deutschland.",
    "Du bist ein talentierter Autor für persönliche Essays und Erfahrungsberichte. Dein Schreibstil ist introspektiv, ehrlich und berührend.",
    "Du schreibst im Stil von Stern NEON: persönlich, nachdenklich, manchmal melancholisch, immer authentisch. Deine Texte handeln vom Erwachsenwerden, von Liebe, Freundschaft und den kleinen Momenten des Lebens.",
]


@dataclass
class Article:
    """Represents a single article from the dataset."""
    title: str
    subtitle: str | None
    text: str
    author: str
    main_category: str
    sub_category: str
    article_id: int

    @classmethod
    def from_json(cls, data: dict) -> "Article | None":
        """Create an Article from JSON data, returns None if invalid."""
        text = data.get("text", "").strip()
        title = data.get("title", "").strip()

        # Skip articles with empty or very short text
        if not text or len(text) < 100 or not title:
            return None

        return cls(
            title=title,
            subtitle=data.get("subtitle"),
            text=text,
            author=data.get("author", "Anonym"),
            main_category=data.get("main_category", ""),
            sub_category=data.get("sub_category", ""),
            article_id=data.get("id", 0),
        )

    def get_category_name(self) -> str:
        """Get human-readable category name."""
        sub = CATEGORY_TRANSLATIONS.get(self.sub_category, self.sub_category)
        main = CATEGORY_TRANSLATIONS.get(self.main_category, self.main_category)
        if sub and sub != main:
            return f"{main} / {sub}"
        return main or "Allgemein"


def load_articles(input_path: Path) -> Generator[Article, None, None]:
    """Load and parse articles from JSONL file."""
    with open(input_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                data = json.loads(line)
                article = Article.from_json(data)
                if article:
                    yield article
            except json.JSONDecodeError as e:
                print(f"Warning: Failed to parse line: {e}")
                continue


def generate_instruction(article: Article) -> str:
    """Generate a varied instruction for an article."""
    # Choose template based on available data
    available_templates = INSTRUCTION_TEMPLATES[:3]  # Title-based always available

    if article.main_category or article.sub_category:
        available_templates.extend(INSTRUCTION_TEMPLATES[3:7])  # Category templates

    if article.subtitle:
        available_templates.extend(INSTRUCTION_TEMPLATES[7:])  # Subtitle templates

    template = random.choice(available_templates)

    return template.format(
        title=article.title,
        category=article.get_category_name(),
        subtitle=article.subtitle or "",
    )


def to_alpaca_format(article: Article) -> dict:
    """Convert article to Alpaca instruction format."""
    return {
        "instruction": generate_instruction(article),
        "input": "",
        "output": article.text,
        "metadata": {
            "title": article.title,
            "author": article.author,
            "category": article.get_category_name(),
            "id": article.article_id,
        }
    }


def to_chat_format(article: Article) -> dict:
    """Convert article to ChatML/messages format."""
    system_prompt = random.choice(SYSTEM_PROMPTS)
    user_message = generate_instruction(article)

    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": article.text},
        ],
        "metadata": {
            "title": article.title,
            "author": article.author,
            "id": article.article_id,
        }
    }


def to_completion_format(article: Article) -> dict:
    """Convert article to simple completion format."""
    category = article.get_category_name()
    subtitle_line = f"\n### Untertitel: {article.subtitle}" if article.subtitle else ""

    text = f"""### Kategorie: {category}
### Titel: {article.title}{subtitle_line}

{article.text}"""

    return {
        "text": text,
        "metadata": {
            "title": article.title,
            "author": article.author,
            "id": article.article_id,
        }
    }


def to_sharegpt_format(article: Article) -> dict:
    """Convert article to ShareGPT format (used by Axolotl and others)."""
    system_prompt = random.choice(SYSTEM_PROMPTS)
    user_message = generate_instruction(article)

    return {
        "conversations": [
            {"from": "system", "value": system_prompt},
            {"from": "human", "value": user_message},
            {"from": "gpt", "value": article.text},
        ],
        "id": f"neon_{article.article_id}",
    }


FORMAT_HANDLERS = {
    "alpaca": to_alpaca_format,
    "chat": to_chat_format,
    "completion": to_completion_format,
    "sharegpt": to_sharegpt_format,
}


def process_dataset(
    input_path: Path,
    output_path: Path,
    output_format: str = "alpaca",
    min_length: int = 100,
    max_length: int | None = None,
    seed: int = 42,
) -> dict:
    """Process the entire dataset and write to output file."""
    random.seed(seed)

    handler = FORMAT_HANDLERS.get(output_format)
    if not handler:
        raise ValueError(f"Unknown format: {output_format}. Choose from: {list(FORMAT_HANDLERS.keys())}")

    stats = {
        "total_processed": 0,
        "skipped_short": 0,
        "skipped_long": 0,
        "categories": {},
    }

    with open(output_path, "w", encoding="utf-8") as out_file:
        for article in load_articles(input_path):
            # Length filtering
            text_len = len(article.text)
            if text_len < min_length:
                stats["skipped_short"] += 1
                continue
            if max_length and text_len > max_length:
                stats["skipped_long"] += 1
                continue

            # Convert to output format
            output_record = handler(article)
            out_file.write(json.dumps(output_record, ensure_ascii=False) + "\n")

            # Update stats
            stats["total_processed"] += 1
            cat = article.get_category_name()
            stats["categories"][cat] = stats["categories"].get(cat, 0) + 1

    return stats


def main():
    parser = argparse.ArgumentParser(
        description="Generate instruction-tuned dataset from Stern NEON articles",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Generate Alpaca format (default)
  python generate_instructions.py ../stern_neon_user_poetry.jsonl -o ../neon_alpaca.jsonl

  # Generate ChatML format for chat models
  python generate_instructions.py ../stern_neon_user_poetry.jsonl -o ../neon_chat.jsonl -f chat

  # Generate ShareGPT format for Axolotl
  python generate_instructions.py ../stern_neon_user_poetry.jsonl -o ../neon_sharegpt.jsonl -f sharegpt

  # Filter by text length
  python generate_instructions.py ../stern_neon_user_poetry.jsonl -o ../neon_filtered.jsonl --min-length 500 --max-length 5000
"""
    )

    parser.add_argument(
        "input",
        type=Path,
        help="Path to input JSONL file (stern_neon_user_poetry.jsonl)",
    )
    parser.add_argument(
        "-o", "--output",
        type=Path,
        required=True,
        help="Path to output JSONL file",
    )
    parser.add_argument(
        "-f", "--format",
        choices=list(FORMAT_HANDLERS.keys()),
        default="alpaca",
        help="Output format (default: alpaca)",
    )
    parser.add_argument(
        "--min-length",
        type=int,
        default=100,
        help="Minimum text length in characters (default: 100)",
    )
    parser.add_argument(
        "--max-length",
        type=int,
        default=None,
        help="Maximum text length in characters (default: no limit)",
    )
    parser.add_argument(
        "--seed",
        type=int,
        default=42,
        help="Random seed for reproducibility (default: 42)",
    )

    args = parser.parse_args()

    if not args.input.exists():
        print(f"Error: Input file not found: {args.input}")
        return 1

    print(f"Processing {args.input}...")
    print(f"Output format: {args.format}")
    print(f"Output file: {args.output}")
    print()

    stats = process_dataset(
        input_path=args.input,
        output_path=args.output,
        output_format=args.format,
        min_length=args.min_length,
        max_length=args.max_length,
        seed=args.seed,
    )

    print("=" * 50)
    print("Processing complete!")
    print(f"  Total articles processed: {stats['total_processed']}")
    print(f"  Skipped (too short): {stats['skipped_short']}")
    print(f"  Skipped (too long): {stats['skipped_long']}")
    print()
    print("Top categories:")
    sorted_cats = sorted(stats["categories"].items(), key=lambda x: x[1], reverse=True)
    for cat, count in sorted_cats[:10]:
        print(f"  {cat}: {count}")
    print()
    print(f"Output written to: {args.output}")

    return 0


if __name__ == "__main__":
    exit(main())
```
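Besides the CLI, the helpers above can be used record by record, which is handy for spot-checking how the random templates vary; a small sketch, assuming it is run from inside `scripts/`:

```python
# Sketch: convert a single raw record with the helpers defined above.
import json

from generate_instructions import Article, to_sharegpt_format

with open("../stern_neon_user_poetry.jsonl", encoding="utf-8") as f:
    raw = json.loads(f.readline())

article = Article.from_json(raw)
if article is not None:  # from_json returns None for empty or very short texts
    print(json.dumps(to_sharegpt_format(article), ensure_ascii=False, indent=2))
```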
scripts/requirements.txt
CHANGED
@@ -2,4 +2,9 @@ pymongo
aiohttp
requests

+# For instruction dataset generation (no extra deps needed, stdlib only)
+# Optional: for dataset validation and HuggingFace upload
+# datasets
+# huggingface_hub
+

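The commented-out optional dependencies cover validating the generated files and pushing them to the Hugging Face Hub; a hedged sketch of that workflow (the target repository id is a placeholder):

```python
# Sketch: validate a generated file with `datasets` and push it to the Hub.
# Assumes `pip install datasets huggingface_hub` and a prior `huggingface-cli login`.
from datasets import load_dataset

ds = load_dataset("json", data_files="neon_alpaca.jsonl", split="train")
print(ds)  # quick schema / row-count check

ds.push_to_hub("your-username/neon-instructions")  # placeholder repository id
```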