dotwee committed on
Commit 23d652c · 1 Parent(s): 180f42c

Normalize stern neon articles through external LLM w/ code in ./scripts
.gitignore ADDED
@@ -0,0 +1 @@
+ **/*.env
README.md CHANGED
@@ -1,15 +1,20 @@
  ---
- license: unknown
+ license: wtfpl
  task_categories:
  - text-classification
- - question-answering
+ - summarization
  - text-generation
- - text2text-generation
+ - sentence-similarity
  language:
  - de
+ - en
  tags:
  - art
- pretty_name: Structured Stern NEON Community Articles
+ - poetry
+ - literature
+ - articles
+ - opinion
+ pretty_name: Stern NEON Articles
  size_categories:
  - 10K<n<100K
  ---
@@ -161,6 +166,59 @@ wird doch alles besser und gut und dann werde ich auch gesünder aussehen.
  Morgen fängt das schon an.
  ```
 
+ ## JSONL Normalization Script
+
+ A Python script (`normalize_jsonl.py`) is included in this repository to help clean and prepare the dataset for LLM fine-tuning. This script uses an OpenAI-compatible API to normalize the `text` field of each entry, ensuring high-quality, consistent data for model training.
+
+ ### Features
+ - **Filters entries**: Skips entries with empty or missing `text` fields
+ - **State tracking**: Skips already processed entries (marked as normalized or failed) to avoid duplicate work and API calls
+ - **AI-powered normalization**: Uses OpenAI or compatible APIs to clean, standardize, and preserve the literary quality of the text
+ - **Error handling**: Entries that fail normalization are saved to a separate file
+ - **Progress logging**: Detailed logs and progress updates are written to `normalize_log.txt`
+ - **Rate limiting**: Adjustable delay between API calls to respect rate limits
+ - **Force reprocessing**: Optionally reprocess all entries, ignoring previous state
+
+ ### Usage
+
+ 1. **Install dependencies**
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 2. **Set your API key**
+    - Export as environment variable: `export OPENAI_API_KEY="your-api-key"`
+    - Or use the `--api-key` flag
+ 3. **Run the script**
+    ```bash
+    python normalize_jsonl.py stern_neon_user_poetry.jsonl
+    ```
+    This will create:
+    - `normalized_entries.jsonl` — Successfully normalized entries
+    - `failed_normalizations.jsonl` — Entries that failed normalization
+    - `normalize_log.txt` — Detailed log of the process
+
+ #### Command Line Options
+ - `input_file` — Path to input JSONL file (required)
+ - `-o, --output` — Output file for normalized entries (default: `normalized_entries.jsonl`)
+ - `-f, --failed` — Output file for failed entries (default: `failed_normalizations.jsonl`)
+ - `-k, --api-key` — OpenAI API key (or set the `OPENAI_API_KEY` env var)
+ - `-u, --base-url` — Base URL for OpenAI-compatible API (for local models, etc.)
+ - `-m, --model` — Model to use (default: `gpt-3.5-turbo`)
+ - `--max-entries` — Maximum entries to process (for testing)
+ - `--delay` — Delay between API calls in seconds (default: 0.5)
+ - `--force-reprocess` — Force reprocessing of already normalized entries
+
+ #### State Handling & Resume
+ - The script automatically skips already processed entries (normalized or failed), allowing you to resume processing if interrupted.
+ - To reprocess all entries, use the `--force-reprocess` flag.
+
+ #### Example
+ ```bash
+ python normalize_jsonl.py stern_neon_user_poetry.jsonl --max-entries 10
+ ```
+
+ See the script source for more details and customization options.
+
  ## Dataset Creation
 
  ### Curation Rationale
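Since the script's outputs are plain JSONL, they can be spot-checked with the standard library alone. A minimal sketch (the file name and the `_normalized` marker follow the script's defaults; the sample entry here is made up for the demo):

```python
import json
import os
import tempfile

def load_jsonl(path):
    """Yield one dict per non-empty line of a JSONL file."""
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Throwaway file standing in for normalized_entries.jsonl.
with tempfile.NamedTemporaryFile('w', suffix='.jsonl', delete=False, encoding='utf-8') as tmp:
    tmp.write(json.dumps({'title': 'Beispiel', 'text': 'Hallo.', '_normalized': True}) + '\n')
    tmp.write('\n')  # blank lines are skipped by the loader
    path = tmp.name

entries = [e for e in load_jsonl(path) if e.get('_normalized')]
os.unlink(path)
print(len(entries))  # 1
```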
scripts/README_MONGODB.md ADDED
@@ -0,0 +1,91 @@
+ # MongoDB Setup for Stern Neon Dataset
+
+ This guide explains how to set up MongoDB using Docker and import your JSONL dataset.
+
+ ## Prerequisites
+
+ - Docker and Docker Compose installed
+ - Python 3.6+ with pip
+ - JSONL dataset file (e.g., `stern_neon_user_poetry.jsonl`)
+
+ ## Setup Instructions
+
+ ### 1. Start MongoDB with Docker Compose
+
+ ```bash
+ # Start the MongoDB container and Mongo Express web UI
+ docker-compose up -d
+ ```
+
+ This will start:
+ - MongoDB server on port 27017
+ - Mongo Express web UI on port 8081
+
+ ### 2. Install Python Dependencies
+
+ ```bash
+ # Install required Python package
+ pip install pymongo
+ ```
+
+ ### 3. Import JSONL Data into MongoDB
+
+ ```bash
+ # Import the default file (stern_neon_user_poetry.jsonl)
+ python import_jsonl_to_mongodb.py
+
+ # Or specify a different file
+ python import_jsonl_to_mongodb.py normalized_entries.jsonl
+ ```
+
+ ## Accessing MongoDB
+
+ ### Connection Details
+
+ - **Host**: localhost
+ - **Port**: 27017
+ - **Username**: admin
+ - **Password**: password
+ - **Database**: stern_neon_db
+ - **Collection**: articles
+
+ ### Using Mongo Express
+
+ Access the web UI at: http://localhost:8081
+
+ ### Using MongoDB Shell
+
+ ```bash
+ # Connect to MongoDB container
+ docker exec -it mongo mongosh -u admin -p password
+
+ # Select database
+ use stern_neon_db
+
+ # Query documents
+ db.articles.find().limit(5)
+ ```
+
+ ## Environment Variables
+
+ You can customize the MongoDB connection by setting these environment variables:
+
+ ```bash
+ export MONGO_HOST=localhost
+ export MONGO_PORT=27017
+ export MONGO_USER=admin
+ export MONGO_PASSWORD=password
+ export MONGO_DB=stern_neon_db
+ export MONGO_COLLECTION=articles
+ ```
+
+ ## Data Structure
+
+ The imported documents will maintain the same structure as in your JSONL file, with each entry having fields like:
+ - `id`: Unique identifier
+ - `title`: Article title
+ - `subtitle`: Article subtitle
+ - `text`: Main content
+ - Other fields from your dataset
+
+ An index is automatically created on the `id` field for faster lookups.
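For reference, the import script assembles its connection URI from the `MONGO_*` variables above. A small standalone sketch of that construction (pure string logic, no running server needed; the defaults mirror the compose file's credentials):

```python
import os

def build_mongo_uri(env=None):
    """Assemble the MongoDB URI from MONGO_* variables, as the import script does."""
    env = os.environ if env is None else env
    user = env.get('MONGO_USER', 'admin')
    password = env.get('MONGO_PASSWORD', 'password')
    host = env.get('MONGO_HOST', 'localhost')
    port = env.get('MONGO_PORT', '27017')
    return f"mongodb://{user}:{password}@{host}:{port}/?authSource=admin"

print(build_mongo_uri({}))
# mongodb://admin:password@localhost:27017/?authSource=admin
```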
scripts/docker-compose.yml ADDED
@@ -0,0 +1,44 @@
+ name: structured-stern-neon-articles
+
+ services:
+   mongo:
+     image: mongodb/mongodb-community-server:latest
+     restart: always
+     env_file:
+       - mongodb.env
+     ports:
+       - "27017:27017"
+     volumes:
+       - mongodb_data:/data/db
+     networks:
+       - mongo_network
+
+   mongo-express:
+     image: mongo-express:latest
+     restart: always
+     ports:
+       - "8081:8081"
+
+     environment:
+       ME_CONFIG_MONGODB_ENABLE_ADMIN: true
+       ME_CONFIG_MONGODB_AUTH_USERNAME: admin
+       ME_CONFIG_MONGODB_AUTH_PASSWORD: password
+       ME_CONFIG_MONGODB_URL: mongodb://admin:password@mongo:27017/
+
+       ME_CONFIG_BASICAUTH_ENABLED: true
+       ME_CONFIG_BASICAUTH_USERNAME: mongoexpressuser
+       ME_CONFIG_BASICAUTH_PASSWORD: mongoexpresspass
+
+     depends_on:
+       - mongo
+
+     networks:
+       - mongo_network
+
+ networks:
+   mongo_network:
+     driver: bridge
+
+ volumes:
+   mongodb_data:
scripts/import_jsonl_to_mongodb.py ADDED
@@ -0,0 +1,82 @@
+ #!/usr/bin/env python3
+ """
+ Import JSONL data into MongoDB.
+ Usage: python import_jsonl_to_mongodb.py [jsonl_file]
+ """
+ import json
+ import os
+ import sys
+
+ from pymongo import MongoClient
+
+ # Default file if not specified
+ DEFAULT_JSONL_FILE = 'stern_neon_user_poetry.jsonl'
+
+ def import_jsonl_to_mongodb(jsonl_file):
+     """Import JSONL data into MongoDB."""
+     # MongoDB connection settings
+     mongo_host = os.environ.get('MONGO_HOST', 'localhost')
+     mongo_port = int(os.environ.get('MONGO_PORT', 27017))
+     mongo_user = os.environ.get('MONGO_USER', 'admin')
+     mongo_password = os.environ.get('MONGO_PASSWORD', 'password')
+     mongo_db = os.environ.get('MONGO_DB', 'stern_neon_db')
+     mongo_collection = os.environ.get('MONGO_COLLECTION', 'articles')
+
+     # Connect to MongoDB
+     connection_string = f"mongodb://{mongo_user}:{mongo_password}@{mongo_host}:{mongo_port}/?authSource=admin"
+     client = MongoClient(connection_string)
+     db = client[mongo_db]
+     collection = db[mongo_collection]
+
+     # Read and import JSONL file
+     count = 0
+     batch_size = 10
+     batch = []
+
+     print(f"Importing data from {jsonl_file} to MongoDB ({mongo_host}:{mongo_port})...")
+     print(f"Database: {mongo_db}, Collection: {mongo_collection}")
+
+     with open(jsonl_file, 'r', encoding='utf-8') as f:
+         for line_num, line in enumerate(f, 1):
+             if not line.strip():
+                 continue
+
+             try:
+                 # Parse JSON line
+                 document = json.loads(line)
+
+                 # Add to batch
+                 batch.append(document)
+                 count += 1
+
+                 # Insert batch when it reaches batch_size
+                 if len(batch) >= batch_size:
+                     collection.insert_many(batch)
+                     print(f"Imported {count} documents...")
+                     batch = []
+
+             except json.JSONDecodeError as e:
+                 print(f"Error parsing line {line_num}: {e}")
+             except Exception as e:
+                 print(f"Error importing line {line_num}: {e}")
+
+     # Insert remaining documents
+     if batch:
+         collection.insert_many(batch)
+
+     print(f"Import complete. Total documents imported: {count}")
+
+     # Create index on 'id' field for faster lookups
+     if count > 0:
+         print("Creating index on 'id' field...")
+         collection.create_index('id')
+         print("Index created.")
+
+ if __name__ == '__main__':
+     # Get JSONL file from command line argument or use default
+     jsonl_file = sys.argv[1] if len(sys.argv) > 1 else DEFAULT_JSONL_FILE
+
+     if not os.path.exists(jsonl_file):
+         print(f"Error: File '{jsonl_file}' not found.")
+         sys.exit(1)
+
+     import_jsonl_to_mongodb(jsonl_file)
scripts/normalize.env.example ADDED
@@ -0,0 +1,10 @@
+ MONGODB_HOST=localhost
+ MONGODB_PORT=27017
+ MONGODB_USER=admin
+ MONGODB_PASSWORD=password
+ MONGODB_DATABASE=stern_neon_db
+ MONGODB_COLLECTION=articles
+
+ OPENAI_API_URL=
+ OPENAI_API_KEY=
+ OPENAI_MODEL=
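`normalize_articles.py` loads this file with a minimal `key=value` parser: blank lines and `#` comments are skipped, values are taken literally. A standalone sketch of that convention (the example lines are hypothetical):

```python
def parse_env_lines(lines):
    """Parse simple KEY=VALUE lines; '#' comments, blanks, and lines without '=' are skipped."""
    env = {}
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, value = line.split('=', 1)
        env[key.strip()] = value.strip()
    return env

example = [
    "# MongoDB connection",
    "MONGODB_HOST=localhost",
    "MONGODB_PORT=27017",
    "",
    "OPENAI_MODEL=",
]
print(parse_env_lines(example))
```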
scripts/normalize_articles.py ADDED
@@ -0,0 +1,314 @@
+ #!/usr/bin/env python3
+ """
+ Normalize MongoDB `articles` documents for LLM fine-tuning using an OpenAI-compatible API.
+ Supports concurrent processing with configurable concurrency level.
+
+ Environment variables:
+ - MONGO_HOST (default: localhost)
+ - MONGO_PORT (default: 27017)
+ - MONGO_USER (default: admin)
+ - MONGO_PASSWORD (default: password)
+ - MONGO_DB (default: stern_neon_db)
+ - MONGO_COLLECTION (default: articles)
+ - OPENAI_API_KEY (required)
+ - OPENAI_BASE_URL (optional; e.g., http://localhost:11434/v1)
+ - OPENAI_MODEL (default: gemini-flash-lite-latest)
+
+ Usage examples:
+   python normalize_articles.py --limit 100 --dry-run
+   python normalize_articles.py --resume-from 652e... --batch-size 20 --concurrency 5
+ """
+ import argparse
+ import asyncio
+ import os
+ import sys
+ import time
+ from typing import Any, Dict, List, Optional
+
+ import aiohttp
+ from bson import ObjectId
+ from pymongo import MongoClient
+ from pymongo.collection import Collection
+
+
+ def load_env_file(env_path: str) -> None:
+     """Load key=value pairs from a .env-like file into os.environ.
+
+     Lines starting with '#' or empty lines are ignored. Keys and values are stripped.
+     Values are not unescaped; simple literal assignment only.
+     """
+     if not env_path:
+         return
+     if not os.path.exists(env_path):
+         return
+     try:
+         with open(env_path, 'r', encoding='utf-8') as f:
+             for raw_line in f:
+                 line = raw_line.strip()
+                 if not line or line.startswith('#'):
+                     continue
+                 if '=' not in line:
+                     continue
+                 key, value = line.split('=', 1)
+                 key = key.strip()
+                 value = value.strip()
+                 os.environ[key] = value
+     except Exception as e:
+         print(f"Warning: failed to load env file '{env_path}': {e}")
+
+
+ def get_mongo_collection() -> Collection:
+     # Support both MONGODB_* and MONGO_* names, prefer MONGODB_* if present
+     mongo_host = os.environ.get('MONGODB_HOST') or os.environ.get('MONGO_HOST', 'localhost')
+     mongo_port = int(os.environ.get('MONGODB_PORT') or os.environ.get('MONGO_PORT', 27017))
+     mongo_user = os.environ.get('MONGODB_USER') or os.environ.get('MONGO_USER', 'admin')
+     mongo_password = os.environ.get('MONGODB_PASSWORD') or os.environ.get('MONGO_PASSWORD', 'password')
+     mongo_db = os.environ.get('MONGODB_DATABASE') or os.environ.get('MONGO_DB', 'stern_neon_db')
+     mongo_collection = os.environ.get('MONGODB_COLLECTION') or os.environ.get('MONGO_COLLECTION', 'articles')
+
+     connection_string = f"mongodb://{mongo_user}:{mongo_password}@{mongo_host}:{mongo_port}/?authSource=admin"
+     client = MongoClient(connection_string)
+     db = client[mongo_db]
+     return db[mongo_collection]
+
+
+ async def normalize_text_via_openai_compatible(text: str, api_key: str, base_url: Optional[str], model: str, session: aiohttp.ClientSession, timeout: int = 60) -> str:
+     """Send text to an OpenAI-compatible Chat Completions API and return normalized text.
+
+     The function uses a simple prompt to clean and normalize content for LLM fine-tuning.
+     """
+     url = (base_url.rstrip('/') if base_url else 'https://api.openai.com/v1') + '/chat/completions'
+     headers = {
+         'Authorization': f'Bearer {api_key}',
+         'Content-Type': 'application/json',
+     }
+     payload = {
+         'model': model,
+         'temperature': 0.1,
+         'messages': [
+             {
+                 'role': 'system',
+                 'content': (
+                     'You are a precise text normalization assistant for preparing training data for LLM fine-tuning.\n'
+                     'TASK: Normalize ONLY the provided main article text. Return ONLY the normalized text with no extra commentary, no markdown, no metadata.\n'
+                     'REQUIREMENTS:\n'
+                     '1) Fix obvious typos and spelling errors.\n'
+                     '2) Normalize punctuation and spacing inconsistencies.\n'
+                     '3) Remove excessive whitespace/newlines, but preserve intentional line breaks for poetry and paragraphs.\n'
+                     '   - Allow at most three consecutive empty lines.\n'
+                     '4) Ensure proper capitalization where appropriate.\n'
+                     '5) Fix encoding issues or strange characters.\n'
+                     '6) Maintain the original meaning, literary quality, style, and voice.\n'
+                     '7) Preserve intentional formatting (e.g., poetry line breaks), but avoid over-spacing.\n'
+                     '8) Remove any metadata or non-content text (e.g., headers, footers, navigation, ads).\n'
+                     '9) Normalize quote characters to straight ASCII single (\'\') and double (\"\") quotes.\n'
+                     'CONSTRAINTS: Do not add content. Do not summarize. Do not rephrase stylistically beyond necessary corrections. Output plain text only.'
+                 ),
+             },
+             {
+                 'role': 'user',
+                 'content': text,
+             },
+         ],
+     }
+
+     print(f"Sending text to {url} with model {model}")
+
+     try:
+         async with session.post(url, json=payload, headers=headers, timeout=aiohttp.ClientTimeout(total=timeout)) as resp:
+             if resp.status != 200:
+                 response_text = await resp.text()
+                 raise RuntimeError(f"OpenAI-compatible API error: {resp.status} {response_text}")
+
+             data = await resp.json()
+             try:
+                 content = data['choices'][0]['message']['content']
+                 return content.strip()
+             except Exception:
+                 raise RuntimeError(f"Unexpected API response format: {data}")
+     except asyncio.TimeoutError:
+         raise RuntimeError(f"Request timeout after {timeout} seconds")
+     except Exception as e:
+         raise RuntimeError(f"Request failed: {e}")
+
+
+ def normalize_quote_characters(text: str) -> str:
+     """Normalize various curly and localized quotes to straight ASCII quotes.
+
+     This is a deterministic post-process to ensure consistent quotes regardless of model behavior.
+     """
+     if not text:
+         return text
+     replacements = {
+         '“': '"', '”': '"', '„': '"', '‟': '"', '«': '"', '»': '"',
+         '‘': "'", '’': "'", '‚': "'", '‛': "'", '‹': "'", '›': "'",
+     }
+     out = text
+     for src, dst in replacements.items():
+         out = out.replace(src, dst)
+     return out
+
+
+ def build_revision(original: Dict[str, Any], normalized_text: str) -> Dict[str, Any]:
+     """Return a new revision object to be stored alongside the original under the same _id.
+
+     Stores minimal revision metadata and the normalized text. Does not overwrite original fields.
+     """
+     return {
+         'revision_type': 'normalized',
+         'normalized_at': int(time.time()),
+         'source_fields': ['text'],
+         'text': normalized_text,
+     }
+
+
+ async def process_single_document(doc: Dict[str, Any], api_key: str, base_url: Optional[str], model: str, session: aiohttp.ClientSession, dry_run: bool, collection: Collection) -> bool:
+     """Process a single document for normalization."""
+     text = str(doc.get('text', '')).strip()
+     if not text:
+         return False
+
+     try:
+         normalized = await normalize_text_via_openai_compatible(text, api_key=api_key, base_url=base_url, model=model, session=session)
+         normalized = normalize_quote_characters(normalized)
+     except Exception as e:
+         print(f"_id={doc.get('_id')} normalization failed: {e}")
+         return False
+
+     revision = build_revision(doc, normalized)
+
+     update = {
+         '$push': { 'revisions': revision }
+     }
+
+     if dry_run:
+         print(f"DRY-RUN _id={doc.get('_id')} would append a normalized revision")
+         print("--- ORIGINAL TEXT ---")
+         print(text)
+         print("--- NORMALIZED TEXT ---")
+         print(normalized)
+         print("======================\n")
+     else:
+         print(f"Updating _id={doc.get('_id')} with normalized text")
+         collection.update_one({ '_id': doc['_id'] }, update)
+
+     return True
+
+
+ async def process_documents_batch(docs: List[Dict[str, Any]], api_key: str, base_url: Optional[str], model: str, dry_run: bool, collection: Collection, semaphore: asyncio.Semaphore) -> int:
+     """Process a batch of documents concurrently."""
+     print(f"Processing batch of {len(docs)} documents")
+
+     async def process_with_semaphore(doc):
+         async with semaphore:
+             async with aiohttp.ClientSession() as session:
+                 return await process_single_document(doc, api_key, base_url, model, session, dry_run, collection)
+
+     tasks = [process_with_semaphore(doc) for doc in docs]
+     results = await asyncio.gather(*tasks, return_exceptions=True)
+
+     # Count successful normalizations
+     successful = sum(1 for result in results if result is True)
+     return successful
+
+
+ def process_documents(collection: Collection, limit: Optional[int], resume_from: Optional[str], batch_size: int, dry_run: bool, api_key: str, base_url: Optional[str], model: str, concurrency: int = 5) -> None:
+     async def run_async_processing():
+         query: Dict[str, Any] = {
+             # skip docs without text or with empty/whitespace-only text
+             'text': { '$type': 'string', '$regex': r'\S' },
+             # only process documents that do NOT already contain a normalized revision
+             '$or': [
+                 { 'revisions': { '$exists': False } },
+                 { 'revisions': { '$not': { '$elemMatch': { 'revision_type': 'normalized' } } } },
+             ],
+         }
+         if resume_from:
+             try:
+                 query['_id'] = { '$gt': ObjectId(resume_from) }
+             except Exception:
+                 print(f"Warning: invalid --resume-from ObjectId: {resume_from}. Ignoring.")
+
+         # Print amount of documents to process
+         print(f"Processing {collection.count_documents(query)} documents")
+
+         cursor = collection.find(query, no_cursor_timeout=True).sort('_id', 1)
+         processed = 0
+         batch: List[Dict[str, Any]] = []
+         semaphore = asyncio.Semaphore(concurrency)
+
+         try:
+             for doc in cursor:
+                 if limit is not None and processed >= limit:
+                     break
+
+                 batch.append(doc)
+
+                 # Process batch when it reaches batch_size
+                 if len(batch) >= batch_size:
+                     batch_processed = await process_documents_batch(batch, api_key, base_url, model, dry_run, collection, semaphore)
+                     processed += batch_processed
+                     batch = []
+
+                     if batch_size > 0 and processed % batch_size == 0:
+                         print(f"Processed {processed} documents...")
+
+             # Process remaining documents in the last batch
+             if batch:
+                 batch_processed = await process_documents_batch(batch, api_key, base_url, model, dry_run, collection, semaphore)
+                 processed += batch_processed
+
+         finally:
+             cursor.close()
+
+         print(f"Done. Total processed: {processed}")
+
+     # Run the async processing
+     asyncio.run(run_async_processing())
+
+
+ def parse_args() -> argparse.Namespace:
+     parser = argparse.ArgumentParser(description='Normalize MongoDB articles using an OpenAI-compatible API')
+     parser.add_argument('--env-file', type=str, default='normalize.env', help='Path to env file with configuration')
+     parser.add_argument('--limit', type=int, default=None, help='Limit number of documents to process')
+     parser.add_argument('--resume-from', type=str, default=None, help='Resume from a given ObjectId (exclusive)')
+     parser.add_argument('--batch-size', type=int, default=20, help='Number of documents per concurrently processed batch')
+     parser.add_argument('--concurrency', type=int, default=5, help='Number of concurrent API calls')
+     parser.add_argument('--dry-run', action='store_true', help='Preview changes: print original and normalized text; no DB writes')
+     return parser.parse_args()
+
+
+ def main() -> None:
+     args = parse_args()
+
+     # Load env file first, allowing it to supply all needed variables
+     if args.env_file:
+         load_env_file(args.env_file)
+
+     api_key = os.environ.get('OPENAI_API_KEY')
+     if not api_key:
+         print('Error: OPENAI_API_KEY is required in environment.')
+         sys.exit(1)
+
+     # Support OPENAI_API_URL as well as OPENAI_BASE_URL
+     base_url = os.environ.get('OPENAI_API_URL') or os.environ.get('OPENAI_BASE_URL')
+     model = os.environ.get('OPENAI_MODEL', 'gemini-flash-lite-latest')
+
+     collection = get_mongo_collection()
+     process_documents(
+         collection=collection,
+         limit=args.limit,
+         resume_from=args.resume_from,
+         batch_size=args.batch_size,
+         dry_run=args.dry_run,
+         api_key=api_key,
+         base_url=base_url,
+         model=model,
+         concurrency=args.concurrency,
+     )
+
+
+ if __name__ == '__main__':
+     main()
scripts/normalize_jsonl.py ADDED
@@ -0,0 +1,311 @@
+ #!/usr/bin/env python3
+ """
+ Script to normalize JSONL entries for LLM fine-tuning.
+ Filters out entries without text and normalizes content using an OpenAI-compatible API.
+ """
+
+ import argparse
+ import json
+ import logging
+ import os
+ import sys
+ import time
+ from pathlib import Path
+ from typing import Any, Dict, Optional
+
+ try:
+     import openai
+ except ImportError:
+     print("Error: openai package not found. Install with: pip install openai")
+     sys.exit(1)
+
+ # Configure logging
+ logging.basicConfig(
+     level=logging.INFO,
+     format='%(asctime)s - %(levelname)s - %(message)s',
+     handlers=[
+         logging.FileHandler('normalize_log.txt'),
+         logging.StreamHandler()
+     ]
+ )
+ logger = logging.getLogger(__name__)
+
+ class JSONLNormalizer:
+     def __init__(self, api_key: str, base_url: Optional[str] = None, model: str = "gpt-3.5-turbo"):
+         """
+         Initialize the normalizer with OpenAI-compatible API settings.
+
+         Args:
+             api_key: API key for the service
+             base_url: Base URL for API (optional, defaults to OpenAI)
+             model: Model name to use for normalization
+         """
+         self.client = openai.OpenAI(
+             api_key=api_key,
+             base_url=base_url
+         )
+         self.model = model
+         self.processed_count = 0
+         self.skipped_count = 0
+         self.failed_count = 0
+         self.already_normalized_count = 0
+
+     def normalize_text(self, text: str, title: str = "", subtitle: str = "") -> Optional[str]:
+         """
+         Normalize text content using the API.
+
+         Args:
+             text: Main text content to normalize
+             title: Article title for context
+             subtitle: Article subtitle for context
+
+         Returns:
+             Normalized text or None if normalization fails
+         """
+         try:
+             system_prompt = """You are an expert text editor helping to prepare content for LLM fine-tuning.
+
+ Your task is to normalize and clean text while preserving its meaning and literary quality. Make these improvements:
+
+ 1. Fix obvious typos and spelling errors
+ 2. Normalize punctuation and spacing inconsistencies
+ 3. Remove excessive whitespace and newlines (but preserve intentional line breaks for poetry/paragraphs)
+ 4. Ensure proper capitalization
+ 5. Fix encoding issues or strange characters
+ 6. Maintain the original style and voice
+ 7. Preserve intentional formatting (like poetry line breaks)
+ 8. Remove any metadata or non-content text
+
+ Return ONLY the cleaned text, nothing else."""
+
+             user_prompt = f"""Title: {title}
+ Subtitle: {subtitle}
+
+ Text to normalize:
+ {text}"""
+
+             response = self.client.chat.completions.create(
+                 model=self.model,
+                 messages=[
+                     {"role": "system", "content": system_prompt},
+                     {"role": "user", "content": user_prompt}
+                 ],
+                 temperature=0.1,
+                 max_tokens=4000
+             )
+
+             normalized_text = response.choices[0].message.content.strip()
+             return normalized_text
+
+         except Exception as e:
+             logger.error(f"API normalization failed: {str(e)}")
+             return None
+
+     def is_valid_entry(self, entry: Dict[Any, Any]) -> bool:
+         """
+         Check if entry has valid text content.
+
+         Args:
+             entry: JSONL entry dictionary
+
+         Returns:
+             True if entry has non-empty text field
+         """
+         text = entry.get('text', '')
+         return isinstance(text, str) and text.strip() != ''
+
+     def is_already_normalized(self, entry: Dict[Any, Any]) -> bool:
+         """
+         Check if entry has already been processed.
+
+         Args:
+             entry: JSONL entry dictionary
+
+         Returns:
+             True if entry has already been normalized or marked as failed
+         """
+         return entry.get('_normalized', False) or entry.get('_normalization_failed', False)
+
+     def process_jsonl(self, input_file: str, output_file: str, failed_file: str,
+                       max_entries: Optional[int] = None, delay: float = 0.5,
+                       force_reprocess: bool = False, append: bool = False):
+         """
+         Process the JSONL file and normalize entries.
+
+         Args:
+             input_file: Path to input JSONL file
+             output_file: Path to output normalized JSONL file
+             failed_file: Path to file for failed normalizations
+             max_entries: Maximum number of entries to process (for testing)
+             delay: Delay between API calls to avoid rate limits
+             force_reprocess: If True, reprocess already normalized entries
+             append: If True, append to existing output files instead of overwriting them
+         """
+         logger.info(f"Starting normalization of {input_file}")
+         logger.info(f"Output file: {output_file} (mode: {'append' if append else 'overwrite'})")
+         logger.info(f"Failed entries file: {failed_file} (mode: {'append' if append else 'overwrite'})")
+         if force_reprocess:
+             logger.info("Force reprocess enabled - will reprocess already normalized entries")
+
+         # Determine file modes based on append flag
+         output_mode = 'a' if append else 'w'
+         failed_mode = 'a' if append else 'w'
+
+         with open(input_file, 'r', encoding='utf-8') as infile, \
+              open(output_file, output_mode, encoding='utf-8') as outfile, \
+              open(failed_file, failed_mode, encoding='utf-8') as failfile:
+
+             for line_num, line in enumerate(infile, 1):
+                 try:
+                     # Parse JSON line
+                     entry = json.loads(line.strip())
+
+                     # Skip entries without valid text
+                     if not self.is_valid_entry(entry):
+                         logger.debug(f"Line {line_num}: Skipping entry without text")
+                         self.skipped_count += 1
+                         continue
+
+                     # Check if already normalized or failed (unless forcing reprocess)
+                     if not force_reprocess and self.is_already_normalized(entry):
+                         title = entry.get('title', '')
+                         logger.debug(f"Line {line_num}: Entry '{title[:50]}...' already processed")
+
+                         # Write to appropriate file based on previous result
+                         if entry.get('_normalized', False):
+                             outfile.write(json.dumps(entry, ensure_ascii=False) + '\n')
+                         elif entry.get('_normalization_failed', False):
+                             failfile.write(json.dumps(entry, ensure_ascii=False) + '\n')
+
+                         self.already_normalized_count += 1
+
+                         # Check max_entries limit after counting already normalized entries
+                         if max_entries and (self.processed_count + self.already_normalized_count) >= max_entries:
+                             logger.info(f"Reached maximum entries limit: {max_entries}")
+                             break
+                         continue
+
+                     # Check max_entries limit before processing new entries
+                     if max_entries and (self.processed_count + self.already_normalized_count) >= max_entries:
+                         logger.info(f"Reached maximum entries limit: {max_entries}")
+                         break
+
+                     # Extract content for normalization
+                     original_text = entry['text']
+                     title = entry.get('title', '')
+                     subtitle = entry.get('subtitle', '')
+
+                     logger.info(f"Line {line_num}: Normalizing entry '{title[:50]}...'")
+
+                     # Normalize the text
+                     normalized_text = self.normalize_text(original_text, title, subtitle)
+
+                     if normalized_text:
+                         # Update entry with normalized text
+                         entry['text'] = normalized_text
+                         entry['_original_length'] = len(original_text)
+                         entry['_normalized_length'] = len(normalized_text)
+                         entry['_normalized'] = True
+
+                         # Write to output file
+                         outfile.write(json.dumps(entry, ensure_ascii=False) + '\n')
+                         self.processed_count += 1
+                         logger.info(f"Line {line_num}: Successfully normalized")
+                     else:
+                         # Write failed entry to failed file
+                         entry['_normalization_failed'] = True
+                         failfile.write(json.dumps(entry, ensure_ascii=False) + '\n')
+                         self.failed_count += 1
+                         logger.warning(f"Line {line_num}: Normalization failed")
+
+                     # Rate limiting delay (only for new API calls)
+                     if delay > 0:
+                         time.sleep(delay)
+
+                 except json.JSONDecodeError as e:
+                     logger.error(f"Line {line_num}: JSON decode error: {str(e)}")
+                     self.failed_count += 1
+                 except Exception as e:
+                     logger.error(f"Line {line_num}: Unexpected error: {str(e)}")
+                     self.failed_count += 1
+
+                 # Progress update
+                 if line_num % 10 == 0:
+                     total_processed = self.processed_count + self.already_normalized_count
+                     logger.info(f"Progress: Processed {line_num} lines, "
+                                 f"Total processed: {total_processed}, "
+                                 f"Newly normalized: {self.processed_count}, "
+                                 f"Already processed: {self.already_normalized_count}, "
+                                 f"Skipped: {self.skipped_count}, "
+                                 f"Failed: {self.failed_count}")
+
+         # Final summary
+         total_processed = self.processed_count + self.already_normalized_count
+         logger.info("=" * 50)
+         logger.info("NORMALIZATION COMPLETE")
+         logger.info(f"Total lines processed: {line_num}")
+         logger.info(f"Total entries processed: {total_processed}")
+ logger.info(f"Total entries processed: {total_processed}")
248
+ logger.info(f"Newly normalized: {self.processed_count}")
249
+ logger.info(f"Already processed (skipped): {self.already_normalized_count}")
250
+ logger.info(f"Skipped (no text): {self.skipped_count}")
251
+ logger.info(f"Failed: {self.failed_count}")
252
+ logger.info("=" * 50)
253
+
254
+ def main():
255
+ parser = argparse.ArgumentParser(description='Normalize JSONL entries for LLM fine-tuning')
256
+ parser.add_argument('input_file', help='Input JSONL file path')
257
+ parser.add_argument('-o', '--output', default='normalized_entries.jsonl',
258
+ help='Output file for normalized entries (default: normalized_entries.jsonl)')
259
+ parser.add_argument('-f', '--failed', default='failed_normalizations.jsonl',
260
+ help='Output file for failed entries (default: failed_normalizations.jsonl)')
261
+ parser.add_argument('-k', '--api-key', help='OpenAI API key (or set OPENAI_API_KEY env var)')
262
+ parser.add_argument('-u', '--base-url', help='Base URL for OpenAI-compatible API')
263
+ parser.add_argument('-m', '--model', default='gpt-3.5-turbo',
264
+ help='Model to use (default: gpt-3.5-turbo)')
265
+ parser.add_argument('--max-entries', type=int, help='Maximum entries to process (for testing)')
266
+ parser.add_argument('--delay', type=float, default=0.5,
267
+ help='Delay between API calls in seconds (default: 0.5)')
268
+ parser.add_argument('--force-reprocess', action='store_true',
269
+ help='Force reprocessing of already normalized entries')
270
+ parser.add_argument('--append', action='store_true',
271
+ help='Append to existing output files instead of overwriting them')
272
+
273
+ args = parser.parse_args()
274
+
275
+ # Get API key
276
+ api_key = args.api_key or os.getenv('OPENAI_API_KEY')
277
+ if not api_key:
278
+ logger.error("API key required. Use --api-key or set OPENAI_API_KEY environment variable")
279
+ sys.exit(1)
280
+
281
+ # Check input file exists
282
+ if not Path(args.input_file).exists():
283
+ logger.error(f"Input file not found: {args.input_file}")
284
+ sys.exit(1)
285
+
286
+ # Initialize normalizer
287
+ normalizer = JSONLNormalizer(
288
+ api_key=api_key,
289
+ base_url=args.base_url,
290
+ model=args.model
291
+ )
292
+
293
+ # Process the file
294
+ try:
295
+ normalizer.process_jsonl(
296
+ input_file=args.input_file,
297
+ output_file=args.output,
298
+ failed_file=args.failed,
299
+ max_entries=args.max_entries,
300
+ delay=args.delay,
301
+ force_reprocess=args.force_reprocess,
302
+ append=args.append
303
+ )
304
+ except KeyboardInterrupt:
305
+ logger.info("Process interrupted by user")
306
+ except Exception as e:
307
+ logger.error(f"Process failed: {str(e)}")
308
+ sys.exit(1)
309
+
310
+ if __name__ == "__main__":
311
+ main()
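The per-entry decision logic in `process_jsonl` above — skip entries without text, re-route previously processed entries to their file from the last run, and send only new entries to the API — can be sketched as a small standalone function. The helper names below mirror the class methods (`is_valid_entry`, the `_normalized`/`_normalization_failed` state flags) but are hypothetical standalone versions for illustration:

```python
def is_valid_entry(entry: dict) -> bool:
    # Entries with a missing or empty "text" field are skipped entirely.
    text = entry.get("text")
    return isinstance(text, str) and text.strip() != ""


def route_entry(entry: dict) -> str:
    """Return what the processing loop does with an entry:
    'skip', 'write_output', 'write_failed', or 'normalize'."""
    if not is_valid_entry(entry):
        return "skip"
    # State tracking: entries processed in a previous run are copied to
    # the matching file instead of triggering a new API call.
    if entry.get("_normalized", False):
        return "write_output"
    if entry.get("_normalization_failed", False):
        return "write_failed"
    return "normalize"
```

With `--force-reprocess`, the state-tracking branch is bypassed and every valid entry takes the `normalize` path again.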
scripts/requirements.txt ADDED
@@ -0,0 +1,5 @@
+pymongo
+aiohttp
+requests
+
+
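With the dependencies above installed, a typical invocation of the normalization script might look like the following. The script path, file names, and key are placeholders — adjust them to your checkout:

```shell
pip install -r scripts/requirements.txt
export OPENAI_API_KEY="..."
python scripts/normalize_jsonl.py stern_neon_user_poetry.jsonl \
    --output normalized_entries.jsonl \
    --failed failed_normalizations.jsonl \
    --model gpt-3.5-turbo \
    --max-entries 100 \
    --delay 0.5
```

`--max-entries` limits the run for testing; drop it to process the whole file, and add `--append` to resume into existing output files.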
stern_neon_user_poetry.jsonl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fbbb7e4eb9229ec2864f9a32aa95a370a7c3b7ac1b46f565df55d598af7fdc87
-size 78398883
+oid sha256:db8bc2adbb7c9d94e042586ee1367390b52bfcf358d427701ba5150b90b4984a
+size 71686011