Boredoom17 committed
Commit a69e3d8 · verified · 1 Parent(s): 453291b

Add DATA_PROCESSING.md documentation

Files changed (1)
  1. DATA_PROCESSING.md +180 -0
DATA_PROCESSING.md ADDED
@@ -0,0 +1,180 @@
+ # Data Processing Pipeline
+
+ This document describes the technical workflow for constructing and maintaining the Nepali Text Corpus.
+
+ ## Overview
+
+ The corpus is built using a DuckDB-based pipeline (`scripts/merge.py`) that ingests raw CSVs, applies filtering and normalization, and produces stratified parquet outputs optimized for different research domains.
+
+ ```
+ Raw Data (CSV) → Validation & Normalization → Domain Stratification → Parquet Output
+ ```
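+
+ The sketch below illustrates this flow with DuckDB's Python API. It is a minimal illustration, not the actual contents of `scripts/merge.py`; the input file name matches the Input Sources table below, while the `text` column name, the source identifier, and the output path are assumptions.
+
+ ```python
+ import duckdb
+
+ con = duckdb.connect()  # an in-memory database is enough for a streaming merge
+
+ # Ingest one raw CSV, apply basic validation, and attach metadata columns.
+ con.execute("""
+     CREATE OR REPLACE VIEW news AS
+     SELECT
+         trim(text)    AS text,
+         'nepali_news' AS source,
+         'news'        AS domain,
+         'ne'          AS lang
+     FROM read_csv_auto('nepali_news.csv')
+     WHERE text IS NOT NULL AND trim(text) <> ''
+ """)
+
+ # Write a stratified parquet output with Zstandard compression.
+ con.execute("""
+     COPY (SELECT * FROM news ORDER BY source, length(text) DESC)
+     TO 'nepali_corpus_news.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)
+ """)
+ ```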
+
+ ## Input Sources
+
+ | Source | File | Format | Records | Notes |
+ |--------|------|--------|---------|-------|
+ | IRIISNEPAL | `iriisnepal_raw.csv` | CSV | ~6.1M | Manually curated formal Nepali |
+ | YouTube | `youtube_comments_clean.csv` | CSV | ~431K | Pre-cleaned by `clean.py` |
+ | Wikipedia | `wikipedia_nepali.csv` | CSV | ~291K | Extracted from wiki dump |
+ | News | `nepali_news.csv` | CSV | ~87K | Scraped from live news feeds |
+
+ ## Preprocessing Steps
+
+ ### 1. Text Validation
+ All records are filtered on the following criteria (a sketch of the combined filter follows the list):
+ - **Non-null check:** `text IS NOT NULL`
+ - **Non-empty check:** `trim(text) <> ''`
+ - **Minimum length (IRIIS only):** `length(split(trim(text), ' ')) >= 5` (5+ words)
+ - **Script validation (IRIIS only):** Must contain at least one Devanagari character
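+
+ A minimal sketch of the IRIIS filter as a single DuckDB predicate. The column name `text` and the use of `string_split` / `regexp_matches` are assumptions; the expressions listed above are the authoritative description of the pipeline's checks.
+
+ ```python
+ import duckdb
+
+ iriis_filter = """
+     text IS NOT NULL
+     AND trim(text) <> ''
+     AND len(string_split(trim(text), ' ')) >= 5   -- 5+ whitespace-separated words
+     AND regexp_matches(text, '[ऀ-ॿ]')             -- at least one Devanagari character
+ """
+
+ rows = duckdb.sql(
+     f"SELECT count(*) FROM read_csv_auto('iriisnepal_raw.csv') WHERE {iriis_filter}"
+ ).fetchone()[0]
+ print(f"{rows:,} IRIIS rows pass validation")
+ ```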
+
+ ### 2. Script Detection
+ Automatic classification for news and YouTube sources:
+ ```
+ IF text contains [ऀ-ॿ] THEN 'devanagari'
+ ELSE IF text contains [A-Za-z] THEN 'latin'
+ ELSE 'other'
+ ```
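+
+ Expressed as a DuckDB `CASE` expression, the rule above might look like the sketch below. This illustrates only the pseudocode shown here; the published corpus also carries a `mixed` label (see the Schema Reference), whose exact detection logic is not documented in this file.
+
+ ```python
+ import duckdb
+
+ script_case = """
+     CASE
+         WHEN regexp_matches(text, '[ऀ-ॿ]')   THEN 'devanagari'
+         WHEN regexp_matches(text, '[A-Za-z]') THEN 'latin'
+         ELSE 'other'
+     END AS script
+ """
+
+ duckdb.sql(
+     f"SELECT text, {script_case} FROM read_csv_auto('youtube_comments_clean.csv') LIMIT 5"
+ ).show()
+ ```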
+
+ ### 3. Metadata Assignment
+ Each row is enriched with the following fields (a sketch follows the list):
+ - **source:** Origin identifier (e.g., `iriisnepal`, `youtube_comments`, `wikipedia_nepali`)
+ - **domain:** Content category (formal, colloquial, encyclopedia, news)
+ - **script:** Writing system detected
+ - **lang:** ISO 639-1 code (`ne` for Nepali)
+ - **date_collected:** Processing date
+ - **license:** Source-specific license
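+
+ In DuckDB these fields can be attached as constant columns in the ingest query. The specific values below (date and license string) are placeholders for illustration, not the pipeline's actual configuration.
+
+ ```python
+ import duckdb
+
+ duckdb.sql("""
+     SELECT
+         text,
+         'wikipedia_nepali' AS source,
+         'encyclopedia'     AS domain,
+         'devanagari'       AS script,
+         'ne'               AS lang,
+         '2026-04-02'       AS date_collected,   -- placeholder processing date
+         'CC BY-SA 4.0'     AS license           -- placeholder license string
+     FROM read_csv_auto('wikipedia_nepali.csv')
+     LIMIT 3
+ """).show()
+ ```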
+
+ ### 4. Deduplication
+ - No exact-duplicate removal (every row is preserved, including repeated utterances)
+ - Partial duplicates retained (colloquial speech naturally repeats common phrases)
+
+ ## Output Datasets
+
+ ### nepali_corpus_full.parquet
+ **Purpose:** Complete merged corpus for general research
+ **Rows:** 7,167,456
+ **Ordering:**
+ 1. Formal domain (IRIISNEPAL)
+ 2. Encyclopedia domain (Wikipedia)
+ 3. News domain
+ 4. Colloquial domain (YouTube)
+
+ Within each domain, rows are ordered by `source`, then by `length DESC`, so the longest texts surface first in dataset previews.
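+
+ The same ordering can be reproduced against the published file with a DuckDB query. The `CASE` mapping below mirrors the domain priority listed above; it is a sketch, not the pipeline's literal SQL.
+
+ ```python
+ import duckdb
+
+ duckdb.sql("""
+     SELECT domain, source, length(text) AS chars
+     FROM read_parquet('nepali_corpus_full.parquet')
+     ORDER BY
+         CASE domain
+             WHEN 'formal'       THEN 1
+             WHEN 'encyclopedia' THEN 2
+             WHEN 'news'         THEN 3
+             WHEN 'colloquial'   THEN 4
+         END,
+         source,
+         length(text) DESC
+     LIMIT 10
+ """).show()
+ ```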
+
+ ### nepali_corpus_formal.parquet
+ **Purpose:** Formal writing for LM pretraining
+ **Rows:** 6,378,206 (formal + encyclopedia + news)
+ **Domains included:** `formal`, `encyclopedia`, `news`
+ **Ordering:** Domain priority → source → length DESC
+
+ **Dataset preview:** The leading rows are formal Wikipedia and news articles, so the dataset viewer shows clean, representative examples first.
+
+ ### nepali_corpus_colloquial.parquet
+ **Purpose:** Conversational Nepali for sociolinguistic analysis
+ **Rows:** 431,648 (YouTube comments only)
+ **Script distribution:**
+ - Devanagari: 123,804 comments
+ - Latin (Roman): 307,999 comments
+ - Mixed: 19,845 comments
+
+ **Ordering:** Devanagari (longest first) → Latin → Mixed (ensures the Hugging Face viewer surfaces Devanagari examples first)
+
+ ### nepali_corpus_roman.parquet
+ **Purpose:** Roman-script Nepali subset
+ **Rows:** 307,999 (latin script from YouTube)
+ **Derived from:** Colloquial corpus with `script = 'latin'`
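+
+ A sketch of how this subset can be derived from the colloquial file; column and file names follow the schema and dataset names in this document.
+
+ ```python
+ import duckdb
+
+ duckdb.execute("""
+     COPY (
+         SELECT * FROM read_parquet('nepali_corpus_colloquial.parquet')
+         WHERE script = 'latin'
+     ) TO 'nepali_corpus_roman.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)
+ """)
+ ```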
+
+ ### nepali_corpus_wikipedia.parquet
+ **Purpose:** Encyclopedia-style Nepali (NOT published separately; kept for analysis)
+ **Rows:** 291,767
+ **Note:** Wikipedia data is merged into `nepali_corpus_formal.parquet` for public release.
+
+ ## Performance Optimizations
+
+ ### DuckDB Configuration
+ ```sql
+ PRAGMA threads = 2;                      -- CPU parallelism
+ PRAGMA preserve_insertion_order = false; -- allow reordering for lower memory use
+ PRAGMA memory_limit = '8GB';             -- RAM cap
+ PRAGMA temp_directory = '...';           -- disk spillover for larger-than-memory operators
+ ```
+
+ ### Parquet Compression
+ - **Codec:** Zstandard (ZSTD)
+ - **Compression Level:** Default (best balance)
+ - **Result:** 5.95 GB file for 7.1M rows (≈850 bytes/row average)
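+
+ The bytes-per-row figure can be re-derived from the published file; a small check, assuming the parquet is available locally:
+
+ ```python
+ import os
+ import duckdb
+
+ path = "nepali_corpus_full.parquet"
+ rows = duckdb.sql(f"SELECT count(*) FROM read_parquet('{path}')").fetchone()[0]
+ size = os.path.getsize(path)
+ print(f"{rows:,} rows, {size / 1e9:.2f} GB, {size / rows:.0f} bytes/row")
+ ```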
+
+ ### Processing Speed
+ - Typical merge run: ~2–5 minutes on 8 GB RAM
+ - DuckDB streaming keeps memory footprint constant
+
+ ## Quality Metrics
+
+ ### Content Representation
+ - **Formal (88%):** Academic, journalistic, encyclopedic content
+ - **Colloquial (6%):** Conversational, social media discourse
+ - **Roman script (4%):** Transliterated Nepali
+ - **Mixed script (<1%):** Code-switching examples
+
+ ### Text Statistics
+ - **Average length:** ~120 UTF-8 characters
+ - **Median length:** ~85 characters
+ - **Max length:** ~50,000 characters (rare outliers)
+ - **Devanagari percentage:** ~87% of rows
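+
+ These statistics can be recomputed from the published file with a single DuckDB aggregate; a sketch, assuming the file is available locally:
+
+ ```python
+ import duckdb
+
+ duckdb.sql("""
+     SELECT
+         round(avg(length(text)))    AS avg_chars,
+         round(median(length(text))) AS median_chars,
+         max(length(text))           AS max_chars,
+         round(100.0 * avg(CASE WHEN script = 'devanagari' THEN 1 ELSE 0 END), 1) AS pct_devanagari
+     FROM read_parquet('nepali_corpus_full.parquet')
+ """).show()
+ ```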
+
+ ### Coverage
+ - **Unique sources:** 5 primary (IRIIS, Wikipedia, YouTube, Kantipur, Setopati) plus 3 others
+ - **Time span:** Circa 2016–2026 (mixed historical and current)
+ - **Geographic scope:** Nepal-centric; diaspora content included in YouTube
+
+ ## Maintenance & Updates
+
+ ### Incremental Updates
+ To add new data:
+ 1. Append new rows to the source CSV (e.g., `nepali_news.csv`)
+ 2. Re-run `scripts/merge.py`
+ 3. Output parquets are regenerated from scratch
+ 4. Run `scripts/publish.py` to sync to Hugging Face (a sketch of the upload step follows this list)
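+
+ The internals of `scripts/publish.py` are not documented here; the sketch below shows one way such a sync can be done with `huggingface_hub`. The repository ID is a placeholder.
+
+ ```python
+ from huggingface_hub import HfApi
+
+ api = HfApi()  # uses the token from `huggingface-cli login` or the HF_TOKEN env var
+ api.upload_file(
+     path_or_fileobj="nepali_corpus_full.parquet",
+     path_in_repo="nepali_corpus_full.parquet",
+     repo_id="your-username/nepali-text-corpus",  # placeholder dataset repo
+     repo_type="dataset",
+ )
+ ```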
+
+ ### Re-running the Pipeline
+ ```bash
+ cd /Users/ad/research/nepali-text
+ venv/bin/python scripts/merge.py
+ ```
+
+ ## Schema Reference
+
+ All parquet files use this schema:
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | text | string | UTF-8 Nepali text |
+ | source | string | Data source identifier |
+ | domain | string | Content type (formal, colloquial, encyclopedia, news) |
+ | script | string | Writing system (devanagari, latin, mixed) |
+ | lang | string | ISO 639-1 language code (always 'ne') |
+ | date_collected | string | ISO 8601 processing date |
+ | license | string | Source license (MIT, CC BY 4.0, CC BY-SA 4.0, source-dependent) |
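+
+ The schema of any published file can be checked against this table with DuckDB's `DESCRIBE`:
+
+ ```python
+ import duckdb
+
+ duckdb.sql("DESCRIBE SELECT * FROM read_parquet('nepali_corpus_formal.parquet')").show()
+ ```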
+
+ ## Known Limitations
+
+ 1. **YouTube content:** No content filtering; raw comments may contain offensive language
+ 2. **News licensing:** Publisher permissions uncertain; use cautiously in commercial settings
+ 3. **Script detection:** Simple regex-based; mixed-language text occasionally misclassified
+ 4. **Deduplication:** No semantic deduplication; similar paraphrases retained
+ 5. **Temporal bias:** Majority of data from 2020–2026; pre-2020 IRIIS content underrepresented
+
+ ## Future Improvements
+
+ - [ ] Semantic deduplication using embeddings
+ - [ ] Fine-grained toxicity filtering for colloquial subset
+ - [ ] Add date ranges per source for temporal filtering
+ - [ ] Multilingual metadata (code-mixed Hindi, English)
+ - [ ] Validation splits for supervised task benchmarking
+
+ ---
+
+ **Last Updated:** April 2, 2026
+ **Pipeline Version:** 1.0
+ **Maintainer:** Boredoom17