Update dataset card (README.md)
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*.parquet
tags:
- simple-wikipedia
- wikipedia
- markdown
- sqlite
---
|
| 20 |
|
| 21 |
# Simple English Wikipedia (Markdown)
|
| 22 |
|
| 23 |
+
Recurring weekly snapshot of Simple English Wikipedia (https://simple.wikipedia.org/), which uses shorter sentences and limited vocabulary compared to the main English Wikipedia. This makes it smaller, easier to parse, and better suited for on-device or bandwidth‑constrained assistants while still covering broad general knowledge. Ideal as an offline Wikipedia MCP server backing a household AI assistant.
|
| 24 |
+
|
| 25 |
- Dump date: 2025-12-01
|
| 26 |
- Source dump: https://dumps.wikimedia.org/simplewiki/20251201/simplewiki-20251201-pages-articles.xml.bz2
|
| 27 |
- SHA-1: ee583946e86857e9f1e155f80bd3cd8b5d6dade7
|
| 28 |
+
- Records: 1000
|
| 29 |
- Refresh cadence: Weekly on Sundays at 11:00 UTC
|
| 30 |
|
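The published SHA-1 can be checked after downloading the dump. A minimal sketch using only the standard library (the local filename is whatever you saved the dump as):

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Stream the file through SHA-1 so multi-GB dumps fit in constant memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published above, e.g.:
# sha1_of_file("simplewiki-20251201-pages-articles.xml.bz2") \
#     == "ee583946e86857e9f1e155f80bd3cd8b5d6dade7"
```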

## Dataset Structure

## Processing

- Downloaded the `pages-articles` XML dump and verified its SHA-1.
- Kept only namespace 0 articles, skipped redirects, and dropped titles beginning with "List of".
- Stripped template/ref/gallery blocks and file/category links; converted headings, lists, tables, and internal/external links to Markdown with page-ID targets.
- Stored a SQLite mirror (`pages` table) alongside the Hugging Face dataset.
- Markdown links point to the target page's numeric ID, enabling fast lookups without a title-to-ID join.
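The link-conversion step above can be sketched roughly as follows. Both `title_to_id` and the regex are illustrative stand-ins for the actual converter, which the card does not publish:

```python
import re

# Hypothetical title -> page_id map, built while scanning the dump.
title_to_id = {"Solar System": 144, "Planet": 217}

WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def wikilinks_to_markdown(text):
    """Rewrite [[Target]] / [[Target|label]] as [label](page_id), per the card's link scheme."""
    def repl(m):
        target, label = m.group(1), m.group(2) or m.group(1)
        page_id = title_to_id.get(target)
        # Unknown targets lose the link but keep their visible label.
        return f"[{label}]({page_id})" if page_id is not None else label
    return WIKILINK.sub(repl, text)

print(wikilinks_to_markdown("The [[Solar System|solar system]] has eight [[Planet|planets]]."))
# → The [solar system](144) has eight [planets](217).
```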

```python
from datasets import load_dataset

ds = load_dataset("juno-labs/simple_wikipedia", split="train")
print(ds[0])
```
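Each row also carries the `importance` label described under Categorization, which supports simple offline-priority filtering. A minimal sketch, with illustrative rows standing in for the loaded dataset:

```python
from collections import Counter

# Illustrative rows following the card's schema (real rows come from load_dataset above).
rows = [
    {"page_id": 1, "title": "Water", "importance": "high"},
    {"page_id": 2, "title": "Water polo", "importance": "low"},
    {"page_id": 3, "title": "Weather", "importance": "high"},
]

high_priority = [r["title"] for r in rows if r["importance"] == "high"]
print(Counter(r["importance"] for r in rows))  # Counter({'high': 2, 'low': 1})
print(high_priority)                           # ['Water', 'Weather']
```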

SQLite usage (`simplewiki.sqlite` mirrors the same columns):

```bash
sqlite3 simplewiki.sqlite "SELECT page_id, title, substr(content,1,200) || '...' FROM pages LIMIT 5;"
```

You can also query it from code:

```python
import sqlite3

conn = sqlite3.connect("simplewiki.sqlite")
cur = conn.cursor()
for row in cur.execute("SELECT title, content FROM pages WHERE page_id = ?", (7553,)):
    print(row)
conn.close()
```
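Before downloading the mirror, the `pages` table can be mocked in memory to try out queries; the column set below is assumed from the card's schema, not taken from the actual database file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed column layout, mirroring the card's features (bools stored as 0/1).
conn.execute(
    "CREATE TABLE pages (page_id INTEGER PRIMARY KEY, title TEXT, content TEXT, "
    "content_no_link TEXT, importance TEXT, truncated INTEGER, error INTEGER)"
)
conn.execute(
    "INSERT INTO pages VALUES (7553, 'Example', '# Example' || char(10) || 'Body.', "
    "'Body.', 'low', 0, 0)"
)
# Same shape of query as the CLI example above.
row = conn.execute("SELECT title FROM pages WHERE page_id = ?", (7553,)).fetchone()
print(row)  # ('Example',)
conn.close()
```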

## Categorization

Model: openai/gpt-5-mini

Prompt template:

```
You are an expert product manager for a household smart-speaker AI in the United States. Classify how important (to one of: "low", "medium", or "high") it is to store this article offline for day-to-day user queries. Respond with JSON containing only a "label" field set to one of: "low", "medium", "high".
```
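The prompt asks the model for JSON with a single "label" field. A defensive parser for such replies might look like this; the fallback-to-"low" policy is an assumption for illustration, not part of the published pipeline:

```python
import json

VALID_LABELS = {"low", "medium", "high"}

def parse_label(reply: str) -> str:
    """Extract the 'label' field from a classifier reply; fall back to 'low' on bad output."""
    try:
        label = json.loads(reply).get("label")
    except (json.JSONDecodeError, AttributeError):
        # Not JSON, or JSON that is not an object.
        return "low"
    return label if label in VALID_LABELS else "low"

print(parse_label('{"label": "high"}'))  # high
print(parse_label("not json"))           # low
```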