Update dataset card
README.md
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*.parquet
tags:
- simple-wikipedia
- wikipedia
- markdown
- sqlite
---

# Simple English Wikipedia (Markdown)

Recurring weekly snapshot of Simple English Wikipedia (https://simple.wikipedia.org/), which uses shorter sentences and a more limited vocabulary than the main English Wikipedia. This makes it smaller, easier to parse, and better suited for on-device or bandwidth-constrained assistants while still covering broad general knowledge. It works well as the content store for an offline Wikipedia MCP server backing a household AI assistant.

- Dump date: 2025-12-20
- Source dump: https://dumps.wikimedia.org/simplewiki/20251220/simplewiki-20251220-pages-articles.xml.bz2
- SHA-1: 4a236585535cb0516c0009f99102133b2a578b21
- Records: 274447
- Refresh cadence: Weekly on Sundays at 11:00 UTC

## Dataset Structure

Columns:
- `page_id` (int64): Unique page identifier from the Wikimedia dump.
- `title` (string): Article title.
- `content` (string): Article body converted to Markdown with internal and external links preserved.
- `content_no_link` (string): The same content with Markdown links stripped to plain text.
- `importance` (string): Importance for a household smart-speaker assistant (`low`, `medium`, `high`, or `unknown` when not categorized).
- `truncated` (bool): `true` when the source article exceeded 40,000 characters; in that case only the first two paragraphs are parsed and stored. Otherwise `false`.
- `error` (bool): `true` when the article could not be parsed (the content fields are empty in that case); otherwise `false`.

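The two boolean flags make it easy to drop rows that did not parse cleanly. A minimal sketch, using hypothetical sample rows that mirror the column schema (the field values are invented for illustration, not taken from the dataset):

```python
# Hypothetical sample rows mirroring the column schema; values are invented
# for illustration, not taken from the dataset.
rows = [
    {"page_id": 1, "title": "April", "content": "**April** is a month.",
     "content_no_link": "April is a month.", "importance": "unknown",
     "truncated": False, "error": False},
    {"page_id": 2, "title": "Unparsed page", "content": "",
     "content_no_link": "", "importance": "unknown",
     "truncated": False, "error": True},
]

def usable(row: dict) -> bool:
    # Keep only rows that parsed cleanly and were not cut short.
    return not row["error"] and not row["truncated"]

clean = [r for r in rows if usable(r)]
```

With the Hugging Face split loaded, the same predicate can be passed to `ds.filter(usable)`.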
## Processing

- Downloaded the `pages-articles` XML dump and verified its SHA-1 checksum.
- Kept only namespace 0 articles, skipped redirects, and dropped titles beginning with “List of”.
- Stripped template/ref/gallery blocks and file/category links; converted headings, lists, tables, and internal/external links to Markdown with page IDs.
- Stored a SQLite mirror (`pages` table) alongside the Hugging Face dataset.
- Markdown links point to the target page's numeric ID for fast lookup without a title-to-ID join.
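The last two bullets can be sketched as follows. Assuming internal links take the usual Markdown shape with a numeric page-ID target (e.g. `[Australia](27)` — the exact target format is an assumption here), flattening them back to plain text, as done for `content_no_link`, is a one-line regex:

```python
import re

# Matches markdown links like [Australia](27) or [WMF](https://example.org).
# The numeric-ID target is an assumed shape, per the processing notes above.
LINK = re.compile(r"\[([^\]]*)\]\([^)]+\)")

def strip_links(markdown: str) -> str:
    # Flatten markdown links to their display text (content_no_link style).
    return LINK.sub(r"\1", markdown)
```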

## Usage

```python
from datasets import load_dataset

ds = load_dataset("juno-labs/simple_wikipedia", split="train")
print(ds[0])
```

SQLite usage (`simplewiki.sqlite` mirrors the same columns):

```bash
sqlite3 simplewiki.sqlite "SELECT page_id, title, substr(content,1,200) || '...' FROM pages LIMIT 5;"
```

You can also mount it in code:
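The mounting snippet itself is elided in this view; a minimal sketch using Python's built-in `sqlite3` module (table and column names as documented above; the path to the downloaded `simplewiki.sqlite` is an assumption):

```python
import sqlite3

def open_wiki(path: str = "simplewiki.sqlite") -> sqlite3.Connection:
    # Open the SQLite mirror with dict-style row access.
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row
    return conn

def lookup(conn: sqlite3.Connection, title: str):
    # Fetch one page by exact title; returns None when the title is absent.
    row = conn.execute(
        "SELECT page_id, title, content FROM pages WHERE title = ?",
        (title,),
    ).fetchone()
    return dict(row) if row else None

# Example (assumes the file sits next to the script):
# page = lookup(open_wiki(), "Australia")
```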

Importance labels indicate how useful an article is for day-to-day offline household use.

- Model: openai/gpt-oss-120b
- Distribution:
  - low: 2.23%
  - medium: 5.15%
  - high: 1.19%
  - unknown: 91.43%

Prompt template:
```