# TreeCorpus

TreeCorpus is a comprehensive, structured dataset derived from the latest Wikipedia dumps, processed to serve as high-quality training data for conversational AI models. It transforms Wikipedia's encyclopedic knowledge into a format optimized for natural language understanding and generation tasks.

## Dataset Statistics

- **Size**: 26.27 GB (26,272,580,250 bytes)
- **Examples**: 2,882,766 articles
- **Download Size**: 13.33 GB (13,326,529,312 bytes)
- **Language**: English

## Data Structure

Each entry in the dataset contains the following fields (a short loading example follows the list):

- `id` (string): Unique Wikipedia article identifier
- `title` (string): Article title
- `text` (string): Clean, processed text content
- `url` (string): Source Wikipedia URL
- `timestamp` (string): Processing timestamp
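
For illustration, here is a minimal sketch of reading these fields with the Hugging Face `datasets` library. The repo id `username/TreeCorpus` is a placeholder, not the dataset's actual Hub path.

```python
from datasets import load_dataset

# Stream the train split so the full 26 GB dataset is not downloaded up front.
# NOTE: "username/TreeCorpus" is a placeholder repo id, not the real one.
ds = load_dataset("username/TreeCorpus", split="train", streaming=True)

example = next(iter(ds))
print(example["id"])          # unique Wikipedia article identifier (string)
print(example["title"])       # article title
print(example["text"][:200])  # first 200 characters of the cleaned article text
print(example["url"])         # source Wikipedia URL
print(example["timestamp"])   # processing timestamp
```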

## Key Features

- **Clean, Structured Content**: Meticulously processed to remove markup, templates, references, and other non-content elements while preserving the informational value of Wikipedia articles.
- **Rich Metadata**: Each entry includes the article ID, title, clean text content, source URL, and processing timestamp.
- **Comprehensive Coverage**: Spans nearly 3 million articles across the full breadth of Wikipedia's knowledge base.
- **Conversational Optimization**: Content is processed specifically to support the training of dialogue systems, conversational agents, and knowledge-grounded language models.
- **Regular Updates**: Built from the latest Wikipedia dumps to ensure current information.

## Usage

This dataset is ideal for:

- Training large language models that require broad knowledge bases
- Fine-tuning conversational agents for knowledge-intensive tasks
- Building question-answering systems that need factual grounding
- Research in knowledge representation and retrieval in natural language

## License and Citation

TreeCorpus is derived from Wikipedia content, which is available under the CC BY-SA 3.0 license. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.

## Dataset Configuration

The dataset is configured with a single default split (a loading sketch follows the list):

- Split name: `train`
- Data files pattern: `data/train-*`
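
Given that pattern, the shards can presumably also be loaded directly. A hedged sketch, assuming the files under `data/` are Parquet shards; the `.parquet` extension is an assumption based on the naming pattern, not confirmed by this card:

```python
from datasets import load_dataset

# Load shards matching the documented data/train-* pattern into the train split.
# The .parquet extension is an assumption; adjust if the shards use another format.
ds = load_dataset(
    "parquet",
    data_files={"train": "data/train-*.parquet"},
    split="train",
)
print(ds.num_rows)  # expected to be 2,882,766 per the statistics above
```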

## Creation Process

TreeCorpus was created using a specialized pipeline that:

1. Downloads the latest Wikipedia dumps
2. Processes the XML content to extract articles
3. Cleans and standardizes the text by removing markup, templates, and non-content elements (see the sketch after this list)
4. Structures the data in a consistent, machine-readable format
5. Filters out redirects, stubs, and non-article content
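
To make steps 3 and 5 concrete, here is a simplified, illustrative sketch in Python. Every regex and threshold below is an assumption for demonstration; the actual pipeline's rules are more thorough.

```python
import re

def clean_wikitext(text: str) -> str:
    """Step 3 (sketch): strip common wiki markup, keeping the article prose."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)  # drop non-nested {{templates}}
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.S)  # drop <ref> citations
    text = re.sub(r"<[^>]+>", "", text)  # drop remaining HTML-style tags
    # Keep link labels, drop targets: [[target|label]] -> label, [[page]] -> page
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
    text = re.sub(r"'{2,}", "", text)  # drop bold/italic quote markup
    return re.sub(r"\n{3,}", "\n\n", text).strip()

def is_article(title: str, text: str) -> bool:
    """Step 5 (sketch): filter out redirects, stubs, and non-article pages."""
    if text.lstrip().lower().startswith("#redirect"):
        return False
    if ":" in title:  # crude namespace check, e.g. "Talk:" or "Category:" pages
        return False
    return len(text) >= 500  # crude stub threshold; an assumed cutoff
```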

For more details on the methodology and processing pipeline, please see the accompanying code documentation.