---
language:
- en
license: cc-by-sa-3.0
tags:
- treecorpus
- wikipedia
- encyclopedia
- knowledge-base
- factual-knowledge
- training-data
- conversational-ai
- nlp
- language-model
- text-corpus
- qa-dataset
- structured-data
- large-scale
pretty_name: 'TreeCorpus: Wikipedia Knowledge for AI Models'
size_categories:
- 1M<n<10M
---

# TreeCorpus

TreeCorpus is a comprehensive, structured dataset derived from the latest English Wikipedia dumps and processed to serve as high-quality training data for conversational AI models. It transforms Wikipedia's encyclopedic knowledge into a format optimized for natural language understanding and generation tasks.

## Dataset Statistics

- **Size**: 26.27 GB (26,272,580,250 bytes)
- **Examples**: 2,882,766 articles
- **Download Size**: 13.33 GB (13,326,529,312 bytes)
- **Language**: English

## Data Structure

Each entry in the dataset contains:
- `id` (string): Unique Wikipedia article identifier
- `title` (string): Article title
- `text` (string): Clean, processed text content
- `url` (string): Source Wikipedia URL
- `timestamp` (string): Processing timestamp
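
For a quick look at the schema, records can be inspected with the Hugging Face `datasets` library. This is a minimal sketch; `username/TreeCorpus` is a placeholder repository id, so substitute the dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Load the train split ("username/TreeCorpus" is a placeholder repo id).
ds = load_dataset("username/TreeCorpus", split="train")

record = ds[0]
print(record["id"], record["title"], record["url"], record["timestamp"])
print(record["text"][:200])  # first 200 characters of the cleaned article text
```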

## Key Features

- **Clean, Structured Content**: Meticulously processed to remove markup, templates, references, and other non-content elements while preserving the informational value of Wikipedia articles.
- **Rich Metadata**: Each entry includes article ID, title, clean text content, source URL, and timestamp.
- **Comprehensive Coverage**: Incorporates the full breadth of Wikipedia's knowledge base, spanning nearly 3 million articles.
- **Conversational Optimization**: Content is processed specifically to support training of dialogue systems, conversational agents, and knowledge-grounded language models.
- **Regular Updates**: Built from the latest Wikipedia dumps to ensure current information.

## Usage

This dataset is ideal for:
- Training large language models requiring broad knowledge bases
- Fine-tuning conversational agents for knowledge-intensive tasks
- Grounding question-answering systems in factual content
- Supporting research on knowledge representation and retrieval in natural language
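
Because the full dataset is roughly 26 GB on disk, streaming is often preferable for training loops. A minimal sketch using the `datasets` streaming mode (repository id again a placeholder):

```python
from datasets import load_dataset

# Stream records lazily instead of downloading all shards up front.
stream = load_dataset("username/TreeCorpus", split="train", streaming=True)

# Peek at a few article titles from the stream.
for example in stream.take(3):
    print(example["title"])
```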

## License and Citation

TreeCorpus is derived from Wikipedia content available under the CC BY-SA 3.0 license. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.

## Dataset Configuration

The dataset is configured with a default split:
- Split name: train
- Data files pattern: data/train-*
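
The shards can also be loaded directly from that file pattern. This sketch assumes the shards are Parquet files (the usual format behind a `data/train-*` pattern); adjust the builder name if the format differs:

```python
from datasets import load_dataset

# Resolve the declared shard pattern for the default "train" split.
ds = load_dataset("parquet", data_files={"train": "data/train-*"})["train"]
```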

## Creation Process

TreeCorpus was created using a specialized pipeline that:
1. Downloads the latest Wikipedia dumps
2. Processes XML content to extract articles
3. Cleans and standardizes text by removing markup, templates, and non-content elements (see the sketch below)
4. Structures data in a consistent, machine-readable format
5. Filters out redirects, stubs, and non-article content
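
The pipeline code itself is not reproduced here, but steps 3 and 5 can be approximated with `mwparserfromhell`, a widely used wikitext parser. This is an illustrative sketch, not the actual pipeline; the 500-character stub threshold is an assumption:

```python
from typing import Optional

import mwparserfromhell

def clean_article(wikitext: str, min_chars: int = 500) -> Optional[str]:
    """Strip wikitext markup; return None for pages that should be filtered out."""
    # Step 5: redirects are not articles.
    if wikitext.lstrip().upper().startswith("#REDIRECT"):
        return None
    # Step 3: strip templates, references, and other markup, keeping plain text.
    text = mwparserfromhell.parse(wikitext).strip_code().strip()
    # Step 5 (continued): drop stubs below an illustrative length threshold.
    if len(text) < min_chars:
        return None
    return text
```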

For more details on the methodology and processing pipeline, please see the accompanying code documentation.