# TreeCorpus

TreeCorpus is a comprehensive, structured dataset derived from the latest Wikipedia dumps, specially processed to serve as high-quality training data for conversational AI models. This dataset transforms Wikipedia's encyclopedic knowledge into a format optimized for natural language understanding and generation tasks.

## Key Features

- **Clean, Structured Content**: Meticulously processed to remove markup, templates, references, and other non-content elements while preserving the informational value of Wikipedia articles.
- **Rich Metadata**: Each entry includes article ID, title, clean text content, source URL, and timestamp.
- **Comprehensive Coverage**: Incorporates the full breadth of Wikipedia's knowledge base, spanning roughly 2.9 million articles across a wide range of topics.
- **Conversational Optimization**: Content is processed specifically to support training of dialogue systems, conversational agents, and knowledge-grounded language models.
- **Regular Updates**: Built from the latest Wikipedia dumps to ensure current information.

## Dataset Structure

Each entry in the dataset contains:
- `id`: Unique Wikipedia article identifier
- `title`: Article title
- `text`: Clean, processed text content
- `url`: Source Wikipedia URL
- `timestamp`: Processing timestamp
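As a concrete illustration, a record can be checked against this schema with plain Python. The field values below are invented; only the field names and their string types come from the schema above:

```python
# Illustrative record in the TreeCorpus shape; the values are made up,
# only the field names and string dtypes follow the schema above.
sample = {
    "id": "12",
    "title": "Anarchism",
    "text": "Anarchism is a political philosophy and movement ...",
    "url": "https://en.wikipedia.org/wiki/Anarchism",
    "timestamp": "2025-01-01T00:00:00Z",
}

EXPECTED_FIELDS = {"id", "title", "text", "url", "timestamp"}

def validate_record(record: dict) -> bool:
    """True if the record has exactly the expected fields, each a string
    (all TreeCorpus features are declared as dtype: string)."""
    return set(record) == EXPECTED_FIELDS and all(
        isinstance(v, str) for v in record.values()
    )

print(validate_record(sample))  # True
```

A check like this is a cheap guard before feeding records into a training pipeline.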

## Usage

This dataset is ideal for:
- Training large language models requiring broad knowledge bases
- Fine-tuning conversational agents for knowledge-intensive tasks
- Question-answering systems that need factual grounding
- Research in knowledge representation and retrieval in natural language
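For the question-answering use case, a knowledge-grounded system typically retrieves candidate passages before handing them to a model. A minimal, dependency-free sketch using naive term overlap (the two toy records are invented; a real system would index the full corpus with a proper retriever):

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query: str, passage: str) -> int:
    """Count occurrences of query terms in the passage (naive overlap)."""
    q = set(tokenize(query))
    p = Counter(tokenize(passage))
    return sum(p[t] for t in q)

# Toy in-memory "corpus" with the TreeCorpus record shape (values invented).
corpus = [
    {"title": "Photosynthesis",
     "text": "Photosynthesis converts light energy into chemical energy in plants."},
    {"title": "Mitosis",
     "text": "Mitosis is a part of the cell cycle in which chromosomes are separated."},
]

def retrieve(query: str) -> dict:
    """Return the highest-scoring record for the query."""
    return max(corpus, key=lambda rec: score(query, rec["text"]))

print(retrieve("How do plants convert light into energy?")["title"])  # Photosynthesis
```

In practice one would swap the overlap score for BM25 or dense embeddings, but the record schema stays the same.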

## Citation and License

TreeCorpus is derived from Wikipedia content available under the CC BY-SA 3.0 license, and the dataset is distributed under the same license. When using it, please attribute both this dataset and Wikipedia, and note that the ShareAlike terms apply to derivative works.

## Creation Process

TreeCorpus was created using a specialized pipeline that:
1. Downloads the latest Wikipedia dumps
2. Processes XML content to extract articles
3. Cleans and standardizes text by removing markup, templates, and non-content elements
4. Structures data in a consistent, machine-readable format
5. Filters out redirects, stubs, and non-article content
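The actual pipeline code is not included here, but the cleaning step (3) can be sketched with regular expressions. This is a deliberately simplified illustration: real wikitext has nested templates, tables, and many more constructs than these patterns cover.

```python
import re

def clean_wikitext(text: str) -> str:
    """Very simplified wikitext cleaner, for illustration only.
    Handles non-nested templates, <ref> tags, wiki links, and quote markup."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                      # {{templates}} (non-nested)
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.S)     # <ref>...</ref>
    text = re.sub(r"<ref[^>]*/>", "", text)                         # self-closing <ref/>
    text = re.sub(r"\[\[(?:[^\]|]*\|)?([^\]]*)\]\]", r"\1", text)   # [[target|label]] -> label
    text = re.sub(r"'{2,}", "", text)                               # ''italic'' / '''bold'''
    return re.sub(r"[ \t]+", " ", text).strip()                     # collapse whitespace

raw = ("'''Earth'''{{short description|Third planet}} is the third planet "
       "from the [[Sun]].<ref>NASA</ref>")
print(clean_wikitext(raw))  # Earth is the third planet from the Sun.
```

Production pipelines typically use a dedicated wikitext parser rather than regexes, since templates nest arbitrarily.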

For more details on the methodology and processing pipeline, please see the accompanying documentation.

## Dataset Card Metadata

```yaml
---
license: cc-by-sa-3.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: timestamp
    dtype: string
  splits:
  - name: train
    num_bytes: 26272580250
    num_examples: 2882766
  download_size: 13326529312
  dataset_size: 26272580250
language:
- en
tags:
- treecorpus
- wikipedia
- encyclopedia
- knowledge-base
- factual-knowledge
- training-data
pretty_name: 'TreeCorpus: Wikipedia Knowledge for AI Models'
---
```