akkiisfrommars committed
Commit f25865c · verified · 1 Parent(s): 7b2294b

Update README.md

Files changed (1):
  1. README.md +56 -36
README.md CHANGED
@@ -1,36 +1,56 @@
- ---
- license: cc-by-sa-3.0
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   - name: url
-     dtype: string
-   - name: timestamp
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 26272580250
-     num_examples: 2882766
-   download_size: 13326529312
-   dataset_size: 26272580250
- language:
- - en
- tags:
- - treecorpus
- - wikipedia
- - encyclopedia
- - knowledge-base
- - factual-knowledge
- - training-data
- pretty_name: 'TreeCorpus: Wikipedia Knowledge for AI Models'
- ---
+ # TreeCorpus
+
+ TreeCorpus is a comprehensive, structured dataset derived from the latest Wikipedia dumps, specially processed to serve as high-quality training data for conversational AI models. It transforms Wikipedia's encyclopedic knowledge into a format optimized for natural language understanding and generation tasks.
+
+ ## Dataset Statistics
+
+ - **Size**: 26.27 GB (26,272,580,250 bytes)
+ - **Examples**: 2,882,766 articles
+ - **Download Size**: 13.33 GB (13,326,529,312 bytes)
+ - **Language**: English
+
+ ## Data Structure
+
+ Each entry in the dataset contains:
+ - `id` (string): Unique Wikipedia article identifier
+ - `title` (string): Article title
+ - `text` (string): Clean, processed text content
+ - `url` (string): Source Wikipedia URL
+ - `timestamp` (string): Processing timestamp
+
+ ## Key Features
+
+ - **Clean, Structured Content**: Meticulously processed to remove markup, templates, references, and other non-content elements while preserving the informational value of Wikipedia articles.
+ - **Rich Metadata**: Each entry includes article ID, title, clean text content, source URL, and timestamp.
+ - **Comprehensive Coverage**: Incorporates the full spectrum of Wikipedia's knowledge base, spanning nearly 3 million articles across countless topics.
+ - **Conversational Optimization**: Content is processed specifically to support training of dialogue systems, conversational agents, and knowledge-grounded language models.
+ - **Regular Updates**: Built from the latest Wikipedia dumps to ensure current information.
+
+ ## Usage
+
+ This dataset is ideal for:
+ - Training large language models that require broad knowledge bases
+ - Fine-tuning conversational agents for knowledge-intensive tasks
+ - Question-answering systems that need factual grounding
+ - Research in knowledge representation and retrieval in natural language
+
+ ## License and Citation
+
+ TreeCorpus is derived from Wikipedia content available under the CC BY-SA 3.0 license. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.
+
+ ## Dataset Configuration
+
+ The dataset is configured with a default split:
+ - Split name: train
+ - Data files pattern: data/train-*
+
+ ## Creation Process
+
+ TreeCorpus was created using a specialized pipeline that:
+ 1. Downloads the latest Wikipedia dumps
+ 2. Processes XML content to extract articles
+ 3. Cleans and standardizes text by removing markup, templates, and non-content elements
+ 4. Structures data in a consistent, machine-readable format
+ 5. Filters out redirects, stubs, and non-article content
+
+ For more details on the methodology and processing pipeline, please see the accompanying code documentation.
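Steps 3 and 5 of the pipeline above can be sketched in Python. The helpers below (`strip_markup`, `is_article`, and the `min_chars` stub threshold) are hypothetical simplifications for illustration, not the actual TreeCorpus pipeline code:

```python
import re

def strip_markup(wikitext: str) -> str:
    """Remove common wiki markup while keeping the article text (step 3)."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", wikitext)                 # non-nested {{templates}}
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.S)    # <ref>...</ref> citations
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)  # [[target|label]] links
    text = re.sub(r"'{2,}", "", text)                              # '''bold''' / ''italic''
    return re.sub(r"[ \t]+", " ", text).strip()

def is_article(title: str, text: str, min_chars: int = 200) -> bool:
    """Drop redirects, very short stubs, and non-article namespaces (step 5)."""
    if text.lstrip().lower().startswith("#redirect"):
        return False
    if ":" in title:  # e.g. "Talk:..." or "Template:..." pages
        return False
    return len(text) >= min_chars

print(strip_markup("'''Paris''' is the capital of [[France]].{{Infobox}}<ref>c</ref>"))
# Paris is the capital of France.
```

A production pipeline would additionally handle nested templates, tables, and self-closing `<ref />` tags; this sketch only shows the shape of the cleaning and filtering stages.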