toolevalxm committed on
Commit 24fadb1 · verified · 1 Parent(s): 1458668

Upload folder using huggingface_hub

Files changed (6):
  1. README.md +108 -0
  2. data.parquet +3 -0
  3. figures/fig1.png +3 -0
  4. figures/fig2.png +3 -0
  5. figures/fig3.png +3 -0
  6. metadata.json +6 -0
README.md ADDED
@@ -0,0 +1,108 @@
---
license: apache-2.0
task_categories:
- text-classification
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
# CuratedTextCorpus
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="figures/fig1.png" width="60%" alt="CuratedTextCorpus" />
</div>
<hr>

<div align="center" style="line-height: 1;">
  <a href="LICENSE" style="margin: 2px;">
    <img alt="License" src="figures/fig2.png" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## 1. Introduction

CuratedTextCorpus is a high-quality English text dataset for NLP, assembled through a multi-stage curation and validation pipeline. It is intended for text classification, question answering, and general language understanding tasks.

<p align="center">
  <img width="80%" src="figures/fig3.png">
</p>

Compared to previous versions, this release shows significant improvements in data-quality metrics. For instance, the deduplication rate has improved from 85% to 99.2%, an advance that stems from an enhanced preprocessing pipeline which now applies semantic-similarity checks in addition to exact matching.

Beyond improved deduplication, this version also offers reduced noise levels, better annotation consistency, and broader domain coverage.
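The exact-matching stage of a deduplication pipeline like the one described above can be sketched with content hashing. This is a minimal illustration under our own assumptions, not the project's actual pipeline, and the semantic-similarity stage (which would typically compare embeddings) is omitted:

```python
import hashlib

def dedupe_exact(texts):
    """Drop exact duplicates, after lowercasing and collapsing
    whitespace, by hashing each document. First occurrence wins."""
    seen = set()
    unique = []
    for text in texts:
        normalized = " ".join(text.lower().split())
        key = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique

# "hello   World" normalizes to the same key as "Hello world" and is dropped.
docs = ["Hello world", "hello   World", "Goodbye"]
unique_docs = dedupe_exact(docs)
```

Hashing normalized content keeps memory at one digest per unique document, which matters at the corpus's 100K–1M scale.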

## 2. Quality Metrics

### Comprehensive Quality Assessment

<div align="center">

| | Metric | Baseline | v1.0 | v2.0 | CuratedTextCorpus |
|---|---|---|---|---|---|
| **Data Completeness** | Completeness | 0.821 | 0.855 | 0.871 | 0.877 |
| | Consistency | 0.756 | 0.782 | 0.801 | 0.806 |
| | Accuracy | 0.689 | 0.721 | 0.745 | 0.751 |
| **Data Validity** | Validity | 0.812 | 0.834 | 0.856 | 0.861 |
| | Uniqueness | 0.901 | 0.925 | 0.941 | 0.945 |
| | Timeliness | 0.667 | 0.698 | 0.721 | 0.727 |
| **Data Integrity** | Integrity | 0.778 | 0.801 | 0.823 | 0.828 |
| | Relevance | 0.712 | 0.738 | 0.761 | 0.766 |
| | Coverage | 0.645 | 0.678 | 0.702 | 0.708 |
| **Additional Metrics** | Conformity | 0.834 | 0.856 | 0.878 | 0.883 |
| | Precision | 0.723 | 0.751 | 0.776 | 0.782 |
| | Reliability | 0.789 | 0.812 | 0.834 | 0.839 |

</div>

### Overall Quality Summary
CuratedTextCorpus scores highest on every evaluated metric, with particularly strong results in the completeness and integrity assessments.
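The table's gains can be read as relative improvements over the baseline. A small sketch, using three metric rows copied verbatim from the table above (the metric selection is ours, purely for illustration):

```python
# Quality scores from the assessment table: (Baseline, CuratedTextCorpus).
scores = {
    "Completeness": (0.821, 0.877),
    "Uniqueness": (0.901, 0.945),
    "Coverage": (0.645, 0.708),
}

def relative_improvement(metric):
    """Relative gain of CuratedTextCorpus over the baseline score."""
    baseline, curated = scores[metric]
    return (curated - baseline) / baseline

for metric in scores:
    print(f"{metric}: +{relative_improvement(metric):.1%}")
```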

## 3. Data Access & API
We provide direct access to the dataset through our data portal. Please check the official documentation for API access details.

## 4. How to Use

Please refer to our documentation for information on loading and using CuratedTextCorpus.

Usage recommendations for CuratedTextCorpus:

1. Preprocessing scripts are included for common NLP tasks.
2. Balanced sampling utilities are available for imbalanced labels.

The data format follows standard Hugging Face `datasets` conventions, with train/validation/test splits.
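The balanced-sampling recommendation above can be sketched in plain Python. The dataset's bundled utilities are not shown in this card, so the following is an illustrative down-sampling approach (our own, not the shipped implementation), operating on records shaped like the dataset's `text`/`label` fields:

```python
import random
from collections import defaultdict

def balance_by_label(examples, seed=0):
    """Down-sample every class to the size of the rarest class.

    `examples` is a list of dicts with a `label` key; the helper and
    its name are illustrative, not part of the dataset's tooling.
    """
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    n = min(len(group) for group in by_label.values())
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, n))
    rng.shuffle(balanced)
    return balanced

# Toy usage: three positives and one negative -> one of each after balancing.
toy = [{"text": t, "label": l} for t, l in
       [("a", 1), ("b", 1), ("c", 1), ("d", 0)]]
balanced = balance_by_label(toy)
```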

### Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("username/CuratedTextCorpus")
```

### Data Fields
The dataset includes the following fields:
- `text`: The main text content
- `label`: Classification label (if applicable)
- `metadata`: Additional context information
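A record therefore looks like a dict carrying those three fields. A minimal validation sketch, where the field names come from the list above but the example values and the `is_valid_record` helper are invented for illustration:

```python
# Illustrative record using the documented fields; the values are
# made up for this example, not drawn from the dataset.
record = {
    "text": "The quick brown fox jumps over the lazy dog.",
    "label": 1,
    "metadata": {"source": "example", "lang": "en"},
}

REQUIRED_FIELDS = ("text", "label", "metadata")

def is_valid_record(rec):
    """True when a record carries every documented field and
    `text` is a non-empty string."""
    return (all(field in rec for field in REQUIRED_FIELDS)
            and isinstance(rec["text"], str)
            and bool(rec["text"].strip()))
```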
92
+ ### Recommended Preprocessing
93
+ ```python
94
+ from transformers import AutoTokenizer
95
+
96
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
97
+
98
+ def preprocess(examples):
99
+ return tokenizer(examples["text"], truncation=True, padding=True)
100
+
101
+ tokenized_dataset = dataset.map(preprocess, batched=True)
102
+ ```

## 5. License
This dataset is licensed under the [Apache 2.0 License](LICENSE), and use of CuratedTextCorpus is subject to its terms. Commercial use is permitted with attribution.

## 6. Contact
If you have any questions, please open an issue on the repository or contact us at data@curatedtextcorpus.ai.
data.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc92e46ed30b898d58466934cbfd070f9ed7bc672ef25f044b955fa0b1fc92c2
size 50000
figures/fig1.png ADDED

Git LFS Details

  • SHA256: 5ac9d3e7ac5b4d295be8ba5708d11b4549e6bceebfb5b1d3ab48a67efdf0ca52
  • Pointer size: 128 Bytes
  • Size of remote file: 238 Bytes
figures/fig2.png ADDED

Git LFS Details

  • SHA256: 4290320c963a7241ee081ea754943af0143cc1b70443ba628147d538a7b48024
  • Pointer size: 128 Bytes
  • Size of remote file: 237 Bytes
figures/fig3.png ADDED

Git LFS Details

  • SHA256: 115eec97f5876a5b639d58606d4b5b0da051030e97b09f8d83092eb1392cc6ee
  • Pointer size: 128 Bytes
  • Size of remote file: 237 Bytes
metadata.json ADDED
@@ -0,0 +1,6 @@
{
  "dataset_type": "text-corpus",
  "version": "v8",
  "num_samples": 562500,
  "format": "parquet"
}