Commit 7832a04 (verified) by lunarlonging · 1 parent: e5f7bc8

Update README.md

Files changed (1): README.md (+25 −22)
README.md CHANGED
@@ -2,65 +2,68 @@
 license: cc0-1.0
 ---
 
-# Dataset Card for OpenTextVault
+# Dataset Card for OpenTextVault_FromTheNetwork
 
 ## Dataset Details
 
 ### Dataset Description
 
-**OpenTextVault** is an open, legally compliant dataset consisting of raw text files (TXT format) and compressed archives containing multilingual, unlabeled textual data.
-The dataset is curated to provide high-quality, diverse textual content suitable for natural language processing tasks, including language model training, fine-tuning, and linguistic research.
-All data included in this dataset are obtained through legal and ethical means, ensuring no copyright infringement.
+**OpenTextVault_FromTheNetwork** is an open, legally compliant dataset consisting of raw, unlabeled text files (TXT format) and compressed archives containing multilingual textual data collected from various network sources.
+This dataset offers **high-quality, large-scale, and diverse raw textual content**, totaling **hundreds of billions of unlabeled pure text tokens**, suitable for natural language processing tasks such as training and fine-tuning large language models, as well as linguistic research.
+All included data is gathered through legal and ethical means, ensuring no copyright infringement.
 
-- **Curated by:** OpenTextVault
+- **Curated by:** OpenTextVault
 - **Language(s):** Multiple languages (depending on source)
 - **License:** CC0 1.0 Universal (Public Domain)
 
 ### Dataset Sources
 
-- Text content comes from publicly available or otherwise legally distributable sources.
-- Data is provided as plain text files or compressed archives for easy use.
+- Text data is sourced from publicly available or legally distributable network resources.
+- Data is provided in plain text files and/or compressed archives for easy access.
 
 ## Uses
 
 ### Direct Use
 
-- Pretraining or fine-tuning language models (including large language models).
+- Pretraining or fine-tuning language models, including large-scale models.
 - Unsupervised and self-supervised NLP research.
-- Linguistic and stylistic analysis.
+- Multilingual raw text analysis, including stylistic and diversity studies.
 
 ### Out-of-Scope Use
 
-- This dataset is **not** intended for supervised learning tasks requiring labeled data.
-- Users should ensure compliance with any local laws and ethical standards when using this dataset.
+- This dataset is **not** designed for supervised learning tasks requiring labeled or annotated data.
+- Users should ensure compliance with relevant legal and ethical standards when utilizing this dataset.
 
 ## Dataset Structure
 
-- The dataset contains folders of raw text files (.txt) and/or compressed archives (.zip, .tar.gz).
-- Each text file contains plain, unlabeled text data.
-- No additional annotations or metadata are provided.
+- Contains folders of raw text files (.txt) and compressed archives (.zip, .tar.gz).
+- Each text file holds plain, unlabeled raw textual data without manual annotations or metadata.
 
 ## Dataset Creation
 
 ### Curation Rationale
 
-The dataset was created to provide a free, open resource of raw textual data for NLP practitioners and researchers seeking diverse multilingual text for language model training without copyright concerns.
+The dataset was created to provide a **free, open, and legally compliant extremely large-scale resource of unlabeled raw textual data from network sources**, enabling researchers and practitioners to train and fine-tune language models without copyright concerns.
 
 ### Source Data
 
-- Texts were collected from sources verified to be legal and compliant with redistribution policies.
+- Collected from verified legal and publicly accessible network sources.
 
 ### Data Collection and Processing
 
-- Data is cleaned to remove corrupted files and obvious non-text content.
-- No manual labeling or annotation was performed.
+- Cleaned to remove corrupted files and non-text content.
+- Maintained in raw form without manual annotation or labeling.
 
 ## Bias, Risks, and Limitations
 
-- Since the dataset is unlabeled and raw, users should be aware of potential biases inherited from the original sources.
-- The dataset may contain unfiltered language reflecting real-world usage, including slang or offensive content.
+- As raw and unlabeled data, it may contain biases inherited from original network sources.
+- May include unfiltered language such as slang, dialects, or offensive expressions.
 
 ## Citation
 
-This dataset is released under CC0 1.0 Public Domain License.
-You are **not required** to cite or attribute this dataset in any use or publication.
+Released under the **CC0 1.0 Public Domain License**.
+No citation or attribution is required for use or publication.
+
+---
+
+**Note:** This dataset consists of **high-quality, extremely large-scale (hundreds of billions of tokens), and diverse unlabeled raw text from network sources**, making it highly suitable for **training or fine-tuning large language models**.
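
The structure the card describes (folders of plain `.txt` files plus `.zip` and `.tar.gz` archives) can be consumed with a short sketch like the following; the `iter_texts` helper and any directory layout it implies are illustrative assumptions, not part of the dataset itself:

```python
import tarfile
import zipfile
from pathlib import Path

def iter_texts(root):
    """Yield the decoded text of every .txt file under `root`,
    including those packed inside .zip and .tar.gz archives.
    (Illustrative helper; not shipped with the dataset.)"""
    root = Path(root)
    for path in sorted(root.rglob("*")):
        if path.suffix == ".txt":
            yield path.read_text(encoding="utf-8", errors="replace")
        elif path.suffix == ".zip":
            with zipfile.ZipFile(path) as zf:
                for name in zf.namelist():
                    if name.endswith(".txt"):
                        yield zf.read(name).decode("utf-8", errors="replace")
        elif path.name.endswith(".tar.gz"):
            with tarfile.open(path, "r:gz") as tf:
                for member in tf.getmembers():
                    if member.isfile() and member.name.endswith(".txt"):
                        f = tf.extractfile(member)
                        if f is not None:
                            yield f.read().decode("utf-8", errors="replace")
```

Because the files are raw and unlabeled, `errors="replace"` is used defensively in case any source file is not valid UTF-8.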