Languages: Chinese

Dataset Viewer
The dataset viewer is not available for this split.
Cannot extract the features (columns) for the split 'train' of the config 'default' of the dataset.
Error code:   FeaturesError
Exception:    UnicodeDecodeError
Message:      'utf-8' codec can't decode byte 0xa1 in position 0: invalid start byte
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 228, in compute_first_rows_from_streaming_response
                  iterable_dataset = iterable_dataset._resolve_features()
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 3496, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2257, in _head
                  return next(iter(self.iter(batch_size=n)))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2461, in iter
                  for key, example in iterator:
                                      ^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 1952, in __iter__
                  for key, pa_table in self._iter_arrow():
                                       ^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 1974, in _iter_arrow
                  yield from self.ex_iterable._iter_arrow()
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 503, in _iter_arrow
                  for key, pa_table in iterator:
                                       ^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 350, in _iter_arrow
                  for key, pa_table in self.generate_tables_fn(**gen_kwags):
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/text/text.py", line 73, in _generate_tables
                  batch = f.read(self.config.chunksize)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 813, in read_with_retries
                  out = read(*args, **kwargs)
                        ^^^^^^^^^^^^^^^^^^^^^
                File "<frozen codecs>", line 322, in decode
              UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa1 in position 0: invalid start byte
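The byte 0xa1 at position 0 indicates the files are not UTF-8. Given the Chinese-language tag, a plausible guess (not confirmed by the card) is that they are GBK/GB18030-encoded, where 0xa1 is a valid lead byte. A minimal sketch of the failure mode:

```python
# Assumption: the files are GB18030/GBK-encoded Chinese text. Bytes that
# are valid GB18030 lead bytes are often invalid UTF-8 start bytes.
text = "中文语料示例"
raw = text.encode("gb18030")

# Decoding as UTF-8 fails, reproducing the viewer's error class:
try:
    raw.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

# Decoding with the matching codec round-trips cleanly:
decoded = raw.decode("gb18030")
print(utf8_ok, decoded == text)  # → False True
```

If this guess is right, passing the text builder's `encoding` parameter (e.g. `load_dataset("text", data_files=..., encoding="gb18030")`) should load the files, and re-encoding the files as UTF-8 would also fix the viewer.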


Dataset Card for OpenTextVault_FromTheNetwork

Dataset Details

Dataset Description

OpenTextVault_FromTheNetwork is an open, legally compliant dataset of raw, unlabeled text files (TXT format) and compressed archives containing multilingual textual data collected from various network sources.
It offers high-quality, large-scale, and diverse raw textual content, totaling hundreds of billions of unlabeled plain-text tokens, suitable for natural language processing tasks such as pretraining and fine-tuning large language models, as well as for linguistic research.
All included data was gathered through legal and ethical means, with no copyright infringement.

  • Curated by: OpenTextVault
  • Language(s): Multiple languages (primarily Chinese, depending on source)
  • License: CC0 1.0 Universal (Public Domain)

Dataset Sources

  • Text data is sourced from publicly available or legally distributable network resources.
  • Data is provided in plain text files and/or compressed archives for easy access.

Highlights

This dataset contains an exceptionally large volume of high-quality unlabeled Chinese text, covering diverse sources such as news articles, literature, online forums, and social media discussions. It provides rich linguistic diversity and natural language structures that make it especially valuable for training and fine-tuning large-scale Chinese or multilingual language models. All content is legally and ethically collected, ensuring a clean and compliant foundation for open research and LLM development.

Uses

Direct Use

  • Pretraining or fine-tuning language models, including large-scale models.
  • Unsupervised and self-supervised NLP research.
  • Multilingual raw text analysis, including stylistic and diversity studies.

Out-of-Scope Use

  • This dataset is not designed for supervised learning tasks requiring labeled or annotated data.
  • Users should ensure compliance with relevant legal and ethical standards when utilizing this dataset.

Dataset Structure

  • Contains folders of raw text files (.txt) and compressed archives (.zip, .tar.gz).
  • Each text file holds plain, unlabeled raw textual data without manual annotations or metadata.
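Given that layout, consuming the dataset amounts to walking the folders, reading the loose .txt files, and reading .txt members out of the archives. A small traversal sketch (the file names, sample contents, and GB18030 encoding below are illustrative assumptions, not the dataset's actual layout):

```python
import tempfile
import zipfile
from pathlib import Path

# Build a tiny stand-in layout: one raw .txt file plus one .zip archive.
root = Path(tempfile.mkdtemp())
(root / "sample.txt").write_bytes("原始文本\n".encode("gb18030"))
with zipfile.ZipFile(root / "bundle.zip", "w") as zf:
    zf.writestr("inner.txt", "归档文本\n".encode("gb18030"))

def iter_texts(root: Path, encoding: str = "gb18030"):
    """Yield decoded text from every .txt file, including files inside
    .zip archives (a .tar.gz branch via tarfile would look analogous)."""
    for path in sorted(root.rglob("*")):
        if path.suffix == ".txt":
            yield path.read_bytes().decode(encoding)
        elif path.suffix == ".zip":
            with zipfile.ZipFile(path) as zf:
                for name in zf.namelist():
                    if name.endswith(".txt"):
                        yield zf.read(name).decode(encoding)

texts = list(iter_texts(root))
```

Streaming member-by-member like this avoids unpacking multi-gigabyte archives to disk before filtering.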

Dataset Creation

Curation Rationale

The dataset was created to provide a free, open, legally compliant, and extremely large-scale resource of unlabeled raw text from network sources, enabling researchers and practitioners to train and fine-tune language models without copyright concerns.

Source Data

  • Collected from verified legal and publicly accessible network sources.

Data Collection and Processing

  • Cleaned to remove corrupted files and non-text content.
  • Maintained in raw form without manual annotation or labeling.
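One way that cleaning step might look (the codec list and NUL-byte heuristic are illustrative assumptions, not the curators' actual pipeline):

```python
def looks_like_text(data: bytes, encodings=("utf-8", "gb18030")) -> bool:
    """Heuristic filter: reject binary blobs and keep files that decode
    cleanly under at least one expected codec."""
    if b"\x00" in data:  # NUL bytes are a strong signal of binary content
        return False
    for enc in encodings:
        try:
            data.decode(enc)
            return True
        except UnicodeDecodeError:
            continue
    return False
```

Files failing the check would be dropped as corrupted or non-text; everything else is kept byte-for-byte, with no annotation added.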

Bias, Risks, and Limitations

  • As raw and unlabeled data, it may contain biases inherited from original network sources.
  • May include unfiltered language such as slang, dialects, or offensive expressions.

Citation

Released under CC0 1.0 Universal (public domain dedication).
No citation or attribution is required for use or publication.


Note: This dataset consists of high-quality, extremely large-scale (hundreds of billions of tokens), and diverse unlabeled raw text from network sources, making it highly suitable for training or fine-tuning large language models.
