
Kyrgyz CommonCrawl Dataset

A 271 MB corpus of Kyrgyz-language text extracted from CommonCrawl, one of the largest openly available Kyrgyz text collections for NLP research.


Dataset Description

This dataset contains Kyrgyz-language web text scraped from CommonCrawl archives, filtered by the Kyrgyz language tag (ky). The data covers a wide range of domains including news, blogs, government sites, educational content, and general web pages.

Why this matters: Kyrgyz is a low-resource Turkic language spoken by ~7 million people. High-quality text corpora are essential for training language models, yet very few large-scale Kyrgyz datasets exist publicly.


Dataset Summary

| Property | Value |
|---|---|
| Total size | 271 MB |
| Language | Kyrgyz (ky) |
| Format | CSV |
| Source | CommonCrawl (filtered by ky language tag) |
| Files | 31 CSV files |
| License | CC0 (public domain) |
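Web-scraped CSV files often contain rows with stray delimiters that break a strict parser. A minimal sketch of reading such a file tolerantly with pandas, using toy inline data rather than the actual dataset files:

```python
import io

import pandas as pd

# Toy sample simulating a one-column web-text CSV where one row
# contains unescaped commas (three fields instead of one).
raw = "text\nКыргызстан жаңылыктары\nбир, эки, үч\nсаламатсызбы\n"

# on_bad_lines="skip" drops rows whose field count does not match the
# header, instead of raising a ParserError.
df = pd.read_csv(io.StringIO(raw), engine="python", on_bad_lines="skip")
print(len(df))  # 2 rows survive; the malformed row is skipped
```

Skipping bad lines silently loses data, so it is worth logging how many rows were dropped before training on the result.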

File Structure

| File | Size | Description |
|---|---|---|
| data_MN.csv | 29.6 MB | Large text segment |
| data.csv | 9.98 MB | General web text |
| data_bilesinbi.csv | 7.11 MB | Domain-specific data |
| Merged file2.csv | 1.66 MB | Merged text segments |
| data_f.csv | 703 kB | Filtered subset |
| 8april_final.csv | 634 kB | Cleaned snapshot |
| data.numbers | 581 kB | Statistics/metadata |
| bia.csv | 37.6 kB | Small subset |
| data_ecoproduct.csv | 19.9 kB | Eco/product domain |
| ... | ... | Additional CSV files |
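Since the corpus is split across many CSV files, a common first step is stacking them into a single frame. A sketch using toy stand-in files (the real file names are listed above; the "text" column name is an assumption about the schema):

```python
import glob
import os
import tempfile

import pandas as pd

# Create two toy CSVs standing in for the downloaded dataset files.
tmp = tempfile.mkdtemp()
for name, rows in {"data.csv": ["саламатсызбы"], "bia.csv": ["жакшы"]}.items():
    pd.DataFrame({"text": rows}).to_csv(os.path.join(tmp, name), index=False)

# Read every CSV in the directory and stack them into one corpus frame.
corpus = pd.concat(
    (pd.read_csv(p) for p in sorted(glob.glob(os.path.join(tmp, "*.csv")))),
    ignore_index=True,
)
print(len(corpus))  # 2
```

If the files have inconsistent columns, `pd.concat` will union them and fill gaps with NaN, so checking `corpus.columns` after merging is a useful sanity step.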

Use Cases

  • Language model pretraining — Training or fine-tuning LLMs for Kyrgyz (e.g., GPT, BERT, LLaMA)
  • Text classification — Building Kyrgyz text classifiers
  • Machine translation — Source data for Kyrgyz ↔ other language pairs
  • Linguistic research — Studying modern Kyrgyz web language usage
  • Punctuation / grammar models — Training data for text normalization tools
  • NER & information extraction — Building Kyrgyz entity recognizers

Data Collection

The data was collected by:

  1. Querying CommonCrawl archives for pages tagged with the Kyrgyz language identifier (ky)
  2. Extracting text content from the matched web pages
  3. Cleaning and organizing into CSV format
  4. Deduplicating and quality-filtering the extracted text
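As a toy illustration of the language filtering in step 1: production pipelines use a trained identifier (e.g. fastText's lid.176 model), but a cheap signal is that Kyrgyz Cyrillic uses the letters ң, ү, and ө, which Russian orthography lacks. This heuristic is an assumption for illustration, not the author's actual method:

```python
# Letters used in Kyrgyz Cyrillic but absent from Russian.
KYRGYZ_ONLY_LETTERS = set("ңүөҢҮӨ")

def looks_kyrgyz(text: str) -> bool:
    """Crude check: does the text contain any Kyrgyz-specific letter?"""
    return any(ch in KYRGYZ_ONLY_LETTERS for ch in text)

print(looks_kyrgyz("Кыргыз тилинде сүйлөйм"))  # True
print(looks_kyrgyz("Привет, как дела?"))       # False
```

Short Kyrgyz sentences may happen to contain none of these letters, so a heuristic like this is best used to rank pages, not to make final keep/drop decisions.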

Preprocessing Recommendations

Before using this dataset, consider:

  • Deduplication — Web-crawled data often contains duplicate paragraphs across pages
  • Language verification — Some pages may contain mixed-language content (Kyrgyz + Russian is common)
  • Quality filtering — Remove boilerplate (navigation menus, footers, cookie notices)
  • Encoding normalization — Ensure consistent Cyrillic encoding (UTF-8)
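The deduplication and normalization steps above can be sketched with the standard library alone. The `preprocess` helper and the line-level granularity are assumptions for illustration; real pipelines often deduplicate at paragraph or document level with hashing:

```python
import unicodedata

def preprocess(lines):
    """Normalize to NFC, strip whitespace, drop blanks and exact duplicates."""
    seen, out = set(), []
    for line in lines:
        line = unicodedata.normalize("NFC", line).strip()  # consistent Cyrillic codepoints
        if line and line not in seen:  # exact-duplicate removal
            seen.add(line)
            out.append(line)
    return out

docs = ["Саламатсызбы!", "Саламатсызбы!", "  ", "Жаңылыктар"]
print(preprocess(docs))  # duplicates and blank lines removed
```

NFC normalization matters because visually identical Cyrillic strings can differ in codepoint sequence (composed vs. decomposed forms), which would defeat exact-match deduplication.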

Limitations

  • Web-crawled data may contain noise, boilerplate HTML artifacts, and mixed-language content
  • No manual curation — quality varies across files
  • Potential duplicates across different CSV files
  • Bias toward web-present content — overrepresentation of news and government text, underrepresentation of informal speech

Citation

@dataset{uvalieva2024kyrgyz_commoncrawl,
  author = {Uvalieva, Zarina},
  title  = {Kyrgyz CommonCrawl Text Corpus},
  year   = {2024},
  url    = {https://huggingface.co/datasets/Zarinaaa/commoncrawl_dataset}
}

Author

Zarina Uvalieva — ML Engineer specializing in NLP for low-resource languages.