Russian–Nanai Dictionary Dataset

This dataset is a structured digital version of a Russian–Nanai dictionary corpus prepared for low-resource NLP research.

Status: this dataset is under active development and cleaning.
The current release should be treated as a working version for experimentation, tokenizer research, and corpus building.

Overview

The dataset is based on the Russian–Nanai Dictionary (1989) and was converted from web-accessible dictionary pages into machine-readable formats for:

  • tokenizer training
  • continued pretraining
  • bilingual lexicon extraction
  • vocabulary and character coverage analysis

Source metadata

  • Dictionary: Русско-нанайский словарь (1989)
  • Dictionary ID: 269166
  • Source URL: https://dict.fu-lab.ru/dict?id=269166
  • Original term count: 2570
  • Original page count: 52

Current dataset snapshot

After the current cleaning and normalization stage:

  • Structured entries: 2494
  • Unique term IDs: 2494
  • Tokenizer corpus lines: 5409
  • Pretraining corpus lines: 2494

Validation highlights:

  • Duplicate term-id rows: 0
  • Empty Russian headwords: 0
  • Empty Nanai translations: 887
  • Duplicate Russian–Nanai pairs: 0
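The validation checks above can be reproduced with a short stdlib-only script. This is a sketch run on an invented three-entry sample; the field names (term_id, ru_headword, nanai) are assumptions and may differ from the actual export:

```python
# Sketch of the validation checks, run on a hypothetical sample.
# Field names here (term_id, ru_headword, nanai) are assumptions;
# the real export may use different names (e.g. primary_nanai_line).
from collections import Counter

entries = [
    {"term_id": 1, "ru_headword": "вода", "nanai": "муэ"},
    {"term_id": 2, "ru_headword": "огонь", "nanai": ""},
    {"term_id": 3, "ru_headword": "вода", "nanai": "муэ"},
]

# Duplicate term-id rows: every occurrence of an id beyond the first.
id_counts = Counter(e["term_id"] for e in entries)
duplicate_ids = sum(c - 1 for c in id_counts.values())

# Empty headwords / translations: whitespace-only counts as empty.
empty_ru = sum(1 for e in entries if not e["ru_headword"].strip())
empty_nanai = sum(1 for e in entries if not e["nanai"].strip())

# Duplicate Russian–Nanai pairs: repeated (headword, translation) tuples.
pair_counts = Counter((e["ru_headword"], e["nanai"]) for e in entries)
duplicate_pairs = sum(c - 1 for c in pair_counts.values() if c > 1)

print(duplicate_ids, empty_ru, empty_nanai, duplicate_pairs)
```

Running the same counters over the full entries file would yield the figures listed above.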

Corpus summary

  • Average Russian headword length: 6.90 characters
  • Average Nanai translation length: 10.07 characters
  • Average raw entry length: 128.51 characters
  • Maximum raw entry length: 304 characters
  • Minimum raw entry length: 92 characters
  • Unique characters in tokenizer corpus: 67

Character distribution:

  • Cyrillic: 53450
  • Latin: 3
  • Digits: 5
  • Spaces: 8414
  • Other characters: 3139

Notable characters in the corpus include:

  • standard Cyrillic letters used in Russian
  • Nanai-relevant characters such as ӈ
  • combining marks such as U+0304 (macron), which are important for preserving original orthography
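The character-class counts above can be computed with the standard library's unicodedata module; note that the combining macron U+0304 does not carry CYRILLIC in its Unicode name, so it falls into the "other" bucket. A minimal sketch (the sample string is illustrative, not taken from the corpus):

```python
# Classify characters into the buckets used in the distribution above.
import unicodedata

def char_classes(text):
    counts = {"cyrillic": 0, "latin": 0, "digits": 0, "spaces": 0, "other": 0}
    for ch in text:
        if ch.isspace():
            counts["spaces"] += 1
        elif ch.isdigit():
            counts["digits"] += 1
        else:
            try:
                name = unicodedata.name(ch)
            except ValueError:
                counts["other"] += 1
                continue
            if "CYRILLIC" in name:
                counts["cyrillic"] += 1  # includes ӈ and similar letters
            elif "LATIN" in name:
                counts["latin"] += 1
            else:
                counts["other"] += 1  # includes U+0304 COMBINING MACRON
    return counts

# "муэ" followed by U+0304 keeps the macron as a separate combining mark.
print(char_classes("вода муэ\u0304 ӈ"))
```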

Dataset structure

Depending on the export version, the dataset may include:

  • entries.jsonl — structured dictionary entries
  • parallel_pairs.jsonl — Russian–Nanai lexical pairs
  • parallel_pairs.csv — tabular lexical pairs
  • tokenizer_corpus.txt — corpus for tokenizer training
  • pretrain_corpus.txt — linearized corpus for continued pretraining
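The JSONL exports can be read line by line with the standard library. The sketch below writes a tiny illustrative file first so it is self-contained; real entries may carry additional fields:

```python
# Minimal stdlib reader for a JSONL export such as entries.jsonl.
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    path = Path(tmp) / "entries.jsonl"
    # Two illustrative lines; real files have one JSON object per line.
    path.write_text(
        '{"ru_headword": "вода", "primary_nanai_line": "муэ"}\n'
        '{"ru_headword": "огонь", "primary_nanai_line": ""}\n',
        encoding="utf-8",
    )
    entries = [
        json.loads(line)
        for line in path.read_text(encoding="utf-8").splitlines()
        if line.strip()  # skip blank lines, which break strict JSONL parsers
    ]
    print(len(entries), entries[0]["ru_headword"])
```

Skipping blank or malformed lines defensively is worthwhile for a dataset that is still being cleaned.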

Main fields

Typical structured entries may contain:

  • ru_headword — Russian headword
  • grammar — grammatical notes
  • primary_nanai_line — main Nanai translation
  • examples — usage examples or free-form notes
  • example_pairs — extracted Russian–Nanai example pairs when available
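As an illustration of the field layout, a single structured entry might look like the following; the values are invented for this sketch and are not taken from the dictionary:

```python
# Hypothetical entry shape; field names follow the list above,
# values are invented for illustration only.
entry = {
    "ru_headword": "вода",
    "grammar": "сущ.",
    "primary_nanai_line": "муэ",
    "examples": ["холодная вода"],
    "example_pairs": [{"ru": "холодная вода", "nanai": "..."}],
}

print(sorted(entry))
```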

Intended use

This dataset is intended for research and experimentation with low-resource language processing, especially:

  • custom tokenizer training
  • tokenizer vocabulary extension
  • continued pretraining of existing language models
  • Russian ↔ Nanai lexical alignment
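One of the uses above, tokenizer vocabulary extension, can be sketched with a simple frequency pass over the corpus: collect tokens absent from a base tokenizer's vocabulary as candidate additions. Whitespace tokenization and the two-word vocabulary are simplifications; real tokenizer training (e.g. BPE) operates on subwords:

```python
# Candidate vocabulary additions from corpus token frequencies.
from collections import Counter

corpus_lines = [  # stand-in for tokenizer_corpus.txt
    "вода муэ",
    "огонь тава",
    "вода холодная",
]

freqs = Counter(tok for line in corpus_lines for tok in line.split())
existing_vocab = {"вода"}  # stand-in for the base tokenizer's vocabulary

# most_common() keeps first-encountered order for equal counts.
candidates = [tok for tok, _ in freqs.most_common() if tok not in existing_vocab]
print(candidates)
```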

Limitations

This is a dictionary-derived corpus, not a large natural text corpus.

Current limitations include:

  • the dataset is still being cleaned
  • some entries remain partially noisy or weakly structured
  • some Nanai translations are missing
  • example alignment is partly heuristic
  • this corpus is more suitable for lexical adaptation than standalone language modeling

Development notes

Planned improvements include:

  • additional cleaning and normalization
  • better extraction of Nanai translations from noisy entries
  • improved example alignment
  • merging with additional Nanai sources
  • creation of Nanai-focused tokenizer corpora

Acknowledgements

This dataset was assembled from publicly accessible dictionary materials and converted into structured machine-readable form for research purposes.
