Dataset Viewer
The dataset viewer is not available for this dataset.
Cannot get the config names for the dataset.
Error code: ConfigNamesError
Exception: ValueError
Message:
Expected data_files in YAML to be either a string or a list of strings
or a list of dicts with two keys: 'split' and 'path', but got [{'split': 'parallel-sentences-global-voices', 'path': 'en-es/parallel-sentences-global-voices/*.parquet'}, {'split': 'parallel-sentences-europarl', 'path': 'en-es/parallel-sentences-europarl/*.parquet'}, {'split': 'parallel-sentences-talks', 'path': 'en-es/parallel-sentences-talks/*.parquet'}, {'split': 'parallel-sentences-wikimatrix', 'path': 'en-es/parallel-sentences-wikimatrix/*.parquet'}, {'split': 'parallel-sentences-news-commentary', 'path': 'en-es/parallel-sentences-news-commentary/*.parquet'}, {'split': 'parallel-sentences-jw300', 'path': 'en-es/parallel-sentences-jw300/*.parquet'}, {'split': 'parallel-sentences-tatoeba', 'path': 'en-es/parallel-sentences-tatoeba/*.parquet'}, {'split': 'parallel-sentences-opus-100', 'path': 'en-es/parallel-sentences-opus-100/*.parquet'}]
Examples of data_files in YAML:
data_files: data.csv
data_files: data/*.png
data_files:
  - part0/*
  - part1/*
data_files:
  - split: train
    path: train/*
  - split: test
    path: test/*
data_files:
  - split: train
    path:
      - train/part1/*
      - train/part2/*
  - split: test
    path: test/*
PS: some symbols like dashes '-' are not allowed in split names
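
The list of dicts shown above does use the valid 'split'/'path' form; what trips the validator is the dashes inside the split names themselves. A sketch of a dataset-card YAML header that should pass this check, assuming dashes in split names are replaced with underscores (the `configs`/`config_name` layout and the truncation to two of the eight splits are illustrative, not taken from the actual README):

```yaml
configs:
  - config_name: en-es
    data_files:
      # Split names use underscores; the paths on disk may keep their dashes.
      - split: parallel_sentences_tatoeba
        path: en-es/parallel-sentences-tatoeba/*.parquet
      - split: parallel_sentences_opus_100
        path: en-es/parallel-sentences-opus-100/*.parquet
```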
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
    config_names = get_dataset_config_names(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
    dataset_module = dataset_module_factory(
                     ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/load.py", line 1031, in dataset_module_factory
    raise e1 from None
  File "/usr/local/lib/python3.12/site-packages/datasets/load.py", line 1004, in dataset_module_factory
    ).get_module()
    ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/load.py", line 604, in get_module
    metadata_configs = MetadataConfigs.from_dataset_card_data(dataset_card_data)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/utils/metadata.py", line 153, in from_dataset_card_data
    cls._raise_if_data_files_field_not_valid(metadata_config)
  File "/usr/local/lib/python3.12/site-packages/datasets/utils/metadata.py", line 100, in _raise_if_data_files_field_not_valid
    raise ValueError(yaml_error_message)
ValueError:
Expected data_files in YAML to be either a string or a list of strings
or a list of dicts with two keys: 'split' and 'path', but got [{'split': 'parallel-sentences-global-voices', 'path': 'en-es/parallel-sentences-global-voices/*.parquet'}, {'split': 'parallel-sentences-europarl', 'path': 'en-es/parallel-sentences-europarl/*.parquet'}, {'split': 'parallel-sentences-talks', 'path': 'en-es/parallel-sentences-talks/*.parquet'}, {'split': 'parallel-sentences-wikimatrix', 'path': 'en-es/parallel-sentences-wikimatrix/*.parquet'}, {'split': 'parallel-sentences-news-commentary', 'path': 'en-es/parallel-sentences-news-commentary/*.parquet'}, {'split': 'parallel-sentences-jw300', 'path': 'en-es/parallel-sentences-jw300/*.parquet'}, {'split': 'parallel-sentences-tatoeba', 'path': 'en-es/parallel-sentences-tatoeba/*.parquet'}, {'split': 'parallel-sentences-opus-100', 'path': 'en-es/parallel-sentences-opus-100/*.parquet'}]
Examples of data_files in YAML:
data_files: data.csv
data_files: data/*.png
data_files:
  - part0/*
  - part1/*
data_files:
  - split: train
    path: train/*
  - split: test
    path: test/*
data_files:
  - split: train
    path:
      - train/part1/*
      - train/part2/*
  - split: test
    path: test/*
PS: some symbols like dashes '-' are not allowed in split names
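
Since only the split keys (not the data paths) violate the naming rule, renaming them is a mechanical substitution. A minimal helper sketch; the `sanitize_split` name and the "non-word characters become underscores" rule are assumptions inferred from the PS note above, not an official `datasets` API:

```python
import re

def sanitize_split(name: str) -> str:
    """Replace every character that is not a letter, digit, or underscore
    with an underscore, so the result is usable as a YAML split name."""
    return re.sub(r"\W", "_", name)

# Dashes in the original split names become underscores; paths keep theirs.
print(sanitize_split("parallel-sentences-opus-100"))  # parallel_sentences_opus_100
```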