# khmer_speech_dataset
Khmer speech dataset with transcriptions, speaker labels, and metadata.
## Dataset Description
This dataset contains Khmer (Cambodian) speech recordings with detailed transcriptions and annotations.
## Dataset Statistics
| Metric | Value |
|---|---|
| Total Examples | 9,285 |
| Total Duration | 336.68 hours |
| Average Duration | 130.54 seconds |
| Total Words | 1,336,611 |
| Khmer Words | 994,050 (74.4%) |
| English Words | 340,127 (25.4%) |
| Unique Sources | 2,857 |
| Unique Tags | 2,092 |
### Source Distribution
| Source Type | Count |
|---|---|
| youtube | 5,947 |
| unknown | 16 |
| conversation | 696 |
| telegram | 2,626 |
### Speaker Distribution
| Speakers | Count | Duration (hours) |
|---|---|---|
| 0 | 2 | 0.00 (0.0%) |
| 1 | 3,060 | 33.97 (10.1%) |
| 2 | 2,855 | 137.80 (40.9%) |
| 3 | 2,892 | 141.56 (42.0%) |
| 4 | 387 | 18.99 (5.6%) |
| 5 | 65 | 3.18 (0.9%) |
| 6 | 16 | 0.78 (0.2%) |
| 7 | 2 | 0.10 (0.0%) |
| 8 | 5 | 0.25 (0.1%) |
| 12 | 1 | 0.05 (0.0%) |
### Top Tags
| Tag | Count |
|---|---|
| overlapping | 5,982 |
| laughing | 4,736 |
| drag_tone | 3,705 |
| fast | 3,337 |
| mocking | 1,821 |
| high_pitch | 1,792 |
| shout | 1,597 |
| frustrated | 1,163 |
| smack_lips | 1,161 |
| heavy_breathing | 961 |
| whisper | 886 |
| softly | 856 |
| stutter | 726 |
| serious | 664 |
| normal | 638 |
| sigh | 465 |
| hesitant | 433 |
| normal_tone | 428 |
| panic | 423 |
| thinking | 399 |
## Dataset Structure
### Data Fields
- `audio`: Audio file path/bytes
- `text_raw`: Raw transcription with speaker labels and tags
- `text_clean`: Cleaned transcription (no tags, no speaker labels)
- `duration_seconds`: Audio duration in seconds
- `speaker_count`: Number of speakers in the audio
- `speakers`: List of speaker IDs
- `tags`: List of normalized annotation tags
- `english_words`: Count of English words
- `khmer_words`: Count of Khmer words
- `total_words`: Total word count
- `chunk_id`: Chunk index within original audio
- `original_name`: Name of original audio file
- `source_type`: Source type (telegram, youtube, conversation, unknown)
- `source_folder`: Original folder name
- `telegram_chat_id`: Telegram chat ID (if applicable)
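The word-count fields make per-example code-switching ratios easy to derive. A minimal sketch, assuming an example is a dict with the fields listed above (the `english_ratio` helper and the sample values are ours, not part of the dataset):

```python
def english_ratio(example):
    """Fraction of English words in one example, using the card's
    english_words / total_words fields. Returns 0.0 for empty clips."""
    total = example["total_words"]
    return example["english_words"] / total if total else 0.0

# Hypothetical record shaped like one dataset row (values invented).
sample = {"english_words": 25, "khmer_words": 75, "total_words": 100}
print(english_ratio(sample))  # → 0.25
```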
### Tag Categories
- Speech markers: hesitant, thinking, confused, drag_tone, overlapping
- Audio markers: music, low_quality, noise
- Language markers: english, khmer
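Since each example carries a `tags` list, tag-based selection can be sketched in plain Python; `select_by_tag` is an illustrative helper, not part of the dataset tooling:

```python
def select_by_tag(examples, tag):
    """Return only the examples whose normalized tag list contains `tag`."""
    return [e for e in examples if tag in e["tags"]]

# Invented rows using tag names from the card's vocabulary.
rows = [
    {"tags": ["overlapping", "laughing"]},
    {"tags": ["whisper", "softly"]},
    {"tags": ["drag_tone"]},
]
print(len(select_by_tag(rows, "whisper")))  # → 1
```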
## Usage
```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("path/to/dataset")

# Access examples
for example in dataset["train"]:
    audio = example["audio"]
    text = example["text_clean"]
    print(f"Duration: {example['duration_seconds']:.2f}s")
    print(f"Text: {text[:100]}...")
```
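For ASR-style training, one often wants short, single-speaker clips. A hedged sketch of such a filter predicate over the `speaker_count` and `duration_seconds` fields (the 30-second cutoff is an arbitrary choice of ours):

```python
def is_single_speaker_clip(example, max_duration=30.0):
    """Illustrative filter: keep single-speaker clips no longer than
    max_duration seconds (the cutoff is an example value, not a
    recommendation from the dataset authors)."""
    return (example["speaker_count"] == 1
            and example["duration_seconds"] <= max_duration)

# Invented rows shaped like dataset examples.
rows = [
    {"speaker_count": 1, "duration_seconds": 12.5},
    {"speaker_count": 2, "duration_seconds": 40.0},
]
kept = [r for r in rows if is_single_speaker_clip(r)]
print(len(kept))  # → 1
```

With the `datasets` library, the same predicate should also work via `dataset.filter(is_single_speaker_clip)`.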
## License
This dataset is released under the CC BY-NC 4.0 license.
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{khmer_speech_dataset,
  title={Khmer Speech Dataset},
  year={2024},
  url={https://huggingface.co/datasets/...}
}
```