---
dataset_info:
features:
- name: video_id
dtype: string
- name: video_link
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: channel
dtype: string
- name: channel_id
dtype: string
- name: date
dtype: string
- name: license
dtype: string
- name: original_language
dtype: string
- name: language_id_method
dtype: string
- name: transcription_language
dtype: string
- name: word_count
dtype: int64
- name: character_count
dtype: int64
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 298197594003
num_examples: 22684737
download_size: 162573072184
dataset_size: 298197594003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
tags:
- conversational
language:
- en
- fr
- es
- pt
- de
- ru
- nl
- tr
- it
pretty_name: YouTube Commons Re-upload
---
## YouTube Commons Re-upload
This is a re-upload of [PleIAs' YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), a valuable open dataset:
> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.
>
> **Content**
>
> The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).
Unfortunately, there are [problems](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/10) with loading YouTube Commons with Hugging Face Datasets.
To alleviate those problems and to make further processing easier, I took the source Parquet files and re-uploaded this fixed version to the Hugging Face Hub.
## Code
The following code was used for this re-upload. It operates on a local git clone of the [PleIAs/YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons) dataset.
```python
from pathlib import Path

from datasets import load_dataset, Dataset
from tqdm import tqdm

columns = set('''video_link
video_id
title
text
channel
channel_id
date
license
original_language
language_id_method
transcription_language
source_language
word_count
character_count'''.split('\n'))


def generate():
    for filepath in tqdm(sorted(Path('/Path/To/PleIAs/YouTube-Commons').rglob('*.parquet'))):
        print(filepath)
        dataset = load_dataset('parquet',
                               data_files={'train': str(filepath)})
        for row in dataset['train']:
            keys = set(row)
            # Some of the files are missing one of these two columns.
            # Setting them to None results in an Arrow error, so we use '' instead.
            if 'language_id_method' not in keys:
                row['language_id_method'] = ''
            if 'source_language' not in keys:
                row['source_language'] = ''
            if '__index_level_0__' in keys:
                del row['__index_level_0__']
            if set(row) != columns:
                raise ValueError(f'Error in columns: {set(row)}')
            yield row


youtube = Dataset.from_generator(generate)
youtube.push_to_hub('Rijgersberg/YouTube-Commons')
```