This is a re-upload of [PleIAs' YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
Unfortunately, there are [problems](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/10) with loading YouTube Commons with Hugging Face Datasets.
In order to alleviate those and to further process the dataset, I took the source Parquet files and re-uploaded this fixed version to Hugging Face.

## Code

Below is the code used for this re-upload. It makes use of a git clone of the [PleIAs/YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons) dataset.

```python
from pathlib import Path

from datasets import load_dataset, Dataset
from tqdm import tqdm

columns = set('''video_link
video_id
title
text
channel
channel_id
date
license
original_language
language_id_method
transcription_language
source_language
word_count
character_count'''.split('\n'))

def generate():
    for filepath in tqdm(sorted(Path('/Path/To/PleIAs/YouTube-Commons').rglob('*.parquet'))):
        print(filepath)
        dataset = load_dataset("parquet",
                               data_files={'train': str(filepath)})
        for row in dataset['train']:
            keys = set(row)
            # Some of the files are missing one of these two columns.
            # Setting them to None results in an Arrow error, so we use '' instead.
            if 'language_id_method' not in keys:
                row['language_id_method'] = ''
            if 'source_language' not in keys:
                row['source_language'] = ''
            if '__index_level_0__' in keys:
                del row['__index_level_0__']

            if set(row) != columns:
                raise ValueError(f'Error in columns: {set(row)}')
            yield row

youtube_nl = Dataset.from_generator(generate)
youtube_nl.push_to_hub('Rijgersberg/YouTube-Commons')
```
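The schema-normalization step in the loop above can also be sketched in isolation, without any Hugging Face dependencies. This is a minimal illustration, not part of the actual upload script; `normalize_row` is a hypothetical helper name introduced here:

```python
# Expected schema of every row, matching the `columns` set in the script above.
EXPECTED_COLUMNS = {
    'video_link', 'video_id', 'title', 'text', 'channel', 'channel_id',
    'date', 'license', 'original_language', 'language_id_method',
    'transcription_language', 'source_language', 'word_count',
    'character_count',
}

def normalize_row(row):
    """Return a copy of `row` that matches EXPECTED_COLUMNS exactly.

    Missing optional columns are filled with '' rather than None (a None
    value would make Arrow's type inference fail when files are merged),
    and the leftover pandas index column is dropped.
    """
    row = dict(row)
    for optional in ('language_id_method', 'source_language'):
        row.setdefault(optional, '')
    row.pop('__index_level_0__', None)
    if set(row) != EXPECTED_COLUMNS:
        raise ValueError(f'Error in columns: {set(row)}')
    return row
```

This separates the per-row fix-up from the file iteration, which makes the normalization easy to unit-test against rows from files with and without the optional columns.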