---
language:
- en
task_categories:
- text-generation
- conversational
- instruction-following
size_categories:
- n<1M
tags:
- youtube
- transcripts
- llm-training
- fine-tuning
- whisper
- conversational-ai
---

# YouTube Transcripts Dataset for LLM Training

This dataset contains high-quality, structured transcripts from YouTube videos, specifically formatted for Large Language Model (LLM) training and fine-tuning.

## Dataset Structure

The dataset is optimized for LLM training with the following structure (an illustrative record is shown after the field listings):

### Core Training Fields

- `text`: Cleaned and normalized transcript text
- `instruction`: Instruction format for fine-tuning (e.g., "Provide a transcript of the video titled '...'")
- `response`: The transcript content (the same text as `text`, in instruction-response format)

### Content Analysis

- `word_count`: Number of words in the transcript
- `character_count`: Number of characters
- `estimated_tokens`: Estimated token count for training (a tokenizer-based sanity check is sketched after this list)
- `quality_score`: Quality score (0-1) based on length, structure, and metadata
- `content_type`: Classified content type (educational, conversational, instructional, narrative, general)
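
Since `estimated_tokens` is only an estimate, you may want to recompute exact counts with the tokenizer of your target model before budgeting training runs. A minimal sketch, assuming the `transformers` library is installed and using GPT-2's tokenizer as a stand-in:

```python
from transformers import AutoTokenizer

# Any tokenizer gives a usable count; GPT-2's is a common stand-in here.
tokenizer = AutoTokenizer.from_pretrained('gpt2')

text = 'an example transcript ...'  # e.g. a record's `text` field
exact_tokens = len(tokenizer(text)['input_ids'])
print(exact_tokens)
```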

### Metadata

- `video_id`: YouTube video ID
- `source`: Always "youtube"
- `transcription_method`: "whisper" (OpenAI Whisper)
- `language`: "en" (English)
- `timestamp`: Processing timestamp
- `video_metadata`: Structured video information
  - `title`: Video title
  - `channel`: Channel name
  - `duration_seconds`: Video duration in seconds
  - `duration_formatted`: Human-readable duration (MM:SS or HH:MM:SS)
  - `upload_date`: Video upload date
  - `view_count`: Number of views
  - `category`: Auto-classified category (education, business, health, technology, etc.)
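
Putting the fields together, a record has roughly the following shape. This is a hypothetical example for orientation only; all values are illustrative and not taken from the dataset:

```python
example_record = {
    'text': 'Welcome back to the channel. Today we are going to look at ...',
    'instruction': "Provide a transcript of the video titled 'Intro to Databases'.",
    'response': 'Welcome back to the channel. Today we are going to look at ...',
    'word_count': 1843,
    'character_count': 10552,
    'estimated_tokens': 2450,
    'quality_score': 0.82,
    'content_type': 'educational',
    'video_id': 'abc123XYZ90',
    'source': 'youtube',
    'transcription_method': 'whisper',
    'language': 'en',
    'timestamp': '2025-10-06T21:55:58',
    'video_metadata': {
        'title': 'Intro to Databases',
        'channel': 'Example Channel',
        'duration_seconds': 754,
        'duration_formatted': '12:34',
        'upload_date': '2024-11-02',
        'view_count': 15230,
        'category': 'education',
    },
}
```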

## Loading the Dataset

```python
from datasets import load_dataset

# Load the complete dataset
dataset = load_dataset("morka17/rtu-tgn", data_files="data_shard_*.jsonl")

# For instruction fine-tuning
train_data = dataset['train']
for example in train_data:
    instruction = example['instruction']
    response = example['response']
    # Use for instruction-following fine-tuning

# For general language modeling
for example in train_data:
    text = example['text']
    # Use for general language model training
```
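
If you are fine-tuning a chat or instruction model, the `instruction`/`response` pairs can be mapped into whatever prompt template your trainer expects. A minimal sketch, assuming a simple plain-text template (adapt the format to your model's chat template):

```python
def to_prompt(example):
    # Join instruction and response into one training string.
    example['prompt_text'] = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )
    return example

formatted = dataset['train'].map(to_prompt)
```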

## Filtering and Quality Control

```python
# Filter by quality score
high_quality = dataset.filter(lambda x: x['quality_score'] > 0.7)

# Filter by content type
educational_content = dataset.filter(lambda x: x['content_type'] == 'educational')

# Filter by length (optimal for training)
optimal_length = dataset.filter(lambda x: 1000 <= x['word_count'] <= 5000)

# Filter by category
business_content = dataset.filter(lambda x: x['video_metadata']['category'] == 'business')
```
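
Filters can also be chained or combined; each `filter` call returns a new dataset, so you can apply several criteria at once and check how much data survives. A small sketch continuing from the loading example above:

```python
# Combine quality, length, and content-type criteria in one pass.
curated = dataset['train'].filter(
    lambda x: x['quality_score'] > 0.7
    and 1000 <= x['word_count'] <= 5000
    and x['content_type'] == 'educational'
)
print(f"kept {len(curated)} of {len(dataset['train'])} examples")
```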

## Use Cases

### 1. **Instruction Fine-tuning**
Use the `instruction` and `response` fields for training models to follow instructions.

### 2. **Conversational AI Training**
Filter for `content_type == 'conversational'` for dialogue training.

### 3. **Domain-specific Training**
Filter by `video_metadata.category` for domain-specific fine-tuning.

### 4. **Quality-based Training**
Use `quality_score` to select high-quality training examples, as sketched below.
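
For example, rather than a fixed threshold you can keep only the top slice by quality. A sketch; the 10% cut-off here is arbitrary:

```python
# Rank by quality score and keep the top 10% of examples.
ranked = dataset['train'].sort('quality_score', reverse=True)
top_slice = ranked.select(range(len(ranked) // 10))
```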

## Data Quality

- **Text Cleaning**: Transcripts are cleaned to remove artifacts, normalize punctuation, and improve readability
- **Quality Scoring**: Each entry has a quality score based on length, structure, punctuation, and metadata (an illustrative sketch follows this list)
- **Content Classification**: Automatic classification into content types for targeted training
- **Metadata Enrichment**: Rich metadata for filtering and analysis
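
The exact scoring formula is internal to the preprocessing pipeline and is not published here, but a heuristic of this general shape, combining length, punctuation, and metadata completeness, is one plausible reading. Purely illustrative, not the actual implementation:

```python
def illustrative_quality_score(record):
    # Hypothetical heuristic, NOT the pipeline's actual formula:
    # length, punctuation, and metadata completeness each add a share.
    length_part = min(record['word_count'] / 1000, 1.0) * 0.4
    punctuation_part = 0.3 if any(c in record['text'] for c in '.?!') else 0.0
    metadata_part = 0.3 if record['video_metadata'].get('title') else 0.0
    return round(length_part + punctuation_part + metadata_part, 2)
```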

## Sharding

The dataset is automatically sharded into files of at most 10 MB each (`data_shard_XXXX.jsonl`) for efficient loading and processing.
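
Because the shards are plain JSON Lines files, you can load a single shard or stream the whole set without materializing it up front. A sketch using standard `datasets` options; `data_shard_0000.jsonl` is an assumed instance of the naming pattern above:

```python
from datasets import load_dataset

# Load a single shard (shard name follows the data_shard_XXXX pattern).
one_shard = load_dataset('morka17/rtu-tgn', data_files='data_shard_0000.jsonl')

# Or stream all shards lazily instead of downloading them up front.
streamed = load_dataset('morka17/rtu-tgn', data_files='data_shard_*.jsonl', streaming=True)
for example in streamed['train']:
    print(example['video_id'])
    break  # process one example at a time
```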

## Last Updated

2025-10-06T21:55:58.000864

## License and Usage

Please ensure compliance with YouTube's Terms of Service when using this dataset. This dataset is intended for research and educational purposes in natural language processing and machine learning.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{youtube_transcripts_llm,
  title={YouTube Transcripts Dataset for LLM Training},
  author={Generated via OpenAI Whisper},
  year={2025},
  url={https://huggingface.co/datasets/morka17/rtu-tgn}
}
```
|