---
language:
- en
task_categories:
- text-generation
- conversational
- instruction-following
size_categories:
- n<1M
tags:
- youtube
- transcripts
- llm-training
- fine-tuning
- whisper
- conversational-ai
---

# YouTube Transcripts Dataset for LLM Training

This dataset contains high-quality, structured transcripts from YouTube videos, specifically formatted for Large Language Model (LLM) training and fine-tuning.

## Dataset Structure

The dataset is optimized for LLM training with the following structure:

### Core Training Fields
- `text`: Cleaned and normalized transcript text
- `instruction`: Instruction format for fine-tuning (e.g., "Provide a transcript of the video titled '...'")
- `response`: The transcript content (same as `text` but in instruction-response format)

### Content Analysis
- `word_count`: Number of words in the transcript
- `character_count`: Number of characters
- `estimated_tokens`: Estimated token count for training
- `quality_score`: Quality score (0-1) based on length, structure, and metadata
- `content_type`: Classified content type (educational, conversational, instructional, narrative, general)

### Metadata
- `video_id`: YouTube video ID
- `source`: Always "youtube"
- `transcription_method`: "whisper" (OpenAI Whisper)
- `language`: "en" (English)
- `timestamp`: Processing timestamp
- `video_metadata`: Structured video information
  - `title`: Video title
  - `channel`: Channel name
  - `duration_seconds`: Video duration in seconds
  - `duration_formatted`: Human-readable duration (MM:SS or HH:MM:SS)
  - `upload_date`: Video upload date
  - `view_count`: Number of views
  - `category`: Auto-classified category (education, business, health, technology, etc.)
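
Putting these fields together, a single record looks roughly like the following sketch. All values here are illustrative placeholders, not actual entries from the dataset:

```python
# Illustrative example of one dataset record (all values are hypothetical).
example_record = {
    "text": "Welcome back to the channel. Today we cover gradient descent...",
    "instruction": "Provide a transcript of the video titled 'Intro to Gradient Descent'",
    "response": "Welcome back to the channel. Today we cover gradient descent...",
    "word_count": 1342,
    "character_count": 7810,
    "estimated_tokens": 1780,
    "quality_score": 0.82,
    "content_type": "educational",
    "video_id": "abc123xyz00",          # placeholder, not a real video ID
    "source": "youtube",
    "transcription_method": "whisper",
    "language": "en",
    "timestamp": "2025-10-26T14:52:26",
    "video_metadata": {
        "title": "Intro to Gradient Descent",
        "channel": "Example Channel",
        "duration_seconds": 754,
        "duration_formatted": "12:34",
        "upload_date": "2025-01-15",
        "view_count": 10234,
        "category": "education",
    },
}
```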

## Loading the Dataset

```python
from datasets import load_dataset

# Load the complete dataset
dataset = load_dataset("morka17/rtu-tgn", data_files="data_shard_*.jsonl")

# For instruction fine-tuning
train_data = dataset['train']
for example in train_data:
    instruction = example['instruction']
    response = example['response']
    # Use for instruction-following fine-tuning

# For general language modeling
for example in train_data:
    text = example['text']
    # Use for general language model training
```

## Filtering and Quality Control

```python
# Filter by quality score
high_quality = dataset.filter(lambda x: x['quality_score'] > 0.7)

# Filter by content type
educational_content = dataset.filter(lambda x: x['content_type'] == 'educational')

# Filter by length (optimal for training)
optimal_length = dataset.filter(lambda x: 1000 <= x['word_count'] <= 5000)

# Filter by category
business_content = dataset.filter(lambda x: x['video_metadata']['category'] == 'business')
```

## Use Cases

### 1. **Instruction Fine-tuning**
Use the `instruction` and `response` fields for training models to follow instructions.

### 2. **Conversational AI Training**
Filter for `content_type == 'conversational'` for dialogue training.

### 3. **Domain-specific Training**
Filter by `video_metadata.category` for domain-specific fine-tuning.

### 4. **Quality-based Training**
Use `quality_score` to select high-quality training examples.
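
For instruction fine-tuning, the `instruction` and `response` fields can be joined into a single training string. The template below is one common choice (Alpaca-style); the exact format is up to you and is not part of the dataset:

```python
# Sketch: format an instruction/response pair into one training string.
# The "### Instruction:"/"### Response:" template is an assumption, not
# something the dataset prescribes.
def format_for_sft(example):
    prompt = (
        "### Instruction:\n"
        f"{example['instruction']}\n\n"
        "### Response:\n"
        f"{example['response']}"
    )
    return {"formatted_text": prompt}

sample = {
    "instruction": "Provide a transcript of the video titled 'Demo'",
    "response": "Hello and welcome...",
}
formatted = format_for_sft(sample)["formatted_text"]
```

A function like this can be applied across the whole dataset with `dataset.map(format_for_sft)`.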

## Data Quality

- **Text Cleaning**: Transcripts are cleaned to remove artifacts, normalize punctuation, and improve readability
- **Quality Scoring**: Each entry has a quality score based on length, structure, punctuation, and metadata
- **Content Classification**: Automatic classification into content types for targeted training
- **Metadata Enrichment**: Rich metadata for filtering and analysis

## Sharding

The dataset is automatically sharded into files of at most 10 MB each (`data_shard_XXXX.jsonl`) for efficient loading and processing.
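
The size-capped sharding described above can be sketched as a simple writer that starts a new shard whenever the next JSONL line would push the current shard past the limit. The helper and demo values below are illustrative, not the dataset's actual build code:

```python
import json

MAX_SHARD_BYTES = 10 * 1024 * 1024  # 10 MB cap per shard, as described above

def shard_records(records, max_bytes=MAX_SHARD_BYTES):
    """Group JSONL-serialized records into shards of at most max_bytes each."""
    shards, current, current_size = [], [], 0
    for rec in records:
        line = json.dumps(rec) + "\n"
        size = len(line.encode("utf-8"))
        # Start a new shard if this line would exceed the cap
        if current and current_size + size > max_bytes:
            shards.append(current)
            current, current_size = [], 0
        current.append(line)
        current_size += size
    if current:
        shards.append(current)
    return shards

# Tiny demo with a 60-byte cap: each record serializes to ~53 bytes,
# so the two records land in separate shards.
demo = shard_records([{"text": "a" * 40}, {"text": "b" * 40}], max_bytes=60)
```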

## Last Updated

2025-10-26T14:52:26.835885

## License and Usage

Please ensure compliance with YouTube's Terms of Service when using this dataset. This dataset is intended for research and educational purposes in natural language processing and machine learning.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{youtube_transcripts_llm,
  title={YouTube Transcripts Dataset for LLM Training},
  author={Generated via OpenAI Whisper},
  year={2025},
  url={https://huggingface.co/datasets/morka17/rtu-tgn}
}
```