---
license: mit
dataset_info:
  features:
    - name: title
      dtype: large_string
    - name: video_id
      dtype: large_string
    - name: transcript
      dtype: large_string
  splits:
    - name: train
      num_bytes: 130792887
      num_examples: 1192
  download_size: 61288449
  dataset_size: 130792887
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - code
pretty_name: Free Code Camp Transcripts
size_categories:
  - 1K<n<10K
---

Free Code Camp Transcripts

Overview

This dataset contains transcripts of programming tutorials from FreeCodeCamp videos. Each entry includes the video title, YouTube video ID, and the full transcript, making it suitable for training and evaluating NLP and LLM systems focused on developer education.

Data Source

Transcripts were collected from programming-tutorial videos published on the FreeCodeCamp YouTube channel.

Dataset Structure

| Column | Type | Description |
|------------|--------|---------------------------------|
| title | string | Title of the YouTube video |
| video_id | string | Unique YouTube video identifier |
| transcript | string | Full transcript of the video |

Dataset Details

  • Total Samples: 1,192
  • Language: English
  • Format: Parquet (auto-converted by Hugging Face)
  • Domain: Programming / Software Development
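
Transcript lengths vary widely, so it can help to check basic length statistics before picking a chunk or truncation size. A minimal helper sketch (the `length_stats` name is illustrative, not part of the dataset):

```python
def length_stats(texts):
    """Return (min, mean, max) character lengths for a list of strings."""
    lengths = [len(t) for t in texts]
    return min(lengths), sum(lengths) / len(lengths), max(lengths)

# Example usage once the dataset is loaded (see below):
# print(length_stats(dataset["train"]["transcript"]))
```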

How to Load the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("nuhmanpk/freecodecamp-transcripts")
print(dataset)
print(dataset["train"][0])
```

Example Record

```json
{
  "title": "PostgreSQL Tutorial for Beginners",
  "video_id": "SpfIwlAYaKk",
  "transcript": "Welcome to this PostgreSQL tutorial..."
}
```

Use Cases

1. Text Summarization

```python
from transformers import pipeline

summarizer = pipeline("summarization")

# Transcripts can be very long; truncate to stay within the model's input limit
text = dataset["train"][0]["transcript"]
summary = summarizer(text[:2000])

print(summary)
```

2. Question Answering

```python
from transformers import pipeline

qa = pipeline("question-answering")

# Note: extractive QA models have a limited context window,
# so long transcripts may need to be chunked first
context = dataset["train"][0]["transcript"]
question = "What is PostgreSQL?"

result = qa(question=question, context=context)
print(result)
```
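
Because extractive QA models truncate long inputs, one simple strategy is to run the pipeline over fixed-size chunks of the transcript and keep the highest-scoring answer. A sketch (the `best_answer` helper is illustrative):

```python
def best_answer(results):
    """Pick the highest-scoring answer from a list of QA pipeline outputs."""
    return max(results, key=lambda r: r["score"])

# chunks = [context[i:i + 2000] for i in range(0, len(context), 2000)]
# results = [qa(question=question, context=c) for c in chunks]
# print(best_answer(results))
```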

3. Instruction Dataset

```python
def to_instruction(example):
    # Truncate the response to keep prompt/response pairs manageable
    return {
        "prompt": f"Explain this tutorial: {example['title']}",
        "response": example["transcript"][:1000]
    }

instruction_ds = dataset["train"].map(to_instruction)
```
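
A common next step is exporting the mapped examples to JSON Lines for fine-tuning tools. A minimal sketch (the `write_jsonl` helper and the output file name are assumptions, not part of the dataset):

```python
import json

def write_jsonl(examples, path):
    """Write an iterable of dicts to a JSON Lines file, one record per line."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# e.g. write_jsonl(instruction_ds, "instructions.jsonl")
```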

4. Embeddings

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode the first 100 transcripts (encoding all 1,192 works but takes longer)
embeddings = model.encode(dataset["train"]["transcript"][:100])
```
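
Once transcripts are embedded, a simple semantic search can be sketched with NumPy alone; the `top_k` helper below is illustrative, and real query/corpus embeddings would come from the model above:

```python
import numpy as np

def top_k(query_emb, corpus_embs, k=3):
    """Return indices of the k corpus embeddings most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]

# With real data:
# idx = top_k(model.encode("What is PostgreSQL?"), embeddings)
```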

Preprocessing Tips

```python
# Drop rows with empty transcripts
dataset = dataset.filter(lambda x: x["transcript"] != "")

# Split long transcripts into fixed-size character chunks
def chunk_text(text, size=1000):
    return [text[i:i + size] for i in range(0, len(text), size)]
```
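
To apply chunking across the whole split, a batched `map` can turn each transcript into multiple rows. A sketch (the `explode_chunks` function is illustrative; it repeats the fixed-size chunking helper so the snippet is self-contained):

```python
def chunk_text(text, size=1000):
    # Same fixed-size character chunking as above
    return [text[i:i + size] for i in range(0, len(text), size)]

def explode_chunks(batch):
    """Batched map function: emit one output row per chunk of each transcript."""
    out = {"title": [], "video_id": [], "chunk": []}
    for title, vid, transcript in zip(batch["title"], batch["video_id"], batch["transcript"]):
        for chunk in chunk_text(transcript):
            out["title"].append(title)
            out["video_id"].append(vid)
            out["chunk"].append(chunk)
    return out

# chunked = dataset["train"].map(explode_chunks, batched=True,
#                                remove_columns=dataset["train"].column_names)
```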

Limitations

  • Transcripts may contain noise such as filler words or transcription errors
  • No timestamps
  • Limited to programming tutorials

License

MIT License


Future Improvements

  • Add topic tags
  • Generate QA pairs
  • Instruction tuning