---
license: mit
language:
- en
tags:
- audio
- speech-recognition
- whisper
- transcription
- podcast
datasets:
- custom
metrics:
- wer
library_name: transformers
pipeline_tag: automatic-speech-recognition
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.txt
---
# noagenda transcripts
This is the dataset of transcripts for the noagendashow.net podcast. The transcripts are in the data folder.
The repo also contains the source code for generating the transcripts, and the source code for the noagenda-transcripts.net website, which searches the transcripts and plays the audio clips of the search results.
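If you only need the text, the transcripts can also be loaded with the `datasets` library. This is a minimal sketch; the dataset id below is a placeholder for wherever this repo is hosted on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset.
ds = load_dataset("user/noagenda-transcripts", split="train")
print(ds[0]["text"])
```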
The repo consists of 4 main parts:
- A Go CLI for generating the transcript text files and for downloading and preparing the audio (chunking, re-encoding).
- A Python worker for diarization and speech-to-text (a minimal sketch of this step follows after this list). It needs a GPU to run; the ~1800-episode archive took roughly 4 days on 8 x RTX 2000 GPUs to produce.
- Dockerfiles for the above, plus job-queue management, which is handled with Temporal.
- ReactJS website code and a NodeJS API for searching.
Each directory has its own README with more details.
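As a rough illustration of how the diarization and speech-to-text step fits together, here is a minimal sketch assuming pyannote for speaker diarization and a Whisper pipeline from transformers; the worker's actual code and model choices live in its own directory:

```python
import torch
from pyannote.audio import Pipeline
from transformers import pipeline as hf_pipeline

device = 0 if torch.cuda.is_available() else -1

# Both models need a Hugging Face token with access granted (see Requirements).
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
asr = hf_pipeline("automatic-speech-recognition",
                  model="openai/whisper-large-v3", device=device)

def transcribe_chunk(wav_path: str):
    # Who spoke when.
    diarization = diarizer(wav_path)
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")
    # What was said, with timestamps so segments can be matched to speakers.
    return asr(wav_path, return_timestamps=True)["chunks"]
```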
## Requirements
You will need to install the following to run the transcript generation:
- Docker
- Go
- Python and Poetry
- protoc for Protobuf generation (Python only)
- A Hugging Face token for downloading the models (see the snippet after this list)
- ffmpeg
- Postgres, used for tracking progress (you can run it in a container)
- NodeJS for the website
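The Hugging Face token only needs to be registered once; here is a small sketch using `huggingface_hub` (the worker may read the token differently):

```python
from huggingface_hub import login, whoami

# Stores the token locally so the pyannote/Whisper model downloads are authorised.
login(token="hf_...")  # or run `huggingface-cli login` interactively
print(whoami()["name"])
```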
The protobuf files of the audio segments are in the pipeline-data directory as a zip file.
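As a hedged sketch of how those files might be read back after regenerating the Python bindings with protoc; the archive path and the `segment_pb2.AudioSegment` names are hypothetical placeholders, the real names come from the .proto files in this repo:

```python
import zipfile

import segment_pb2  # hypothetical module generated by protoc from the repo's .proto files

# The zip file name is assumed; use the actual archive in pipeline-data.
with zipfile.ZipFile("pipeline-data/segments.zip") as zf:
    zf.extractall("pipeline-data/segments")
    names = zf.namelist()

msg = segment_pb2.AudioSegment()  # hypothetical message name
with open(f"pipeline-data/segments/{names[0]}", "rb") as f:
    msg.ParseFromString(f.read())
print(msg)
```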