Sample transcriptions (audio clips range from 5.67 to 10.8 seconds):

| transcription | category |
|---|---|
| I pushed the changes to GitHub and opened a pull request against the main branch. The CI pipeline is running now and should finish in a few minutes. | tech_github |
| The repository has over two thousand stars on GitHub. I forked it last week and added some new features that I want to contribute upstream. | tech_github |
| Check the GitHub issues page for any bug reports. Someone opened a new issue about the authentication flow not working properly on mobile devices. | tech_github |
| I need to update the GitHub Actions workflow to include the new test suite. The current pipeline only runs unit tests but we need integration tests too. | tech_github |
| Clone the repository from GitHub using the SSH URL. Make sure your SSH keys are properly configured before attempting to push any changes. | tech_github |
| I uploaded the dataset to Hugging Face and made it publicly available. The model card still needs some work before we can share it with the community. | tech_huggingface |
| The Hugging Face space is running on a free CPU instance. We might need to upgrade to a GPU runtime if the inference time is too slow for users. | tech_huggingface |
| Download the model weights from Hugging Face using the transformers library. The model is about 4 gigabytes so it might take a while on slower connections. | tech_huggingface |
| The Hugging Face Hub has thousands of pre-trained models available for download. I usually filter by task type and sort by most downloads to find the best ones. | tech_huggingface |
| Create a new Hugging Face space with the Gradio template. It provides a simple interface for demoing machine learning models without writing much frontend code. | tech_huggingface |
| The Docker container is running on port eight thousand. You can check the logs using docker logs with the container name or ID. | tech_docker |
| I built a new Docker image with all the dependencies included. The build process took about fifteen minutes because it had to compile some packages from source. | tech_docker |
| Run docker compose up to start all the services. The configuration file defines three containers that need to communicate with each other over a shared network. | tech_docker |
| The Docker volume persists data between container restarts. Without it we would lose all the database contents every time the container stops. | tech_docker |
| Pull the latest Docker image from the registry before deploying. The tag should match the version we tested in the staging environment. | tech_docker |
| I need to stop by the makolet to pick up some bread and milk. The one on the corner closes early on Friday so I should go before noon. | hebrew_daily |
| Don't forget to bring your teudat zehut when you go to the bank. They always ask for identification when opening a new account. | hebrew_daily |
| The misrad hapnim is closed on Friday afternoons and all day Saturday. You will need to come back on Sunday morning to submit your application. | hebrew_daily |
| We should take the sherut instead of the bus. It goes directly to Tel Aviv and usually takes less time during rush hour traffic. | hebrew_daily |
| The kupat cholim sent me a reminder about my appointment. I need to pick up my prescription from the pharmacy afterwards. | hebrew_daily |
| I got a package from the doar today. The delivery person left a note saying I can pick it up at the local branch tomorrow. | hebrew_daily |
| The arnona bill arrived in the mail yesterday. We need to pay it by the end of the month to avoid any late fees. | hebrew_daily |
| Let's meet at the tachana merkazit around three o'clock. I will be waiting near the entrance by the coffee shop. | hebrew_daily |
| The iriya office hours changed last week. They now open at eight thirty instead of nine in the morning. | hebrew_daily |
| The vaad habayit meeting is scheduled for next Tuesday evening. We need to discuss the elevator repairs and the new security system. | hebrew_daily |
| I ordered some shawarma and falafel for lunch. The hummus here is really good and they give you plenty of pita bread on the side. | hebrew_food |
| The shuk is packed on Friday mornings. Everyone is buying fresh challah and vegetables for Shabbat dinner. | hebrew_food |
| Would you like some more tahini with your sabich? The amba sauce is a bit spicy but it adds great flavor to the sandwich. | hebrew_food |
| The transformer model uses attention mechanisms to process sequences in parallel. This makes training much faster than older recurrent neural network architectures. | ai_ml |
| Fine tuning a large language model requires adjusting the learning rate carefully. Too high and the model forgets its pre-trained knowledge, too low and it learns nothing new. | ai_ml |
| The embeddings capture semantic meaning in a high dimensional vector space. Similar concepts end up close together which enables semantic search. | ai_ml |
| I am training a LoRA adapter instead of doing full fine tuning. It uses much less memory and the resulting weights are only a few megabytes. | ai_ml |
| Prompt engineering is about crafting inputs that guide the model toward desired outputs. Small changes in wording can dramatically affect the quality of responses. | ai_ml |
| The inference speed depends on batch size and model quantization. Using four bit quantization cuts memory usage in half with minimal impact on output quality. | ai_ml |
| Retrieval augmented generation combines language models with external knowledge bases. The retriever finds relevant documents and the generator synthesizes them into coherent answers. | ai_ml |
| The tokenizer splits text into subword units before feeding it to the model. Different tokenizers handle special characters and numbers in different ways. | ai_ml |
| Word error rate measures transcription accuracy by counting insertions, deletions, and substitutions. A lower score means better accuracy compared to the ground truth. | ai_ml |
| Ollama is running on port eleven thousand four hundred thirty four. I pulled the latest Llama model and it works great for local inference. | local_tools |
| The ComfyUI workflow generates images using stable diffusion. I connected the nodes for the prompt, sampler, and VAE decoder in a custom pipeline. | local_tools |
| ROCm provides GPU acceleration for AMD hardware. I had to set the HSA override GFX version to get it working properly on my card. | local_tools |
| The Whisper transcription service is running in a Docker container. It exposes a REST API endpoint that accepts audio files and returns JSON responses. | local_tools |
| I converted the model to GGML format for faster CPU inference. The quantized version loads in seconds and uses much less RAM than the original. | local_tools |
| The CTranslate2 version of the model runs inference twice as fast. It optimizes the computation graph and supports int8 quantization out of the box. | local_tools |
| Portainer makes it easy to manage Docker containers through a web interface. I can see logs, restart services, and monitor resource usage all in one place. | local_tools |
| InvokeAI provides a nice interface for image generation. The canvas mode lets you paint masks and do inpainting on specific areas of the image. | local_tools |
| So I was thinking we could try a different approach this time. The last method worked okay but it took way too long to get any results. | conversational |
| Yeah that makes sense. I had a similar issue last week and ended up just rewriting the whole thing from scratch. Sometimes that is faster than debugging. | conversational |
| Let me know when you have a chance to look at this. No rush but I would appreciate your feedback before I merge it into the main branch. | conversational |
| Actually, I changed my mind about that. Can we go back to the original plan? I think it was simpler and easier to maintain in the long run. | conversational |
| Hold on, let me check something real quick. I remember seeing an error message about this yesterday but I cannot recall what it said exactly. | conversational |
| That is a good question. I am not entirely sure about the answer but my best guess would be to check the configuration file first. | conversational |
| Okay so here is what I am thinking. We start with the basic functionality and then add more features as we go. Does that sound reasonable to you? | conversational |
| I tried that already and it did not work. Maybe there is something wrong with my environment or I am missing a dependency somewhere. | conversational |
| Right, that is exactly what I was trying to say. The problem is not with the code itself but with how we are deploying it to production. | conversational |
| Can you send me the link to that documentation? I have been looking for it all morning but cannot seem to find the right page. | conversational |
| The morning sun cast long shadows across the narrow streets of the old city. Merchants were setting up their stalls while the smell of fresh coffee drifted through the air. | narrative |
| Rain began to fall as the afternoon clouds gathered over the hills. People hurried along the sidewalks looking for shelter under shop awnings and building entrances. | narrative |
| The ancient stone walls had stood for thousands of years, bearing witness to countless generations. Each crack and crevice held stories of the past. | narrative |
| Children played in the courtyard while their parents sat on benches nearby. The sound of laughter echoed off the surrounding apartment buildings. | narrative |
| The market was filled with colorful displays of fruits and vegetables. Vendors called out their prices trying to attract customers to their stalls. | narrative |
| As evening approached the city lights began to flicker on one by one. The streets transformed into rivers of headlights and taillights. | narrative |
| First open the terminal and navigate to the project directory. Then create a new virtual environment using the uv venv command. | instructions |
| Install the required packages by running pip install with the requirements file. Make sure the virtual environment is activated before running this command. | instructions |
| Clone the repository and checkout the development branch. You will need to pull the latest changes before making any modifications. | instructions |
| Run the test suite to make sure everything is working correctly. All tests should pass before you push your changes to the remote repository. | instructions |
| Start the development server by running the main script. Open your browser and navigate to localhost on port three thousand to see the application. | instructions |
| Build the Docker image using the provided Dockerfile. Tag it with a meaningful version number so we can track different releases. | instructions |
| Set the environment variables before starting the application. The API keys should be stored securely and never committed to version control. | instructions |
| Check the system logs using journalctl to see what happened during the last boot. Filter by unit name to focus on the specific service that failed. | tech_linux |
| The systemd service needs to be restarted after changing the configuration file. Use systemctl daemon reload first to pick up the changes. | tech_linux |
| Ubuntu runs well on this workstation with KDE Plasma as the desktop environment. Wayland provides smoother graphics than the older display server. | tech_linux |
| The file permissions need to be changed to allow execution. Use chmod plus x to add the executable bit to the script file. | tech_linux |
| The package manager can install software from the official repositories. Run apt update first to refresh the list of available packages. | tech_linux |
| PipeWire handles audio routing on this system. The USB microphone should appear automatically when you plug it in. | tech_linux |
| The REST API returns JSON responses for all endpoints. Authentication is handled using bearer tokens in the request header. | tech_api |
| Send a POST request to the transcription endpoint with the audio file. The response will include the transcribed text and confidence scores. | tech_api |
| The OpenRouter API provides access to multiple language models through a single endpoint. You can switch between models by changing the model parameter. | tech_api |
| Rate limiting prevents too many requests from overwhelming the server. The API returns a special status code when you exceed the limit. | tech_api |
| The Python script reads the configuration from a YAML file. Make sure the indentation is correct or the parser will raise an error. | tech_python |
| Use the requests library to make HTTP calls in Python. It handles encoding and decoding automatically so you can work with JSON directly. | tech_python |
| The async function uses await to pause execution until the coroutine completes. This allows other tasks to run concurrently without blocking. | tech_python |
| Pydantic validates the input data and converts it to the correct types. It raises clear error messages when the data does not match the expected schema. | tech_python |
| FastAPI generates automatic documentation for your REST endpoints. You can see the swagger interface by navigating to the docs path in your browser. | tech_python |
| I need to finish the code review before the standup meeting. Let me push my changes to GitHub first and then create the pull request. | mixed_workflow |
| The deployment pipeline runs automatically when we merge to main. It builds the Docker image and pushes it to the container registry. | mixed_workflow |
| I uploaded the training data to Hugging Face and started a fine tuning job. The model should be ready for evaluation by tomorrow morning. | mixed_workflow |
| The Whisper model runs locally on my AMD GPU using ROCm. I converted it to CTranslate2 format for faster inference times. | mixed_workflow |
| After updating the model weights I need to rebuild the Docker container. The new version includes better handling of Hebrew words mixed with English. | mixed_workflow |
| The Jerusalem tech scene is growing rapidly. Startups here focus on cybersecurity, medical devices, and artificial intelligence applications. | mixed_locale |
| Working remotely from Israel has its challenges with timezone differences. Most meetings with US teams happen in the late afternoon or evening. | mixed_locale |
| The React component re-renders whenever the state changes. Use the memo hook to prevent unnecessary renders and improve performance. | tech_web |
| Tailwind provides utility classes for styling without writing custom CSS. The build process removes unused styles to keep the bundle size small. | tech_web |
A small speech-to-text evaluation dataset containing 92 audio samples with ground truth transcriptions. Designed for evaluating STT systems on technical vocabulary, code-switching (English/Hebrew), and various speaking styles.
This dataset contains audio recordings with accompanying transcriptions across multiple categories:
| Category | Count | Description |
|---|---|---|
| tech_github | 5 | GitHub-related technical vocabulary |
| tech_huggingface | 4 | Hugging Face platform terminology |
| tech_docker | 5 | Docker and containerization terms |
| hebrew_daily | 10 | English with Hebrew words (daily life) |
| hebrew_food | 3 | English with Hebrew food terms |
| ai_ml | 9 | AI/ML technical vocabulary |
| local_tools | 8 | Local development tools |
| conversational | 10 | Casual conversational speech |
| narrative | 6 | Narrative/storytelling style |
| instructions | 7 | Instructional content |
| tech_linux | 6 | Linux system administration |
| tech_api | 4 | API and web services |
| tech_python | 5 | Python programming |
| mixed_workflow | 5 | Mixed technical workflows |
| mixed_locale | 2 | Mixed locale content |
| tech_web | 2 | Web development |
| tech_data | 1 | Data processing |
```
data/
├── metadata.csv
├── 001_tech_github.wav
├── 002_tech_github.wav
└── ...
```
The `metadata.csv` file contains:

- `file_name`: Audio filename
- `transcription`: Ground truth transcription
- `category`: Content category

Load the dataset with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("danielrosehill/Small-STT-Eval-Audio-Dataset")

# Access a sample
sample = dataset["train"][0]
print(sample["transcription"])
# Play audio: sample["audio"]
```
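Because the ground truth lives in a plain `metadata.csv`, the transcriptions can also be inspected without the `datasets` library. A minimal standard-library sketch; the two inline rows below are hypothetical stand-ins for real entries, not verbatim rows from the dataset:

```python
import csv
import io
from collections import Counter

# Hypothetical stand-in for data/metadata.csv -- illustrative rows only,
# not actual dataset entries.
metadata_csv = """file_name,transcription,category
001_tech_github.wav,I pushed the changes to GitHub and opened a pull request.,tech_github
002_tech_docker.wav,Run docker compose up to start all the services.,tech_docker
"""

# Parse the CSV and count samples per category
rows = list(csv.DictReader(io.StringIO(metadata_csv)))
by_category = Counter(row["category"] for row in rows)
print(by_category)  # Counter({'tech_github': 1, 'tech_docker': 1})
```

With the real file, replace the inline string with `open("data/metadata.csv")` and pass the file object to `csv.DictReader`.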
This dataset is intended for evaluating and comparing speech-to-text systems, particularly their handling of technical vocabulary, English/Hebrew code-switching, and varied speaking styles.
For WER (Word Error Rate) evaluation, we recommend using text normalization to handle variations in number formatting, punctuation, and casing:
```python
from whisper_normalizer.english import EnglishTextNormalizer
from werpy import wer

normalizer = EnglishTextNormalizer()

# Normalize both reference and hypothesis before comparison
reference = normalizer(ground_truth)
hypothesis = normalizer(model_output)
error_rate = wer(reference, hypothesis)
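WER counts word-level insertions, deletions, and substitutions against the reference. If pulling in `werpy` is not an option, the metric can be computed directly; a minimal dependency-free sketch (the `simple_wer` name is illustrative, and both strings are assumed to be normalized already):

```python
# Word error rate via word-level Levenshtein distance. A sketch only:
# libraries such as werpy handle edge cases and batching more robustly.
def simple_wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(simple_wer("the cat sat", "the cat sat"))  # 0.0
print(simple_wer("the cat sat", "the cat sit"))  # one substitution in three words
```

Note the score can exceed 1.0 when the hypothesis contains many insertions, which is expected behavior for WER.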
License: CC-BY-4.0