LAION-Tunes
908,174 AI-generated music tracks from 5 platforms, annotated with captions, transcriptions, embeddings, aesthetics scores, and NSFW safety labels. Includes a full-text and vector search engine with a web UI featuring both a beginner-friendly Simple mode and a power-user Advanced mode.
Quick Stats
| Metric | Value |
|---|---|
| Total tracks | 908,174 |
| Subsets | Mureka (383K), Suno (308K), Udio (115K), Riffusion (99K), Sonauto (3K) |
| Has caption (Music-Whisper) | 832,944 (91.7%) |
| Has transcription (Parakeet ASR) | 514,203 (56.6%) |
| Instrumental | 394,246 (43.4%) |
| NSFW flagged (very likely + likely) | 12,860 (1.4%) |
| FAISS vector indices | 6 (tag, whisper, caption, transcription, lyric, mood) |
| BM25 text indices | 3 (tags, caption, transcription) |
| Total index size | ~16 GB |
Dataset Description
LAION-Tunes is a curated metadata and annotation dataset derived from ai-music/ai-music-deduplicated, a collection of publicly available AI-generated music from Suno, Udio, Mureka, Riffusion, and Sonauto.
This dataset does NOT contain audio files. It contains metadata, annotations, embeddings, and search indices. Audio URLs pointing to the original hosting platforms are included for reference.
Note on Riffusion: Riffusion audio links went dead during the creation of this dataset. The metadata and annotations are preserved, but audio playback will not work for the 99K Riffusion tracks. The web UI defaults to "All except Riffusion" to avoid broken audio players.
What's Included
For each track:
- Metadata: title, tags, genre, mood, duration, play count, upvote count
- Music-Whisper Caption: AI-generated music description using laion/music-whisper (fine-tuned OpenAI Whisper Small)
- Parakeet ASR Transcription: Speech-to-text using nvidia/parakeet-tdt-0.6b-v3 with word-level timestamps
- Sentence Embeddings: 768-dim embeddings via google/embeddinggemma-300m for tags, captions, transcriptions, lyrics, and moods
- Whisper Audio Embeddings: 768-dim mean-pooled encoder embeddings from Music-Whisper for audio similarity search
- Aesthetics Scores: coherence, musicality, memorability, clarity, naturalness (computed from Music-Whisper)
- NSFW Safety Labels: three-tier classification (very_likely_nsfw / likely_nsfw / likely_sfw) across gore, sexual, and hate speech dimensions
- Language Detection: Detected language of vocal content via `langdetect`
- Pre-built Search Indices: FAISS vector indices and BM25 text indices ready to serve
Annotation Pipeline
The annotation pipeline processes the original TAR files from ai-music-deduplicated:
- Music-Whisper (`laion/music-whisper`): Generates music captions describing instruments, genre, mood, tempo, etc.
- Parakeet TDT 0.6B (`nvidia/parakeet-tdt-0.6b-v3`): ASR transcription with word-level timestamps for vocal content
- EmbeddingGemma 300M (`google/embeddinggemma-300m`): Computes 768-dim sentence embeddings for captions, transcriptions, tags, lyrics, and moods
- Whisper Encoder Embeddings: Mean-pooled encoder hidden states from Music-Whisper for audio fingerprinting/similarity
- NSFW Classification: Cosine similarity of transcription embeddings against reference prompts for gore/violence, sexual content, and hate speech
- Language Detection: `langdetect` applied to Parakeet ASR transcriptions (first 300 characters)
Web UI
The search engine includes a single-page dark-mode web interface (index.html) with two modes:
Simple Mode (Default)
A clean, beginner-friendly interface for quick searches:
- Large search bar with natural language input
- Rank by: Relevance, Aesthetics Score, Most Liked, or Most Played
- Optional negative prompt (exclude certain terms/concepts)
- Language filter chips
- Hardcoded sensible defaults: Vector similarity on Whisper Caption, SFW only, all subsets except Riffusion, 60s minimum duration
Advanced Mode
Full control over all search parameters:
- Search modes: Vector Similarity, BM25 Text, Combined, Music Similarity (audio upload)
- Vector fields: Whisper Caption, Tags, Lyrics, Mood, Transcription
- BM25 fields: Caption, Tags, Transcription, Lyrics (hashed)
- Ranking: Relevance, Music Similarity, Aesthetics, Play Count, Like Count
- Filters: Vocals/Instrumental, Min Duration, Min Aesthetics, Subset, Safety (SFW/NSFW), Languages
- Two-stage search: Refine Stage 1 results with a second query using different search modes/fields
- Negative prompts: Exclude results matching a negative query with adjustable weight
- Audio upload: Find similar tracks by uploading an audio file
- Find Similar: Click on any result to find sonically similar tracks
- Help icons (?) next to every control with detailed explanations of what each option does
Repository Structure
laion-tunes/
├── README.md # This file
├── server.py # FastAPI search server (main application)
├── index.html # Web UI (dark-mode, single-page app)
├── build_search_index.py # Index builder script
├── update_indices.py # Incremental index updater
├── migrate_add_language.py # Language detection migration
├── nsfw_safety_report.html # Interactive NSFW analysis report
├── nsfw_analysis_data.json # Raw NSFW analysis data
│
├── public/ # Annotated metadata parquets (8.3 GB)
│ ├── mureka_000000.tar.parquet # One parquet per source TAR file
│ ├── mureka_000001.tar.parquet
│ ├── ...
│ └── udio_000015.tar.parquet # 159 parquet files total
│
├── search_index/ # Pre-built search indices (16 GB)
│ ├── metadata.db # SQLite database (908K tracks, 2.7 GB)
│ ├── faiss_tag.index # FAISS IndexFlatIP - tag embeddings (2.6 GB)
│ ├── faiss_whisper.index # FAISS IndexFlatIP - audio embeddings (2.6 GB)
│ ├── faiss_caption.index # FAISS IndexFlatIP - caption embeddings (2.3 GB)
│ ├── faiss_transcription.index # FAISS IndexFlatIP - transcription embeddings (1.5 GB)
│ ├── faiss_lyric.index # FAISS IndexFlatIP - lyric embeddings (1.4 GB)
│ ├── faiss_mood.index # FAISS IndexFlatIP - mood embeddings (1.1 GB)
│ ├── idmap_*.npy # Row ID mappings for each FAISS index
│ ├── bm25_tags.pkl # BM25 text index for tags (114 MB)
│ ├── bm25_caption.pkl # BM25 text index for captions (609 MB)
│ └── bm25_transcription.pkl # BM25 text index for transcriptions (392 MB)
│
└── whisper_embeddings/ # Raw Whisper encoder embeddings (1.6 GB)
├── mureka_000000.tar.npz # One NPZ per source TAR file
├── ...
└── udio_000015.tar.npz # 159 NPZ files total
Data Format
Parquet Files (public/)
Each parquet file corresponds to one TAR file from the source dataset and contains these columns:
| Column | Type | Description |
|---|---|---|
| `filename` | str | Filename within the source TAR |
| `tar_file` | str | Source TAR filename |
| `audio_url` | str | Original audio URL (mp3/m4a/ogg) |
| `subset` | str | Source platform (suno/udio/mureka/riffusion/sonauto) |
| `title` | str | Track title |
| `tags` | str | Comma-separated genre/style tags |
| `mood` | str | Mood tags |
| `lyrics` | str | Lyrics (if available, from source metadata) |
| `duration_seconds` | float | Track duration in seconds |
| `play_count` | int | Play count on source platform |
| `upvote_count` | int | Like/upvote count |
| `music_whisper_caption` | str | Music-Whisper generated caption |
| `parakeet_transcription` | str | Parakeet ASR transcription (plain text) |
| `parakeet_transcription_with_timestamps` | str | ASR with word-level timestamps |
| `tag_embedding` | list[float] | 768-dim EmbeddingGemma embedding of tags |
| `caption_embedding` | list[float] | 768-dim EmbeddingGemma embedding of caption |
| `transcription_embedding` | list[float] | 768-dim EmbeddingGemma embedding of transcription |
| `lyric_embedding` | list[float] | 768-dim EmbeddingGemma embedding of lyrics |
| `mood_embedding` | list[float] | 768-dim EmbeddingGemma embedding of mood |
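The embedding columns are stored as per-row `list[float]` values, which are usually stacked into a matrix before any similarity math. A minimal sketch, using a tiny synthetic DataFrame so it runs anywhere; with the dataset on disk you would start from `pd.read_parquet("public/mureka_000000.tar.parquet")` instead:

```python
import numpy as np
import pandas as pd

# Tiny stand-in for one annotation parquet (two fake tracks).
df = pd.DataFrame({
    "title": ["Track A", "Track B"],
    "caption_embedding": [np.random.rand(768).tolist(),
                          np.random.rand(768).tolist()],
})

# Stack the per-row list[float] embeddings into an (N, 768) float32 matrix.
mat = np.stack(df["caption_embedding"].to_numpy()).astype(np.float32)

# L2-normalize so that inner products equal cosine similarities.
mat /= np.linalg.norm(mat, axis=1, keepdims=True)
print(mat.shape)  # (2, 768)
```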
Whisper Embeddings (whisper_embeddings/)
NPZ files containing mean-pooled Whisper encoder hidden states:
- `embeddings`: float32 array of shape `(N, 768)`, L2-normalized
- `filenames`: string array of filenames matching the parquet entries
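A hedged sketch of this NPZ layout, using an in-memory buffer in place of a real file such as `whisper_embeddings/mureka_000000.tar.npz`:

```python
import io
import numpy as np

# Build a tiny array with the same two keys described above.
emb = np.random.rand(3, 768).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # stored L2-normalized
names = np.array(["a.mp3", "b.mp3", "c.mp3"])

buf = io.BytesIO()
np.savez(buf, embeddings=emb, filenames=names)
buf.seek(0)

# Reading back: same access pattern as np.load("whisper_embeddings/....npz").
data = np.load(buf)
print(data["embeddings"].shape, data["filenames"][0])
```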
SQLite Database (search_index/metadata.db)
The `tracks` table contains all 908,174 tracks with 34 columns, including metadata, aesthetics scores, annotation flags, NSFW safety labels, language codes, and instrumental flags. The `row_id` column is the primary key used by all FAISS indices.
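The lookup pattern is: FAISS returns internal positions, the idmap translates them to `row_id` values, and SQLite hydrates the full track rows. A minimal sketch of the last step, using an in-memory database with a toy subset of the real `tracks` columns:

```python
import sqlite3

# In-memory stand-in for search_index/metadata.db with a few columns.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tracks (
    row_id INTEGER PRIMARY KEY,
    title TEXT,
    subset TEXT,
    score_average REAL
)""")
con.executemany(
    "INSERT INTO tracks VALUES (?, ?, ?, ?)",
    [(1, "Cosmic Drift", "suno", 3.45), (2, "Night Rain", "udio", 3.10)],
)

# row_ids coming back from a FAISS search (via the idmap):
hits = [2, 1]
placeholders = ",".join("?" * len(hits))
rows = con.execute(
    f"SELECT row_id, title FROM tracks WHERE row_id IN ({placeholders})",
    hits,
).fetchall()
print(rows)
```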
FAISS Indices (search_index/faiss_*.index)
All indices are IndexFlatIP (inner product / cosine similarity for L2-normalized vectors) with 768 dimensions. Each index has a corresponding idmap_*.npy that maps FAISS internal indices to SQLite row_id values.
| Index | Vectors | Description |
|---|---|---|
| `faiss_tag` | 908,241 | Tag text embeddings |
| `faiss_whisper` | 908,174 | Audio encoder embeddings (music similarity) |
| `faiss_caption` | 798,858 | Music-Whisper caption embeddings |
| `faiss_transcription` | 511,610 | ASR transcription embeddings |
| `faiss_lyric` | 479,313 | Lyrics embeddings |
| `faiss_mood` | 383,616 | Mood text embeddings |
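Because the vectors are L2-normalized, `IndexFlatIP` is an exact cosine-similarity search. The sketch below mimics that with plain numpy (swap in `faiss.IndexFlatIP(768)` against the real index files) and shows how an `idmap_*.npy` array maps internal FAISS positions back to SQLite `row_id` values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five toy L2-normalized vectors standing in for one FAISS index.
vecs = rng.normal(size=(5, 768)).astype(np.float32)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# FAISS internal position -> SQLite row_id (the idmap_*.npy contents).
idmap = np.array([10, 42, 7, 99, 3])

# Query: a slightly perturbed copy of vector 1, re-normalized.
query = vecs[1] + 0.01 * rng.normal(size=768).astype(np.float32)
query /= np.linalg.norm(query)

scores = vecs @ query               # inner product == cosine similarity here
top = np.argsort(-scores)[:3]       # best internal positions
row_ids = idmap[top]                # translate to SQLite row_ids
print(row_ids[0])                   # 42 — the perturbed source vector wins
```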
NSFW Safety Labels
Each track has NSFW safety scores and labels across three dimensions. The classification is performed by computing cosine similarity between each track's transcription embedding and curated reference prompts for each NSFW category.
Classification Method
- For each track with a transcription, the 768-dim EmbeddingGemma embedding is compared against reference prompt embeddings for three categories: gore/violence, sexual content, and hate speech
- Cosine similarity scores are computed for each category
- Two thresholds per category define the three-tier labeling:
- very_likely_nsfw: cosine similarity above the strict threshold
- likely_nsfw: cosine similarity between strict and moderate thresholds
- likely_sfw: cosine similarity below the moderate threshold
- The `nsfw_overall_label` is conservative: the worst (most NSFW) label across all three dimensions is used
Thresholds and Distribution
| Dimension | Strict Threshold | Moderate Threshold | Very Likely NSFW | Likely NSFW |
|---|---|---|---|---|
| Gore/Violence | >= 0.3779 | >= 0.3540 | 2,437 (0.27%) | 2,293 (0.25%) |
| Sexual Content | >= 0.3584 | >= 0.3234 | 3,367 (0.37%) | 2,689 (0.30%) |
| Hate Speech | >= 0.3633 | >= 0.3382 | 2,786 (0.31%) | 2,316 (0.26%) |
| Overall (conservative) | - | - | 6,762 (0.74%) | 6,098 (0.67%) |
NSFW Fields in the Dataset
| Field | Type | Description |
|---|---|---|
| `nsfw_gore_sim` | float | Raw cosine similarity score for gore/violence (0-1) |
| `nsfw_sexual_sim` | float | Raw cosine similarity score for sexual content (0-1) |
| `nsfw_hate_sim` | float | Raw cosine similarity score for hate speech (0-1) |
| `nsfw_gore_label` | str | very_likely_nsfw / likely_nsfw / likely_sfw |
| `nsfw_sexual_label` | str | very_likely_nsfw / likely_nsfw / likely_sfw |
| `nsfw_hate_label` | str | very_likely_nsfw / likely_nsfw / likely_sfw |
| `nsfw_overall_label` | str | Conservative overall label (worst of the three) |
The raw cosine similarity scores are stored so you can apply your own thresholds. The nsfw_safety_report.html file in this repository provides an interactive visual analysis of the NSFW distribution.
Filtering Behavior
- SFW Only (default in UI): Excludes all tracks where `nsfw_overall_label != "likely_sfw"` (removes ~1.4% of tracks)
- NSFW Only: Shows only tracks where `nsfw_overall_label != "likely_sfw"`
- All: No filtering, includes everything
Running the Search Server
Prerequisites
pip install fastapi uvicorn faiss-cpu numpy pandas sentence-transformers torch scipy tqdm python-multipart transformers
For ONNX-accelerated embeddings (recommended for CPU deployments):
pip install optimum[onnxruntime] onnxruntime
Option 1: ONNX Quantized Embeddings (Recommended for CPU)
The server automatically tries to load the non-gated onnx-community/embeddinggemma-300m-ONNX model with int8 quantization for fast CPU inference:
python server.py --port 7860 --no-whisper
This provides ~30ms per query embedding on CPU with the q8-quantized ONNX model.
Option 2: With HF Text Embeddings Inference
TEI provides fast CPU-based embedding serving via Docker:
# Start TEI (requires Docker + HF token for gated model)
docker run -d --name tei-embeddings \
-p 8090:80 \
-e HF_TOKEN=your_token \
ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
--model-id google/embeddinggemma-300m \
--max-batch-requests 4
# Start the server with TEI backend
python server.py --port 7860 --gpu 0 --tei-url http://localhost:8090
Option 3: Direct Python Inference with PyTorch
# Requires HF token for gated google/embeddinggemma-300m
HF_TOKEN=your_token python server.py --port 7860 --gpu 0
This loads EmbeddingGemma 300M via SentenceTransformer and Music-Whisper encoder into Python directly (~430ms per query).
Server Flags
| Flag | Default | Description |
|---|---|---|
| `--port` | 7860 | HTTP port |
| `--host` | 0.0.0.0 | Bind address |
| `--gpu` | 0 | GPU ID for Whisper encoder |
| `--tei-url` | None | TEI server URL for text embeddings (skips loading Python embedder) |
| `--no-whisper` | False | Skip loading Whisper encoder (disables audio upload similarity search) |
What Loads at Startup
- 6 FAISS indices (tag, whisper, caption, transcription, lyric, mood)
- 3 BM25 indices (tags, caption, transcription)
- SQLite database (908K tracks)
- Text embedder: ONNX quantized model, TEI backend, or Python SentenceTransformer (tried in that order)
- Music-Whisper encoder (optional, on GPU): for audio upload similarity search
Total memory: ~20 GB RAM + ~200 MB GPU VRAM (if Whisper encoder loaded)
Search API Reference
The FastAPI server exposes the following endpoints:
GET /
Serves the HTML search frontend (Simple/Advanced mode web UI).
GET /nsfw-report
Serves the interactive NSFW safety analysis report (nsfw_safety_report.html).
POST /api/search
Main search endpoint supporting vector similarity, BM25 text search, and combined mode with optional two-stage refinement.
Request Body (JSON)
| Field | Type | Default | Description |
|---|---|---|---|
| `query` | str | required | Search query text |
| `negative_query` | str \| null | null | Negative prompt (subtracted from query embedding in vector mode) |
| `search_type` | str | "bm25" | "vector" \| "bm25" \| "combined" |
| `vector_field` | str | "caption" | FAISS index: "tag" \| "caption" \| "lyric" \| "mood" \| "transcription" |
| `bm25_field` | str | "caption" | BM25 index: "tags" \| "caption" \| "transcription" \| "lyrics_hashed" |
| `rank_by` | str | "similarity" | "similarity" \| "aesthetics" \| "plays" \| "likes" |
| `min_aesthetics` | float \| null | null | Minimum aesthetics score (0-5 scale) |
| `min_similarity` | float \| null | null | Minimum cosine similarity score |
| `subset_filter` | str \| null | null | "suno" \| "udio" \| "mureka" \| "riffusion" \| "sonauto" \| "no_riffusion" |
| `vocal_filter` | str \| null | null | "instrumental" \| "vocals" |
| `min_duration` | float \| null | 60.0 | Minimum duration in seconds |
| `languages` | list[str] \| null | null | Language codes to include (e.g. ["en", "es"]); null = all |
| `negative_weight` | float | 0.7 | Weight for negative query subtraction (0.0-1.0) |
| `nsfw_filter` | str \| null | null | "sfw_only" \| "nsfw_only" \| null (all) |
| `top_k` | int | 50 | Number of results to return |
| `stage2_enabled` | bool | false | Enable two-stage refinement |
| `stage2_query` | str \| null | null | Query text for Stage 2 |
| `stage2_search_type` | str | "vector" | "vector" \| "bm25" |
| `stage2_vector_field` | str | "caption" | Vector field for Stage 2 |
| `stage2_bm25_field` | str | "caption" | BM25 field for Stage 2 |
| `stage2_top_k` | int | 50 | Number of results after Stage 2 re-ranking |
Example Request
curl -X POST http://localhost:7860/api/search \
-H "Content-Type: application/json" \
-d '{
"query": "dreamy ambient synth pad",
"search_type": "vector",
"vector_field": "caption",
"rank_by": "similarity",
"subset_filter": "no_riffusion",
"nsfw_filter": "sfw_only",
"top_k": 20
}'
Response Body (JSON)
{
"results": [
{
"row_id": 12345,
"title": "Cosmic Drift",
"audio_url": "https://cdn1.suno.ai/abc123.mp3",
"subset": "suno",
"tags_text": "ambient, electronic, synth",
"mood_text": "dreamy, peaceful",
"genre_tags": ["Electronic", "Ambient"],
"scene_tags": [],
"emotion_tags": ["Peaceful"],
"score_average": 3.45,
"score_coherence": 3.5,
"score_musicality": 3.3,
"score_memorability": 3.6,
"score_clarity": 3.4,
"score_naturalness": 3.4,
"play_count": 42,
"upvote_count": 5,
"duration_seconds": 180.0,
"music_whisper_caption": "The listener hears a track characterized by...",
"has_caption": true,
"has_transcription": false,
"is_instrumental": true,
"language": "unknown",
"score": 0.8234,
"score_type": "cosine_similarity",
"has_whisper_emb": true,
"nsfw_overall_label": "likely_sfw",
"nsfw_gore_label": "likely_sfw",
"nsfw_sexual_label": "likely_sfw",
"nsfw_hate_label": "likely_sfw",
"nsfw_gore_sim": 0.2145,
"nsfw_sexual_sim": 0.1987,
"nsfw_hate_sim": 0.2034
}
],
"total_candidates": 2000,
"total_filtered": 1847,
"total_tracks": 908174,
"search_time_ms": 245.3,
"query_embedding_time_ms": 28.5,
"search_type": "vector",
"vector_field": "caption",
"bm25_field": "caption"
}
When Stage 2 is enabled, each result also includes `stage1_score` and `stage2_score`, and the response includes a `stage2` object:
{
"stage2": {
"query": "dark atmospheric",
"search_type": "vector",
"field": "caption",
"matched": 20,
"returned": 10
}
}
Search Logic
- The query is embedded in real-time via EmbeddingGemma 300M (ONNX/TEI/Python)
- If `negative_query` is provided, it is also embedded and subtracted from the query vector with `negative_weight`, then re-normalized
- Vector mode: FAISS inner-product search on the specified `vector_field`
- BM25 mode: BM25-Okapi text search on the specified `bm25_field` (tokenized: lowercase, alphanumeric tokens of length >= 2)
- Combined mode: Union of vector and BM25 candidates (intersection if non-empty)
- Candidate pool size: `max(top_k * 100, 20000)` if filters are active; `max(top_k * 10, 2000)` otherwise
- Filters applied in order: min_aesthetics, subset_filter, min_similarity, vocal_filter, min_duration, languages, nsfw_filter
- Results ranked by the specified `rank_by` criterion
- If Stage 2 is enabled, the top_k results from Stage 1 are re-scored by the Stage 2 query and the top `stage2_top_k` are returned
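Three of the steps above can be sketched in a few lines of numpy. The helper names here are ours, not the server's; this is a hedged illustration of the stated rules, not the actual implementation:

```python
import re
import numpy as np

def apply_negative(query_vec: np.ndarray, neg_vec: np.ndarray,
                   weight: float = 0.7) -> np.ndarray:
    """Subtract the weighted negative embedding, then re-normalize."""
    v = query_vec - weight * neg_vec
    return v / np.linalg.norm(v)

def bm25_tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens of length >= 2, per the BM25 rule above."""
    return [t for t in re.findall(r"[a-z0-9]+", text.lower()) if len(t) >= 2]

def candidate_pool(top_k: int, filters_active: bool) -> int:
    """Candidate pool sizing rule from the list above."""
    return max(top_k * 100, 20000) if filters_active else max(top_k * 10, 2000)

print(bm25_tokenize("Dreamy, ambient synth-pad!"))  # ['dreamy', 'ambient', 'synth', 'pad']
print(candidate_pool(50, True))                     # 20000
```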
POST /api/search_similar
Find tracks similar to an existing track using pre-computed Whisper audio embeddings.
Request Body (JSON)
| Field | Type | Default | Description |
|---|---|---|---|
| `row_id` | int | required | Row ID of the reference track |
| `top_k` | int | 50 | Number of results |
| `rank_by` | str | "similarity" | "similarity" \| "aesthetics" \| "plays" \| "likes" |
| `min_aesthetics` | float \| null | null | Minimum aesthetics score |
| `subset_filter` | str \| null | null | Subset filter (same values as /api/search) |
| `vocal_filter` | str \| null | null | "instrumental" \| "vocals" |
| `min_duration` | float \| null | 60.0 | Minimum duration in seconds |
| `languages` | list[str] \| null | null | Language codes to include |
| `nsfw_filter` | str \| null | null | "sfw_only" \| "nsfw_only" |
| `stage2_enabled` | bool | false | Enable two-stage refinement |
| `stage2_query` | str \| null | null | Stage 2 text query |
| `stage2_search_type` | str | "vector" | Stage 2 search type |
| `stage2_vector_field` | str | "caption" | Stage 2 vector field |
| `stage2_bm25_field` | str | "caption" | Stage 2 BM25 field |
| `stage2_top_k` | int | 50 | Stage 2 results count |
Example Request
curl -X POST http://localhost:7860/api/search_similar \
-H "Content-Type: application/json" \
-d '{"row_id": 12345, "top_k": 20, "nsfw_filter": "sfw_only"}'
Response Body (JSON)
Same structure as /api/search, with additional fields:
- `search_type`: always `"music_similarity"`
- `vector_field`: always `"whisper"`
- `reference_row_id`: the input row_id
- `reference_title`: title of the reference track
Error Responses
- 503: Whisper FAISS index not loaded
- 404: `row_id` not found in Whisper embeddings
POST /api/search_by_audio
Upload an audio file to find similar tracks by audio fingerprint using the Music-Whisper encoder.
Request Format: multipart/form-data
| Field | Type | Default | Description |
|---|---|---|---|
| `audio` | file | required | Audio file (mp3, wav, flac, ogg, m4a, etc.). Max 100 MB. First 30 seconds used. |
| `top_k` | int | 50 | Number of results |
| `rank_by` | str | "similarity" | Ranking criterion |
| `subset_filter` | str | null | Subset filter |
| `vocal_filter` | str | null | Vocal filter |
| `min_duration` | float | null | Min duration in seconds |
| `min_aesthetics` | float | null | Min aesthetics score |
| `languages` | str | null | Comma-separated language codes (e.g. "en,es,fr") |
| `nsfw_filter` | str | null | NSFW filter |
| `stage2_enabled` | str | null | "true" to enable Stage 2 |
| `stage2_query` | str | null | Stage 2 text query |
| `stage2_search_type` | str | "vector" | Stage 2 mode |
| `stage2_vector_field` | str | "caption" | Stage 2 vector field |
| `stage2_bm25_field` | str | "caption" | Stage 2 BM25 field |
| `stage2_top_k` | int | 50 | Stage 2 results count |
Example Request
curl -X POST http://localhost:7860/api/search_by_audio \
-F "audio=@my_song.mp3" \
-F "top_k=20" \
-F "nsfw_filter=sfw_only"
Response Body (JSON)
Same structure as /api/search, with additional fields:
- `search_type`: always `"music_similarity"`
- `vector_field`: always `"whisper"`
- `audio_filename`: name of the uploaded file
- `cache_hit`: boolean indicating if the embedding was cached
Audio Processing Pipeline
- Audio loaded with `librosa` (sr=16000, mono)
- Trimmed to first 30 seconds (minimum 0.1s required)
- Processed through Music-Whisper's encoder
- Hidden states mean-pooled to 768-dim vector, then L2-normalized
- Cached per (client_ip, filename) for 1 hour (max 1000 entries)
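The pooling step above is plain tensor arithmetic. A sketch with a random stand-in for the Whisper encoder output (real hidden states have shape `(frames, 768)`):

```python
import numpy as np

# Fake encoder output: 1500 time frames of 768-dim hidden states.
hidden_states = np.random.rand(1500, 768).astype(np.float32)

vec = hidden_states.mean(axis=0)   # mean-pool over time frames -> (768,)
vec /= np.linalg.norm(vec)         # L2-normalize to unit length
print(vec.shape)  # (768,)
```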
Error Responses
- 503: Whisper encoder not loaded (requires `--gpu` and not `--no-whisper`)
- 400: Empty audio file, file > 100 MB, or audio processing failed
GET /api/stats
Returns dataset statistics and index information.
Response Body (JSON)
{
"total_tracks": 908174,
"subsets": {
"mureka": 383549,
"suno": 307539,
"udio": 115140,
"riffusion": 99228,
"sonauto": 2718
},
"score_average": {
"mean": 3.259,
"min": 1.765,
"max": 4.395
},
"with_caption": 832944,
"with_transcription": 514203,
"faiss_indices": {
"tag": 908241,
"lyric": 479313,
"mood": 383616,
"caption": 798858,
"transcription": 511610,
"whisper": 908174
},
"bm25_indices": {
"tags": 908174,
"caption": 832944,
"transcription": 514203,
"lyrics_hashed": 0
},
"languages": {
"en": 350000,
"es": 25000,
"unknown": 394246
},
"instrumental_count": 394246,
"whisper_embeddings": 908174
}
Result Track Object
Every search endpoint returns results as a list of track objects with these fields:
| Field | Type | Description |
|---|---|---|
| `row_id` | int | Unique track identifier (primary key in SQLite) |
| `title` | str | Track title |
| `audio_url` | str | URL to the audio file on the source platform |
| `subset` | str | Source platform: suno, udio, mureka, riffusion, sonauto |
| `tags_text` | str | Comma-separated genre/style tags |
| `mood_text` | str | Mood descriptors |
| `genre_tags` | list[str] | Parsed genre classifications |
| `scene_tags` | list[str] | Parsed scene classifications |
| `emotion_tags` | list[str] | Parsed emotion classifications |
| `score_average` | float \| null | Overall aesthetics score (0-5) |
| `score_coherence` | float \| null | Coherence sub-score |
| `score_musicality` | float \| null | Musicality sub-score |
| `score_memorability` | float \| null | Memorability sub-score |
| `score_clarity` | float \| null | Clarity sub-score |
| `score_naturalness` | float \| null | Naturalness sub-score |
| `play_count` | int | Play count on source platform |
| `upvote_count` | int | Like/upvote count |
| `duration_seconds` | float \| null | Track duration in seconds |
| `music_whisper_caption` | str | AI-generated music description |
| `has_caption` | bool | Whether a Music-Whisper caption exists |
| `has_transcription` | bool | Whether a Parakeet ASR transcription exists |
| `is_instrumental` | bool | Whether the track is instrumental (no vocals detected) |
| `language` | str | Detected language code (e.g. "en", "es") or "unknown" |
| `score` | float \| null | Search relevance score (type depends on score_type) |
| `score_type` | str | Score type: cosine_similarity, bm25, aesthetics, play_count, upvote_count |
| `has_whisper_emb` | bool | Whether the track has a Whisper audio embedding |
| `nsfw_overall_label` | str | Conservative NSFW label |
| `nsfw_gore_label` | str | Gore/violence NSFW label |
| `nsfw_sexual_label` | str | Sexual content NSFW label |
| `nsfw_hate_label` | str | Hate speech NSFW label |
| `nsfw_gore_sim` | float \| null | Gore cosine similarity score |
| `nsfw_sexual_sim` | float \| null | Sexual cosine similarity score |
| `nsfw_hate_sim` | float \| null | Hate cosine similarity score |
| `stage1_score` | float | (Only if Stage 2 active) Score from Stage 1 |
| `stage2_score` | float | (Only if Stage 2 active) Score from Stage 2 |
Building the Index from Scratch
If you want to rebuild the search indices from the parquet files:
python build_search_index.py --force
This reads all parquets from public/ (and optionally private/ for additional embeddings), builds the SQLite database, FAISS indices, and BM25 indices. Takes ~30 minutes on a modern machine.
Source Data
The original audio data comes from ai-music/ai-music-deduplicated, organized by platform:
| Platform | Tracks | Audio Format | Notable Fields | Audio Status |
|---|---|---|---|---|
| Mureka | 383,549 | MP3 | genres, moods, model version | Active |
| Suno | 307,539 | MP3 | tags (in metadata), prompt, model_name, explicit flag | Active |
| Udio | 115,140 | MP3 | tags (array), lyrics, prompt, likes, plays | Active |
| Riffusion | 99,228 | M4A | sound (style description), lyrics_timestamped, conditions | Dead links |
| Sonauto | 2,718 | OGG | tags (array), description, keyword | Active |
Models Used
| Model | Purpose | Output |
|---|---|---|
| laion/music-whisper | Music captioning + audio embeddings | Text caption + 768-dim encoder embedding |
| nvidia/parakeet-tdt-0.6b-v3 | ASR transcription | Text + word-level timestamps |
| google/embeddinggemma-300m | Text sentence embeddings | 768-dim L2-normalized vectors |
| onnx-community/embeddinggemma-300m-ONNX | Text embeddings (ONNX, non-gated) | 768-dim L2-normalized vectors (q8 quantized) |
License
Apache 2.0
Citation
@misc{laion-tunes-2025,
title={LAION-Tunes: Annotated AI Music Search Dataset},
author={LAION},
year={2025},
url={https://huggingface.co/datasets/laion/laion-tunes}
}