---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- audio-classification
- text-to-audio
- feature-extraction
tags:
- music
- ai-generated-music
- audio
- embeddings
- search
- faiss
- bm25
- nsfw-detection
- transcription
- captioning
pretty_name: LAION-Tunes
dataset_info:
  features:
  - name: row_id
    dtype: int64
  - name: audio_url
    dtype: string
  - name: filename
    dtype: string
  - name: tar_url
    dtype: string
  - name: subset
    dtype: string
  - name: title
    dtype: string
  - name: tags_text
    dtype: string
  - name: mood_text
    dtype: string
  - name: has_lyrics
    dtype: bool
  - name: genre_tags
    dtype: string
  - name: scene_tags
    dtype: string
  - name: emotion_tags
    dtype: string
  - name: score_coherence
    dtype: float64
  - name: score_musicality
    dtype: float64
  - name: score_memorability
    dtype: float64
  - name: score_clarity
    dtype: float64
  - name: score_naturalness
    dtype: float64
  - name: score_average
    dtype: float64
  - name: play_count
    dtype: int64
  - name: upvote_count
    dtype: int64
  - name: duration_seconds
    dtype: float64
  - name: music_whisper_caption
    dtype: string
  - name: parakeet_transcription
    dtype: string
  - name: has_caption
    dtype: bool
  - name: has_transcription
    dtype: bool
  - name: language
    dtype: string
  - name: is_instrumental
    dtype: bool
  - name: nsfw_gore_sim
    dtype: float64
  - name: nsfw_sexual_sim
    dtype: float64
  - name: nsfw_hate_sim
    dtype: float64
  - name: nsfw_gore_label
    dtype: string
  - name: nsfw_sexual_label
    dtype: string
  - name: nsfw_hate_label
    dtype: string
  - name: nsfw_overall_label
    dtype: string
  - name: predicted_play_count
    dtype: float64
  - name: predicted_upvote_count
    dtype: float64
  splits:
  - name: train
    num_examples: 2019683
---
# LAION-Tunes
2,019,683 AI-generated music tracks from 5 platforms, annotated with captions, transcriptions, embeddings, aesthetics scores, and NSFW safety labels. Includes a full-text and vector search engine with a web UI featuring both a beginner-friendly Simple mode and a power-user Advanced mode.
## Quick Stats
| Metric | Value |
|---|---|
| Total tracks | 2,019,683 |
| Subsets | Suno (1,419,048), Mureka (383,549), Udio (115,140), Riffusion (99,228), Sonauto (2,718) |
| Has caption (Music-Whisper) | 1,944,297 (96.3%) |
| Has transcription (Parakeet ASR) | 1,463,129 (72.4%) |
| Instrumental | 556,829 (27.6%) |
| NSFW flagged (very likely + likely) | 33,457 (1.66%) |
| FAISS vector indices | 6 (tag, whisper, caption, transcription, lyric, mood) |
| BM25 text indices | 3 (tags, caption, transcription) |
| Total index size | ~30 GB |
## Dataset Description
LAION-Tunes is a curated metadata and annotation dataset covering publicly available AI-generated music from Suno, Udio, Mureka, Riffusion, and Sonauto.
This dataset does NOT contain audio files. It contains metadata, annotations, embeddings, and search indices. Audio URLs pointing to the original hosting platforms are included for reference.
Note on Riffusion: Riffusion audio links went dead during the creation of this dataset. The metadata and annotations are preserved, but audio playback will not work for the 99,228 Riffusion tracks. The web UI defaults to "All except Riffusion" to avoid broken audio players.
### Source Data
The Mureka, Udio, Riffusion, and Sonauto tracks come from `ai-music/ai-music-deduplicated`. The Suno subset combines the original `ai-music-deduplicated` Suno split with an extended collection from `ai-music4you/ai-generated-songs`, `ai-music4you2/ai-generated-songs2`, and `ai-music4you3/ai-generated-songs3`. The extended Suno collection added ~1.11 M additional tracks, bringing Suno to 1,419,048.
| Platform | Tracks | Audio Format | Notable Fields | Audio Status |
|---|---|---|---|---|
| Suno | 1,419,048 | MP3 | tags, prompt, model_name, explicit flag | Active |
| Mureka | 383,549 | MP3 | genres, moods, model version | Active |
| Udio | 115,140 | MP3 | tags (array), lyrics, prompt, likes, plays | Active |
| Riffusion | 99,228 | M4A | sound (style description), lyrics_timestamped, conditions | Dead links |
| Sonauto | 2,718 | OGG | tags (array), description, keyword | Active |
### What's Included

For each track:

- **Metadata**: title, tags, genre, mood, duration, play count, upvote count
- **Music-Whisper Caption**: AI-generated music description using `laion/music-whisper` (fine-tuned OpenAI Whisper Small)
- **Parakeet ASR Transcription** (public-release DB): vocal text for ~1.46 M tracks. Raw transcription text is available in the SQLite `metadata.db`; per-parquet raw transcription text is kept private for extended Suno tracks (the transcription embedding is public)
- **Sentence Embeddings**: 768-dim embeddings via `google/embeddinggemma-300m` for tags, captions, transcriptions, lyrics, and moods
- **Whisper Audio Embeddings**: 768-dim mean-pooled encoder embeddings from Music-Whisper for audio similarity search
- **Aesthetics Scores**: coherence, musicality, memorability, clarity, naturalness (computed from the Whisper audio embeddings via a trained MLP)
- **Predicted Engagement** (extended Suno tracks): ML-predicted play count and upvote count (the original platform stats were not available for this subset)
- **NSFW Safety Labels**: three-tier classification (`very_likely_nsfw` / `likely_nsfw` / `likely_sfw`) across gore, sexual, and hate-speech dimensions
- **Language Detection**: detected language of vocal content via `langdetect` on the Parakeet transcription
- **Pre-built Search Indices**: FAISS vector indices and BM25 text indices ready to serve
## Annotation Pipeline

- **Music-Whisper** (`laion/music-whisper`): generates music captions describing instruments, genre, mood, tempo, etc.
- **Parakeet TDT 0.6B** (`nvidia/parakeet-tdt-0.6b-v3`): ASR transcription with word-level timestamps for vocal content
- **EmbeddingGemma 300M** (`google/embeddinggemma-300m`): computes 768-dim sentence embeddings for captions, transcriptions, tags, lyrics, and moods
- **Whisper Encoder Embeddings**: mean-pooled encoder hidden states from Music-Whisper for audio fingerprinting/similarity
- **Aesthetics MLP**: 5-head MLP trained on Whisper audio embeddings; predicts coherence / musicality / memorability / clarity / naturalness
- **NSFW Classification**: cosine similarity of transcription embeddings against reference prompts for gore/violence, sexual content, and hate speech
- **Language Detection**: `langdetect` applied to Parakeet ASR transcriptions (first 300 characters)
- **Engagement Prediction** (extended Suno only): gradient-boosted regressor over aesthetics + caption/transcription embeddings, trained on the original subset's play/upvote counts
## Web UI

The search engine includes a single-page dark-mode web interface (`index.html`) with two modes.
### Simple Mode (default)
A clean, beginner-friendly interface for quick searches:
- Large search bar with natural-language input
- Rank by: Relevance, Aesthetics Score, Most Liked, or Most Played
- Optional negative prompt (exclude certain terms/concepts)
- Language filter chips
- Sensible defaults: vector similarity on Whisper Caption, SFW only, all subsets except Riffusion, 60 s minimum duration
### Advanced Mode
Full control over all search parameters:
- Search modes: Vector Similarity, BM25 Text, Combined, Music Similarity (audio upload)
- Vector fields: Whisper Caption, Tags, Lyrics, Mood, Transcription
- BM25 fields: Caption, Tags, Transcription, Lyrics (hashed)
- Ranking: Relevance, Music Similarity, Aesthetics, Play Count, Like Count
- Filters: Vocals/Instrumental, Min Duration, Min Aesthetics, Subset, Safety (SFW/NSFW), Languages
- Instrumental Subset: ~11 K curated instrumental-only tracks with dedicated FAISS/BM25 indices for faster search; selecting it auto-enables the "Instrumental" vocal filter
- Two-stage search: Refine Stage 1 results with a second query using different search modes/fields
- Negative prompts: Exclude results matching a negative query with adjustable weight
- Audio upload: Find similar tracks by uploading an audio file
- Find Similar: Click on any result to find sonically similar tracks
- Help icons (?) next to every control with detailed explanations of what each option does
## Repository Structure

```
laion-tunes/
├── README.md                   # This file
├── server.py                   # FastAPI search server (main application)
├── index.html                  # Web UI (dark-mode, single-page app)
├── build_search_index.py       # Index builder script
├── update_indices.py           # Incremental index updater
├── migrate_add_language.py     # Language detection migration
├── nsfw_safety_report.html     # Interactive NSFW analysis report
├── nsfw_analysis_data.json     # Raw NSFW analysis data
│
├── public/                     # Annotated metadata parquets (~12 GB)
│   ├── mureka_000000.tar.parquet  # One parquet per source TAR file
│   ├── ...                        # 49 Mureka, 14 Riffusion, 1 Sonauto,
│   ├── ...                        # 291 Suno, 30 Udio
│   └── ...                        # 385 parquet files total
│
├── search_index/               # Pre-built search indices (~30 GB)
│   ├── metadata.db                # SQLite database (2,019,683 tracks, 6.5 GB)
│   ├── faiss_whisper.index        # FAISS IndexFlatIP - audio embeddings (6.2 GB)
│   ├── faiss_caption.index        # FAISS IndexFlatIP - caption embeddings (5.9 GB)
│   ├── faiss_transcription.index  # FAISS IndexFlatIP - transcription embeddings (4.5 GB)
│   ├── faiss_tag.index            # FAISS IndexFlatIP - tag embeddings (2.8 GB)
│   ├── faiss_lyric.index          # FAISS IndexFlatIP - lyric embeddings (1.5 GB)
│   ├── faiss_mood.index           # FAISS IndexFlatIP - mood embeddings (1.2 GB)
│   ├── idmap_*.npy                # Row ID mappings for each FAISS index
│   ├── bm25_caption.pkl           # BM25 text index for captions (1.5 GB)
│   ├── bm25_transcription.pkl     # BM25 text index for transcriptions (1.2 GB)
│   └── bm25_tags.pkl              # BM25 text index for tags (119 MB)
│
└── whisper_embeddings/         # Raw Whisper encoder embeddings (~5 GB)
    ├── mureka_000000.tar.npz      # One NPZ per source TAR file
    └── ...                        # 385 NPZ files total
```
## Data Format

### Parquet Files (`public/`)
Each parquet file corresponds to one TAR file from the source collection. The original-subset parquets (Mureka, Udio, Riffusion, Sonauto, original Suno) contain the full original schema; the extended Suno parquets (`suno_*_ai_generated_songs*.tar.parquet`) contain only the enriched annotation columns.
Common columns:
| Column | Type | Description |
|---|---|---|
| `filename` | str | Filename within the source TAR |
| `tar_file` | str | Source TAR filename |
| `audio_url` | str | Original audio URL (mp3/m4a/ogg) |
| `subset` | str | Source platform (suno/udio/mureka/riffusion/sonauto) |
| `title` | str | Track title |
| `tags_text` | str | Comma-separated genre/style tags |
| `mood_text` | str | Mood tags |
| `duration_seconds` | float | Track duration |
| `play_count` | int | Play count on source platform (original subset) |
| `upvote_count` | int | Like/upvote count (original subset) |
| `predicted_play_count` | float | ML-predicted play count (extended Suno) |
| `predicted_upvote_count` | float | ML-predicted upvote count (extended Suno) |
| `music_whisper_caption` | str | Music-Whisper generated caption |
| `parakeet_transcription` | str | Parakeet ASR transcription (plain text) |
| `parakeet_transcription_with_timestamps` | str | ASR with word-level timestamps (original subset) |
| `tag_embedding` | list[float] | 768-dim EmbeddingGemma embedding of tags (original subset) |
| `caption_embedding` | list[float] | 768-dim EmbeddingGemma embedding of caption |
| `transcription_embedding` | list[float] | 768-dim EmbeddingGemma embedding of transcription |
| `lyric_embedding` | list[float] | 768-dim EmbeddingGemma embedding of lyrics (original subset) |
| `mood_embedding` | list[float] | 768-dim EmbeddingGemma embedding of mood (Mureka only) |
| `score_coherence` / `score_musicality` / `score_memorability` / `score_clarity` / `score_naturalness` | float | Aesthetics sub-scores (1–5) |
| `score_average` | float | Mean of the five aesthetics sub-scores |
| `has_caption` / `has_transcription` / `is_instrumental` | bool | Annotation flags |
| `language` | str | Detected language of vocal content |
| `nsfw_gore_sim` / `nsfw_sexual_sim` / `nsfw_hate_sim` | float | NSFW cosine similarities |
| `nsfw_gore_label` / `nsfw_sexual_label` / `nsfw_hate_label` / `nsfw_overall_label` | str | NSFW labels |
### Whisper Embeddings (`whisper_embeddings/`)

NPZ files containing mean-pooled Whisper encoder hidden states:

- `embeddings`: float32 array of shape `(N, 768)`, L2-normalized
- `filenames`: string array of filenames matching the parquet entries
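Each shard can be read with plain NumPy. A minimal sketch (the helper names here are illustrative, not part of the repo):

```python
import numpy as np

def load_whisper_embeddings(path):
    """Load one whisper_embeddings/*.npz shard.

    Returns (embeddings, filenames): a float32 array of shape (N, 768)
    and the parallel array of source filenames.
    """
    with np.load(path, allow_pickle=False) as npz:
        embeddings = npz["embeddings"]   # (N, 768), L2-normalized float32
        filenames = npz["filenames"]     # (N,) strings matching parquet rows
    return embeddings, filenames

def lookup(embeddings, filenames, name):
    """Return the embedding row for a given filename, or None if absent."""
    idx = np.flatnonzero(filenames == name)
    return embeddings[idx[0]] if idx.size else None
```

Because the vectors are already L2-normalized, a dot product between two rows is directly their cosine similarity.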
### SQLite Database (`search_index/metadata.db`)

The `tracks` table contains all 2,019,683 tracks with 36 columns including metadata, aesthetics scores, predicted engagement (for extended Suno tracks), annotation flags, NSFW safety labels, language codes, and instrumental flags. The `row_id` column is the primary key used by all FAISS indices.
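The database can be queried with Python's stdlib `sqlite3`. The column names below are the ones documented on this card; the helper itself is an illustrative sketch:

```python
import sqlite3

def top_sfw_tracks(db_path, subset="suno", limit=10):
    """Fetch the highest-rated SFW tracks for one subset from metadata.db.

    Uses the documented columns: row_id, title, subset, score_average,
    nsfw_overall_label. Returns a list of dicts, best-rated first.
    """
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row
    try:
        rows = con.execute(
            """
            SELECT row_id, title, score_average
            FROM tracks
            WHERE subset = ? AND nsfw_overall_label = 'likely_sfw'
            ORDER BY score_average DESC
            LIMIT ?
            """,
            (subset, limit),
        ).fetchall()
        return [dict(r) for r in rows]
    finally:
        con.close()
```

For example, `top_sfw_tracks("search_index/metadata.db", subset="udio", limit=5)` would return the five best-scoring SFW Udio tracks.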
### FAISS Indices (`search_index/faiss_*.index`)

All indices are `IndexFlatIP` (inner product, i.e. cosine similarity for L2-normalized vectors) with 768 dimensions. Each index has a corresponding `idmap_*.npy` that maps FAISS internal indices to SQLite `row_id` values.

| Index | Vectors | Coverage | Description |
|---|---|---|---|
| `faiss_whisper` | 2,032,032 | all tracks (includes a small number of duplicate vectors) | Audio encoder embeddings (music similarity) |
| `faiss_caption` | 1,922,299 | all tracks with captions | Music-Whisper caption embeddings |
| `faiss_transcription` | 1,470,880 | all tracks with transcriptions | ASR transcription embeddings |
| `faiss_tag` | 908,241 | original subset only | Tag text embeddings |
| `faiss_lyric` | 479,313 | original subset only | Lyrics embeddings |
| `faiss_mood` | 383,616 | Mureka only | Mood text embeddings |

**Note:** `faiss_tag`, `faiss_lyric`, and `faiss_mood` were not extended to the new Suno tracks because the extended source did not provide comparable free-form tag / lyric / mood text fields. Queries on those fields therefore only hit the original subset; search on caption, transcription, and whisper covers the full dataset.
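Since `IndexFlatIP` over L2-normalized vectors is an exact cosine search, what a query does (including the `idmap` translation back to `row_id`) can be illustrated in plain NumPy. This is a sketch of the semantics, not the FAISS API:

```python
import numpy as np

def cosine_search(index_vecs, idmap, query_vec, k=5):
    """Equivalent of an IndexFlatIP query, written out in NumPy.

    index_vecs: (N, d) L2-normalized float32 vectors (the index contents)
    idmap:      (N,) array mapping internal position -> SQLite row_id,
                as loaded from the matching idmap_*.npy file
    Returns (row_ids, scores) for the top-k cosine matches, best first.
    """
    q = query_vec / np.linalg.norm(query_vec)   # normalize the query too
    scores = index_vecs @ q                     # inner product == cosine here
    top = np.argsort(-scores)[:k]               # indices of the k best scores
    return idmap[top], scores[top]
```

With FAISS itself, `index.search(q[None, :], k)` returns internal indices, which you then pass through the same `idmap` array to get `row_id` values for SQLite lookups.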
### BM25 Indices (`search_index/bm25_*.pkl`)

| Index | Documents | Coverage |
|---|---|---|
| `bm25_caption` | 1,956,676 | all tracks with captions (counts differ slightly from FAISS due to tokenization drops) |
| `bm25_transcription` | 1,473,647 | all tracks with transcriptions |
| `bm25_tags` | 908,241 | original subset only |
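The internal layout of the shipped `.pkl` files is not documented here, but the scoring they implement is standard Okapi BM25. A self-contained sketch, using the conventional `k1`/`b` defaults (which are an assumption, not necessarily the shipped parameters):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Score tokenized documents against a query with Okapi BM25.

    docs: list of token lists (one per document).
    Returns one relevance score per document. Illustrative only; the
    shipped bm25_*.pkl indices are pre-built and served by server.py.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N       # average document length
    df = Counter()                              # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                         # term frequency in this doc
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores
```

Documents containing more (and rarer) query terms score higher; documents containing none of the terms score zero.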
## NSFW Safety Labels
Each track has NSFW safety scores and labels across three dimensions. The classification is performed by computing cosine similarity between the track's transcription embedding and curated reference prompts for each NSFW category.
### Classification Method

- For each track with a transcription, the 768-dim EmbeddingGemma embedding is compared against reference prompt embeddings for three categories: gore/violence, sexual content, and hate speech
- Cosine similarity scores are computed for each category
- Two thresholds per category define the three-tier labeling:
  - `very_likely_nsfw`: cosine similarity above the strict threshold
  - `likely_nsfw`: cosine similarity between the strict and moderate thresholds
  - `likely_sfw`: cosine similarity below the moderate threshold
- The `nsfw_overall_label` is conservative: the worst (most NSFW) label across all three dimensions is used
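The labeling rule, with the dataset's published thresholds, can be written down directly. This is an illustrative re-implementation, not the pipeline's actual code:

```python
# Published (strict, moderate) thresholds per dimension, from the dataset card.
THRESHOLDS = {
    "gore":   (0.3779, 0.3540),
    "sexual": (0.3584, 0.3234),
    "hate":   (0.3633, 0.3382),
}
# Ordered worst -> best; used to pick the conservative overall label.
SEVERITY = ["very_likely_nsfw", "likely_nsfw", "likely_sfw"]

def tier_label(sim, strict, moderate):
    """Map one cosine similarity to the three-tier label."""
    if sim >= strict:
        return "very_likely_nsfw"
    if sim >= moderate:
        return "likely_nsfw"
    return "likely_sfw"

def nsfw_labels(sims):
    """sims: dict like {"gore": 0.31, "sexual": 0.36, "hate": 0.30}.

    Returns per-dimension labels plus the conservative overall label
    (the worst label across the three dimensions).
    """
    labels = {dim: tier_label(sims[dim], *THRESHOLDS[dim]) for dim in THRESHOLDS}
    labels["overall"] = min(labels.values(), key=SEVERITY.index)
    return labels
```

Tracks without a transcription never enter this function; they default to `likely_sfw` everywhere, as described below.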
### Thresholds and Distribution (over the full 2,019,683 tracks)
| Dimension | Strict Threshold | Moderate Threshold | Very Likely NSFW | Likely NSFW |
|---|---|---|---|---|
| Gore/Violence | >= 0.3779 | >= 0.3540 | 7,908 (0.39%) | 7,726 (0.38%) |
| Sexual Content | >= 0.3584 | >= 0.3234 | 5,528 (0.27%) | 6,182 (0.31%) |
| Hate Speech | >= 0.3633 | >= 0.3382 | 5,752 (0.28%) | 6,396 (0.32%) |
| Overall (conservative) | – | – | 17,053 (0.84%) | 16,404 (0.81%) |
Tracks without a transcription are labeled `likely_sfw` for all dimensions by default (there is no vocal content to flag).
### NSFW Fields in the Dataset

| Field | Type | Description |
|---|---|---|
| `nsfw_gore_sim` | float | Raw cosine similarity for gore/violence |
| `nsfw_sexual_sim` | float | Raw cosine similarity for sexual content |
| `nsfw_hate_sim` | float | Raw cosine similarity for hate speech |
| `nsfw_gore_label` / `nsfw_sexual_label` / `nsfw_hate_label` | str | `very_likely_nsfw` / `likely_nsfw` / `likely_sfw` |
| `nsfw_overall_label` | str | Conservative overall label (worst of the three) |
The raw cosine similarity scores are stored so you can apply your own thresholds. The `nsfw_safety_report.html` file in this repository provides an interactive visual analysis of the NSFW distribution.
### Filtering Behavior in the UI

- **SFW Only** (default in the UI): keeps only tracks with `nsfw_overall_label = likely_sfw` (excludes ~1.66% of tracks)
- **NSFW Only**: keeps only tracks with `nsfw_overall_label != likely_sfw`
- **All**: no filtering
## Running the Search Server

### Prerequisites

```bash
pip install fastapi uvicorn faiss-cpu numpy pandas sentence-transformers torch scipy tqdm python-multipart transformers
```
**Important: embedder compatibility.** The shipped FAISS caption / transcription / tag / lyric / mood indices were built with the PyTorch `google/embeddinggemma-300m` model via `sentence-transformers` (which applies the model's specific LastToken pooling and task prompts). You must query the indices with the same model in the same configuration; otherwise retrieval lands in a different embedding subspace and returns semantically random results. `google/embeddinggemma-300m` is a gated model, so accept the terms on its HF page and export an `HF_TOKEN` before running the server.
### Option 1 (default, recommended): PyTorch `google/embeddinggemma-300m`

```bash
# Requires HF_TOKEN for the gated google/embeddinggemma-300m model
HF_TOKEN=your_token python server.py --port 7860 --gpu 0
```

Loads `google/embeddinggemma-300m` via `sentence-transformers` in bfloat16 on the first available CUDA device (or on CPU if CUDA is absent). This is the configuration that matches the shipped FAISS indices. On GPU, embedding a query takes ~15 ms; on CPU, ~430 ms.
### Option 2 (same model, fastest on CPU): HF Text Embeddings Inference

TEI runs `google/embeddinggemma-300m` in an optimized server container and provides ~25 ms CPU embeddings. Same model, same pooling, so it is fully compatible with the shipped indices.

```bash
docker run -d --name tei-embeddings \
  -p 8090:80 \
  -e HF_TOKEN=your_token \
  ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id google/embeddinggemma-300m \
  --max-batch-requests 4

python server.py --port 7860 --gpu 0 --tei-url http://localhost:8090
```
### Option 3 (opt-in, NOT compatible with shipped indices): ONNX quantized

The non-gated `onnx-community/embeddinggemma-300m-ONNX` repo does not ship a sentence-transformers config, so `SentenceTransformer` falls back to naive mean pooling, yielding a different subspace than the one the FAISS indices were built in. Using ONNX against the shipped indices produces ~0.04 cosine on a query that should score ~0.9, and nonsense search results.

Only enable this path if you rebuild the FAISS indices locally with the same ONNX embedder + mean pooling:

```bash
LAIONTUNES_USE_ONNX_EMBEDDER=1 python server.py --port 7860 --no-whisper
# The server will print a WARNING at startup reminding you retrieval will be degraded on the shipped indices.
```
### Server Flags

| Flag | Default | Description |
|---|---|---|
| `--port` | 7860 | HTTP port |
| `--host` | 0.0.0.0 | Bind address |
| `--gpu` | 0 | GPU ID for the text embedder and the Whisper encoder |
| `--tei-url` | None | TEI server URL for text embeddings (skips loading the local embedder) |
| `--no-whisper` | False | Skip loading the Music-Whisper encoder (disables audio-upload similarity search) |
### Environment Variables

| Variable | Default | Description |
|---|---|---|
| `HF_TOKEN` | – | Required for `google/embeddinggemma-300m` (and for `laion/music-whisper` if audio-upload search is enabled) |
| `LAIONTUNES_USE_ONNX_EMBEDDER` | 0 | Set to 1 to opt into the ONNX embedder. Retrieval quality on the shipped FAISS indices will be degraded; only set this if you have rebuilt the indices with the same ONNX embedder |
| `LAIONTUNES_SKIP_FAISS` | – | Comma-separated list of FAISS fields not to load at startup (e.g. `tag,lyric,mood`). Useful for memory-constrained deployments. Skipped fields still accept query requests but return 404-style empty results |
| `LAIONTUNES_SKIP_BM25` | – | Comma-separated list of BM25 fields to skip (e.g. `tags,transcription`) |
### What Loads at Startup

- 6 FAISS indices (tag, whisper, caption, transcription, lyric, mood), unless listed in `LAIONTUNES_SKIP_FAISS`
- 3 BM25 indices (tags, caption, transcription), unless listed in `LAIONTUNES_SKIP_BM25`
- SQLite database (2,019,683 tracks)
- Text embedder, in order of preference: TEI if `--tei-url` is set; otherwise `google/embeddinggemma-300m` via SentenceTransformer (the default); otherwise `onnx-community/embeddinggemma-300m-ONNX`, only if `LAIONTUNES_USE_ONNX_EMBEDDER=1`
- Music-Whisper encoder (optional, on GPU): for audio-upload similarity search
- Instrumental subset indices (optional): separate FAISS/BM25 indices for the ~11 K curated instrumental subset
Total memory: ~35 GB RAM + ~1 GB GPU VRAM when Whisper + EmbeddingGemma are both loaded on GPU.
## Search API Reference

The FastAPI server exposes the following endpoints.

### GET /

Serves the HTML search frontend.

### GET /nsfw-report

Serves the interactive NSFW safety analysis report.

### POST /api/search

Main search endpoint supporting vector similarity, BM25 text search, and combined mode with optional two-stage refinement.
#### Request Body (JSON)

| Field | Type | Default | Description |
|---|---|---|---|
| `query` | str | required | Search query text |
| `negative_query` | str \| null | null | Negative prompt (subtracted from the query embedding in vector mode) |
| `search_type` | str | `"bm25"` | `"vector"` \| `"bm25"` \| `"combined"` |
| `vector_field` | str | `"caption"` | FAISS index: `"tag"` \| `"caption"` \| `"lyric"` \| `"mood"` \| `"transcription"` |
| `bm25_field` | str | `"caption"` | BM25 index: `"tags"` \| `"caption"` \| `"transcription"` \| `"lyrics_hashed"` |
| `rank_by` | str | `"similarity"` | `"similarity"` \| `"aesthetics"` \| `"plays"` \| `"likes"` |
| `min_aesthetics` | float \| null | null | Minimum aesthetics score (0–5 scale) |
| `min_similarity` | float \| null | null | Minimum cosine similarity score |
| `subset_filter` | str \| null | null | `"suno"` \| `"udio"` \| `"mureka"` \| `"riffusion"` \| `"sonauto"` \| `"no_riffusion"` \| `"instrumental_subset"` |
| `vocal_filter` | str \| null | null | `"instrumental"` \| `"vocals"` |
| `min_duration` | float \| null | 60.0 | Minimum duration in seconds |
| `languages` | list[str] \| null | null | Language codes to include (e.g. `["en", "es"]`); null = all |
| `negative_weight` | float | 0.7 | Weight for negative-query subtraction (0.0–1.0) |
| `nsfw_filter` | str \| null | null | `"sfw_only"` \| `"nsfw_only"` \| null (all) |
| `top_k` | int | 50 | Number of results to return |
| `stage2_enabled` | bool | false | Enable two-stage refinement |
| `stage2_query` | str \| null | null | Query text for Stage 2 |
| `stage2_search_type` | str | `"vector"` | `"vector"` \| `"bm25"` |
| `stage2_vector_field` | str | `"caption"` | Vector field for Stage 2 |
| `stage2_bm25_field` | str | `"caption"` | BM25 field for Stage 2 |
| `stage2_top_k` | int | 50 | Number of results after Stage 2 re-ranking |
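One plausible way the `negative_query` / `negative_weight` pair is applied on the embedding side: subtract the weighted negative embedding and re-normalize so the result is still a valid query for the cosine (`IndexFlatIP`) indices. The exact formula lives in `server.py`; treat this as a sketch of the idea, not its implementation:

```python
import numpy as np

def apply_negative_query(q, q_neg, weight=0.7):
    """Steer a unit query embedding away from a negative concept.

    q, q_neg: L2-normalized embedding vectors of the positive and
    negative prompts. Subtracting weight * q_neg reduces the result's
    similarity to the negative concept; re-normalizing keeps the vector
    usable for inner-product (cosine) search.
    """
    v = q - weight * q_neg
    return v / np.linalg.norm(v)
```

With `weight=0.0` the query is unchanged; larger weights push results further from the negative prompt, matching the UI's adjustable negative-weight slider.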
#### Example Request

```bash
curl -X POST http://localhost:7860/api/search \
  -H "Content-Type: application/json" \
  -d '{
    "query": "dreamy ambient synth pad",
    "search_type": "vector",
    "vector_field": "caption",
    "rank_by": "similarity",
    "subset_filter": "no_riffusion",
    "nsfw_filter": "sfw_only",
    "top_k": 20
  }'
```
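The same request from Python using only the standard library. `build_payload` and `search` are our own helper names; the defaults mirror the curl example above:

```python
import json
import urllib.request

def build_payload(query, **overrides):
    """Build a /api/search request body with the curl example's defaults.

    Any documented field (top_k, search_type, languages, ...) can be
    overridden via keyword arguments.
    """
    payload = {
        "query": query,
        "search_type": "vector",
        "vector_field": "caption",
        "rank_by": "similarity",
        "subset_filter": "no_riffusion",
        "nsfw_filter": "sfw_only",
        "top_k": 20,
    }
    payload.update(overrides)
    return payload

def search(base_url, query, **overrides):
    """POST the payload to /api/search and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/search",
        data=json.dumps(build_payload(query, **overrides)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# results = search("http://localhost:7860", "dreamy ambient synth pad", top_k=5)
```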
### POST /api/search_similar

Find tracks similar to an existing track by `row_id`, using the Whisper audio embeddings.

### POST /api/search_by_audio

Upload an audio file (multipart/form-data; the first 30 s are used) to find similar tracks by audio fingerprint using the Music-Whisper encoder.

### GET /api/stats

Returns dataset statistics and index information.

#### Response Body (example)
```json
{
  "total_tracks": 2019683,
  "subsets": {
    "suno": 1419048,
    "mureka": 383549,
    "udio": 115140,
    "riffusion": 99228,
    "sonauto": 2718
  },
  "score_average": { "mean": 3.27, "min": 1.4, "max": 4.77 },
  "with_caption": 1944297,
  "with_transcription": 1463129,
  "faiss_indices": {
    "whisper": 2032032,
    "caption": 1922299,
    "transcription": 1470880,
    "tag": 908241,
    "lyric": 479313,
    "mood": 383616
  },
  "bm25_indices": {
    "caption": 1956676,
    "transcription": 1473647,
    "tags": 908241,
    "lyrics_hashed": 0
  },
  "instrumental_count": 556829,
  "whisper_embeddings": 2032032,
  "instrumental_subset_tracks": 11391
}
```
### Result Track Object

Every search endpoint returns results as a list of track objects with these fields:
| Field | Type | Description |
|---|---|---|
| `row_id` | int | Unique track identifier (primary key in SQLite) |
| `title` | str | Track title |
| `audio_url` | str | URL to the audio file on the source platform |
| `subset` | str | Source platform: suno, udio, mureka, riffusion, sonauto |
| `tags_text` | str | Comma-separated genre/style tags |
| `mood_text` | str | Mood descriptors |
| `genre_tags` / `scene_tags` / `emotion_tags` | list[str] | Parsed taxonomy tags |
| `score_average`, `score_coherence`, `score_musicality`, `score_memorability`, `score_clarity`, `score_naturalness` | float \| null | Aesthetics scores (1–5) |
| `play_count` / `upvote_count` | int | Source-platform stats (0 for extended Suno tracks; use `predicted_*` instead) |
| `duration_seconds` | float \| null | Track duration in seconds |
| `music_whisper_caption` | str | AI-generated music description |
| `has_caption` / `has_transcription` / `is_instrumental` | bool | Annotation flags |
| `language` | str | Detected language code (e.g. "en") or "unknown" |
| `score` | float \| null | Search relevance score |
| `score_type` | str | `cosine_similarity`, `bm25`, `aesthetics`, `play_count`, `upvote_count` |
| `has_whisper_emb` | bool | Whether the track has a Whisper audio embedding |
| `nsfw_overall_label`, `nsfw_gore_label`, `nsfw_sexual_label`, `nsfw_hate_label` | str | NSFW labels |
| `nsfw_gore_sim`, `nsfw_sexual_sim`, `nsfw_hate_sim` | float \| null | NSFW cosine similarity scores |
| `stage1_score`, `stage2_score` | float | Only present when Stage 2 was enabled |
## Building the Index from Scratch

```bash
python build_search_index.py --force
```

This reads all parquets from `public/`, then builds the SQLite database, the FAISS indices, and the BM25 indices. Expect ~1–2 h on the full 2 M-track dataset.
## Related Datasets

- **laion/laion-tunes-rpg-music**: 2,580 instrumental tracks from LAION-Tunes (Suno + Udio) annotated with Gemini 3 Flash Preview across 18 RPG genres (high fantasy, cosmic horror, cyberpunk, …) and evoked-emotion tags. Each genre has its own FAISS index over per-track "situation" lists, so you can search by natural-language scenario (e.g. "sneaking through a dark dungeon"). Includes a dedicated FastAPI server (`rpg_server.py`) and a purple-themed web UI (`rpg_index.html`).
## Models Used
| Model | Purpose | Output |
|---|---|---|
| laion/music-whisper | Music captioning + audio embeddings | Text caption + 768-dim encoder embedding |
| nvidia/parakeet-tdt-0.6b-v3 | ASR transcription | Text + word-level timestamps |
| google/embeddinggemma-300m | Text sentence embeddings | 768-dim L2-normalized vectors |
| onnx-community/embeddinggemma-300m-ONNX | Text embeddings (ONNX, non-gated) | 768-dim L2-normalized vectors (int8 quantized) |
## License
Apache 2.0
## Citation

```bibtex
@misc{laion-tunes-2026,
  title={LAION-Tunes: Annotated AI Music Search Dataset},
  author={LAION},
  year={2026},
  url={https://huggingface.co/datasets/laion/laion-tunes}
}
```