---
license: cc-by-nc-4.0
task_categories:
- audio-classification
tags:
- ai-music-detection
- benchmark
- forensic
- audio
language:
- en
size_categories:
- 1K<n<10K
---
# ArtifactBench v1 — AI-Generated Music Detection Benchmark
A multi-generator evaluation benchmark for AI-generated music forensic detection, covering 22 AI generators and 6 real music sources.
## Dataset Description
- **Total tracks**: 8,766 (4,383 AI + 4,383 Real, 1:1 balanced)
- **AI generators**: 22 (MusicGen, Stable Audio, Suno v3/v3.5/v4, Udio, Riffusion, DiffRhythm, Yue, Chirp v2/v3/v3.5, etc.)
- **Real sources**: 6 partitions (SONICS, MoM ×3, FMA, YouTube)
- **Format**: AI tracks as Parquet (audio bytes embedded), Real tracks as CSV (YouTube IDs for user download)
## Motivation
Existing benchmarks (SONICS: 5 generators, MoM: 6 generators) only measure in-distribution performance. Models reporting high F1 on these benchmarks fail catastrophically on out-of-distribution generators:
- CLAM (194M params, F1=0.925 on MoM) → F1=0.824 on ArtifactBench
- SpecTTTra (19M params, F1=0.97 on SONICS) → F1=0.766 on ArtifactBench
ArtifactBench evaluates what matters for deployment: generalization across diverse generators.
## Sanity Check Protocol
Per-source pass/fail thresholds:
- Real source FPR ≤ 5%
- AI source TPR ≥ 90% (Stable Audio: ≥ 60%)
- Codec invariance: mean Δ ≤ 0.15, max Δ ≤ 0.35
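The per-source thresholds above can be expressed as a single pass/fail predicate. This is an illustrative sketch of the protocol's logic, not the benchmark's actual implementation; the function name and signature are assumptions.

```python
def source_passes(kind, rate, codec_deltas=(), *, stable_audio=False):
    """Sanity-check a single source against the protocol thresholds.

    kind: "real" (rate is the source's FPR) or "ai" (rate is its TPR).
    codec_deltas: optional per-codec score deltas for the invariance check.
    """
    if kind == "real":
        ok = rate <= 0.05  # Real source FPR must be at most 5%
    else:
        # AI source TPR must reach 90%, relaxed to 60% for Stable Audio
        ok = rate >= (0.60 if stable_audio else 0.90)
    if codec_deltas:
        # Codec invariance: mean delta <= 0.15 and max delta <= 0.35
        mean_d = sum(codec_deltas) / len(codec_deltas)
        ok = ok and mean_d <= 0.15 and max(codec_deltas) <= 0.35
    return ok

print(source_passes("ai", 0.98))                     # passes: TPR above 90%
print(source_passes("ai", 0.65, stable_audio=True))  # passes: relaxed 60% bar
print(source_passes("real", 0.071))                  # fails: FPR above 5%
```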
## Baseline Results
| Model | Params | F1 | Sanity checks failed (of 28) | Suno v4 TPR | Real FPR |
|---|---|---|---|---|---|
| **ArtifactNet v9.4** | **4.2M** | **0.983** | **4/28** | **98%** | **1.5%** |
| CLAM (MoM) | 194M | 0.824 | 16/28 | 78% | 70.5% |
| SpecTTTra | 19M | 0.766 | 23/28 | 55% | 21.4% |
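For readers reproducing these columns, the headline numbers reduce to standard confusion-matrix arithmetic with AI as the positive class. The sketch below is pure arithmetic, not the benchmark's scoring code, and the example counts are invented for illustration.

```python
def detection_metrics(tp, fp, fn, tn):
    """Compute F1, TPR, and FPR from raw confusion counts (AI = positive)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)   # per-source TPR is this recall on AI tracks
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)      # false-positive rate on real tracks
    return {"f1": f1, "tpr": recall, "fpr": fpr}

# Illustrative counts only: 27 false positives out of 1,800 real tracks
# gives FPR = 1.5%.
m = detection_metrics(tp=4310, fp=27, fn=90, tn=1773)
```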
## Usage
```python
# Programmatic entry point:
from artifactbench.bench import main

# Equivalent CLI invocation:
#   python -m artifactbench.bench --model artifactnet --manifest artifactbench_v1_manifest.json
```
## Per-Source Breakdown (v1.0.1)
| Source | Class | Tracks | Test split (`bench_origin: test`) | Generator |
|---|---|---|---|---|
| aime_musicgen_large | AI | 200 | 30 | MusicGen Large |
| aime_musicgen_medium | AI | 200 | 30 | MusicGen Medium |
| aime_musicgen_small | AI | 200 | 30 | MusicGen Small |
| aime_riffusion | AI | 200 | 30 | Riffusion |
| aime_stable_audio_v1 | AI | 200 | 50 | Stable Audio v1 |
| aime_stable_audio_v2 | AI | 200 | 50 | Stable Audio v2 |
| aime_suno_v3 | AI | 200 | 30 | Suno v3 |
| aime_suno_v35 | AI | 200 | 30 | Suno v3.5 |
| aime_udio | AI | 200 | 30 | Udio (AIME) |
| mom_diffrythm | AI | 200 | 100 | DiffRhythm |
| mom_riffusion | AI | 200 | 100 | Riffusion (MoM) |
| mom_udio | AI | 200 | 100 | Udio (MoM) |
| mom_yue | AI | 200 | 100 | Yue |
| sonics_chirp-v2-xxl-alpha | AI | 200 | 80 | Chirp v2 |
| sonics_chirp-v3 | AI | 200 | 80 | Chirp v3 |
| sonics_chirp-v3.5 | AI | 200 | 80 | Chirp v3.5 |
| sonics_udio-120s | AI | 200 | 80 | Udio 120s |
| sonics_udio-30s | AI | 200 | 80 | Udio 30s |
| suno_cdn_latest | AI | 200 | 100 | Suno CDN (post-freeze) |
| suno_extra | AI | 200 | 80 | Suno extras |
| **udio_cdn_latest** | AI | **200** | 35 | Udio CDN (post-freeze) — v1.0.1 balanced |
| udio_extra | AI | 200 | 80 | Udio extras |
| sonics_real | Real | 500 | 300 | SONICS real partition |
| mom_real | Real | 400 | 200 | MoM real (mp3 + wav) |
| fma_hardneg | Real | 300 | 150 | FMA mp3 hard-negatives |
| mom_extra_real | Real | 200 | 110 | MoM extra real |
| mom_real_wav | Real | 200 | 42 | MoM real WAV variants |
| youtube_hardneg | Real | 200 | 73 | YouTube curated hard-negatives |
| **TOTAL** | — | **6,200** | **2,280** | 28 sources, 22 AI generators |
Real tracks are drawn from six separate partitions (1,800 total) so that FPR can be estimated rigorously across diverse codec and production conditions.
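To see what 1,800 real tracks buy in FPR precision, one can attach a binomial confidence interval to a measured rate. The Wilson score interval below is a standard construction, offered as an illustrative sketch; the error count is hypothetical.

```python
import math

def wilson_interval(errors, n, z=1.96):
    """95% Wilson score interval for a binomial rate (e.g. an FPR)."""
    p = errors / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical example: 27 false positives on 1,800 real tracks (1.5% FPR)
# yields an interval of roughly 1.0%-2.2%.
lo, hi = wilson_interval(27, 1800)
```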
## Files
- `artifactbench_v1_manifest.json` — Track manifest with bench_origin tags
- `metadata.json` — Dataset statistics and generator list
## Citation
```bibtex
@article{oh2026artifactnet,
title = {ArtifactNet: Detecting AI-Generated Music via Forensic Residual Physics},
author = {Oh, Heewon},
journal = {arXiv preprint arXiv:2604.16254},
year = {2026},
eprint = {2604.16254},
archivePrefix = {arXiv},
primaryClass = {cs.SD},
doi = {10.48550/arXiv.2604.16254},
url = {https://arxiv.org/abs/2604.16254}
}
```
**arXiv**: [2604.16254](https://arxiv.org/abs/2604.16254) · **DOI**: [10.48550/arXiv.2604.16254](https://doi.org/10.48550/arXiv.2604.16254)
## License
CC BY-NC 4.0