
NeuralAcid: Chord-Conditioned Bassline Dataset

Overview

NeuralAcid is a dataset of 10,000 monophonic bassline sequences with per-bar chord annotations and 19.2 hours of rendered audio. Every sample is available in three aligned representations:

  1. Token sequence -- symbolic note events with chord labels
  2. Piano roll -- (60, 64) velocity matrix
  3. Audio -- 44.1 kHz stereo WAV

No existing public dataset provides monophonic bass MIDI paired with chord annotations at this scale. NeuralAcid fills that gap.

Why This Dataset Exists

Training a chord-conditioned bassline generator requires paired (chord progression, bassline) data. Existing symbolic music datasets either lack isolated bass tracks (MAESTRO, Groove MIDI), require manual extraction and chord labeling (Lakh, Slakh2100), or are too small (FiloBass: 48 jazz songs). Acid/electronic bass datasets simply don't exist in academic form.

NeuralAcid takes a different approach: a rule-based generative sequencer produces the data, with musical constraints baked in so the model only sees valid basslines from day one.

Dataset at a Glance

|                           | Train    | Validation |
|---------------------------|----------|------------|
| Samples                   | 9,000    | 1,000      |
| Token sequences           | 16.1 MB  | 1.8 MB     |
| Piano rolls               | 131.8 MB | 14.6 MB    |
| Rendered audio            | 20.3 GB  | 2.4 GB     |
| Total audio duration      | ~17.3 h  | ~1.9 h     |
| Unique chord progressions | 636      |            |

Musical Properties

Every sequence is 4 bars at 120 BPM, strictly monophonic, and scale-quantized.

Built-in Musical Constraints

  • Scale-chord consistency -- Major chords only produce notes from major-family scales (ionian, lydian, major pentatonic). Minor chords only use minor-family scales (aeolian, dorian, phrygian, minor pentatonic). Dominant 7th chords draw from mixolydian, phrygian, minor pentatonic, or octatonic. No cross-contamination.
  • Downbeat rule -- Every bar begins with a note. 70% of downbeats land on the chord root.
  • Monophonic guarantee -- Post-generation clamp ensures no note overlaps the next.
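The monophonic clamp can be sketched in a few lines. This is an illustrative reimplementation, not the generator's actual code; representing notes as (step, pitch, duration) tuples on a 16th-note grid is an assumption.

```python
# Illustrative reimplementation of the monophonic clamp; the
# (step, pitch, duration) tuple layout is an assumption.
def clamp_monophonic(notes):
    """Shorten each note so it never overlaps the next onset."""
    clamped = []
    for i, (step, pitch, dur) in enumerate(notes):
        if i + 1 < len(notes):
            dur = min(dur, notes[i + 1][0] - step)  # cut at next onset
        clamped.append((step, pitch, max(1, dur)))  # keep at least 1 step
    return clamped

notes = [(0, 36, 8), (4, 43, 2), (6, 36, 4)]
print(clamp_monophonic(notes))
# [(0, 36, 4), (4, 43, 2), (6, 36, 4)] -- first note cut at step 4
```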

Chord Progressions

53 templates transposed across all 12 keys, covering:

| Style          | Examples                                |
|----------------|-----------------------------------------|
| Minor          | i - iv7 - bVII - V7, i - bIII - iv - V7 |
| Major          | I - IV - V7 - I, I - vi - IV - V7       |
| Blues          | I7 - I7 - IV7 - I7, V7 - IV7 - I7 - V7  |
| House / Techno | one-chord vamps, i - bIII oscillations  |
| Acid           | phrygian bII, tritone subs, Neapolitan  |
| Jazz           | ii - V7 - I, minor ii - V - i           |
| Passing        | diminished, augmented, suspended        |

11 chord types: maj, maj7, min, min7, dom7 (written as "7" in progression strings), dim, dim7, aug, sus4, 7sus4, min7b5
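Since 53 × 12 = 636, the unique-progression count above suggests every template transposes cleanly into all 12 keys. A minimal sketch of that transposition step; the template shown is illustrative (not one of the actual 53), and the flat-based root spelling is an assumption.

```python
# Sketch of template transposition: one template of (semitones_above_tonic,
# chord_type) pairs becomes 12 progressions, one per key.
ROOTS = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(template, key):
    """template: (semitones_above_tonic, chord_type) pairs; key: 0-11."""
    return [(ROOTS[(offset + key) % 12], ctype) for offset, ctype in template]

template = [(0, "min"), (10, "maj"), (8, "maj"), (7, "dom7")]  # i-bVII-bVI-V7
progressions = [transpose(template, k) for k in range(12)]
print(progressions[0])
# [('C', 'min'), ('Bb', 'maj'), ('Ab', 'maj'), ('G', 'dom7')]
```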

Scale Palette

13 scales selected per-bar based on chord type:

| Chord family | Compatible scales                                 |
|--------------|---------------------------------------------------|
| Major        | ionian, lydian, major pentatonic                  |
| Minor        | aeolian, dorian, phrygian, minor pentatonic       |
| Dominant     | mixolydian, phrygian, minor pentatonic, octatonic |
| Diminished   | locrian, octatonic, diminished 7th                |
| Augmented    | whole tone, lydian                                |
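The table maps directly to a lookup plus a random choice per bar. A sketch: the scale lists are copied from the table, but the chord-type-to-family assignments (sus4 under major, 7sus4 under dominant, min7b5 under diminished) are guesses rather than documented behavior.

```python
import random

# Per-bar scale selection as lookup + random choice; family assignments
# for sus4/7sus4/min7b5 are assumptions.
SCALES_BY_FAMILY = {
    "major": ["ionian", "lydian", "major pentatonic"],
    "minor": ["aeolian", "dorian", "phrygian", "minor pentatonic"],
    "dominant": ["mixolydian", "phrygian", "minor pentatonic", "octatonic"],
    "diminished": ["locrian", "octatonic", "diminished 7th"],
    "augmented": ["whole tone", "lydian"],
}
CHORD_FAMILY = {
    "maj": "major", "maj7": "major", "sus4": "major",
    "min": "minor", "min7": "minor",
    "dom7": "dominant", "7sus4": "dominant",
    "dim": "diminished", "dim7": "diminished", "min7b5": "diminished",
    "aug": "augmented",
}

def pick_scale(chord_type, rng=random):
    """Pick one compatible scale for a bar, given its chord type."""
    return rng.choice(SCALES_BY_FAMILY[CHORD_FAMILY[chord_type]])

print(pick_scale("dom7"))  # e.g. 'mixolydian'
```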

Parameter Space

Each sample is generated with independently randomized parameters. The full variation space:

| Parameter        | Values              | What it controls                                 |
|------------------|---------------------|--------------------------------------------------|
| Mode             | random, drunk       | Pure random vs. random walk                      |
| Chance           | 0.45 -- 0.98        | Note density                                     |
| Deviation        | 1 -- 8              | Pitch range, octave jumps, note length variation |
| Step division    | 1/16, 1/8           | Rhythmic grid                                    |
| Note length      | 1/32 -- 1/4 beat    | Gate time                                        |
| Octave range     | 2, 3, 4             | Pitch register span                              |
| Velocity         | 10 dynamic presets  | Soft to loud, narrow to wide                     |
| Tie chance       | 0 -- 0.3            | Legato                                           |
| Pitch drift      | -0.08 -- 0.08       | Gradual pitch wander                             |
| Pitch weights    | 27 presets + random | Per-degree note probability                      |
| Step probability | 27 presets + random | Rhythmic accent patterns                         |
| Clock pattern    | 18 presets          | Swing, shuffle, polyrhythm                       |
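An illustrative sampler over this parameter space. The ranges are copied from the table, but the preset tables themselves are not published, so bare indices stand in for the velocity, pitch-weight, step, and clock presets.

```python
import random

# Illustrative per-sample parameter draw; ranges follow the table above,
# preset indices are placeholders for unpublished preset contents.
def sample_params(rng=random):
    return {
        "mode": rng.choice(["random", "drunk"]),
        "chance": rng.uniform(0.45, 0.98),
        "deviation": rng.randint(1, 8),
        "step_division": rng.choice(["1/16", "1/8"]),
        "octave_range": rng.choice([2, 3, 4]),
        "tie_chance": rng.uniform(0.0, 0.3),
        "pitch_drift": rng.uniform(-0.08, 0.08),
        "velocity_preset": rng.randrange(10),
        "clock_pattern": rng.randrange(18),
    }

print(sample_params()["mode"])
```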

Data Formats

Token Sequence (train.jsonl, val.jsonl)

JSONL, one sample per line:

```json
{
  "id": 42,
  "roll_idx": 42,
  "chord_progression": "Cm7 | F7 | Bbmaj7 | G7",
  "chords": [
    {"bar": 0, "root": "C", "type": "min7", "scale_used": "dorian"},
    {"bar": 1, "root": "F", "type": "dom7", "scale_used": "mixolydian"}
  ],
  "params": {"mode": "random", "chance": 0.82, "deviation": 4, ...},
  "tokens": ["<BOS>", "<BAR>", "CHORD_Cm7", "NOTE_36", "DUR_4", "VEL_f", ...],
  "token_ids": [1, 3, 8, 149, 201, 218, ...],
  "num_notes": 25,
  "bars": 4
}
```
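Reading the files is plain JSONL parsing. A minimal reader; the per-bar split on " | " follows the chord_progression format shown above.

```python
import json

# Minimal reader for train.jsonl / val.jsonl; each line is one sample
# shaped like the example above.
def iter_samples(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# A single (abbreviated) line parses the same way:
line = '{"id": 42, "chord_progression": "Cm7 | F7 | Bbmaj7 | G7", "bars": 4}'
sample = json.loads(line)
print(sample["chord_progression"].split(" | "))
# ['Cm7', 'F7', 'Bbmaj7', 'G7']
```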

Vocabulary (220 tokens):

| Type     | Count | Format                                       |
|----------|-------|----------------------------------------------|
| Special  | 5     | `<PAD>`, `<BOS>`, `<EOS>`, `<BAR>`, `<REST>` |
| Chord    | 132   | `CHORD_{root}{type}` (12 roots x 11 types)   |
| Note     | 61    | `NOTE_{midi}` (MIDI 24-84, C1-C6)            |
| Duration | 16    | `DUR_{n}` (1-16, in 16th notes)              |
| Velocity | 6     | `VEL_{pp,p,mp,mf,f,ff}`                      |

Average sequence length: 78 tokens.

Piano Roll (train_pianorolls.npy, val_pianorolls.npy)

NumPy float32 arrays:

  • Shape: (N, 60, 64) -- N samples, 60 pitches, 64 time steps
  • Pitch axis: MIDI 24 (C1) to 83 (B5)
  • Time axis: 16th-note grid, 4 bars = 64 steps
  • Values: velocity normalized to [0, 1]. Zero = silence.
  • Sparsity: ~99% zeros (monophonic bass)

Indexed by roll_idx in the JSONL.
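A sketch of loading and decoding the rolls; a fabricated zero array stands in for the real .npy file so the example is self-contained.

```python
import numpy as np

# Stand-in for rolls = np.load("train_pianorolls.npy"); shape and value
# conventions follow the card.
rolls = np.zeros((10, 60, 64), dtype=np.float32)
rolls[0, 36 - 24, 0] = 0.8           # pretend C2 (MIDI 36) hits step 0

roll = rolls[0]                      # select by the sample's roll_idx
pitches, steps = np.nonzero(roll)    # onsets as (pitch row, time step)
midi_notes = pitches + 24            # pitch axis starts at MIDI 24 (C1)
velocities = roll[pitches, steps]    # values are normalized to [0, 1]
print(list(midi_notes), list(steps))
```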

Audio (train_audio/, val_audio/)

  • Format: WAV, 44.1 kHz, stereo, 32-bit float
  • Duration: ~7 seconds per clip (4 bars at 120 BPM + release tail)
  • Rendered with Vita (Vital synthesizer Python bindings), default init preset
  • Filename: {sample_id:06d}.wav

Intended Uses

Chord-conditioned generation -- Train a model that takes a chord progression as input and outputs a bassline. The token format supports autoregressive decoding with chord context tokens.
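One way such conditioning could look, sketched with this dataset's token names. How later bars' `<BAR>`/`CHORD_*` tokens reach the model (force-fed vs. generated) is a training-scheme choice, not something this card specifies.

```python
# Hypothetical prompt construction for chord-conditioned decoding, using
# the token vocabulary listed above.
def make_prompt(chords):
    """Seed the decoder with the first bar's chord context.

    chords: per-bar chord symbols, e.g. ["Cm7", "F7", "Bbmaj7", "G7"].
    Later bars' CHORD_* tokens can be force-fed whenever the model
    emits a <BAR> token during decoding.
    """
    return ["<BOS>", "<BAR>", f"CHORD_{chords[0]}"]

print(make_prompt(["Cm7", "F7", "Bbmaj7", "G7"]))
# ['<BOS>', '<BAR>', 'CHORD_Cm7']
```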

CNN + Sequence hybrid -- Use piano rolls as CNN encoder input to extract rhythmic/melodic features, combined with a sequence decoder for token generation.

Audio-to-MIDI benchmarking -- 10,000 paired (audio, MIDI ground truth) samples for evaluating monophonic transcription systems on synthesized bass.

Bassline pattern analysis -- Study the relationship between chord types and bass note choices across scales and styles.

Limitations

  • All sequences are synthetically generated, not human-performed, so they lack the expressive timing and dynamics of real performances.
  • Fixed tempo (120 BPM) and time signature (4/4).
  • Single synth timbre (Vital init preset). Real-world bass timbres vary widely.
  • 4-bar fixed length. Real basslines have longer-form structure.
  • The rule-based generator, while musically constrained, does not capture idiomatic patterns of specific genres (e.g., Motown walking bass, reggae one-drop) beyond what emerges from parameter randomization.

Generation Source

All data is produced by Electr-o-matic, a generative sequencer ported from a Max for Live device to Python. The original device was built for real-time generative performance in Ableton Live.

License

CC-BY-4.0
