---
license: mit
task_categories:
  - image-to-text
  - text-to-image
  - audio-classification
  - image-classification
  - tabular-classification
tags:
  - audio
  - image
  - multimodal
  - visualization
  - audio-visualization
  - 3d-visualization
  - synthetic
  - proof-of-concept
  - frequency-estimation
  - generative-audio
  - music-visualization
---

[webXOS](https://webxos.netlify.app) · [GitHub](https://github.com/webxos/webxos) · [Hugging Face](https://huggingface.co/webxos) · [X](https://x.com/webxos)
## Audioform_Dataset_v1

This dataset is the very first output from **AUDIOFORM**, a Three.js-powered 3D audio visualization tool that turns audio files into timestamped visual frames with rich metadata. **AUDIOFORM** by webXOS is available for download in the `/audioform/` folder of this repo so developers can create their own similar datasets. AUDIOFORM is a synthetic harmonic oscillator that runs in HTML; think of it as the "Hello World" / MNIST-style dataset application for audio-to-visual multimodal machine learning.

This dataset contains **10 captured frames** from a short uploaded WAV file (played at 1× speed), together with per-frame metadata including dominant frequency, timestamp, and capture info.

## Dataset Description

This dataset was generated using AUDIOFORM, a 3D audio visualization system.

- **Total Frames**: 10
- **Generation Date**: 2026-01-13
- **Audio Type**: uploaded WAV file
- **Time Scaling**: 1×

## Dataset Structure

```
audioform_dataset/
├── images/
│   ├── frame_0001.png
│   ├── frame_0002.png
│   └── ... (10 PNG frames total)
├── metadata.csv   # Main metadata file (the Hugging Face viewer uses this)
└── README.md
```

## Metadata Columns

| Column         | Type   | Description                                                        | Example Value              |
|----------------|--------|--------------------------------------------------------------------|----------------------------|
| `file_name`    | string | Relative path to the visualization PNG (required by Hugging Face)  | `images/frame_0001.png`    |
| `frame_id`     | int    | Sequential frame number (0-based)                                  | 0, 1, 2, …, 9              |
| `timestamp`    | float  | Time in seconds when the frame was captured from the audio         | 5.365, 6.219, 9.504        |
| `frequency`    | int    | Dominant detected audio frequency at capture time (Hz)             | 0 (in this tiny sample)    |
| `time_scale`   | int    | Playback speed multiplier used during visualization                | 1                          |
| `capture_date` | string | UTC ISO timestamp when the frame was rendered                      | 2026-01-13T19:57:36.427Z   |

A loading sketch appears at the end of this card.

## Example Uses

- See how fast a tiny diffusion model, GAN, or LoRA can memorize and regenerate these exact 10 styles.
- Use the frames as style references for ControlNet, IP-Adapter, or fine-tuning Stable Diffusion to adopt this neon 3D audio-viz aesthetic.

## Scaling Up

This dataset shows the **format** AUDIOFORM produces. Feed it real music, voices, field recordings, or synths; generate 1k–100k+ frames; and add labels (genre, instrument, mood, multiple frequency peaks, …) to unlock serious applications:

- Music video auto-generation
- Visual audio classifiers
- Audio-conditioned image/video generation
- Interactive music → 3D art installations
- Novel multimodal music understanding models

## Intended Use

This dataset is intended for training machine learning models on audio visualization patterns, waveform classification, or generative AI tasks.
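## Loading the Dataset

A minimal loading sketch, assuming a local copy of the `audioform_dataset/` folder laid out as above. The Hugging Face `datasets` library's `imagefolder` builder pairs each PNG with its row in `metadata.csv` via the required `file_name` column.

```python
from datasets import load_dataset

# "imagefolder" discovers images/ and merges the extra columns
# (frame_id, timestamp, frequency, ...) from metadata.csv.
ds = load_dataset("imagefolder", data_dir="audioform_dataset", split="train")

print(ds.features)             # image, frame_id, timestamp, frequency, time_scale, capture_date
sample = ds[0]
print(sample["timestamp"], sample["frequency"])
sample["image"].show()         # PIL image of the rendered 3D frame
```

The same metadata can also be inspected directly with any CSV reader, since `metadata.csv` is a plain file at the repo root.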
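## Estimating Dominant Frequency

The `frequency` column is 0 across this tiny sample, but for real recordings a per-frame dominant frequency can be estimated with a windowed FFT peak pick. This is a sketch of one common approach, not necessarily the estimator AUDIOFORM itself uses; it assumes NumPy, SciPy, and a hypothetical `input.wav`.

```python
import numpy as np
from scipy.io import wavfile

def dominant_frequency(path: str, timestamp: float, window_s: float = 0.1) -> float:
    """Estimate the dominant frequency (Hz) in a short window at `timestamp` seconds."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                                   # mix stereo down to mono
        data = data.mean(axis=1)
    start = int(timestamp * rate)
    window = data[start:start + int(window_s * rate)].astype(np.float64)
    if window.size == 0:
        return 0.0
    # Hann window reduces spectral leakage before the FFT peak pick.
    spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size)))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / rate)
    return float(freqs[np.argmax(spectrum)])

# e.g. the frame captured at 5.365 s in metadata.csv:
print(dominant_frequency("input.wav", 5.365))
```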