---
license: mit
task_categories:
- image-to-text
- text-to-image
- audio-classification
- image-classification
- tabular-classification
tags:
- audio
- image
- multimodal
- visualization
- audio-visualization
- 3d-visualization
- synthetic
- proof-of-concept
- frequency-estimation
- generative-audio
- music-visualization
---
## Audioform_Dataset_v1

This dataset is the very first output from **AUDIOFORM**, a Three.js powered 3D audio visualization tool that turns audio files into beautiful, timestamped visual frames with rich metadata. **AUDIOFORM** by webXOS is available for download in the /audioform/ folder of this repo so developers can create their own similar datasets. AUDIOFORM is a synthetic harmonic oscillator that runs in HTML; think of it as the "Hello World" / MNIST-style dataset application for audio-to-visual multimodal machine learning.

This dataset contains **10 captured frames** from a short uploaded WAV file (played at 1× speed), together with per-frame metadata including dominant frequency, timestamp, and capture info.

## Dataset Structure

```
audioform_dataset/
├── images/
│   ├── frame_0001.png
│   ├── frame_0002.png
│   └── ... (10 PNG frames total)
├── metadata.csv   # Main metadata file (Hugging Face viewer uses this)
└── README.md
```

`metadata.csv` columns:

| Column | Type | Description | Example Value |
|---------------|---------|-------------------------------------------------------------------|-----------------------------------|
| `file_name` | string | Relative path to the visualization PNG (required by Hugging Face) | `images/frame_0001.png` |
| `frame_id` | int | Sequential frame number (0-based) | 0, 1, 2, …, 9 |
| `timestamp` | float | Time in seconds when the frame was captured from the audio | 5.365, 6.219, 9.504 |
| `frequency` | int | Dominant detected audio frequency at capture time (Hz) | 0 (in this tiny sample) |
| `time_scale` | int | Playback speed multiplier used during visualization | 1 |
| `capture_date`| string | UTC ISO timestamp when the frame was rendered | 2026-01-13T19:57:36.427Z |
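A minimal sketch of reading these columns in Python with only the standard library. The sample rows below are hypothetical values made up to match the schema above, not actual rows from the dataset:

```python
import csv
import io

# Hypothetical rows following the metadata.csv schema described above.
SAMPLE = """file_name,frame_id,timestamp,frequency,time_scale,capture_date
images/frame_0001.png,0,5.365,0,1,2026-01-13T19:57:36.427Z
images/frame_0002.png,1,6.219,0,1,2026-01-13T19:57:37.427Z
"""

def load_frames(text):
    """Parse AUDIOFORM metadata rows, coercing numeric columns to their types."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["frame_id"] = int(row["frame_id"])
        row["timestamp"] = float(row["timestamp"])
        row["frequency"] = int(row["frequency"])
        row["time_scale"] = int(row["time_scale"])
        rows.append(row)
    return rows

frames = load_frames(SAMPLE)
print(frames[0]["file_name"], frames[0]["timestamp"])
```

For the real dataset, pass the contents of `metadata.csv` instead of `SAMPLE` and join each row's `file_name` against the `images/` folder.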

As a quick experiment, see how fast a tiny diffusion model, GAN, or LoRA can memorize and regenerate these exact 10 styles, or use the frames as style references for ControlNet, IP-Adapter, or fine-tuning Stable Diffusion to adopt this neon 3D audio-viz aesthetic.

This dataset shows the **format** AUDIOFORM produces. To scale it up:

- Feed it real music, voices, field recordings, synths
- Generate 1k–100k+ frames
- Add labels (genre, instrument, mood, multiple frequency peaks, …)

This unlocks serious applications:

- Music video auto-generation
- Visual audio classifiers
- Audio-conditioned image/video generation
- Interactive music → 3D art installations
- Novel multimodal music understanding models
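The `frequency` column is a dominant-frequency estimate per frame. A minimal sketch of how such an estimate can be computed, using NumPy's FFT on a synthetic tone — an illustration only, not AUDIOFORM's actual in-browser implementation:

```python
import numpy as np

SR = 44100                                # sample rate in Hz
t = np.arange(SR) / SR                    # 1 second of sample times
signal = np.sin(2 * np.pi * 440.0 * t)    # pure 440 Hz tone as stand-in audio

# Magnitude spectrum of the frame's audio; the peak bin is the dominant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SR)
dominant_hz = freqs[np.argmax(spectrum)]
print(round(dominant_hz))  # 440
```

With real recordings you would run this per capture window (and typically apply a window function first); the tiny sample in this dataset reports 0 Hz simply because its frames carried no detected peak.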

## Dataset Description

This dataset was generated using AUDIOFORM, a 3D audio visualization system.
 
## Generation Details

Generated with AUDIOFORM v1.0, by webXOS.