# V2MIDI Dataset

## Overview

The V2MIDI dataset pairs 40,000 MIDI files with AI-generated videos, connecting music and visual art in a new way. It is designed to help researchers and artists explore how to synchronize music and visuals using AI. More than a collection of files, it is a tool that could change how we create and experience audio-visual content.

## Dataset Description

- **Size**: About 257 GB
- **Contents**: 40,000 pairs of MIDI files and MP4 videos
- **Video Details**: 256x256 pixels, 16 seconds long, 24 frames per second
- **Music Focus**: House music drum patterns
- **Visual Variety**: AI-generated visuals based on diverse text prompts

## How We Created the Dataset

We built the V2MIDI dataset through six key steps:

1. **Gathering MIDI Data**:
We started with a large archive of drum and percussion MIDI files, focusing on house music. We selected files based on their rhythmic quality and how well they might translate to visuals.

2. **Standardizing MIDI Files**:
We processed each selected MIDI file into a 16-second sequence, keeping five core drum sounds: kick, snare, closed hi-hat, open hi-hat, and pedal hi-hat. This kept the rhythmic content consistent across the dataset.

3. **Linking Music to Visuals**:
We built a system that maps MIDI events to visual changes. For example, a kick drum might drive a pulse of generation strength, while hi-hats might trigger rotation. This mapping is the core of how the music and visuals stay synchronized.

4. **Creating Visual Ideas**:
We wrote 10,000 text prompts across 100 themes, using AI to help generate ideas and then refining them by hand. This gave us a wide range of visual styles that fit well with electronic music.

5. **Making the Videos**:
We combined our MIDI-to-visual system with tools such as Parseq, Deforum, and AUTOMATIC1111 (the Stable Diffusion web UI) to render a video for each MIDI file.

6. **Organizing and Checking**:
Finally, we paired each video with its MIDI file, organized the dataset, and verified that the visuals were well synchronized with the music and of good visual quality.
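The event-to-parameter mapping in step 3 can be sketched roughly as follows. This is a simplified, hypothetical illustration: the drum-to-parameter assignments, the pulse-and-decay behavior, and the function names are assumptions, not the exact V2MIDI implementation.

```python
FPS = 24
DURATION_S = 16
N_FRAMES = FPS * DURATION_S  # 384 frames per 16-second clip

# Hypothetical assignments of drums to visual parameters (only two of the
# five drums shown); the actual V2MIDI mapping may differ.
DRUM_TO_PARAM = {
    "kick": "strength",       # kick -> pulse of generation strength
    "closed_hh": "rotation",  # hi-hat -> rotation impulse
}

def events_to_curves(events, decay=0.85):
    """Convert (time_in_seconds, drum_name) hits into per-frame curves.

    Each hit sets its parameter to 1.0 on the hit frame; the value then
    decays geometrically on later frames, producing a visual pulse that
    lines up with the beat.
    """
    curves = {param: [0.0] * N_FRAMES for param in DRUM_TO_PARAM.values()}
    for t, drum in events:
        param = DRUM_TO_PARAM.get(drum)
        if param is None:
            continue  # ignore drums with no visual mapping
        frame = min(int(round(t * FPS)), N_FRAMES - 1)
        curves[param][frame] = 1.0
    # let each pulse fade over the following frames
    for values in curves.values():
        for i in range(1, N_FRAMES):
            values[i] = max(values[i], values[i - 1] * decay)
    return curves

curves = events_to_curves([(0.0, "kick"), (0.5, "closed_hh")])
```

Curves like these can then be fed to a keyframe tool such as Parseq to animate Deforum's generation parameters frame by frame.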

## Why It's Useful

The V2MIDI dataset is special because it precisely matches MIDI events to visual changes. This opens up some exciting possibilities:

- **See the music**: Train AI to create visuals that match music in real time.
- **Hear the visuals**: Explore whether AI can "guess" the music just by watching the video.
- **New creative tools**: Develop apps that let musicians visualize their music or let artists "hear" their visual creations.
- **Better live shows**: Create live visuals that stay perfectly in sync with the music.

## Flexible and Customizable

We've built the V2MIDI creation process to be flexible. Researchers and artists can:

- Adjust how MIDI files are processed
- Change how musical events are mapped to visual effects
- Create different styles of visuals
- Experiment with video settings such as resolution and frame rate
- Adapt the process to different computing setups

This flexibility means the V2MIDI approach could be extended to other types of music or visual styles.
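As a rough sketch, the adjustable settings above could be collected into a single configuration object. The field names here are hypothetical; only the default values come from the dataset description.

```python
from dataclasses import dataclass

@dataclass
class RenderConfig:
    """Hypothetical knobs for regenerating V2MIDI-style clips.

    Defaults follow the dataset description (256x256, 24 fps, 16 s,
    five drum voices); every field can be overridden.
    """
    width: int = 256
    height: int = 256
    fps: int = 24
    duration_s: int = 16
    drums: tuple = ("kick", "snare", "closed_hh", "open_hh", "pedal_hh")

    @property
    def n_frames(self) -> int:
        # total frames to render per clip
        return self.fps * self.duration_s

default_cfg = RenderConfig()
hi_fps_cfg = RenderConfig(fps=30)
```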

## Training AI Models

One of the most important aspects of the V2MIDI dataset is its potential for training AI models. Researchers can use this dataset to develop models that:

- Predict musical features from video content
- Create cross-modal representations linking audio and visual domains
- Develop more sophisticated audio-visual generation models

The size and quality of the dataset make it particularly valuable for deep learning approaches.
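A first step in any such training pipeline is pairing each video with its MIDI file. The sketch below assumes a hypothetical layout where each clip's `.mp4` and `.mid` share a filename stem; the actual V2MIDI directory layout may differ.

```python
from pathlib import Path

def pair_clips(video_paths, midi_paths):
    """Pair videos with MIDI files that share a filename stem.

    Assumes a hypothetical naming scheme (e.g. clip_00042.mp4 paired
    with clip_00042.mid); videos without a matching MIDI are skipped.
    """
    midis = {Path(p).stem: Path(p) for p in midi_paths}
    pairs = []
    for v in sorted(video_paths):
        v = Path(v)
        m = midis.get(v.stem)
        if m is not None:
            pairs.append((v, m))
    return pairs

# After extracting the archive, one might scan it like:
# pairs = pair_clips(root.rglob("*.mp4"), root.rglob("*.mid"))
pairs = pair_clips(["a.mp4", "b.mp4"], ["a.mid"])
```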

## How to Get the Dataset

Because the dataset is large, we've split it into 257 parts of about 1 GB each. To put it back together:

1. Download all the parts (they're named `img2img_part_aa` through `img2img_part_jw`)
2. Concatenate them: `cat img2img_part_* > img2img-images_clean.tar`
3. Extract the archive: `tar -xvf img2img-images_clean.tar`

Make sure you have at least 257 GB of free disk space before you start.
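Before concatenating, it can help to check that all 257 parts were downloaded. A small sketch (the aa-to-jw ordering matches the two-letter suffixes `split` produces; the helper names are our own):

```python
import itertools
import string
from pathlib import Path

def expected_suffixes(n_parts=257):
    """Two-letter suffixes in `split` order: aa, ab, ..., az, ba, ..., jw."""
    pairs = ("".join(p) for p in
             itertools.product(string.ascii_lowercase, repeat=2))
    return list(itertools.islice(pairs, n_parts))

def missing_parts(directory, prefix="img2img_part_"):
    """List expected part files not present in `directory`."""
    d = Path(directory)
    return [prefix + s for s in expected_suffixes()
            if not (d / (prefix + s)).exists()]

suffixes = expected_suffixes()
```

If `missing_parts(".")` returns an empty list, every part is in place and the `cat` step above is safe to run.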

## What's Next?

We see the V2MIDI dataset as just the beginning. Future work could:

- Include more types of music
- Work with more complex musical structures
- Try generating music from videos (not just videos from music)
- Create tools for live performances

## Thank You

We couldn't have made this without the people who created the original MIDI archive and the open-source communities behind Stable Diffusion, Deforum, and AUTOMATIC1111.

## Get in Touch

If you have questions or want to know more about the V2MIDI dataset, email us at research.obvious@gmail.com.