Art2Mus: Bridging Visual Arts and Music through Cross-Modal Generation
Abstract
Artificial Intelligence and generative models have revolutionized music creation, with many models leveraging textual or visual prompts for guidance. However, existing image-to-music models are limited to simple images, lacking the capability to generate music from complex digitized artworks. To address this gap, we introduce Art2Mus, a novel model designed to create music from digitized artworks or text inputs. Art2Mus extends the AudioLDM 2 architecture, a text-to-audio model, and employs our newly curated datasets, created via ImageBind, which pair digitized artworks with music. Experimental results demonstrate that Art2Mus can generate music that resonates with the input stimuli. These findings suggest promising applications in multimedia art, interactive installations, and AI-driven creative tools.
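The abstract does not detail how ImageBind is used to pair artworks with music. Below is a minimal sketch of one plausible pairing procedure, assuming the public facebookresearch/ImageBind API (imagebind_huge, load_and_transform_vision_data, load_and_transform_audio_data): embed both modalities into ImageBind's joint space and match each artwork to its nearest music clip by cosine similarity. The file paths are hypothetical, and the actual Art2Mus curation pipeline may differ.

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind model (imagebind_huge checkpoint).
model = imagebind_model.imagebind_huge(pretrained=True).to(device)
model.eval()

# Hypothetical file lists: digitized artworks and candidate music clips.
artwork_paths = ["artworks/starry_night.jpg", "artworks/guernica.jpg"]
music_paths = ["music/clip_001.wav", "music/clip_002.wav", "music/clip_003.wav"]

# Preprocess each modality with ImageBind's built-in transforms.
inputs = {
    ModalityType.VISION: data.load_and_transform_vision_data(artwork_paths, device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(music_paths, device),
}

with torch.no_grad():
    embeddings = model(inputs)

# All modalities share one joint embedding space, so cross-modal
# cosine similarity between images and audio clips is meaningful.
vision = torch.nn.functional.normalize(embeddings[ModalityType.VISION], dim=-1)
audio = torch.nn.functional.normalize(embeddings[ModalityType.AUDIO], dim=-1)
similarity = vision @ audio.T  # shape: (n_artworks, n_clips)

# Pair each artwork with its most similar music clip to build the dataset.
best = similarity.argmax(dim=-1)
for art, idx in zip(artwork_paths, best.tolist()):
    print(f"{art} -> {music_paths[idx]}")
```

Once such image-music pairs exist, the image embedding can serve as the conditioning signal for the extended AudioLDM 2 model in place of, or alongside, a text prompt.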