So there has been this vast development in multimodal AI recently. I signed up for Replicate and FAL AI, and what really strikes me is not only the diversity and number of models out there, but also the large number of permutations in multimodal AI, meaning what input can go to what output. What I find difficult to navigate at the moment, as a creator (I created a few music videos just as fun experiments), is that there are so many different models. Within a single model series alone there might be 20 different models to choose from on FAL, but they all do slightly different things, not only in terms of resolution, parameters and max duration, but also in terms of modalities, and FAL doesn't really let you filter on this at the moment.

What I mean by that is: if we take an image-to-video model that animates still images into video, one model might create video without audio and another might create video with audio. That's a very significant difference. But there's also a significant difference in how the audio is driven. Do I prompt for it, meaning it's text-to-audio that gets rendered out and added to the video? Or is it a reference audio plus a reference image? Once you start digging into this, all these differences really matter, because ideally I'd like to filter on them. Say I wanted to look at image-to-video models that can generate lip sync to audio from a prompt; that might be one use case, alongside the video itself.

In another use case, I might want to create a dialogue video. Say I have a still image of a crowded market in Jerusalem, and I might want to prompt something like: create a video from this image; the background soundtrack is conversation noise in a bustling marketplace, with vendors yelling out their prices. That's just an example of the kind of ambient background noise in the market I'm thinking about.

So that's what I'd like to do with the repository I've created here: work out a taxonomy for multimodal AI, really for my own reference, but also as an open-source project exploring the permutations of multimodality that are possible. In the preceding example, one modality might be still image to video without audio, each modality having a name and then a description. Another modality might be still image to video with audio but without lip sync. Another might be still image to video with lip sync.

But then you might have some sub-modalities: still image to video with lip sync driven by a reference image; still image to video with a character reference supplied in video; still image to video with audio, with the character reference supplied through a LoRA. And I reckon that if we really enumerated the modalities we might get to hundreds if not thousands of different ones. For example, on FAL, just to talk about the long tail, there's music-to-music, which is music inpainting. There's also audio inpainting; thinking aloud here, I guess music inpainting is a subset of audio inpainting, distinguished by being melodic.

So that's the objective. I think JSON is the obvious format in which to attempt to denote these. And what I'd like you to do, as the task definition, is basically try to do this.
Try to enumerate and list out a hierarchy, some kind of taxonomy representation that makes sense. We could create a baseline and then explore various ways of mapping out the hierarchy, manipulating the JSON so that we look at different ways of organizing it. So I think it would be useful to have a first-entry JSON that we, and later maybe I, can extend as new modalities come along. Interesting labels might be each modality's point of maturity, example workflows, use cases, etc. There's an awful lot that could be explored within these parameters.
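As a starting point, here is a minimal sketch of what such a first-entry JSON could look like, assuming a nested modality/sub-modality structure. All of the field names (id, inputs, outputs, audio, lip_sync, maturity, example_workflows, example_use_cases) are provisional assumptions for illustration, not a settled schema, and the entries are drawn only from the examples discussed above.

```json
{
  "schema_version": "0.1",
  "note": "First-draft taxonomy sketch; field names and values are provisional.",
  "modalities": [
    {
      "id": "image-to-video",
      "description": "Animate a still image into a video clip.",
      "inputs": ["image", "text_prompt"],
      "outputs": ["video"],
      "audio": "none",
      "sub_modalities": [
        {
          "id": "image-to-video-with-audio",
          "description": "Video plus a generated soundtrack, prompted as text (e.g. ambient marketplace noise).",
          "inputs": ["image", "text_prompt"],
          "outputs": ["video", "audio"],
          "audio": "generated_from_prompt",
          "lip_sync": false,
          "maturity": "unrated",
          "example_workflows": [],
          "example_use_cases": ["dialogue-free scene with ambient background sound"]
        },
        {
          "id": "image-to-video-lip-sync-reference-audio",
          "description": "Video with lip sync driven by a supplied reference audio track.",
          "inputs": ["image", "reference_audio"],
          "outputs": ["video", "audio"],
          "audio": "reference",
          "lip_sync": true,
          "maturity": "unrated",
          "example_workflows": [],
          "example_use_cases": ["talking-head dialogue from a portrait"]
        },
        {
          "id": "image-to-video-character-reference-lora",
          "description": "Video with audio where the character identity is supplied through a LoRA.",
          "inputs": ["image", "text_prompt", "lora"],
          "outputs": ["video", "audio"],
          "audio": "generated_from_prompt",
          "lip_sync": true,
          "maturity": "unrated",
          "example_workflows": [],
          "example_use_cases": []
        }
      ]
    },
    {
      "id": "audio-inpainting",
      "description": "Edit or regenerate part of an existing audio track.",
      "inputs": ["audio", "text_prompt"],
      "outputs": ["audio"],
      "sub_modalities": [
        {
          "id": "music-inpainting",
          "description": "Audio inpainting restricted to melodic/music content (music-to-music).",
          "inputs": ["audio", "text_prompt"],
          "outputs": ["audio"],
          "maturity": "unrated",
          "example_workflows": [],
          "example_use_cases": []
        }
      ]
    }
  ]
}
```

Whether the primary axis of the hierarchy should be the input/output pairing, the audio handling, or something else entirely is exactly the kind of question we could explore by reshaping this JSON into different organizations.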