Voice Dataset Manager
Update dataset - 2025-10-27 01:49:53
79e629e
I have a question about image-to-video generation, which is currently very expensive, and I'm trying to find a way around that. I found the WAN models, which are by Alibaba; I find them very good, and there's a more affordable WAN model I like using. When I'm making a video, I gather up my images and my prompts and move into a workflow where I do the storyboarding, collect the source material (the photos) as I call it, assemble the prompts, and then run the whole thing as a script. That's a very novel way for me to approach content creation, in the sense that it's programmatic and code-first.

It's a strange way to approach a creative process, but it works, and at the moment it seems to be the most effective way to do this. Before this, I was using a playground, running generations one by one and importing them into a video editor, which is just a lot slower. Now the issue is that image-to-video, as I mentioned, is expensive. I'm doing these projects for fun, and I have a lot of ideas I want to try, but even the cheaper WAN models are in the region of 10 to 15 cents per generation. It's very easy to go through $20 or even $50, especially given that you frequently need to run the same prompt multiple times before you get a satisfactory result.

I really, really want to explore image-to-video, and I'm trying to find an affordable way to play around with it, even if it's not with the best model. So what I've been thinking about is this: for a while I've come across providers like RunPod that make GPUs available either through serverless functions or at per-hour pricing. And since I discovered Replicate and FAL, I've wondered: if you can just make an API call, why go to the trouble of managing a machine instance?
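The code-first workflow described above, a storyboard of image-plus-prompt pairs run as a batch, can be sketched roughly as follows. Note that `generate_clip` is a hypothetical stand-in for whatever image-to-video API call is actually used (e.g. a WAN model hosted on Replicate or FAL); all names and paths here are illustrative, not the author's actual script.

```python
# Sketch of a code-first image-to-video batch workflow (illustrative only).
# Each storyboard entry pairs a source image with a prompt; the script walks
# the list and collects one output clip path per shot.

from dataclasses import dataclass

@dataclass
class Shot:
    image_path: str   # source material gathered during storyboarding
    prompt: str       # motion/scene prompt for the image-to-video model

def generate_clip(shot: Shot) -> str:
    """Hypothetical stand-in for an image-to-video API call
    (e.g. a WAN model on Replicate or FAL). Here it just derives
    a fake output path so the batch logic stays runnable."""
    return shot.image_path.rsplit(".", 1)[0] + ".mp4"

def run_storyboard(shots: list[Shot]) -> list[str]:
    # One pass over the storyboard; in practice you would add retries,
    # since the same prompt often needs several attempts.
    return [generate_clip(s) for s in shots]

storyboard = [
    Shot("shots/01_opening.png", "slow push-in, soft morning light"),
    Shot("shots/02_street.png", "handheld pan left, light rain"),
]
clips = run_storyboard(storyboard)
```

The appeal over a playground UI is exactly what's described above: the whole batch runs unattended, and retries or prompt tweaks are one-line edits to the list.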
I'm now thinking the reason might be cost: if the machines are a certain price per hour, renting them might actually be a lot more cost-effective than using an API.

So my questions are these. First, is that the case? Is cost mitigation a common reason people use these services? Second, serverless versus pods, as RunPod calls them: serverless almost makes more sense to me, because you only pay for what you use and you don't need to worry about starting and stopping the pod or configuring auto-shutdown policies. So why do people go for pods over serverless?

And finally, if I do this, the objective would probably be to have something like my own API endpoint that runs the work on a serverless function in the backend. Here's what confuses me about these setups: the first time I tried it (with video generation, if I'm not mistaken), the video actually ended up on my local machine, which seemed almost like magic. So the actual inference and rendering happen in the cloud, and the result is just delivered back to me when it's done? And then I'd simply run my script and be using on-demand compute.
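To make the cost question concrete, a back-of-envelope break-even calculation compares per-generation API pricing against an hourly GPU rental. The $0.12-per-generation figure comes from the 10–15 cent range mentioned above; the GPU hourly rate and the seconds-per-clip figure are illustrative assumptions, not real quotes.

```python
# Back-of-envelope break-even: API per-generation pricing vs. renting a GPU
# by the hour. The hourly rate and render time are assumptions for
# illustration, not quoted prices or benchmarks.

def api_cost(n_generations: int, price_per_gen: float) -> float:
    return n_generations * price_per_gen

def gpu_cost(n_generations: int, seconds_per_gen: float, hourly_rate: float) -> float:
    hours = n_generations * seconds_per_gen / 3600
    return hours * hourly_rate

# Example: 200 attempts (retries included), $0.12/gen via an API,
# vs. a hypothetical $0.60/hr GPU taking ~90 s per clip.
api = api_cost(200, 0.12)      # $24.00
gpu = gpu_cost(200, 90, 0.60)  # 200 * 90 s = 5 h -> $3.00
```

The gap narrows once you account for idle time, cold starts, and setup effort on the rented GPU, which is part of why the serverless-versus-pods question matters.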
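On the last question, exposing your own endpoint backed by serverless GPU compute, the usual shape (using RunPod's serverless workers as one example) is a small handler function that the platform wraps in an HTTP endpoint for you; your local script just submits a job and downloads the result, which is why the finished video "appears" locally even though inference ran in the cloud. A minimal sketch of that handler shape, with the model call stubbed out as a hypothetical:

```python
# Sketch of the serverless-worker pattern: the platform exposes your handler
# behind an HTTP endpoint; a local script submits a job and polls for the
# output. The model call is a stub -- a real worker would load the model
# weights and return a URL to the rendered video.

def run_wan_model(image_url: str, prompt: str) -> str:
    # Hypothetical placeholder for actual image-to-video inference.
    return f"https://storage.example.com/out/{hash((image_url, prompt)) & 0xFFFF}.mp4"

def handler(event: dict) -> dict:
    """Entry point the serverless platform invokes once per request.
    event["input"] carries whatever JSON the local script submitted."""
    inp = event["input"]
    video_url = run_wan_model(inp["image_url"], inp["prompt"])
    return {"video_url": video_url}

# With RunPod's SDK this would be registered roughly as:
#   import runpod
#   runpod.serverless.start({"handler": handler})
# (left commented out so the sketch stays self-contained)

result = handler({"input": {"image_url": "https://example.com/shot.png",
                            "prompt": "slow dolly forward"}})
```

So yes: the inference happens on the cloud GPU, the worker returns a URL (or bytes), and the local script downloading that result is what makes it feel like the video generated locally.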