A quick note on some things I've been working on over the past couple of days. I recorded a long conversation with my mother and wanted to transcribe it, so I did that with Whisper, and then I wanted to get the output into SRT format. A few good things came out of this, and I just want to note what they are.
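As a sketch of the SRT step: Whisper returns segments with start and end times in seconds, which need reformatting into SRT's numbered `HH:MM:SS,mmm` blocks. This is a minimal pure-Python version, assuming Whisper-style segment dicts as input rather than calling the model itself.

```python
def srt_timestamp(seconds: float) -> str:
    """Convert seconds to SRT's HH:MM:SS,mmm format."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render Whisper-style segments ({'start', 'end', 'text'}) as SRT text."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

The segment shape here matches what `whisper.transcribe()` returns under its `"segments"` key, but the converter itself has no dependency on the library.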

The first is that I finally understood Hugging Face Spaces, which are essentially hosted apps. You can create your own Space, make it private, and choose to run it on any of their GPUs, billed on demand. What I previously thought was: Spaces are great because you can prototype an idea and then share it, demonstrate it, and so on. What I didn't realize, and what just clicked for me, is that you can build, say, a Whisper transcription workflow like the one I did, or an image-to-video workflow that isn't worth running locally (which is more often the case than not, especially on my current AMD hardware), deploy it to Hugging Face, and run it on their enterprise-grade hardware, far beyond what any ordinary local machine can offer.

With fal.ai, if you're paying, say, 25 cents a clip, it's not cost effective, and Google is similar per clip. But if you run your own, well, now I'm finally understanding why people pay for on-demand GPUs or keep provisioned GPUs. Some of the mid-grade hardware Hugging Face provides is priced at about $1–$3 per hour, which is actually quite cheap, and an awful lot cheaper than paying per generation. The danger with these things, of course, is everyone's nightmare: leaving a serverless function turned on and then getting a huge bill. So they have pause settings. That was one thing.
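The break-even arithmetic is straightforward. Using the numbers above ($0.25 per clip vs. roughly $1–$3/hour for rented hardware) and an assumed, made-up compute time per clip:

```python
def cost_per_clip_on_gpu(gpu_dollars_per_hour: float, minutes_per_clip: float) -> float:
    """Cost of one generation when renting a GPU by the hour."""
    return gpu_dollars_per_hour * minutes_per_clip / 60

# Assumed numbers: a $2/hr mid-grade GPU and 3 minutes of compute per clip.
per_clip_api = 0.25                            # per-generation API pricing
per_clip_gpu = cost_per_clip_on_gpu(2.0, 3.0)  # $0.10 on rented hardware
```

Under those assumptions the rented GPU is less than half the per-generation price, and the gap widens the more clips you batch into one billed hour.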

The other thing I discovered, or finally connected, is why it's useful to duplicate Spaces on Hugging Face. I never really understood what the point was. But someone has already created something that works, so if you duplicate a working Space, you don't need to build it yourself. And since your copy is private, you know exactly what's running, and you can run it on your own terms, on hardware you choose.

And then finally, there are RunPod and Modal (and many more) for on-demand GPUs. I'm also using these for this purpose. I was building a pipeline with pyannote for diarization, which isn't really well served by any API, and when you build these things yourself, pulling down massive Python libraries every time is just way too time-consuming. If you can create something once and then run it in an environment that's much, much faster, that's why these services are valuable.
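To illustrate the kind of glue code such a pipeline needs (not pyannote itself, which is exactly the heavy dependency worth running remotely), here's a sketch that assigns speaker labels to transcript segments by maximum time overlap with diarization turns. The turn and segment shapes are assumptions, modeled loosely on what pyannote and Whisper return.

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap between two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def label_segments(segments, turns):
    """Attach a speaker label to each transcript segment.

    segments: [{'start', 'end', 'text'}]    (Whisper-style)
    turns:    [{'start', 'end', 'speaker'}] (diarizer-style)
    Picks the speaker whose turn overlaps each segment the most.
    """
    labeled = []
    for seg in segments:
        best = max(
            turns,
            key=lambda t: overlap(seg["start"], seg["end"], t["start"], t["end"]),
            default=None,
        )
        speaker = best["speaker"] if best else "UNKNOWN"
        labeled.append({**seg, "speaker": speaker})
    return labeled
```

Maximum-overlap matching is a simple heuristic; it breaks down when a segment straddles a speaker change, which is why real pipelines often re-split segments at turn boundaries first.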