Local Installation Video and Testing - Step by Step

#3
by fahdmirzac - opened

Hi,
Kudos on producing such a sublime model. I did a local installation and testing video :

https://youtu.be/yKFIO3KAbcY?si=TY3R0eaonnjVkVt0

Thanks and regards,
Fahd

Just to clarify: the OP's description of this video is inaccurate. This isn't a "Local Installation" video — it's an installation on Google's cloud/AI platform.

Granted, it will let you run the model, but it's not a tutorial on running it on your own hardware, which is what "locally" would normally imply — so just be aware.

Not really. It's not running on Google's cloud; it's running locally on a virtual machine, which is clearly stated in the video. There is a difference between API-based models and locally running models. "Local" doesn't just mean your own laptop or a server in your garage. Hardly anyone has the budget for a GPU rack with all the power and cooling needed to run these models, so the viable option is to rent a server and install them "locally" there — and that's what this video does.

Please don't mislead.

fahdmirzac changed discussion status to closed

Wtf does "It's running locally on a virtual machine" even mean? And what is "Hardly anyone has the budget for a GPU rack with all the power and cooling to run these models" supposed to mean?
Also, why are you renting an RTX A6000 for this? You could easily use a single RTX 3090, or even something lower.

And according to your logic, everything counts as “hosted locally,” but your definition of “locally” seems to be “if someone else hosts it and gives you access, it’s still local.” So let me get this straight: if I pay for the ChatGPT API, build a frontend that calls the API, and then host it on Vercel, does that count as local too? o.O

That's just clickbait...
