Powerhouse.pr (powerhouse-pr)
1 follower · 3 following
AI & ML interests: None yet
Recent Activity

Replied to obsxrver's post · 21 days ago:
(https://github.com/obsxrver/wan22-lora-training) If you've been wanting to train your own Wan 2.2 Video LoRAs but are intimidated by the hardware requirements, parameter-tweaking insanity, or the installation nightmare, I built a solution that handles it all for you. This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.

Why this method?

* Zero Setup: No installing Python, CUDA, or hunting for dependencies. You launch a pre-built [Vast.AI](http://Vast.AI) template, and it's ready in minutes.
* Full WebUI: Drag-and-drop your videos/images, edit captions, and click "Start." No terminal commands required.
* Extremely Cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually $3 or less.
* Auto-Save: It automatically uploads your finished LoRA to your cloud storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary.

How it works:

1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.

It supports both Text-to-Video and Image-to-Video, and it optimizes for dual-GPU setups (training the High/Low noise models simultaneously) to cut training time in half.

Repo + Template Link: https://github.com/obsxrver/wan22-lora-training

Let me know if you have questions.
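The "$3 or less" claim is straightforward rental arithmetic: hourly rate times run length, plus remembering that auto-shutdown caps the billed hours at the run length. A minimal sketch, assuming a hypothetical hourly rate (the `DUAL_5090_RATE_USD` figure below is an illustration, not a quoted Vast.AI price; check current listings):

```python
def estimated_run_cost(hourly_rate_usd: float, hours: float) -> float:
    """Total rental cost for a training run, rounded to cents.

    With auto-shutdown enabled, billed hours equal the run length,
    so cost is simply rate * hours.
    """
    return round(hourly_rate_usd * hours, 2)


# Hypothetical example rate for a dual RTX 5090 node (assumed, not from the post).
DUAL_5090_RATE_USD = 0.90

# The post's 2-3 hour training window at that rate:
print(estimated_run_cost(DUAL_5090_RATE_USD, 2))  # short run
print(estimated_run_cost(DUAL_5090_RATE_USD, 3))  # long run
```

At any rate near a dollar an hour, a 2-3 hour run lands in the low single digits of dollars, which is why the auto-shutdown matters: without it, an idle instance left overnight would dominate the bill.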
Reacted to obsxrver's post with ❤️ · 21 days ago
Liked a model · about 1 month ago: obsxrver/wan2.2-i2v-scat
Organizations: None yet
powerhouse-pr's activity
Liked obsxrver/wan2.2-i2v-scat · about 1 month ago
Image-to-Video · Updated 12 days ago · 2.08k downloads · 15 likes
Liked obsxrver/wan2.2-t2v-scat · 3 months ago
Text-to-Video · Updated 12 days ago · 728 downloads · 20 likes