Hugging Face
cyberdog (cyberdog94)
0 followers · 1 following
AI & ML interests
None yet
Recent Activity
New activity 19 days ago in obsxrver/wan2.2-i2v-scat: "XX vs XXI version?"
Replied to obsxrver's post about 1 month ago:
(https://github.com/obsxrver/wan22-lora-training) If you’ve been wanting to train your own Wan 2.2 Video LoRAs but are intimidated by the hardware requirements, the parameter-tweaking insanity, or the installation nightmare: I built a solution that handles it all for you. This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.

Why this method?

* Zero setup: No installing Python or CUDA, no hunting for dependencies. You launch a pre-built [Vast.AI](http://Vast.AI) template, and it's ready in minutes.
* Full WebUI: Drag and drop your videos/images, edit captions, and click "Start." No terminal commands required.
* Extremely cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually $3 or less.
* Auto-save: It automatically uploads your finished LoRA to your cloud storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary.

How it works:

1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.

It supports both Text-to-Video and Image-to-Video, and it optimizes for dual-GPU setups (training the High and Low noise models simultaneously) to cut training time in half.

Repo + template link: https://github.com/obsxrver/wan22-lora-training

Let me know if you have questions.
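The auto-save step described in the post (upload the finished LoRA, then kill the instance so billing stops) can be sketched roughly as a shell script. This is a hypothetical illustration, not the repo's actual code: `rclone` and the `vastai` CLI are real tools, but the remote name, paths, and instance id below are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of the "Auto-Save" step: push the trained LoRA to a
# pre-configured cloud-storage remote, then destroy the Vast.ai instance.
# The remote name "gdrive:wan22-loras", the output path, and the instance
# id are made-up examples; check rclone and vastai docs for exact usage.

autosave_and_shutdown() {
  lora_path="$1"     # e.g. /workspace/output/my_lora.safetensors
  remote="$2"        # e.g. gdrive:wan22-loras (an rclone remote you set up)
  instance_id="$3"   # your Vast.ai instance id

  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Dry-run mode: print the plan instead of executing it.
    echo "would run: rclone copy $lora_path $remote"
    echo "would run: vastai destroy instance $instance_id"
  else
    # Upload first; only destroy the instance if the upload succeeded.
    rclone copy "$lora_path" "$remote" && \
    vastai destroy instance "$instance_id"
  fi
}

# Demo in dry-run mode (prints the plan; touches no real instance):
DRY_RUN=1 autosave_and_shutdown /workspace/output/my_lora.safetensors gdrive:wan22-loras 12345
```

Chaining the destroy behind the upload with `&&` matters: if the upload fails, the instance stays alive so the LoRA isn't lost.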
Organizations
None yet
cyberdog94's activity
Liked a model 3 months ago: lodestones/chroma-debug-development-only (updated Oct 14 · 47)