Each row of the dataset pairs a video with a class label (206 classes in total). The labels visible in the 100-row viewer preview, with their preview row counts:

| label | preview rows |
|---|---|
| 092钟无艳 | 20 |
| 193喜相逢 | 20 |
| 2A.I.灵接触 | 11 |
| 3A货B货 | 1 |
| 4C.I.D. | 14 |
| 5Catwalk俏佳人 | 10 |
| 6ID精英 | 24 |
This dataset is used to train LoRAs on Wan2.2 and HunyuanVideo. All videos are high resolution (1080x720) and free of charge.
Introduction
This repository provides scripts for training LoRA (Low-Rank Adaptation) models with HunyuanVideo, Wan2.1/2.2, FramePack, FLUX.1 Kontext, and Qwen-Image architectures.
This repository is unofficial and not affiliated with the official HunyuanVideo/Wan2.1/2.2/FramePack/FLUX.1 Kontext/Qwen-Image repositories.
This repository is under development.
For Developers Using AI Coding Agents
This repository provides recommended instructions to help AI agents like Claude and Gemini understand our project context and coding standards.
To use them, you need to opt-in by creating your own configuration file in the project root.
Quick Setup:
1. Create a `CLAUDE.md`, `GEMINI.md`, and/or `AGENTS.md` file in the project root.
2. Add the following line to your `CLAUDE.md` to import the repository's recommended prompt (currently they are almost the same): `@./.ai/claude.prompt.md`, or for Gemini: `@./.ai/gemini.prompt.md`.
3. You may also import the prompt into a custom prompt file such as `AGENTS.md`, depending on the agent you are using.
4. You can then add your own personal instructions below the import line (e.g., `Always include a short summary of the change before diving into details.`).
This approach ensures that you have full control over the instructions given to your agent while benefiting from the shared project context. Your CLAUDE.md, GEMINI.md and AGENTS.md (as well as Claude's .mcp.json) are already listed in .gitignore, so they won't be committed to the repository.
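The quick setup above can be sketched as a short shell snippet (the personal instruction line is just an example; adapt it to your own preferences):

```shell
# Create CLAUDE.md in the project root: first the import line for the
# repository's recommended prompt, then a blank line, then a personal note.
printf '%s\n' \
  '@./.ai/claude.prompt.md' \
  '' \
  'Always include a short summary of the change before diving into details.' \
  > CLAUDE.md

# Show the import line to confirm the file was written.
head -n1 CLAUDE.md
```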
Overview
Hardware Requirements
- VRAM: 12GB or more recommended for image training, 24GB or more for video training
  - Actual requirements depend on resolution and training settings. For 12GB, use a resolution of 960x544 or lower and memory-saving options such as `--blocks_to_swap`, `--fp8_llm`, etc.
- Main Memory: 64GB or more recommended; 32GB plus swap may work
Features
- Memory-efficient implementation
- Windows compatibility confirmed (Linux compatibility confirmed by community)
- Multi-GPU training (using Accelerate), documentation will be added later
Documentation
For detailed information on specific architectures, configurations, and advanced features, please refer to the documentation below.
Architecture-specific:
- HunyuanVideo
- Wan2.1/2.2
- Wan2.1/2.2 (Single Frame)
- FramePack
- FramePack (Single Frame)
- FLUX.1 Kontext
- Qwen-Image
Common Configuration & Usage:
- Dataset Configuration
- Advanced Configuration
- Sampling during Training
- Tools and Utilities
- Using torch.compile
Installation
pip based installation
Python 3.10 or later is required (verified with 3.10).
Create a virtual environment and install PyTorch and torchvision matching your CUDA version.
PyTorch 2.5.1 or later is required (see note).
```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
```
Install the required dependencies using the following command.
```bash
pip install -e .
```
Optionally, you can use FlashAttention and SageAttention (for inference only; see SageAttention Installation for installation instructions).
Optional dependencies for additional features:
- `ascii-magic`: used for dataset verification
- `matplotlib`: used for timestep visualization
- `tensorboard`: used for logging training progress
- `prompt-toolkit`: used for interactive prompt editing in the Wan2.1 and FramePack inference scripts. If installed, it is used automatically in interactive mode. Especially useful in Linux environments for easier prompt editing.

```bash
pip install ascii-magic matplotlib tensorboard prompt-toolkit
```
uv based installation (experimental)
You can also install using uv, but installation with uv is experimental. Feedback is welcome.
- Install uv (if not already present on your OS).
Linux/MacOS
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Follow the instructions to add the uv path manually until you restart your session...
Windows
```powershell
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Follow the instructions to add the uv path manually until you reboot your system... or just reboot your system at this point.
Model Download
Model download procedures vary by architecture. Please refer to the architecture-specific documents in the Documentation section for instructions.
Usage
Dataset Configuration
Please refer to here.
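For orientation, the dataset configuration is a TOML file roughly along these lines. The field names below are illustrative assumptions, not a verified schema, so treat the linked dataset configuration document as authoritative:

```toml
# Illustrative sketch only -- field names are assumptions, not a verified schema.
[general]
resolution = [960, 544]        # [width, height]
caption_extension = ".txt"
batch_size = 1

[[datasets]]
video_directory = "/path/to/videos"
cache_directory = "/path/to/cache"
```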
Pre-caching
Pre-caching procedures vary by architecture. Please refer to the architecture-specific documents in the Documentation section for instructions.
Configuration of Accelerate
Run accelerate config to configure Accelerate. Choose appropriate values for each question based on your environment (either input values directly or use arrow keys and enter to select; uppercase is default, so if the default value is fine, just press enter without inputting anything). For training with a single GPU, answer the questions as follows:
- In which compute environment are you running?: This machine
- Which type of machine are you using?: No distributed training
- Do you want to run your training on CPU only (even if a GPU / Apple Silicon / Ascend NPU device is available)?[yes/NO]: NO
- Do you wish to optimize your script with torch dynamo?[yes/NO]: NO
- Do you want to use DeepSpeed? [yes/NO]: NO
- What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]: all
- Would you like to enable numa efficiency? (Currently only supported on NVIDIA hardware). [yes/NO]: NO
- Do you wish to use mixed precision?: bf16
Note: In some cases, you may encounter the error ValueError: fp16 mixed precision requires a GPU. If this happens, answer "0" to the sixth question (What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:). This means that only the first GPU (id 0) will be used.
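Answering the prompts above writes a config file (typically `~/.cache/huggingface/accelerate/default_config.yaml`). For a single-GPU bf16 setup it ends up roughly like the following sketch, not a verbatim dump:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
mixed_precision: bf16
num_processes: 1
use_cpu: false
gpu_ids: all
```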
Training and Inference
Training and inference procedures vary significantly by architecture. Please refer to the architecture-specific documents in the Documentation section and the various configuration documents for detailed instructions.
Miscellaneous
SageAttention Installation
sdbds has provided a Windows-compatible SageAttention implementation and pre-built wheels here: https://github.com/sdbds/SageAttention-for-windows. After installing triton, if your Python, PyTorch, and CUDA versions match, you can download and install the pre-built wheel from the Releases page. Thanks to sdbds for this contribution.
For reference, the build and installation instructions are as follows. You may need to update Microsoft Visual C++ Redistributable to the latest version.
1. Download and install the triton 3.1.0 wheel matching your Python version from here.
2. Install Microsoft Visual Studio 2022 or Build Tools for Visual Studio 2022, configured for C++ builds.
3. Clone the SageAttention repository in your preferred directory:
   ```bash
   git clone https://github.com/thu-ml/SageAttention.git
   ```
4. Open `x64 Native Tools Command Prompt for VS 2022` from the Start menu under Visual Studio 2022.
5. Activate your venv, navigate to the SageAttention folder, and run `python setup.py install`. If you get a DISTUTILS not configured error, run `set DISTUTILS_USE_SDK=1` and try again.
This completes the SageAttention installation.
PyTorch version
If you specify torch for --attn_mode, use PyTorch 2.5.1 or later (earlier versions may result in black videos).
If you use an earlier version, use xformers or SageAttention.
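A small hedged helper to check whether a given PyTorch version string is new enough for `--attn_mode torch`. The function name is ours, not part of the repository, and the version strings below are just examples:

```shell
# Compare a torch version string (possibly with a +cuXXX suffix) against 2.5.1.
check_torch_version() {
  python3 -c "import sys; v = tuple(int(x) for x in sys.argv[1].split('+')[0].split('.')[:3]); print('ok' if v >= (2, 5, 1) else 'use xformers or SageAttention')" "$1"
}

check_torch_version "2.5.1+cu124"   # ok
check_torch_version "2.4.0"         # use xformers or SageAttention
```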
Disclaimer
This repository is unofficial and not affiliated with the official repositories of the supported architectures.
This repository is experimental and under active development. While we welcome community usage and feedback, please note:
- This is not intended for production use
- Features and APIs may change without notice
- Some functionalities are still experimental and may not work as expected
- Video training features are still under development
If you encounter any issues or bugs, please create an Issue in this repository with:
- A detailed description of the problem
- Steps to reproduce
- Your environment details (OS, GPU, VRAM, Python version, etc.)
- Any relevant error messages or logs
Contributing
We welcome contributions! Please see CONTRIBUTING.md for details.
License
Code under the hunyuan_model directory is modified from HunyuanVideo and follows their license.
Code under the hunyuan_video_1_5 directory is modified from HunyuanVideo 1.5 and follows their license.
Code under the wan directory is modified from Wan2.1. The license is under the Apache License 2.0.
Code under the frame_pack directory is modified from FramePack. The license is under the Apache License 2.0.
Other code is under the Apache License 2.0. Some code is copied and modified from Diffusers.