{ "base_model": "tencent/HunyuanVideo", "tree": [ { "model_id": "tencent/HunyuanVideo", "gated": "False", "card": "---\npipeline_tag: text-to-video\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: LICENSE\n---\n\n\n\n

\n \n

\n\n# HunyuanVideo: A Systematic Framework For Large Video Generation Model Training\n\n-----\n\nThis repo contains PyTorch model definitions, pre-trained weights and inference/sampling code for our paper exploring HunyuanVideo. You can find more visualizations on our [project page](https://aivideo.hunyuan.tencent.com).\n\n> [**HunyuanVideo: A Systematic Framework For Large Video Generation Model Training**](https://arxiv.org/abs/2412.03603)
\n\n\n\n## News!!\n\n* Jan 13, 2025: \ud83d\udcc8 We release the [Penguin Video Benchmark](https://github.com/Tencent/HunyuanVideo/blob/main/assets/PenguinVideoBenchmark.csv).\n* Dec 18, 2024: \ud83c\udfc3\u200d\u2642\ufe0f We release the [FP8 model weights](https://huggingface.co/tencent/HunyuanVideo/blob/main/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt) of HunyuanVideo to save more GPU memory.\n* Dec 17, 2024: \ud83e\udd17 HunyuanVideo has been integrated into [Diffusers](https://huggingface.co/docs/diffusers/main/api/pipelines/hunyuan_video).\n* Dec 7, 2024: \ud83d\ude80 We release the parallel inference code for HunyuanVideo powered by [xDiT](https://github.com/xdit-project/xDiT).\n* Dec 3, 2024: \ud83d\udc4b We release the inference code and model weights of HunyuanVideo. [Download](https://github.com/Tencent/HunyuanVideo/blob/main/ckpts/README.md).\n\n\n\n## Open-source Plan\n\n- HunyuanVideo (Text-to-Video Model)\n - [x] Inference \n - [x] Checkpoints\n - [x] Multi-gpus Sequence Parallel inference (Faster inference speed on more gpus)\n - [x] Web Demo (Gradio)\n - [x] Diffusers \n - [x] FP8 Quantified weight\n - [x] Penguin Video Benchmark\n - [x] ComfyUI\n- [HunyuanVideo (Image-to-Video Model)](https://github.com/Tencent/HunyuanVideo-I2V)\n - [x] Inference \n - [x] Checkpoints \n\n\n\n## Contents\n\n- [HunyuanVideo: A Systematic Framework For Large Video Generation Model](#hunyuanvideo-a-systematic-framework-for-large-video-generation-model)\n - [News!!](#news)\n - [Open-source Plan](#open-source-plan)\n - [Contents](#contents)\n - [**Abstract**](#abstract)\n - [**HunyuanVideo Overall Architecture**](#hunyuanvideo-overall-architecture)\n - [**HunyuanVideo Key Features**](#hunyuanvideo-key-features)\n - [**Unified Image and Video Generative Architecture**](#unified-image-and-video-generative-architecture)\n - [**MLLM Text Encoder**](#mllm-text-encoder)\n - [**3D VAE**](#3d-vae)\n - [**Prompt Rewrite**](#prompt-rewrite)\n - [Comparisons](#comparisons)\n - [Requirements](#requirements)\n - [Dependencies and Installation](#\ufe0fdependencies-and-installation)\n - [Installation Guide for Linux](#installation-guide-for-linux)\n - [Download Pretrained Models](#download-pretrained-models)\n - [Single-gpu Inference](#single-gpu-inference)\n - [Using Command Line](#using-command-line)\n - [Run a Gradio Server](#run-a-gradio-server)\n - [More Configurations](#more-configurations)\n - [Parallel Inference on Multiple GPUs by xDiT](#parallel-inference-on-multiple-gpus-by-xdit)\n - [Using Command Line](#using-command-line-1)\n - [FP8 Inference](#fp8-inference)\n - [Using Command Line](#using-command-line-2)\n - [BibTeX](#bibtex)\n - [Acknowledgements](#acknowledgements)\n\n---\n\n## **Abstract**\n\nWe present HunyuanVideo, a novel open-source video foundation model that exhibits performance in video generation that is comparable to, if not superior to, leading closed-source models. In order to train HunyuanVideo model, we adopt several key technologies for model learning, including data curation, image-video joint model training, and an efficient infrastructure designed to facilitate large-scale model training and inference. Additionally, through an effective strategy for scaling model architecture and dataset, we successfully trained a video generative model with over 13 billion parameters, making it the largest among all open-source models. \n\nWe conducted extensive experiments and implemented a series of targeted designs to ensure high visual quality, motion diversity, text-video alignment, and generation stability. According to professional human evaluation results, HunyuanVideo outperforms previous state-of-the-art models, including Runway Gen-3, Luma 1.6, and 3 top-performing Chinese video generative models. By releasing the code and weights of the foundation model and its applications, we aim to bridge the gap between closed-source and open-source video foundation models. This initiative will empower everyone in the community to experiment with their ideas, fostering a more dynamic and vibrant video generation ecosystem. \n\n\n\n## **HunyuanVideo Overall Architecture**\n\nHunyuanVideo is trained on a spatial-temporally\ncompressed latent space, which is compressed through a Causal 3D VAE. Text prompts are encoded\nusing a large language model, and used as the conditions. Taking Gaussian noise and the conditions as\ninput, our generative model produces an output latent, which is then decoded to images or videos through\nthe 3D VAE decoder.\n\n

\n \n

\n\n\n\n## **HunyuanVideo Key Features**\n\n### **Unified Image and Video Generative Architecture**\n\nHunyuanVideo introduces the Transformer design and employs a Full Attention mechanism for unified image and video generation. \nSpecifically, we use a \"Dual-stream to Single-stream\" hybrid model design for video generation. In the dual-stream phase, video and text\ntokens are processed independently through multiple Transformer blocks, enabling each modality to learn its own appropriate modulation mechanisms without interference. In the single-stream phase, we concatenate the video and text\ntokens and feed them into subsequent Transformer blocks for effective multimodal information fusion.\nThis design captures complex interactions between visual and semantic information, enhancing\noverall model performance.\n\n

\n \n

\n\n\n### **MLLM Text Encoder**\n\nSome previous text-to-video models typically use pre-trained CLIP and T5-XXL as text encoders where CLIP uses Transformer Encoder and T5 uses an Encoder-Decoder structure. In contrast, we utilize a pre-trained Multimodal Large Language Model (MLLM) with a Decoder-Only structure as our text encoder, which has the following advantages: (i) Compared with T5, MLLM after visual instruction finetuning has better image-text alignment in the feature space, which alleviates the difficulty of the instruction following in diffusion models; (ii)\nCompared with CLIP, MLLM has demonstrated superior ability in image detail description\nand complex reasoning; (iii) MLLM can play as a zero-shot learner by following system instructions prepended to user prompts, helping text features pay more attention to key information. In addition, MLLM is based on causal attention while T5-XXL utilizes bidirectional attention that produces better text guidance for diffusion models. Therefore, we introduce an extra bidirectional token refiner to enhance text features.\n\n

\n \n

\n\n\n### **3D VAE**\n\nHunyuanVideo trains a 3D VAE with CausalConv3D to compress pixel-space videos and images into a compact latent space. We set the compression ratios of video length, space, and channel to 4, 8, and 16 respectively. This can significantly reduce the number of tokens for the subsequent diffusion transformer model, allowing us to train videos at the original resolution and frame rate.\n\n

\n \n

\n\n\n### **Prompt Rewrite**\n\nTo address the variability in linguistic style and length of user-provided prompts, we fine-tune the [Hunyuan-Large model](https://github.com/Tencent/Tencent-Hunyuan-Large) as our prompt rewrite model to adapt the original user prompt to model-preferred prompt.\n\nWe provide two rewrite modes: Normal mode and Master mode, which can be called using different prompts. The prompts are shown [here](hyvideo/prompt_rewrite.py). The Normal mode is designed to enhance the video generation model's comprehension of user intent, facilitating a more accurate interpretation of the instructions provided. The Master mode enhances the description of aspects such as composition, lighting, and camera movement, which leans towards generating videos with a higher visual quality. However, this emphasis may occasionally result in the loss of some semantic details. \n\nThe Prompt Rewrite Model can be directly deployed and inferred using the [Hunyuan-Large original code](https://github.com/Tencent/Tencent-Hunyuan-Large). We release the weights of the Prompt Rewrite Model [here](https://huggingface.co/Tencent/HunyuanVideo-PromptRewrite).\n\n\n\n## Comparisons\n\nTo evaluate the performance of HunyuanVideo, we selected five strong baselines from closed-source video generation models. In total, we utilized 1,533 text prompts, generating an equal number of video samples with HunyuanVideo in a single run. For a fair comparison, we conducted inference only once, avoiding any cherry-picking of results. When comparing with the baseline methods, we maintained the default settings for all selected models, ensuring consistent video resolution. Videos were assessed based on three criteria: Text Alignment, Motion Quality, and Visual Quality. More than 60 professional evaluators performed the evaluation. Notably, HunyuanVideo demonstrated the best overall performance, particularly excelling in motion quality. Please note that the evaluation is based on Hunyuan Video's high-quality version. This is different from the currently released fast version.\n\n

\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n \n \n \n\n
Model Open Source Duration Text Alignment Motion Quality Visual Quality Overall Ranking
HunyuanVideo (Ours) \u2714 5s 61.8% 66.5% 95.7% 41.3% 1
CNTopA (API) 5s 62.6% 61.7% 95.6% 37.7% 2
CNTopB (Web) 5s 60.1% 62.9% 97.7% 37.5% 3
GEN-3 alpha (Web) 6s 47.7% 54.7% 97.5% 27.4% 4
Luma1.6 (API) 5s 57.6% 44.2% 94.1% 24.8% 5
CNTopC (Web) 5s 48.4% 47.2% 96.3% 24.6% 6
\n

\n\n\n\n## Requirements\n\nThe following table shows the requirements for running HunyuanVideo model (batch size = 1) to generate videos:\n\n| Model | Setting
(height/width/frame) | GPU Peak Memory |\n| :----------: | :------------------------------: | :-------------: |\n| HunyuanVideo | 720px1280px129f | 60GB |\n| HunyuanVideo | 544px960px129f | 45GB |\n\n* An NVIDIA GPU with CUDA support is required. \n * The model is tested on a single 80G GPU.\n * **Minimum**: The minimum GPU memory required is 60GB for 720px1280px129f and 45G for 544px960px129f.\n * **Recommended**: We recommend using a GPU with 80GB of memory for better generation quality.\n* Tested operating system: Linux\n\n\n\n## Dependencies and Installation\n\nBegin by cloning the repository:\n\n```shell\ngit clone https://github.com/tencent/HunyuanVideo\ncd HunyuanVideo\n```\n\n### Installation Guide for Linux\n\nWe recommend CUDA versions 12.4 or 11.8 for the manual installation.\n\nConda's installation instructions are available [here](https://docs.anaconda.com/free/miniconda/index.html).\n\n```shell\n# 1. Create conda environment\nconda create -n HunyuanVideo python==3.10.9\n\n# 2. Activate the environment\nconda activate HunyuanVideo\n\n# 3. Install PyTorch and other dependencies using conda\n# For CUDA 11.8\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia\n# For CUDA 12.4\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia\n\n# 4. Install pip dependencies\npython -m pip install -r requirements.txt\n\n# 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above)\npython -m pip install ninja\npython -m pip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3\n\n# 6. Install xDiT for parallel inference (It is recommended to use torch 2.4.0 and flash-attn 2.6.3)\npython -m pip install xfuser==0.4.0\n```\n\nIn case of running into float point exception(core dump) on the specific GPU type, you may try the following solutions:\n\n```shell\n# Option 1: Making sure you have installed CUDA 12.4, CUBLAS>=12.4.5.8, and CUDNN>=9.00 (or simply using our CUDA 12 docker image).\npip install nvidia-cublas-cu12==12.4.5.8\nexport LD_LIBRARY_PATH=/opt/conda/lib/python3.8/site-packages/nvidia/cublas/lib/\n\n# Option 2: Forcing to explictly use the CUDA 11.8 compiled version of Pytorch and all the other packages\npip uninstall -r requirements.txt # uninstall all packages\npip uninstall -y xfuser\npip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118\npip install -r requirements.txt\npip install ninja\npip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3\npip install xfuser==0.4.0\n```\n\nAdditionally, HunyuanVideo also provides a pre-built Docker image. Use the following command to pull and run the docker image.\n\n```shell\n# For CUDA 12.4 (updated to avoid float point exception)\ndocker pull hunyuanvideo/hunyuanvideo:cuda_12\ndocker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_12\n\n# For CUDA 11.8\ndocker pull hunyuanvideo/hunyuanvideo:cuda_11\ndocker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_11\n```\n\n\n\n## Download Pretrained Models\n\nThe details of download pretrained models are shown [here](ckpts/README.md).\n\n\n\n## Single-gpu Inference\n\nWe list the height/width/frame settings we support in the following table.\n\n| Resolution | h/w=9:16 | h/w=16:9 | h/w=4:3 | h/w=3:4 | h/w=1:1 |\n| :----------------: | :-------------: | :-------------: | :-------------: | :-------------: | :------------: |\n| 540p | 544px960px129f | 960px544px129f | 624px832px129f | 832px624px129f | 720px720px129f |\n| 720p (recommended) | 720px1280px129f | 1280px720px129f | 1104px832px129f | 832px1104px129f | 960px960px129f |\n\n### Using Command Line\n\n```bash\ncd HunyuanVideo\n\npython3 sample_video.py \\\n --video-size 720 1280 \\\n --video-length 129 \\\n --infer-steps 50 \\\n --prompt \"A cat walks on the grass, realistic style.\" \\\n --flow-reverse \\\n --use-cpu-offload \\\n --save-path ./results\n```\n\n### Run a Gradio Server\n\n```bash\npython3 gradio_server.py --flow-reverse\n\n# set SERVER_NAME and SERVER_PORT manually\n# SERVER_NAME=0.0.0.0 SERVER_PORT=8081 python3 gradio_server.py --flow-reverse\n```\n\n### More Configurations\n\nWe list some more useful configurations for easy usage:\n\n| Argument | Default | Description |\n| :--------------------: | :-------: | :----------------------------------------------------------: |\n| `--prompt` | None | The text prompt for video generation |\n| `--video-size` | 720 1280 | The size of the generated video |\n| `--video-length` | 129 | The length of the generated video |\n| `--infer-steps` | 50 | The number of steps for sampling |\n| `--embedded-cfg-scale` | 6.0 | Embedded Classifier free guidance scale |\n| `--flow-shift` | 7.0 | Shift factor for flow matching schedulers |\n| `--flow-reverse` | False | If reverse, learning/sampling from t=1 -> t=0 |\n| `--seed` | None | The random seed for generating video, if None, we init a random seed |\n| `--use-cpu-offload` | False | Use CPU offload for the model load to save more memory, necessary for high-res video generation |\n| `--save-path` | ./results | Path to save the generated video |\n\n\n\n## Parallel Inference on Multiple GPUs by xDiT\n\n[xDiT](https://github.com/xdit-project/xDiT) is a Scalable Inference Engine for Diffusion Transformers (DiTs) on multi-GPU Clusters.\nIt has successfully provided low-latency parallel inference solutions for a variety of DiTs models, including mochi-1, CogVideoX, Flux.1, SD3, etc. This repo adopted the [Unified Sequence Parallelism (USP)](https://arxiv.org/abs/2405.07719) APIs for parallel inference of the HunyuanVideo model.\n\n### Using Command Line\n\nFor example, to generate a video with 8 GPUs, you can use the following command:\n\n```bash\ncd HunyuanVideo\n\ntorchrun --nproc_per_node=8 sample_video.py \\\n --video-size 1280 720 \\\n --video-length 129 \\\n --infer-steps 50 \\\n --prompt \"A cat walks on the grass, realistic style.\" \\\n --flow-reverse \\\n --seed 42 \\\n --ulysses-degree 8 \\\n --ring-degree 1 \\\n --save-path ./results\n```\n\nYou can change the `--ulysses-degree` and `--ring-degree` to control the parallel configurations for the best performance. The valid parallel configurations are shown in the following table.\n\n
\nSupported Parallel Configurations (Click to expand)\n\n\n| --video-size | --video-length | --ulysses-degree x --ring-degree | --nproc_per_node |\n| -------------------- | -------------- | -------------------------------- | ---------------- |\n| 1280 720 or 720 1280 | 129 | 8x1,4x2,2x4,1x8 | 8 |\n| 1280 720 or 720 1280 | 129 | 1x5 | 5 |\n| 1280 720 or 720 1280 | 129 | 4x1,2x2,1x4 | 4 |\n| 1280 720 or 720 1280 | 129 | 3x1,1x3 | 3 |\n| 1280 720 or 720 1280 | 129 | 2x1,1x2 | 2 |\n| 1104 832 or 832 1104 | 129 | 4x1,2x2,1x4 | 4 |\n| 1104 832 or 832 1104 | 129 | 3x1,1x3 | 3 |\n| 1104 832 or 832 1104 | 129 | 2x1,1x2 | 2 |\n| 960 960 | 129 | 6x1,3x2,2x3,1x6 | 6 |\n| 960 960 | 129 | 4x1,2x2,1x4 | 4 |\n| 960 960 | 129 | 3x1,1x3 | 3 |\n| 960 960 | 129 | 1x2,2x1 | 2 |\n| 960 544 or 544 960 | 129 | 6x1,3x2,2x3,1x6 | 6 |\n| 960 544 or 544 960 | 129 | 4x1,2x2,1x4 | 4 |\n| 960 544 or 544 960 | 129 | 3x1,1x3 | 3 |\n| 960 544 or 544 960 | 129 | 1x2,2x1 | 2 |\n| 832 624 or 624 832 | 129 | 4x1,2x2,1x4 | 4 |\n| 624 832 or 624 832 | 129 | 3x1,1x3 | 3 |\n| 832 624 or 624 832 | 129 | 2x1,1x2 | 2 |\n| 720 720 | 129 | 1x5 | 5 |\n| 720 720 | 129 | 3x1,1x3 | 3 |\n\n
\n\n\n

\n\n\n\n \n\n\n \n \n \n \n\n\n\n\n \n \n \n \n\n\n\n\n
Latency (Sec) for 1280x720 (129 frames 50 steps) on 8xGPU
1248
1904.08934.09 (2.04x)514.08 (3.70x)337.58 (5.64x)
\n

\n\n\n\n## FP8 Inference\n\nUsing HunyuanVideo with FP8 quantized weights, which saves about 10GB of GPU memory. You can download the [weights](https://huggingface.co/tencent/HunyuanVideo/blob/main/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt) and [weight scales](https://huggingface.co/tencent/HunyuanVideo/blob/main/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8_map.pt) from Huggingface.\n\n### Using Command Line\n\nHere, you must explicitly specify the FP8 weight path. For example, to generate a video with fp8 weights, you can use the following command:\n\n```bash\ncd HunyuanVideo\n\nDIT_CKPT_PATH={PATH_TO_FP8_WEIGHTS}/{WEIGHT_NAME}_fp8.pt\n\npython3 sample_video.py \\\n --dit-weight ${DIT_CKPT_PATH} \\\n --video-size 1280 720 \\\n --video-length 129 \\\n --infer-steps 50 \\\n --prompt \"A cat walks on the grass, realistic style.\" \\\n --seed 42 \\\n --embedded-cfg-scale 6.0 \\\n --flow-shift 7.0 \\\n --flow-reverse \\\n --use-cpu-offload \\\n --use-fp8 \\\n --save-path ./results\n```\n\n\n\n## BibTeX\n\nIf you find [HunyuanVideo](https://arxiv.org/abs/2412.03603) useful for your research and applications, please cite using this BibTeX:\n\n```BibTeX\n@misc{kong2024hunyuanvideo,\n title={HunyuanVideo: A Systematic Framework For Large Video Generative Models}, \n author={Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan, Hongmei Wang, Jacob Song, Jiawang Bai, Jianbing Wu, Jinbao Xue, Joey Wang, Junkun Yuan, Kai Wang, Mengyang Liu, Pengyu Li, Shuai Li, Weiyan Wang, Wenqing Yu, Xinchi Deng, Yang Li, Yanxin Long, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Dax Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, and Jie Jiang, along with Caesar Zhong},\n year={2024},\n archivePrefix={arXiv preprint arXiv:2412.03603},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2412.03603}, \n}\n```\n\n\n\n## Acknowledgements\n\nWe would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.\nAdditionally, we also thank the Tencent Hunyuan Multimodal team for their help with the text encoder. \n\n", "metadata": "\"N/A\"", "depth": 0, "children": [ "hunyuanvideo-community/HunyuanVideo", "Kunbyte/DRA-Ctrl", "tencent/HunyuanCustom", "icaruseu/QA", "BhilVasant/Noura140", "Usama1234/jonesjames", "Cseti/HunyuanVideo-LoRA-Arcane_Jinx-v1", "Cseti/HunyuanVideo-LoRA-Arcane_Style", "1989shack/1989shack-Ecommmerce-Platform", "jbilcke-hf/HunyuanVideo-HFIE", "Alikhani0916/bot3", "tanfff/test1", "DROWHOODIS/vidgen", "FastVideo/Hunyuan-Black-Myth-Wukong-lora-weight", "jobs-git/HunyuanVideoCommunity", "kudzueye/boreal-hl-v1", "ABDALLALSWAITI/hunyuan_video_720_cfgdistill_e5m2", "Skywork/SkyReels-V1-Hunyuan-T2V", "Skywork/SkyReels-V1-Hunyuan-I2V", "newgenai79/SkyReels-V1-Hunyuan-I2V-int4", "newgenai79/HunyuanVideo-int4", "dashtoon/hunyuan-video-keyframe-control-lora", "neph1/AncientRome_HunyuanVideo_Lora", "jqlive/hyv_depth_control", "linoyts/Hunyuan-LoRA", "Fabrice-TIERCELIN/HunyuanVideo", "ApacheOne/Info-Hunyuan_LoRAs", "funscripter/collection", "Razrien/Furry-hunyuan-testing-thing" ], "children_count": 29, "adapters": [ "shaonroy/roy", "trojblue/HunyuanVideo-lora-AnimeShots", "calcuis/hyvid", "trojblue/HunyuanVideo-lora-AnimeStills", "a-r-r-o-w/HunyuanVideo-tuxemons", "lucataco/hunyuan-musubi-rose-6-comfyui", "lucataco/hunyuan-musubi-lora-heygen-6", "lucataco/hunyuan-musubi-lora-mrgnfrmn-6", "fofr/hunyuan-test", "martintomov/HunyuanVideo-Coca-Cola", "fofr/hunyuan-sonic-2", "fofr/hunyuan-take-on-me", "fofr/hunyuan-ponponpon", "fofr/hunyuan-cyberpunk-mod", "lucataco/hunyuan-lora-heygen-man-8", "Knvl/test", "Knvl/test2", "Knvl/mybad", "gj3ka1/animaengine", "pablerdo/hunyuan-lora-f50cleat", "lucataco/hunyuan-steamboat-willie-10", "ghej4u/yay", "ghej4u/ian2", "ghej4u/lol", "ghej4u/test", "lucataco/hunyuan-lora-heygen-woman-2", "deepfates/hunyuan-beast", "hashu786/cine", "AI-Anna/anime-renderer", "deepfates/hunyuan-game-of-thrones", "deepfates/hunyuan-fargo", "deepfates/hunyuan-la-la-land", "deepfates/hunyuan-blade-runner", "deepfates/hunyuan-pulp-fiction", "deepfates/hunyuan-blade-runner-2049", "deepfates/hunyuan-the-grand-budapest-hotel", "deepfates/hunyuan-twin-peaks", "deepfates/hunyuan-the-neverending-story", "deepfates/hunyuan-interstellar", "deepfates/hunyuan-pirates-of-the-caribbean", "deepfates/hunyuan-once-upon-a-time-in-hollywood", "deepfates/hunyuan-dune", "deepfates/hunyuan-arcane", "deepfates/hunyuan-indiana-jones", "deepfates/hunyuan-joker", "deepfates/hunyuan-inception", "deepfates/hunyuan-her", "deepfates/hunyuan-westworld", "deepfates/hunyuan-atomic-blonde", "deepfates/hunyuan-avatar", "deepfates/hunyuan-cowboy-bebop", "deepfates/hunyuan-the-lord-of-the-rings", "deepfates/hunyuan-mad-max-fury-road", "deepfates/hunyuan-pixar", "deepfates/hunyuan-the-matrix-trilogy", "deepfates/hunyuan-rrr", "deepfates/hunyuan-neon-genesis-evangelion", "deepfates/hunyuan-spider-man-into-the-spider-verse", "neph1/hunyuan_night_graveyard", "CAWAI/celebdm", "GetMonie/GawkToon", "hashu786/CineArc", "Alched/brxdperf_hunyuan", "hashu786/HYVReward", "AlekseyCalvin/hyvid_YegorLetov_concert_LoRA", "blanflin/onstomach", "blanflin/Standingoverfemalebj", "CCRss/hunyuan_lora_anime_akame", "boisterous/steak_hunyuan", "Samsnake/LayonStomachBJ", "Samsnake/hqgwaktoon", "hazc138/GL1", "istominvi/vswpntsbeige_16_16_32", "istominvi/vswpntsbeige_30_8_32", "istominvi/vswpntsbeige_50_8_32", "hazc138/ZMGL", "hazc138/gl5", "yashlanjewar20/HEYGEN1-LORA", "Sergidev/IllustrationTTV", "yashlanjewar20/heygen-epoch50", "yashlanjewar20/HeyGen-epoch16-autocaption", "yashlanjewar20/Yash_c_epochs50_10seconds", "yashlanjewar20/surya_10s_epoch50", "yashlanjewar20/Yash_c_30seconds_epochs16", "BagOu22/Lora_HKLPAZ", "Klindle/gawk_toon3000", "yashlanjewar20/Yash_c_16epochs_10seconds", "yashlanjewar20/16epochs_surya_10seconds", "ghej4u/flamingo", "Alched/hv_dirty_panties_v1", "ghej4u/oh", "gulatiharsh/zinzanatrailer", "istominvi/gocha_16_4_32", "istominvi/gocha_32_4_32", "istominvi/gocha_64_4_32", "istominvi/gocha_128_4_32", "istominvi/gocha_256_4_32", "istominvi/gocha_16_8_32", "istominvi/gocha_32_8_32", "istominvi/gocha_64_8_32", "istominvi/gocha_128_8_32", "istominvi/gocha_256_8_32", "StoyanG/lora-video-DrThompsonVet", "JoshuaMKerr/joshvideo", "trojblue/HunyuanVideo-lora-PixelArt", "neph1/1980s_Fantasy_Movies_Hunyuan_Video_Lora", "neph1/1920s_horror_hunyuan_video_lora", "neph1/50s_scifi_hunyuan_video_lora", "strangerzonehf/Hunyuan-t2v-Cartoon-LoRA" ], "adapters_count": 109, "quantized": [ "city96/HunyuanVideo-gguf", "kohya-ss/HunyuanVideo-fp8_e4m3fn-unofficial" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 140, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "tencent/HunyuanVideo", "base_model_relation": "base" }, { "model_id": "hunyuanvideo-community/HunyuanVideo", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nUnofficial community fork for Diffusers-format weights on [`tencent/HunyuanVideo`](https://huggingface.co/tencent/HunyuanVideo).\n\n### Using Diffusers\n\nHunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\n\n# Enable memory savings\npipe.vae.enable_tiling()\npipe.enable_model_cpu_offload()\n\noutput = pipe(\n prompt=\"A cat walks on the grass, realistic\",\n height=320,\n width=512,\n num_frames=61,\n num_inference_steps=30,\n).frames[0]\nexport_to_video(output, \"output.mp4\", fps=15)\n```\n\nRefer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [ "lucataco/hunyuan-lora-melty-test-3", "swyne/breast-growth", "wooyeolbaek/finetuned_models_debug2", "wooyeolbaek/finetuned_models_videojam_debug2" ], "adapters_count": 4, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 4, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hunyuanvideo-community/HunyuanVideo", "base_model_relation": "base" }, { "model_id": "Kunbyte/DRA-Ctrl", "gated": "unknown", "card": "---\nlicense: apache-2.0\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: image-to-image\nlibrary_name: diffusers\n---\n\n# Model Card for DRA-Ctrl\n\n
\n\"arXiv\"\n\"arXiv\"\n\"HuggingFace\"\n\"HuggingFace\"\n\"GitHub\"\n\"Project\"\n
\n\nThis repository contains the LoRA weights for DRA-Ctrl across 9 tasks. For instructions on how to use these weights, please refer to our [GitHub repository](https://github.com/Kunbyte-AI/DRA-Ctrl) and [HuggingFace Space](https://huggingface.co/spaces/Kunbyte/DRA-Ctrl).", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "tencent/HunyuanCustom", "gated": "False", "card": "---\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: image-to-video\n---\n\n\n

\n \n

\n\n# **HunyuanCustom** \ud83c\udf05\n \n
\n  \n  \n \n
\n
\n  \n
\n
\n  \n
\n-----\n\n\n> [**HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation**](https://arxiv.org/pdf/2505.04512) \n\n\n## \ud83d\udd25\ud83d\udd25\ud83d\udd25 News!!\n* June 6, 2025: \ud83d\udc83 We release the inference code and model weights of audio-driven and video-driven powered by [OmniV2V](https://arxiv.org/abs/2506.01801).\n* May 13, 2025: \ud83c\udf89 HunyuanCustom has been integrated into [ComfyUI-HunyuanVideoWrapper](https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/develop/example_workflows/hyvideo_custom_testing_01.json) by [Kijai](https://github.com/kijai).\n* May 12, 2025: \ud83d\udd25 HunyuanCustom is available in Cloud-Native-Build (CNB) [HunyuanCustom](https://cnb.cool/tencent/hunyuan/HunyuanCustom).\n* May 8, 2025: \ud83d\udc4b We release the inference code and model weights of HunyuanCustom. [Download](models/README.md).\n\n\n## \ud83d\udcd1 Open-source Plan\n\n- HunyuanCustom\n - Single-Subject Video Customization\n - [x] Inference \n - [x] Checkpoints\n - [x] ComfyUI\n - Audio-Driven Video Customization\n - [x] Inference \n - [x] Checkpoints\n - [ ] ComfyUI\n - Video-Driven Video Customization\n - [x] Inference \n - [x] Checkpoints\n - [ ] ComfyUI\n - Multi-Subject Video Customization\n\n## Contents\n- [**HunyuanCustom** \ud83c\udf05](#hunyuancustom-)\n - [\ud83d\udd25\ud83d\udd25\ud83d\udd25 News!!](#-news)\n - [\ud83d\udcd1 Open-source Plan](#-open-source-plan)\n - [Contents](#contents)\n - [**Abstract**](#abstract)\n - [**HunyuanCustom Overall Architecture**](#hunyuancustom-overall-architecture)\n - [\ud83c\udf89 **HunyuanCustom Key Features**](#-hunyuancustom-key-features)\n - [**Multimodal Video customization**](#multimodal-video-customization)\n - [**Various Applications**](#various-applications)\n - [\ud83d\udcc8 Comparisons](#-comparisons)\n - [\ud83d\udcdc Requirements](#-requirements)\n - [\ud83d\udee0\ufe0f Dependencies and Installation](#\ufe0f-dependencies-and-installation)\n - [Installation Guide for Linux](#installation-guide-for-linux)\n - [\ud83e\uddf1 Download Pretrained Models](#-download-pretrained-models)\n - [\ud83d\ude80 Parallel Inference on Multiple GPUs](#-parallel-inference-on-multiple-gpus)\n - [\ud83d\udd11 Single-gpu Inference](#-single-gpu-inference)\n - [Run with very low VRAM](#run-with-very-low-vram)\n - [Run a Gradio Server](#run-a-gradio-server)\n - [\ud83d\udd17 BibTeX](#-bibtex)\n - [Acknowledgements](#acknowledgements)\n---\n\n## **Abstract**\n\nCustomized video generation aims to produce videos featuring specific subjects under flexible user-defined conditions, yet existing methods often struggle with identity consistency and limited input modalities. In this paper, we propose HunyuanCustom, a multi-modal customized video generation framework that emphasizes subject consistency while supporting image, audio, video, and text conditions. Built upon HunyuanVideo, our model first addresses the image-text conditioned generation task by introducing a text-image fusion module based on LLaVA for enhanced multi-modal understanding, along with an image ID enhancement module that leverages temporal concatenation to reinforce identity features across frames. To enable audio- and video-conditioned generation, we further propose modality-specific condition injection mechanisms: an AudioNet module that achieves hierarchical alignment via spatial cross-attention, and a video-driven injection module that integrates latent-compressed conditional video through a patchify-based feature-alignment network. Extensive experiments on single- and multi-subject scenarios demonstrate that HunyuanCustom significantly outperforms state-of-the-art open- and closed-source methods in terms of ID consistency, realism, and text-video alignment. Moreover, we validate its robustness across downstream tasks, including audio and video-driven customized video generation. Our results highlight the effectiveness of multi-modal conditioning and identity-preserving strategies in advancing controllable video generation.\n\n## **HunyuanCustom Overall Architecture**\n\n![image](assets/material/method.png)\n\nWe propose **HunyuanCustom, a multi-modal, conditional, and controllable generation model centered on subject consistency**, built upon the Hunyuan Video generation framework. It enables the generation of subject-consistent videos conditioned on text, images, audio, and video inputs. \n\n## \ud83c\udf89 **HunyuanCustom Key Features**\n\n### **Multimodal Video customization**\n\nHunyuanCustom supports inputs in the form of **text, images, audio, and video**. \nSpecifically, it can handle single or multiple image inputs to enable customized video generation for one or more subjects. \nAdditionally, it can incorporate extra audio inputs to drive the subject to speak the corresponding audio. \nLastly, HunyuanCustom supports video input, allowing for the replacement of specified objects in the video with subjects from a given image.\n![image](assets/material/teaser.png)\n\n### **Various Applications**\n\nWith the multi-modal capabilities of HunyuanCustom, numerous downstream tasks can be accomplished. \nFor instance, by taking multiple images as input, HunyuanCustom can facilitate **virtual human advertisements** and **virtual try-on**. Additionally, \nwith image and audio inputs, it can create **singing avatars**. Furthermore, by using an image and a video as inputs, \nHunyuanCustom supports **video editing** by replacing subjects in the video with those in the provided image. \nMore applications await your exploration!\n![image](assets/material/application.png)\n\n\n## \ud83d\udcc8 Comparisons\n\nTo evaluate the performance of HunyuanCustom, we compared it with state-of-the-art video customization methods, \nincluding VACE, Skyreels, Pika, Vidu, Keling, and Hailuo. The comparison focused on face/subject consistency, \nvideo-text alignment, and overall video quality.\n\n| Models | Face-Sim | CLIP-B-T | DINO-Sim | Temp-Consis | DD |\n|-------------------|----------|----------|----------|-------------|------|\n| VACE-1.3B | 0.204 | _0.308_ | 0.569 | **0.967** | 0.53 |\n| Skyreels | 0.402 | 0.295 | 0.579 | 0.942 | 0.72 |\n| Pika | 0.363 | 0.305 | 0.485 | 0.928 | _0.89_ |\n| Vidu2.0 | 0.424 | 0.300 | 0.537 | _0.961_ | 0.43 |\n| Keling1.6 | 0.505 | 0.285 | _0.580_ | 0.914 | 0.78 |\n| Hailuo | _0.526_ | **0.314**| 0.433 | 0.937 | **0.94** |\n| **HunyuanCustom (Ours)** | **0.627**| 0.306 | **0.593**| 0.958 | 0.71 |\n\n## \ud83d\udcdc Requirements\n\nThe following table shows the requirements for running HunyuanCustom model (batch size = 1) to generate videos:\n\n| Model | Setting
(height/width/frame) | GPU Peak Memory |\n|:------------:|:--------------------------------:|:----------------:|\n| HunyuanCustom | 720px1280px129f | 80GB |\n| HunyuanCustom | 512px896px129f | 60GB |\n\n* An NVIDIA GPU with CUDA support is required. \n * The model is tested on a machine with 8GPUs.\n * **Minimum**: The minimum GPU memory required is 24GB for 720px1280px129f but very slow.\n * **Recommended**: We recommend using a GPU with 80GB of memory for better generation quality.\n* Tested operating system: Linux\n\n\n## \ud83d\udee0\ufe0f Dependencies and Installation\n\nBegin by cloning the repository:\n```shell\ngit clone https://github.com/Tencent/HunyuanCustom.git\ncd HunyuanCustom\n```\n\n### Installation Guide for Linux\n\nWe recommend CUDA versions 12.4 or 11.8 for the manual installation.\n\nConda's installation instructions are available [here](https://docs.anaconda.com/free/miniconda/index.html).\n\n```shell\n# 1. Create conda environment\nconda create -n HunyuanCustom python==3.10.9\n\n# 2. Activate the environment\nconda activate HunyuanCustom\n\n# 3. Install PyTorch and other dependencies using conda\n# For CUDA 11.8\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia\n# For CUDA 12.4\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia\n\n# 4. Install pip dependencies\npython -m pip install -r requirements.txt\n# 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above)\npython -m pip install ninja\npython -m pip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3\n```\n\nIn case of running into float point exception(core dump) on the specific GPU type, you may try the following solutions:\n\n```shell\n# Option 1: Making sure you have installed CUDA 12.4, CUBLAS>=12.4.5.8, and CUDNN>=9.00 (or simply using our CUDA 12 docker image).\npip install nvidia-cublas-cu12==12.4.5.8\nexport LD_LIBRARY_PATH=/opt/conda/lib/python3.8/site-packages/nvidia/cublas/lib/\n\n# Option 2: Forcing to explicitly use the CUDA 11.8 compiled version of Pytorch and all the other packages\npip uninstall -r requirements.txt # uninstall all packages\npip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118\npip install -r requirements.txt\npip install ninja\npip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3\n```\n\nAdditionally, you can also use HunyuanVideo Docker image. Use the following command to pull and run the docker image.\n\n```shell\n# For CUDA 12.4 (updated to avoid float point exception)\ndocker pull hunyuanvideo/hunyuanvideo:cuda_12\ndocker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_12\npip install gradio==3.39.0 diffusers==0.33.0 transformers==4.41.2\n\n# For CUDA 11.8\ndocker pull hunyuanvideo/hunyuanvideo:cuda_11\ndocker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_11\npip install gradio==3.39.0 diffusers==0.33.0 transformers==4.41.2\n```\n\n\n## \ud83e\uddf1 Download Pretrained Models\n\nThe details of download pretrained models are shown [here](models/README.md).\n\n## \ud83d\ude80 Parallel Inference on Multiple GPUs\n\nFor example, to generate a video with 8 GPUs, you can use the following command:\n\n### Run Single-Subject Video Customization\n```bash\ncd HunyuanCustom\n\nexport MODEL_BASE=\"./models\"\nexport PYTHONPATH=./\ntorchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \\\n --ref-image './assets/images/seg_woman_01.png' \\\n --pos-prompt \"Realistic, High-quality. A woman is drinking coffee at a caf\u00e9.\" \\\n --neg-prompt \"Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border.\" \\\n --ckpt ${MODEL_BASE}\"/hunyuancustom_720P/mp_rank_00_model_states.pt\" \\\n --video-size 720 1280 \\\n --seed 1024 \\\n --sample-n-frames 129 \\\n --infer-steps 30 \\\n --flow-shift-eval-video 13.0 \\\n --save-path './results/sp_720p'\n```\n\n### Run Video-Driven Video Customization (Video Editing)\n```bash\ncd HunyuanCustom\n\nexport MODEL_BASE=\"./models\"\nexport PYTHONPATH=./\ntorchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \\\n --ref-image './assets/images/sed_red_panda.png' \\\n --input-video './assets/input_videos/001_bg.mp4' \\\n --mask-video './assets/input_videos/001_mask.mp4' \\\n --expand-scale 5 \\\n --video-condition \\\n --pos-prompt \"Realistic, High-quality. A red panda is walking on a stone road.\" \\\n --neg-prompt \"Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border.\" \\\n --ckpt ${MODEL_BASE}\"/hunyuancustom_editing_720P/mp_rank_00_model_states.pt\" \\\n --seed 1024 \\\n --infer-steps 50 \\\n --flow-shift-eval-video 5.0 \\\n --save-path './results/sp_editing_720p'\n # --pose-enhance # Enable for human videos to improve pose generation quality.\n```\n\n### Run Audio-Driven Video Customization\n```bash\ncd HunyuanCustom\n\nexport MODEL_BASE=\"./models\"\nexport PYTHONPATH=./\ntorchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \\\n --ref-image './assets/images/seg_man_01.png' \\\n --input-audio './assets/audios/milk_man.mp3' \\\n --audio-strength 0.8 \\\n --audio-condition \\\n --pos-prompt \"Realistic, High-quality. In the study, a man sits at a table featuring a bottle of milk while delivering a product presentation.\" \\\n --neg-prompt \"Two people, two persons, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border.\" \\\n --ckpt ${MODEL_BASE}\"/hunyuancustom_audio_720P/mp_rank_00_model_states.pt\" \\\n --seed 1026 \\\n --video-size 720 1280 \\\n --sample-n-frames 129 \\\n --cfg-scale 7.5 \\\n --infer-steps 30 \\\n --use-deepcache 1 \\\n --flow-shift-eval-video 13.0 \\\n --save-path './results/sp_audio_720p'\n```\n\n## \ud83d\udd11 Single-gpu Inference\n\nFor example, to generate a video with 1 GPU, you can use the following command:\n\n```bash\ncd HunyuanCustom\n\nexport MODEL_BASE=\"./models\"\nexport DISABLE_SP=1\nexport PYTHONPATH=./\npython hymm_sp/sample_gpu_poor.py \\\n --ref-image './assets/images/seg_woman_01.png' \\\n --pos-prompt \"Realistic, High-quality. A woman is drinking coffee at a caf\u00e9.\" \\\n --neg-prompt \"Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border.\" \\\n --ckpt ${MODEL_BASE}\"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt\" \\\n --video-size 512 896 \\\n --seed 1024 \\\n --sample-n-frames 129 \\\n --infer-steps 30 \\\n --flow-shift-eval-video 13.0 \\\n --save-path './results/1gpu_540p' \\\n --use-fp8\n```\n\n### Run with very low VRAM\n\n```bash\ncd HunyuanCustom\n\nexport MODEL_BASE=\"./models\"\nexport CPU_OFFLOAD=1\nexport PYTHONPATH=./\npython hymm_sp/sample_gpu_poor.py \\\n --ref-image './assets/images/seg_woman_01.png' \\\n --pos-prompt \"Realistic, High-quality. A woman is drinking coffee at a caf\u00e9.\" \\\n --neg-prompt \"Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border.\" \\\n --ckpt ${MODEL_BASE}\"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt\" \\\n --video-size 720 1280 \\\n --seed 1024 \\\n --sample-n-frames 129 \\\n --infer-steps 30 \\\n --flow-shift-eval-video 13.0 \\\n --save-path './results/cpu_720p' \\\n --use-fp8 \\\n --cpu-offload \n```\n\n\n## Run a Gradio Server\n```bash\ncd HunyuanCustom\n\n# Single-Subject Video Customization\nbash ./scripts/run_gradio.sh \n\n# Video-Driven Video Customization\nbash ./scripts/run_gradio.sh --video\n\n# Audio-Driven Video Customization\nbash ./scripts/run_gradio.sh --audio\n```\n\n## \ud83d\udd17 BibTeX\n\nIf you find [HunyuanCustom](https://arxiv.org/abs/2505.04512) useful for your research and applications, please cite using this BibTeX:\n\n```BibTeX\n@misc{hu2025hunyuancustom,\n title={HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation}, \n author={Teng Hu and Zhentao Yu and Zhengguang Zhou and Sen Liang and Yuan Zhou and Qin Lin and Qinglin Lu},\n year={2025},\n eprint={2505.04512},\n archivePrefix={arXiv},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2505.04512}, \n}\n```\n\n## Acknowledgements\n\nWe would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [HunyuanVideo-Avatar](https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar), [MimicMotion](https://github.com/Tencent/MimicMotion), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration. \n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "tencent/HunyuanCustom", "base_model_relation": "base" }, { "model_id": "icaruseu/QA", "gated": "False", "card": "---\ndatasets:\n- fka/awesome-chatgpt-prompts\nlanguage:\n- vi\nmetrics:\n- code_eval\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\n---\n# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "icaruseu/QA", "base_model_relation": "base" }, { "model_id": "BhilVasant/Noura140", "gated": "False", "card": "---\nlanguage:\n- hi\n- en\n- gu\nbase_model:\n- tencent/HunyuanVideo\n- tencent/HunyuanVideo-PromptRewrite\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "BhilVasant/Noura140", "base_model_relation": "base" }, { "model_id": "Usama1234/jonesjames", "gated": "False", "card": "---\nlicense: openrail\ndatasets:\n- microsoft/orca-agentinstruct-1M-v1\nlanguage:\n- ae\nmetrics:\n- accuracy\nbase_model:\n- tencent/HunyuanVideo\nnew_version: Qwen/QwQ-32B-Preview\npipeline_tag: text-classification\nlibrary_name: allennlp\ntags:\n- finance\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Usama1234/jonesjames", "base_model_relation": "base" }, { "model_id": "Cseti/HunyuanVideo-LoRA-Arcane_Jinx-v1", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- LoRA\n- hunyuan\n---\n\n# Arcane Jinx HunyuanVideo LoRA v1\n\n
\n \n \n \n
\nPrompt: \"CSETIARCANE. A upper body shot of nfjinx, wearing an intricately detailed plate armor. The steel breastplate features ornate engravings, while articulated pauldrons protect her shoulders, their polished surface reflecting ambient light. Her blue hair stands out dramatically against the metallic armor. The armor's joints show fine craftsmanship with detailed rivets and carefully fitted plates. Small scratches and battle marks on the metal suggest authenticity and use. Her intense expression remains visible. Chainmail is visible at the gaps between plates, adding texture to the otherwise smooth metal surface.\"\n\nPrompt: \"CSETIARCANE. A full-body side view, nfjinx walking through a vast, ornate hall, her stride purposeful and measured. Her blue hair flows behind her with each step, framing her face in profile. Massive chamber around her, towering walls covered in elaborate golden patterns and archways. Thick smoke drifts lazily through the air. Wisps of the pale smoke curl around her advancing form, Her hands swing confidently at her sides\"\n\nPrompt: \"CSETIARCANE. Nfjinx with her blue hair strides through a rain-soaked cobblestone street. Her black miniskirt and white top peek out beneath a worn leather jacket, all dampened by the rain. The camera moves backward, focusing on her intense gaze. Her visible tattoos glisten with water droplets while her combat boots confidently strike the wet pavement. Behind her, neon signs in pink and blue illuminate the misty air, their glow reflecting off the wet stones and casting colorful shadows across her determined features\"\n\n\n\n## Important Notes:\n\nThis LoRA is created as part of a fan project for research purposes only and is not intended for commercial use. It is based on the TV series called Arcane which are protected by copyright. Users utilize the model at their own risk. Users are obligated to comply with copyright laws and applicable regulations. The model has been developed for non-commercial purposes, and it is not my intention to infringe on any copyright. I assume no responsibility for any damages or legal consequences arising from the use of the model. \n\n## Compatibility:\n- HunyuanVideo\n\n## ID Token / Trigger word(s):\n\nUse these in your prompt helps providing the character. See example prompt above.\n- csetiarcane, nfjinx, blue hair, black top\n\n**Please consider the following:**\n\n- If it doesn't produce the character, try increasing the lora strength to 1.2\n- With some seeds it doesn't work well, it simply doesn't produce either the style or the character\n- I definitely recommend using at least the following trigger words in your prompt: **'csetiarcane, nfjinx, blue hair'** .This is probably due to how the dataset was assembled, I'd like to fix this in later versions\"\n\n## Acknowledgment:\n\n- Thanks to the [Tencent team](https://github.com/Tencent/HunyuanVideo) for making this great model available\n- Thanks to [tdrussel](https://github.com/tdrussell) for the [diffusion-pipe](https://github.com/tdrussell/diffusion-pipe) that helps us making these LoRAs.\n- Thanks to [Kijai](https://github.com/kijai) for his great [ComfyUI integration](https://github.com/kijai/ComfyUI-HunyuanVideoWrapper)\n- Thanks to [POM](https://huggingface.co/peteromallet) for providing the computing resources. Without this, these LoRAs could not have been created.\n\n## Trainig details:\n- LR: 2e-5\n- Optimizer: adamw\n- steps: 6000\n- dataset: 40 (33x704x352) videos\n- rank: 32\n- batch size: 1\n- gradient accumulation steps: 4\n\n## Citation\n```\n@misc{kong2024hunyuanvideo,\n title={HunyuanVideo: A Systematic Framework For Large Video Generative Models}, \n author={Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan, Hongmei Wang, Jacob Song, Jiawang Bai, Jianbing Wu, Jinbao Xue, Joey Wang, Junkun Yuan, Kai Wang, Mengyang Liu, Pengyu Li, Shuai Li, Weiyan Wang, Wenqing Yu, Xinchi Deng, Yang Li, Yanxin Long, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Daquan Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, and Jie Jiang, along with Caesar Zhong},\n year={2024},\n archivePrefix={arXiv preprint arXiv:2412.03603},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2412.03603}, \n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Cseti/HunyuanVideo-LoRA-Arcane_Jinx-v1", "base_model_relation": "base" }, { "model_id": "Cseti/HunyuanVideo-LoRA-Arcane_Style", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- LoRA\n- hunyuan\n---\n\n# v3 Arcane Style HunyuanVideo LoRA is released\n
\n \n \n \n
\n\n## ID Token / Trigger word(s):\nUse this in your prompt helps providing the style.\n\n- csetiarcane animation style\n\n## Trainig details:\nIt was trained on images and videos\n- LR: 2e-5\n- Optimizer: adamw\n- epochs: 22\n- steps: 7326\n- dataset: 135 videos and 135 images\n- repeats: 5\n- rank: 128\n- batch size: 1\n- gradient accumulation steps: 4\n\n# v2 Arcane Style HunyuanVideo LoRA is released\n\n## Advantages:\n- Better image quality than the v1\n\n## ID Token / Trigger word(s):\n\nUse this in your prompt helps providing the style.\n- csetiarcane, scene, character\n\nIf you want to generate one or more characters in Arcane style, I strongly recommend including the word 'character' in the prompt as well. This seems to be quite a strong token and helps with displaying the style.\n\n## Trainig details:\n- LR: 2e-5\n- Optimizer: adamw\n- epochs: 52\n- dataset: 120 (33x704x352) videos\n- repeats: 5\n- rank: 32\n- batch size: 1\n- gradient accumulation steps: 4\n\n# Arcane Style HunyuanVideo LoRA v1\n\n
\n \n \n \n
\n\n## Important Notes:\n\nThis LoRA is created as part of a fan project for research purposes only and is not intended for commercial use. It is based on the TV series called Arcane which are protected by copyright. Users utilize the model at their own risk. Users are obligated to comply with copyright laws and applicable regulations. The model has been developed for non-commercial purposes, and it is not my intention to infringe on any copyright. I assume no responsibility for any damages or legal consequences arising from the use of the model. \n\n## Compatibility:\n- HunyuanVideo\n\n## ID Token / Trigger word(s):\n\nUse this in your prompt helps providing the style.\n- csetiarcane\n\n**Please consider the following:**\n\n- With some seeds it doesn't work well, it simply doesn't produce either the style\n\n## Acknowledgment:\n\n- Thanks to the [Tencent team](https://github.com/Tencent/HunyuanVideo) for making this great model available\n- Thanks to [tdrussel](https://github.com/tdrussell) for the [diffusion-pipe](https://github.com/tdrussell/diffusion-pipe) that helps us making these LoRAs.\n- Thanks to [Kijai](https://github.com/kijai) for his great [ComfyUI integration](https://github.com/kijai/ComfyUI-HunyuanVideoWrapper)\n- Thanks to [POM](https://huggingface.co/peteromallet) for providing the computing resources. Without this, these LoRAs could not have been created.\n\n## Trainig details:\n- LR: 1e-4\n- Optimizer: adamw\n- epochs: 14\n- dataset: 135 (33x704x352) videos\n- repeats: 5\n- rank: 32\n- batch size: 1\n- gradient accumulation steps: 4\n\n## Citation\n```\n@misc{kong2024hunyuanvideo,\n title={HunyuanVideo: A Systematic Framework For Large Video Generative Models}, \n author={Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan, Hongmei Wang, Jacob Song, Jiawang Bai, Jianbing Wu, Jinbao Xue, Joey Wang, Junkun Yuan, Kai Wang, Mengyang Liu, Pengyu Li, Shuai Li, Weiyan Wang, Wenqing Yu, Xinchi Deng, Yang Li, Yanxin Long, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Daquan Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, and Jie Jiang, along with Caesar Zhong},\n year={2024},\n archivePrefix={arXiv preprint arXiv:2412.03603},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2412.03603}, \n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Cseti/HunyuanVideo-LoRA-Arcane_Style", "base_model_relation": "base" }, { "model_id": "1989shack/1989shack-Ecommmerce-Platform", "gated": "False", "card": "---\nlicense: pddl\ndatasets:\n- fka/awesome-chatgpt-prompts\nlanguage:\n- ab\nmetrics:\n- bleu\nbase_model:\n- tencent/HunyuanVideo\nnew_version: Qwen/QwQ-32B-Preview\nlibrary_name: pyannote-audio\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "1989shack/1989shack-Ecommmerce-Platform", "base_model_relation": "base" }, { "model_id": "jbilcke-hf/HunyuanVideo-HFIE", "gated": "False", "card": "---\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\nlibrary_name: diffusers\ntags:\n- HunyuanVideo\n- Tencent\n- Video\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: LICENSE\n---\n\nThis model is [HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo) adapted to run on the Hugging Face Inference Endpoints.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "jbilcke-hf/HunyuanVideo-HFIE", "base_model_relation": "base" }, { "model_id": "Alikhani0916/bot3", "gated": "unknown", "card": "---\nlicense: afl-3.0\npipeline_tag: text-classification\nbase_model:\n- tencent/HunyuanVideo\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "tanfff/test1", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- art\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "tanfff/test1", "base_model_relation": "base" }, { "model_id": "DROWHOODIS/vidgen", "gated": "False", "card": "---\nlicense: unlicense\ndatasets:\n- HuggingFaceFW/fineweb-2\n- fka/awesome-chatgpt-prompts\n- HuggingFaceTB/finemath\n- O1-OPEN/OpenO1-SFT\n- amphora/QwQ-LongCoT-130K\n- agibot-world/AgiBotWorld-Alpha\n- CohereForAI/Global-MMLU\n- foursquare/fsq-os-places\n- deepghs/sankaku_full\n- argilla/FinePersonas-v0.1\nbase_model:\n- tencent/HunyuanVideo\nlanguage:\n- en\nmetrics:\n- accuracy\nnew_version: tencent/HunyuanVideo\npipeline_tag: text-to-video\nlibrary_name: diffusers\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "DROWHOODIS/vidgen", "base_model_relation": "base" }, { "model_id": "FastVideo/Hunyuan-Black-Myth-Wukong-lora-weight", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- LoRA\n- hunyuan\n---\n\n**Hunyuan supports Lora fine-tuning of videos up to 720p. Detailed finetuning instructions are available in our GitHub repository.** \n\nYou can easily perform inference using the LoRA weights in our [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository, supporting both single and multi-GPU configurations. \nOur training dataset consists solely of Wukong videos and can be accessed [here](https://huggingface.co/datasets/FastVideo/Black-Myth-Wukong-720p)\n# Black-Myth-Wukong\n\n
\n \n
\n \n \n
\n \n
\n \n \n
\n \n
\n \n \n
\n
\nPrompt: \"Against a backdrop of ancient trees shrouded in mist, Wukong stands prominently, his sophisticated black sunglasses adding a modern edge to his mythical appearance. His face, a striking blend of human and simian traits, is characterized by intense eyes behind the dark lenses and dense fur that frames his strong features. The ornate golden armor with swirling patterns shimmers as he crosses his arms across his chest, his posture exuding authority. He nods his head rhythmically, a subtle smile playing on his lips as the sunglasses reflect the diffused light.\"\n\nPrompt: \"Through tranquil space with traditional decorations, Wukong holds red envelopes, his stylish sunglasses creating an intriguing blend with his fur-covered face showing generous spirit. His elaborate golden armor adorned with intricate patterns gleams beside lucky packets, his strong features expressing giving joy.\"\n\nPrompt: \"Against peaceful light, Wukong examines a bespoke leather journal, his black sunglasses framing his fur-covered face thoughtfully. His elaborate golden armor with intricate patterns gleams as he appreciates craftmanship, his strong simian features showing writer's interest.\"\n\nPrompt: \"In misty light among paper-cut designs, Wukong makes a respectful gesture, his sleek sunglasses harmonizing with his fur-covered face showing artistic appreciation. His elaborate golden armor with dragon patterns catches intricate shadows as he shares cultural greetings, his strong simian features radiating tradition.\"\n\nPrompt: \"In misty light, Wukong contemplates a chessboard, his fur-covered face showing thoughtful consideration. His elaborate golden armor with intricate patterns gleams as he studies the pieces, his strong features deep in strategic thought.\"\n\nPrompt: \"Against a peaceful backdrop decorated with paper-cut designs, Wukong stands with a tray of mandarin oranges, his stylish sunglasses harmonizing with his fur-covered face showing gracious hospitality. His elaborate golden armor adorned with swirling patterns catches the gentle light as he offers the lucky fruit, his strong simian features radiating traditional courtesy.\"\n\n\n## Trainig details:\n- LR: 1e-4\n- Optimizer: adamw\n- steps: 6000\n- dataset: 22 (1280x720) videos\n- rank: 32\n- alpha: 32\n- batch size: 1\n- gradient accumulation steps: 2\n\n## Acknowledgment:\n\n- Thanks to the [Tencent team](https://github.com/Tencent/HunyuanVideo) for making this great model available\n\n## Citation\n```\n@misc{kong2024hunyuanvideo,\n title={HunyuanVideo: A Systematic Framework For Large Video Generative Models}, \n author={Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan, Hongmei Wang, Jacob Song, Jiawang Bai, Jianbing Wu, Jinbao Xue, Joey Wang, Junkun Yuan, Kai Wang, Mengyang Liu, Pengyu Li, Shuai Li, Weiyan Wang, Wenqing Yu, Xinchi Deng, Yang Li, Yanxin Long, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Daquan Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, and Jie Jiang, along with Caesar Zhong},\n year={2024},\n archivePrefix={arXiv preprint arXiv:2412.03603},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2412.03603}, \n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "FastVideo/Hunyuan-Black-Myth-Wukong-lora-weight", "base_model_relation": "base" }, { "model_id": "jobs-git/HunyuanVideoCommunity", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nUnofficial community fork for Diffusers-format weights on [`tencent/HunyuanVideo`](https://huggingface.co/tencent/HunyuanVideo).\n\n### Using Diffusers\n\nHunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\n\n# Enable memory savings\npipe.vae.enable_tiling()\npipe.enable_model_cpu_offload()\n\noutput = pipe(\n prompt=\"A cat walks on the grass, realistic\",\n height=320,\n width=512,\n num_frames=61,\n num_inference_steps=30,\n).frames[0]\nexport_to_video(output, \"output.mp4\", fps=15)\n```\n\nRefer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "jobs-git/HunyuanVideoCommunity", "base_model_relation": "base" }, { "model_id": "kudzueye/boreal-hl-v1", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\n---\n\n# Boreal-HL\nBoring Reality Hunyuan LoRA\n\nThis LoRA is an attempt at improving on the overall detail of generations from Hunyuan. It focus in particular on things such as improved depth of field, realistic skin texture, and better lighting.\nIt works at generating both realistic short video clips and single frames for images.\n\nAt the moment, the candidate LoRA used here is overtrained. It is recommended to use a strength of around 0.6. You will also need to experiment a lot with the seeds, guidance, steps, and resolution. Try to keep steps over 35 and minimum resolution above 512x512. Guidance seems to work to varying degrees between 3.5-12.5. Higher guidance and strength may lead to more similarities in things such as scene and characters. You will want to experiment with lowering the strength while raising the resolution and guidance when you run into lots of distortion. Also swap seeds as results vary a lot.\n\n## IMPORTANT NOTES:\nThis LoRA is still very experimental and difficult to control. Even with the settings recommended above it may still be difficult to get usable results. I am continuing doing new training runs to see what will help improve it. Also trying to see if there are any new improvements with an updated workflow.\n\nIf you want an easier way of using this LoRA than with Comfyui, you might be able to try running it via [Fal's Hunyuan video LoRA](https://fal.ai/models/fal-ai/hunyuan-video-lora). Use this [LoRA URL](https://huggingface.co/kudzueye/boreal-hl-v1/resolve/main/boreal-hl-v1.safetensors?download=true) and set steps to 55 (pro mode) with 720p resolution. Expect to wait at least five minutes for it to run. Replicate should also have an option though I have not tested it out yet to verify the results.\n\n## Video Examples\n\n\n\n\n\n\n\n## Image Examples\nThe LoRA can also sometimes perform decently on images when you use the initial frame.\n\n![image/png](https://cdn-uploads.huggingface.co/production/uploads/641ba2eeec5b871c0bcfdba3/5zUD6bzyLWd2WiJY9F65U.png)\n\n## Training Details\nI used around 150 images only for this initial LoRA. I focused on public domain photos from around the early 2010s.\n\n- epochs = 600\n- gradient_accumulation_steps = 4\n- warmup_steps = 100\n#### Adapter\n- type = \"lora\"\n- rank = 32\n- dtype = \"bfloat16\"\n- only_double_blocks = true\n#### Optimizer\n- type = \"adamw_optimi\"\n- lr = 0.0002\n- betas = [ 0.9, 0.99,]\n- weight_decay = 0.01\n- eps = 1e-8\n\n\n## Additional Info\n\n[Full Video Demonstration](https://www.youtube.com/watch?v=0tuGBrDbXU0)\n\n[Diffusion Pipe for training](https://github.com/tdrussell/diffusion-pipe)\n\n[Gradio UI Diffusion Pipe option for training](https://github.com/alisson-anjos/diffusion-pipe-ui)\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "kudzueye/boreal-hl-v1", "base_model_relation": "base" }, { "model_id": "ABDALLALSWAITI/hunyuan_video_720_cfgdistill_e5m2", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- wavespeed\n- quantization\n- e5m2\n- pytorch\n- video-generation\n- hunyuan\n- video\n---\nQuantized hunyuan_video model (e5m2) compatible with torch.compile and ComfyUI-WaveSpeed.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ABDALLALSWAITI/hunyuan_video_720_cfgdistill_e5m2", "base_model_relation": "base" }, { "model_id": "Skywork/SkyReels-V1-Hunyuan-T2V", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\n---\n\n# SkyReels V1: Human-Centric Video Foundation Model\n

\n \"Skyreels\n

\n\n

\n\ud83c\udf10 Github \u00b7 \ud83d\udc4b Playground \u00b7 \ud83d\udcac Discord\n

\n\n---\nThis repo contains Diffusers-format model weights for SkyReels V1 Text-to-Video models. You can find the inference code on our github repository [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1).\n\n## Introduction\n\nSkyReels V1 is the first and most advanced open-source human-centric video foundation model. By fine-tuning HunyuanVideo on O(10M) high-quality film and television clips, Skyreels V1 offers three key advantages:\n\n1. **Open-Source Leadership**: Our Text-to-Video model achieves state-of-the-art (SOTA) performance among open-source models, comparable to proprietary models like Kling and Hailuo.\n2. **Advanced Facial Animation**: Captures 33 distinct facial expressions with over 400 natural movement combinations, accurately reflecting human emotions.\n3. **Cinematic Lighting and Aesthetics**: Trained on high-quality Hollywood-level film and television data, each generated frame exhibits cinematic quality in composition, actor positioning, and camera angles.\n\n## \ud83d\udd11 Key Features\n\n### 1. Self-Developed Data Cleaning and Annotation Pipeline\n\nOur model is built on a self-developed data cleaning and annotation pipeline, creating a vast dataset of high-quality film, television, and documentary content.\n\n- **Expression Classification**: Categorizes human facial expressions into 33 distinct types.\n- **Character Spatial Awareness**: Utilizes 3D human reconstruction technology to understand spatial relationships between multiple people in a video, enabling film-level character positioning.\n- **Action Recognition**: Constructs over 400 action semantic units to achieve a precise understanding of human actions.\n- **Scene Understanding**: Conducts cross-modal correlation analysis of clothing, scenes, and plots.\n\n### 2. Multi-Stage Image-to-Video Pretraining\n\nOur multi-stage pretraining pipeline, inspired by the HunyuanVideo design, consists of the following stages:\n\n- **Stage 1: Model Domain Transfer Pretraining**: We use a large dataset (O(10M) of film and television content) to adapt the text-to-video model to the human-centric video domain.\n- **Stage 2: Image-to-Video Model Pretraining**: We convert the text-to-video model from Stage 1 into an image-to-video model by adjusting the conv-in parameters. This new model is then pretrained on the same dataset used in Stage 1.\n- **Stage 3: High-Quality Fine-Tuning**: We fine-tune the image-to-video model on a high-quality subset of the original dataset, ensuring superior performance and quality.\n\n## Model Introduction\n| Model Name | Resolution | Video Length | FPS | Download Link |\n|-----------------|------------|--------------|-----|---------------|\n| SkyReels-V1-Hunyuan-I2V | 544px960p | 97 | 24 | \ud83e\udd17 [Download](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V) |\n| SkyReels-V1-Hunyuan-T2V (Current) | 544px960p | 97 | 24 | \ud83e\udd17 [Download](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-T2V) |\n\n## Usage\n**See the [Guide](https://github.com/SkyworkAI/SkyReels-V1) for details.**\n\n## Citation\n```BibTeX\n@misc{SkyReelsV1,\n author = {SkyReels-AI},\n title = {Skyreels V1: Human-Centric Video Foundation Model},\n year = {2025},\n publisher = {Huggingface},\n journal = {Huggingface repository},\n howpublished = {\\url{https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-T2V}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Skywork/SkyReels-V1-Hunyuan-T2V", "base_model_relation": "base" }, { "model_id": "Skywork/SkyReels-V1-Hunyuan-I2V", "gated": "False", "card": "---\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: image-to-video\n---\n\n# Skyreels V1: Human-Centric Video Foundation Model\n

\n \"SkyReels\n

\n\n

\n\ud83c\udf10 Github \u00b7 \ud83d\udc4b Playground \u00b7 \ud83d\udcac Discord\n

\n\n---\nThis repo contains Diffusers-format model weights for SkyReels V1 Image-to-Video models. You can find the inference code on our github repository [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1).\n\n## Introduction\nSkyReels V1 is the first and most advanced open-source human-centric video foundation model. By fine-tuning HunyuanVideo on O(10M) high-quality film and television clips, Skyreels V1 offers three key advantages:\n\n1. **Open-Source Leadership**: Our Text-to-Video model achieves state-of-the-art (SOTA) performance among open-source models, comparable to proprietary models like Kling and Hailuo.\n2. **Advanced Facial Animation**: Captures 33 distinct facial expressions with over 400 natural movement combinations, accurately reflecting human emotions.\n3. **Cinematic Lighting and Aesthetics**: Trained on high-quality Hollywood-level film and television data, each generated frame exhibits cinematic quality in composition, actor positioning, and camera angles.\n\n## \ud83d\udd11 Key Features\n\n### 1. Self-Developed Data Cleaning and Annotation Pipeline\n\nOur model is built on a self-developed data cleaning and annotation pipeline, creating a vast dataset of high-quality film, television, and documentary content.\n\n- **Expression Classification**: Categorizes human facial expressions into 33 distinct types.\n- **Character Spatial Awareness**: Utilizes 3D human reconstruction technology to understand spatial relationships between multiple people in a video, enabling film-level character positioning.\n- **Action Recognition**: Constructs over 400 action semantic units to achieve a precise understanding of human actions.\n- **Scene Understanding**: Conducts cross-modal correlation analysis of clothing, scenes, and plots.\n\n### 2. Multi-Stage Image-to-Video Pretraining\n\nOur multi-stage pretraining pipeline, inspired by the HunyuanVideo design, consists of the following stages:\n\n- **Stage 1: Model Domain Transfer Pretraining**: We use a large dataset (O(10M) of film and television content) to adapt the text-to-video model to the human-centric video domain.\n- **Stage 2: Image-to-Video Model Pretraining**: We convert the text-to-video model from Stage 1 into an image-to-video model by adjusting the conv-in parameters. This new model is then pretrained on the same dataset used in Stage 1.\n- **Stage 3: High-Quality Fine-Tuning**: We fine-tune the image-to-video model on a high-quality subset of the original dataset, ensuring superior performance and quality.\n\n## Model Introduction\n| Model Name | Resolution | Video Length | FPS | Download Link |\n|-----------------|------------|--------------|-----|---------------|\n| SkyReels-V1-Hunyuan-I2V (Current) | 544px960p | 97 | 24 | \ud83e\udd17 [Download](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V) |\n| SkyReels-V1-Hunyuan-T2V | 544px960p | 97 | 24 | \ud83e\udd17 [Download](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-T2V) |\n\n## Usage\n**See the [Guide](https://github.com/SkyworkAI/SkyReels-V1) for details.**\n\n## Citation\n```BibTeX\n@misc{SkyReelsV1,\n author = {SkyReels-AI},\n title = {Skyreels V1: Human-Centric Video Foundation Model},\n year = {2025},\n publisher = {Huggingface},\n journal = {Huggingface repository},\n howpublished = {\\url{https://huggingface.co/Skywork/Skyreels-V1-Hunyuan-I2V}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [ "jbilcke-hf/SkyReels-V1-Hunyuan-I2V-HFIE" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Skywork/SkyReels-V1-Hunyuan-I2V", "base_model_relation": "base" }, { "model_id": "newgenai79/SkyReels-V1-Hunyuan-I2V-int4", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nUnofficial community fork for Diffusers-format weights on [`tencent/HunyuanVideo`](https://huggingface.co/tencent/HunyuanVideo).\n\n### Using Diffusers\n\nHunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\n\n# Enable memory savings\npipe.vae.enable_tiling()\npipe.enable_model_cpu_offload()\n\noutput = pipe(\n prompt=\"A cat walks on the grass, realistic\",\n height=320,\n width=512,\n num_frames=61,\n num_inference_steps=30,\n).frames[0]\nexport_to_video(output, \"output.mp4\", fps=15)\n```\n\nRefer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "newgenai79/SkyReels-V1-Hunyuan-I2V-int4", "base_model_relation": "base" }, { "model_id": "newgenai79/HunyuanVideo-int4", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nUnofficial community fork for Diffusers-format weights on [`tencent/HunyuanVideo`](https://huggingface.co/tencent/HunyuanVideo).\n\n### Using Diffusers\n\nHunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\n\n# Enable memory savings\npipe.vae.enable_tiling()\npipe.enable_model_cpu_offload()\n\noutput = pipe(\n prompt=\"A cat walks on the grass, realistic\",\n height=320,\n width=512,\n num_frames=61,\n num_inference_steps=30,\n).frames[0]\nexport_to_video(output, \"output.mp4\", fps=15)\n```\n\nRefer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "newgenai79/HunyuanVideo-int4", "base_model_relation": "base" }, { "model_id": "dashtoon/hunyuan-video-keyframe-control-lora", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nHunyuanVideo Keyframe Control Lora is an adapter for HunyuanVideo T2V model for keyframe-based video generation. \u200bOur architecture builds upon existing models, introducing key enhancements to optimize keyframe-based video generation:\u200b\n* We modify the input patch embedding projection layer to effectively incorporate keyframe information. By adjusting the convolutional input parameters, we enable the model to process image inputs within the Diffusion Transformer (DiT) framework.\u200b\n* We apply Low-Rank Adaptation (LoRA) across all linear layers and the convolutional input layer. This approach facilitates efficient fine-tuning by introducing low-rank matrices that approximate the weight updates, thereby preserving the base model's foundational capabilities while reducing the number of trainable parameters.\n* The model is conditioned on user-defined keyframes, allowing precise control over the generated video's start and end frames. This conditioning ensures that the generated content aligns seamlessly with the specified keyframes, enhancing the coherence and narrative flow of the video.\u200b\n\n| Image 1 | Image 2 | Generated Video |\n|---------|---------|-----------------|\n| ![Image 1](https://content.dashtoon.ai/stability-images/41aeca63-064a-4003-8c8b-bfe2cc80d275.png) | ![Image 2](https://content.dashtoon.ai/stability-images/28956177-3455-4b56-bb6c-73eacef323ca.png) | |\n| ![Image 1](https://content.dashtoon.ai/stability-images/ddabbf2f-4218-497b-8239-b7b882d93000.png) | ![Image 2](https://content.dashtoon.ai/stability-images/b603acba-40a4-44ba-aa26-ed79403df580.png) | |\n| ![Image 1](https://content.dashtoon.ai/stability-images/5298cf0c-0955-4568-935a-2fb66045f21d.png) | ![Image 2](https://content.dashtoon.ai/stability-images/722a4ea7-7092-4323-8e83-3f627e8fd7f8.png) | |\n| ![Image 1](https://content.dashtoon.ai/stability-images/69d9a49f-95c0-4e85-bd49-14a039373c8b.png) | ![Image 2](https://content.dashtoon.ai/stability-images/0cef7fa9-e15a-48ec-9bd3-c61921181802.png) | |\n\n## Code:\nThe tranining code can be found [here](https://github.com/dashtoon/hunyuan-video-keyframe-control-lora).\n\n## Recommended Settings\n1. The model works best on human subjects. Single subject images work slightly better.\n2. It is recommended to use the following image generation resolutions `720x1280`, `544x960`, `1280x720`, `960x544`.\n3. It is recommended to set frames from 33 upto 97. Can go upto 121 frames as well (but not tested much).\n4. Prompting helps a lot but works even without. The prompt can be as simple as just the name of the object you want to generate or can be detailed.\n5. `num_inference_steps` is recommended to be 50, but for fast results you can use 30 as well. Anything less than 30 is not recommended.\n\n## Diffusers\nHunyuanVideo Keyframe Control Lora can be used directly from Diffusers. Install the latest version of Diffusers.\n\n\n## Inference\nWhile the included `inference.py` script can be used to run inference. We would encourage folks to visit out [github repo](https://github.com/dashtoon/hunyuan-video-keyframe-control-lora/blob/main/hv_control_lora_inference.py) which contains a much optimized version of this inference script.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "dashtoon/hunyuan-video-keyframe-control-lora", "base_model_relation": "base" }, { "model_id": "neph1/AncientRome_HunyuanVideo_Lora", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\n---\nPoor man's lora (still images) based on images from modern depictions of ancient Rome. Biased towards characters and military due to the source of the images. Next version will have more scenery.\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/vT8GqX7slYmD3wtnRjlJl.webp)\n\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/OI2TFc0IFcUSqF5kHyf9N.webp)\n\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/kfcX9vFbukVqLldlX4Sbr.webp)\n\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/3aolh3z8a_AhcBlFMJ1xC.webp)\n\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/IjPqAbIFDOmLQW0b45o00.webp)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "neph1/AncientRome_HunyuanVideo_Lora", "base_model_relation": "base" }, { "model_id": "jqlive/hyv_depth_control", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- tencent/HunyuanVideo\n---\n\nHunyuan Video depth control loras in diffusers format. They're experimental, and do not work as expected.\nInference is overly sensitive. Either zero influence or too much, no middle ground.\n\nTrained with:\nhttps://github.com/jquintanilla4/HunyuanVideo-Training/blob/depth-control/train_hunyuan_lora.py\n\nInference/testing script:\nhttps://github.com/jquintanilla4/HunyuanVideo-Training/blob/depth-control/test_hunyuan_control_lora.py\n\nYou will need the depth anything v2 model to run both train and testing scripts.\n\nLast training run was done over a small 14K dataset (10k train, 2k test, 2k val) over 10K steps.\n- learning_rate 5e-5\n- lora_rank 128 \n- lora_alpha 128 \n- timestep_shift 5 \n- assert_steps 100 \n- input_lr_scale 5.0\n\nDeleted old versions. They did not work at all.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "jqlive/hyv_depth_control", "base_model_relation": "base" }, { "model_id": "linoyts/Hunyuan-LoRA", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\n---\n*temporary*\n\nFlat color LoRA for Hunyuan Video by [**@motimalu**](https://huggingface.co/motimalu)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "linoyts/Hunyuan-LoRA", "base_model_relation": "base" }, { "model_id": "Fabrice-TIERCELIN/HunyuanVideo", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\n---\n\nUnofficial personal fork for Diffusers-format weights on [`tencent/HunyuanVideo`](https://huggingface.co/tencent/HunyuanVideo).\n\n### Using Diffusers\n\nHunyuanVideo can be used directly from Diffusers. Install the latest version of Diffusers.\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\n\n# Enable memory savings\npipe.vae.enable_tiling()\npipe.enable_model_cpu_offload()\n\noutput = pipe(\n prompt=\"A cat walks on the grass, realistic\",\n height=320,\n width=512,\n num_frames=61,\n num_inference_steps=30,\n).frames[0]\nexport_to_video(output, \"output.mp4\", fps=15)\n```\n\nRefer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) for more information.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Fabrice-TIERCELIN/HunyuanVideo", "base_model_relation": "base" }, { "model_id": "ApacheOne/Info-Hunyuan_LoRAs", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\n- zh\nbase_model:\n- tencent/HunyuanVideo\n- tencent/HunyuanVideo-I2V\ntags:\n- art\n---\n## Hunyuan-Archive_Information\n+ ALL loRAs have markdown files with info from the author of the model.\n\n\n## Community and in-depth repo information\nAll models listed are for continuing the great open source push of GEN AI\n#### ALL Have\n- Trigger words\n- Authors nicknames\n- Base model type\n- Description\n- Version\n- Links to more info\n#### Some Have\n- T2V(Text : Video) | T2I(Text : Image)\n- I2V(Image : Video) | I2I(Image : Image)\n- V2V(Video : Video)\n+ **Usage Guidelines:**\n - **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction.\n## Thanks\n- Authors and Brains behind these models and info\n- Hosting and Sharing platforms\n\n## License\n\nThis project incorporates components with different licenses. Please review the licensing information carefully for each part.\n\n### Models (Base & LoRA)\n\nThe base models and any LoRA (Low-Rank Adaptation) models utilized or provided within this repository **retain their original licenses**.\n\n* For **[hunyuan-collection](https://huggingface.co/collections/ApacheOne/hunyuan-collection-68384deec537d9152752e7b1)**: Please refer to its original license [LICENSE](https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE).\n\nIt is crucial to consult the original source of these models (e.g., their Hugging Face model cards, original repositories) for their specific licensing terms before use, modification, or distribution.\n\n### Documentation & Repository Information (`.md` files)\n\nAll Markdown files (`.md`), including this [README.md](README.md), and other textual information created specifically for this repository and collection, are licensed under the **Apache License 2.0**.\n\nThe Apache License 2.0 can typically be found in [LICENSE](LICENSE) file in the root of similar open-source projects, or you can view it online at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0).\n\n**Why Apache 2.0 for documentation?**\nWe've chosen the Apache 2.0 license for our documentation and informational content to:\n* **Keep Information Open:** Ensure the information remains open-source and freely accessible to everyone.\n* **Encourage Collaboration & Edits:** Allow the community to freely use, modify, distribute, and contribute improvements to the documentation.\n* **Maintain Attribution & History:** The license terms help in properly attributing contributions and tracking the evolution of the information, which is beneficial for maintaining accurate and up-to-date notes over time.\n* **Provide Clear Permissions:** Offer clear guidelines on what users can and cannot do with the content, while also providing certain patent and copyright protections.\n\n---\n\n**Important:** It is your responsibility as a user to ensure compliance with all applicable licenses when using, modifying, or distributing any part of this project. If you have any questions about licensing, please consult the respective license texts or open an issue in this repository.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "funscripter/collection", "gated": "unknown", "card": "---\nbase_model:\n- tencent/HunyuanVideo\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "Razrien/Furry-hunyuan-testing-thing", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\ntags:\n- furry\n- I2V\n- T2V\n- hunyuan\n---\nJust a personal quanting project i'm working on, nothing to see here ;D", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "shaonroy/roy", "gated": "False", "card": "---\nlicense: apache-2.0\ndatasets:\n- microsoft/orca-agentinstruct-1M-v1\nlanguage:\n- bn\nmetrics:\n- accuracy\nbase_model:\n- tencent/HunyuanVideo\nnew_version: Qwen/QwQ-32B-Preview\nlibrary_name: adapter-transformers\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "shaonroy/roy", "base_model_relation": "base" }, { "model_id": "trojblue/HunyuanVideo-lora-AnimeShots", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-anime-stills\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: 'anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap'\n output:\n url: samples/ComfyUI_00017_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00024_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00068_.webp\n- text: 'anime scene of a vibrant carnival with colorful rides, games, and food stalls, and a clown handing balloons to a group of laughing children.'\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **Hunyuan Video Lora - AnimeShots**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style lora trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n\nA sample workflow for using the lora can be found in the Huggingface repo:\n\n- [v0.1/ComfyUI_00024_.webp \u00b7 trojblue/HunyuanVideo-lora-AnimeShot at main](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShot/blob/main/v0.1/ComfyUI_00024_.webp)\n\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "trojblue/HunyuanVideo-lora-AnimeShots", "base_model_relation": "base" }, { "model_id": "calcuis/hyvid", "gated": "False", "card": "---\nlicense: mit\nlicense_name: tencent-hunyuan-community\nlicense_link: LICENSE\ndatasets:\n- trojblue/test-HunyuanVideo-anime-images\n- calcuis/anime-descriptor\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\n- trojblue/HunyuanVideo-lora-AnimeShots\npipeline_tag: text-to-video\ntags:\n- hyvid\n- lora\n- gguf-comfy\n- gguf-node\nwidget:\n- text: >-\n anime style anime girl with massive fennec ears and one big fluffy tail, she\n has blonde long hair blue eyes wearing a maid outfit with a long black gold\n leaf pattern dress, walking slowly to the front with sweetie smile, holding\n a fancy black forest cake with candles on top in the kitchen of an old dark\n Victorian mansion lit by candlelight with a bright window to the foggy\n output:\n url: samples\\ComfyUI_00007_.webp\n- text: >-\n anime scene of a vibrant carnival with colorful rides, games, and food\n stalls, and a cute anime girl with multicolored hair, holding a melting\n ice-cream, wearing a fluffy white hoodie and blue mini shorts, winking her\n eye, smiling to the camera\n output:\n url: samples\\ComfyUI_00002_.webp\n- text: >-\n anime girl with pink twin tails and green eyes, wearing a school uniform,\n holding a stack of books in a bustling library filled with sunlight\n streaming through tall windows\n output:\n url: samples\\ComfyUI_00001_.webp\n- text: >-\n anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting\n on a park bench, feeding pigeons with a gentle smile\n output:\n url: samples\\ComfyUI_00003_.webp\n- text: >-\n anime style anime girl with massive fennec ears and one big fluffy tail, she\n has blonde long hair blue eyes wearing a maid outfit with a long black gold\n leaf pattern dress, walking slowly to the front with sweetie smile, holding\n a fancy black forest cake with candles on top in the kitchen of an old dark\n Victorian mansion lit by candlelight with a bright window to the foggy\n forest\n output:\n url: samples\\ComfyUI_00004_.webp\n- text: >-\n anime style anime girl with massive fennec ears and one big fluffy tail, she\n has blonde hair long hair blue eyes wearing a pink sweater and a long blue\n skirt walking in a beautiful outdoor scenery with snow mountains in the\n background\n output:\n url: samples\\ComfyUI_00005_.webp\n---\n\n# **GGUF quantized and fp8 scaled version of hyvid with lora anime adapter**\n\n\n\n![screenshot](https://raw.githubusercontent.com/calcuis/comfy/master/hyvid.gif)\n\n## **setup (once)**\n- drag hyvid_lora_adapter.safetensors [[323MB](https://huggingface.co/calcuis/hyvid/blob/main/hyvid_lora_adapter.safetensors)] to > ./ComfyUI/models/loras\n- drag hunyuan-video-t2v-720p-q4_0.gguf [[7.74GB](https://huggingface.co/calcuis/hyvid/blob/main/hunyuan-video-t2v-720p-q4_0.gguf)] to > ./ComfyUI/models/diffusion_models\n- drag llava_llama3-q4_0.gguf [[4.68GB](https://huggingface.co/calcuis/hyvid/blob/main/llava_llama3-q4_0.gguf)] to > ./ComfyUI/models/text_encoders\n- drag clip_l_fp8_e4m3fn.safetensors [[123MB](https://huggingface.co/calcuis/hyvid/blob/main/clip_l_fp8_e4m3fn.safetensors)] to > ./ComfyUI/models/text_encoders\n- drag hunyuan_video_vae_fp8_e4m3fn.safetensors [[247MB](https://huggingface.co/calcuis/hyvid/blob/main/hunyuan_video_vae_fp8_e4m3fn.safetensors)] to > ./ComfyUI/models/vae\n\n## **run it straight (no installation needed way)**\n- run the .bat file in the main directory (assuming you are using the gguf-node pack below)\n- drag the demo [clip](https://huggingface.co/calcuis/hyvid/blob/main/samples%5CComfyUI_00007_.webp) or the workflow json file (below) to > your browser\n\n### **workflows**\n- example workflow for [gguf](https://huggingface.co/calcuis/hyvid/blob/main/workflow-hyvid-gguf.json) (upgrade your [node](https://github.com/calcuis/gguf) for llava gguf support; more quantized llava-llama3 version [here](https://huggingface.co/chatpig/llava-llama3/tree/main))\n- example workflow for [safetensors](https://huggingface.co/calcuis/hyvid/blob/main/workflow-hyvid-safetensors.json) (fp8 scaled version [[13.2GB](https://huggingface.co/calcuis/hyvid/blob/main/hunyuan_video_t2v_720_fp8_scaled.safetensors)] is recommended)\n\n### **review**\n- more stable output if adapter applied\n- working good even with fp8_e4m3fn scaled clip and vae\n- significant changes in loading speed while using the new quantized/scaled file(s) with revised workflow (see above)\n\n### **references**\n- lora adapter from [trojblue](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShots)\n- base model from [tencent](https://huggingface.co/tencent/HunyuanVideo)\n- fast model from [fastvideo](https://huggingface.co/FastVideo/FastHunyuan)\n- comfyui from [comfyanonymous](https://github.com/comfyanonymous/ComfyUI)\n- comfyui-gguf [city96](https://github.com/city96/ComfyUI-GGUF)\n- gguf-comfy [pack](https://github.com/calcuis/gguf-comfy/releases)\n- gguf-node ([pypi](https://pypi.org/project/gguf-node)|[repo](https://github.com/calcuis/gguf)|[pack](https://github.com/calcuis/gguf/releases))\n\n### **appendices**\n- get more test prompt from the training [dataset](https://huggingface.co/datasets/calcuis/anime-descriptor)\n- still no ideas? get fresh prompt from our [simulator](https://prompt.calcuis.us)", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "calcuis/hyvid", "base_model_relation": "base" }, { "model_id": "trojblue/HunyuanVideo-lora-AnimeStills", "gated": "False", "card": "---\ntags:\n- text-to-image\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: '-'\n output:\n url: images/ComfyUI_00789_.png\n- text: '-'\n output:\n url: images/ComfyUI_00796_.png\n- text: '-'\n output:\n url: images/ComfyUI_00793_.png\n- text: >-\n an anime illustration of kitsune, girl, blue eyes, braided hair,\n multicoloured hair, brown hair, pink hair, brown fox ears, brown fox tail,\n fantasy school uniform, open shoulders, masterpiece, best quality, with\n professional photography composition, dynamic lighting, well-balanced color\n and contrast, clear separation of subject and background, detailed, and\n storytelling.\n output:\n url: images/ComfyUI_00784_.png\nbase_model: tencent/HunyuanVideo\ninstance_prompt: an anime illustration of\nlicense: mit\n---\n# Hunyuan Video Lora - AnimeStills\n\n\n\n**EXPERIMENTAL:** the model generates noisy, low-resolution illustration-like images. It can be used to guide more refined models such as SDXL for its natural language (and composition) capabilities, but use with a grain of salt if you plan to use it directly. Also, results might look 'old-time anime' due to dataset used. \n\n\nA experimental model that uses HunyuanVideo as a image generator. outputs images at 768 resolution.\n\nIn a typical HunyuanVideo workflow, set 'frame' to **1** and add this lora to get an anime illustration-like output.\n\n\n## Trigger words\n\nYou should use `an anime illustration of` to trigger the image generation.\n\n\n## Resolutions\n\nUse the following resolution for the best results:\n```\n(768, 768)\n(672, 864), (864, 672)\n(608, 960), (960, 608)\n(544, 1088), (1088, 544)\n```\n\n## Training\n\n\n![image/png](https://cdn-uploads.huggingface.co/production/uploads/636982a164aad59d4d42714b/oMsfEYPYbWLyK6mihIkWm.png)\n\nThe model has been trained on a tag-balanced dataset of 2k best pixiv illustrations, at resolution of 768, for 856 eopchs (214 epochs * 4 repeats per epoch). \n\n\nThe training takes about 3 days on a 8 x H100 cluster. By the time training ends the loss is still consistently going down, so further training could be beneficial.\n\n\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/trojblue/HunyuanVideo-lora-AnimeStills/tree/main/epoch214) them in the Files & versions tab.\n\n\n## Limitations\n\nThe model outputs could be deformed, not conforming to prompt, turning realistic, or getting nsfw results, due to the limited size of dataset used and limitations of lora models.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "trojblue/HunyuanVideo-lora-AnimeStills", "base_model_relation": "base" }, { "model_id": "a-r-r-o-w/HunyuanVideo-tuxemons", "gated": "False", "card": "---\ndatasets:\n- diffusers/tuxemon\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\nwidget:\n - text: Style of snomexut, A fluffy, brown-haired tuxemon with a curious expression, sporting an orange and black striped harness and unique patterned tail, explores its surroundings.\n output:\n url: video_0.mp4\n - text: Style of snomexut, A vibrant yellow Tuxemon, wielding a fiery blue staff, joyfully manipulates swirling flames around its cheerful form.\n output:\n url: video_1.mp4\n - text: Style of snomexut, A vibrant yellow Tuxemon, wielding a fiery blue staff, joyfully manipulates swirling flames around its cheerful form.\n output:\n url: video_2.mp4\n - text: Style of snomexut, A menacing blue tuxemon with sharp claws and fierce fangs roars against a cosmic backdrop, its body adorned with glowing orbs and sharp pink details.\n output:\n url: video_3.mp4\ntags:\n- text-to-video\n- diffusers-training\n- diffusers\n- lora\n- hunyuan_video\n- template:sd-lora\n---\n\n# LoRA Finetune\n\n\n\n## Model description\n\nThis is a lora finetune of https://huggingface.co/hunyuanvideo-community/HunyuanVideo.\n\nThe model was trained using [`finetrainers`](https://github.com/a-r-r-o-w/finetrainers). Training progress over time (4000 steps, 6000 steps, 8000 steps, 10000 steps):\n\n\n\n## Download model\n\n[Download LoRA](pytorch_lora_weights.safetensors) in the Files & Versions tab.\n\n## Usage\n\nRequires the [\ud83e\udde8 Diffusers library](https://github.com/huggingface/diffusers) installed.\n\nTrigger phrase: `\"Style of snomexut,\"`\n\n```python\nimport torch\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\nfrom diffusers.utils import export_to_video\n\nmodel_id = \"hunyuanvideo-community/HunyuanVideo\"\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\n model_id, subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\npipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)\npipe.vae.enable_tiling()\npipe.to(\"cuda\")\n\npipe.load_lora_weights(\"a-r-r-o-w/HunyuanVideo-tuxemons\", adapter_name=\"hunyuanvideo-lora\")\npipe.set_adapters(\"hunyuanvideo-lora\", 1.2)\n\noutput = pipe(\n prompt=\"Style of snomexut, a cat-like Tuxemon creature walks in alien-world grass, and observes its surroundings.\",\n height=768,\n width=768,\n num_frames=33,\n num_inference_steps=30,\n generator=torch.Generator().manual_seed(73),\n).frames[0]\nexport_to_video(output, \"output-tuxemon.mp4\", fps=15)\n```\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.\n\n
\n Training params \n\nDataset contained 250 images from `diffusers/tuxemon` and 150 samples generated from a Flux LoRA trained on the same dataset.\n\n```\nid_token: \"Style of snomexut,\"\ntarget_modules: to_q to_k to_v to_out.0 add_q_proj add_k_proj add_v_proj to_add_out\nlr: 1e-5\ntraining_steps: 10000\nvideo_resolution_buckets: 1x768x768 1x512x512\nrank: 128\nlora_alpha: 128\noptimizer: adamw\nweight_decay: 0.01\nflow_weighting_scheme: logit_normal\nflow_shift: 7.0\nvalidation_prompts: $ID_TOKEN A fluffy, brown-haired tuxemon with a curious expression, sporting an orange and black striped harness and unique patterned tail, explores its surroundings.@@@1x768x768:::$ID_TOKEN A curious yellow Tuxemon bird with large, expressive eyes and a distinctive golden beak, set against a starry black background, looks gently to its left.@@@1x768x768:::$ID_TOKEN A menacing Tuxemon with the appearance of a green and white aquatic shark, featuring sharp teeth and red eyes, emerges from the dark abyss, ready to unleash its fury.@@@1x768x768:::$ID_TOKEN A cheerful light blue tuxemon with four tentacle-like arms and sharp ears smiles playfully against a dark background.@@@1x768x768:::$ID_TOKEN A cheerful white Tuxemon with large ears and a playful expression joyfully floats in a starry outer space backdrop.@@@1x768x768:::$ID_TOKEN A finely detailed, silver-toned pin shaped like a futuristic double-barreled gun, adorned with orange textured panels, popularly associated with the Tuxemon universe.@@@1x768x768:::$ID_TOKEN A menacing blue tuxemon with sharp claws and fierce fangs roars against a cosmic backdrop, its body adorned with glowing orbs and sharp pink details.@@@1x768x768:::$ID_TOKEN A vibrant yellow Tuxemon, wielding a fiery blue staff, joyfully manipulates swirling flames around its cheerful form.@@@1x768x768:::$ID_TOKEN A fluffy, brown-haired tuxemon with a curious expression, sporting an orange and black striped harness and unique patterned tail, explores its surroundings.@@@49x768x768:::$ID_TOKEN A curious yellow Tuxemon bird with large, expressive eyes and a distinctive golden beak, set against a starry black background, looks gently to its left.@@@49x768x768:::$ID_TOKEN A menacing Tuxemon with the appearance of a green and white aquatic shark, featuring sharp teeth and red eyes, emerges from the dark abyss, ready to unleash its fury.@@@49x768x768:::$ID_TOKEN A cheerful light blue tuxemon with four tentacle-like arms and sharp ears smiles playfully against a dark background.@@@49x768x768:::$ID_TOKEN A finely detailed, silver-toned pin shaped like a futuristic double-barreled gun, adorned with orange textured panels, popularly associated with the Tuxemon universe.@@@49x768x768:::$ID_TOKEN A menacing blue tuxemon with sharp claws and fierce fangs roars against a cosmic backdrop, its body adorned with glowing orbs and sharp pink details.@@@49x768x768:::$ID_TOKEN A vibrant yellow Tuxemon, wielding a fiery blue staff, joyfully manipulates swirling flames around its cheerful form.@@@49x768x768\n```\n
", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "a-r-r-o-w/HunyuanVideo-tuxemons", "base_model_relation": "base" }, { "model_id": "lucataco/hunyuan-musubi-rose-6-comfyui", "gated": "False", "card": "---\ntags:\n- text-to-video\n- lora\n- diffusers\n- replicate\nwidget:\n- text: >-\n In the style of RSNG. A woman with blonde hair stands on a balcony at night, framed against a backdrop of city lights. She wears a white crop top and a dark jacket, exuding a confident presence as she gazes directly at the camera\n output:\n url: https://replicate.delivery/xezq/Qufc0HtLTDWqJCVctoveVtYwnaBAh3jomqFgBqPdRVeDWmEoA/HunyuanVideo_00001.mp4\nbase_model: tencent/HunyuanVideo\ninstance_prompt: In the style of RSNG\nlicense: mit\n---\n# Hunyuan Video Lora - Rose Number One Girl\n\n\n\nTrained on Replicate using:\n\n[replicate.com/lucataco/musubi-tuner](https://replicate.com/lucataco/musubi-tuner)\n\nConverted to ComfyUI format using:\n\n[replicate.com/lucataco/musubi-tuner-lora-converter](https://replicate.com/lucataco/musubi-tuner-lora-converter)\n\n## Trigger words\n\nYou should use `In the style of RSNG` to trigger the video generation.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-musubi-rose-6-comfyui", "base_model_relation": "base" }, { "model_id": "lucataco/hunyuan-musubi-lora-heygen-6", "gated": "False", "card": "---\ntags:\n- text-to-video\n- lora\n- diffusers\n- replicate\nwidget:\n- text: >-\n HGW woman with long, dark hair, tied back neatly, wearing a black jacket over a light top.\n output:\n url: heygen-woman.mp4\nbase_model: tencent/HunyuanVideo\ninstance_prompt: HGW woman\nlicense: mit\n---\n# Hunyuan Video Lora - Heygen Woman\n\n\n\nRun on Replicate at:\n\nhttps://replicate.com/lucataco/hunyuan-heygen-woman\n\nTrained on Replicate using:\n\n[replicate.com/lucataco/musubi-tuner](https://replicate.com/lucataco/musubi-tuner)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-musubi-lora-heygen", "base_model_relation": "finetune" }, { "model_id": "lucataco/hunyuan-musubi-lora-mrgnfrmn-6", "gated": "False", "card": "---\ntags:\n- text-to-video\n- lora\n- diffusers\n- replicate\nwidget:\n- text: >-\n MRGNFRMN man with white hair and a beard stands against a dark background. He is wearing a dark suit jacket over a white shirt, conveying a serious and thoughtful expression.\n output:\n url: mgfm.mp4\nbase_model: tencent/HunyuanVideo\ninstance_prompt: MRGNFRMN man\nlicense: mit\n---\n# Hunyuan Video Lora - Morgan Freeman\n\n\n\nTrained on Replicate using:\n\n[replicate.com/lucataco/musubi-tuner](https://replicate.com/lucataco/musubi-tuner)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-musubi-lora-mrgnfrmn", "base_model_relation": "finetune" }, { "model_id": "fofr/hunyuan-test", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n---\n\n# Hunyuan Test\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "fofr/hunyuan-test", "base_model_relation": "base" }, { "model_id": "martintomov/HunyuanVideo-Coca-Cola", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\ntags:\n- text-to-video\n- diffusers\n- lora\n- hunyuan\n---\n\n# HunyuanVideo-Coca-Cola LoRA Finetune\n\nExperimental finetune.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "martintomov/HunyuanVideo-Coca-Cola", "base_model_relation": "base" }, { "model_id": "fofr/hunyuan-sonic-2", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n---\n\n# Hunyuan Sonic 2\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "fofr/hunyuan-sonic", "base_model_relation": "finetune" }, { "model_id": "fofr/hunyuan-take-on-me", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n---\n\n# Hunyuan Take On Me\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "fofr/hunyuan-take-on-me", "base_model_relation": "base" }, { "model_id": "fofr/hunyuan-ponponpon", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n---\n\n# Hunyuan Ponponpon\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "fofr/hunyuan-ponponpon", "base_model_relation": "base" }, { "model_id": "fofr/hunyuan-cyberpunk-mod", "gated": "unknown", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\ninstance_prompt: CYB77\nwidget:\n - text: >-\n In the style of CYB77, first person view of a gunfight in a cyberpunk city\n output:\n url: https://replicate.delivery/xezq/n6DhktRQrALkCJy2zA8Pk0DvMnud9OKjvZbjNtdtLZRGZ2AF/HunyuanVideo_00001.mp4\n - text: >-\n In the style of CYB77, riding a motorbike in first person view through a cyberpunk city at night\n output:\n url: https://replicate.delivery/xezq/u0SwcftpJ4TNSyeBpZ49idwa22VkyYdUvMSWYVZgxZNRkZDUA/HunyuanVideo_00001.mp4\n---\n\n# Hunyuan Cyberpunk Mod\n\n\n\nRun on Replicate:\n\nhttps://replicate.com/fofr/hunyuan-cyberpunk-mod\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "lucataco/hunyuan-lora-heygen-man-8", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\nwidget:\n - text: >-\n HGM1 man standing indoors, likely in a modern, well-lit space. He is wearing a white, long-sleeved shirt. The background includes large windows or glass doors, through which greenery and possibly other buildings can be seen, suggesting an urban or semi-urban setting. The lighting is natural, with sunlight streaming in, creating a bright and airy atmosphere. The man appears to be speaking, as he is looking directly at the camera.\n output:\n url: heygen-man.mp4\n---\n\n# Hunyuan Lora Heygen Man 8\n\n\n\nRun on Replicate at:\n\nhttps://replicate.com/lucataco/hunyuan-heygen-joshua\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-lora-heygen-man", "base_model_relation": "finetune" }, { "model_id": "Knvl/test", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: >-\n anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress,\n green dress, holding carton, long hair, long sleeves, multiple girls, night,\n open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single\n braid, solo focus, spaghetti strap\n output:\n url: samples/ComfyUI_00017_.webp\n- text: \n output:\n url: samples/ComfyUI_00024_.webp\n- text: \n output:\n url: samples/ComfyUI_00068_.webp\n- text: >-\n anime scene of a vibrant carnival with colorful rides, games, and food\n stalls, and a clown handing balloons to a group of laughing children.\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **Hunyuan Video Lora - AnimeShots**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style lora trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n\nA sample workflow for using the lora can be found in the Huggingface repo:\n\n- [v0.1/ComfyUI_00024_.webp \u00b7 trojblue/HunyuanVideo-lora-AnimeShot at main](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShot/blob/main/v0.1/ComfyUI_00024_.webp)\n\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Knvl/test", "base_model_relation": "base" }, { "model_id": "Knvl/test2", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-anime-stills\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: 'anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap'\n output:\n url: samples/ComfyUI_00017_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00024_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00068_.webp\n- text: 'anime scene of a vibrant carnival with colorful rides, games, and food stalls, and a clown handing balloons to a group of laughing children.'\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **Hunyuan Video Lora - AnimeShots**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style lora trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n\nA sample workflow for using the lora can be found in the Huggingface repo:\n\n- [v0.1/ComfyUI_00024_.webp \u00b7 trojblue/HunyuanVideo-lora-AnimeShot at main](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShot/blob/main/v0.1/ComfyUI_00024_.webp)\n\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Knvl/test2", "base_model_relation": "base" }, { "model_id": "Knvl/mybad", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-anime-stills\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: 'anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap'\n output:\n url: samples/ComfyUI_00017_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00024_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00068_.webp\n- text: 'anime scene of a vibrant carnival with colorful rides, games, and food stalls, and a clown handing balloons to a group of laughing children.'\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **Hunyuan Video Lora - AnimaEngine**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style \"Anima\" trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n\nA sample workflow for using the lora can be found in the Huggingface repo:\n\n- [v0.1/ComfyUI_00024_.webp \u00b7 trojblue/HunyuanVideo-lora-AnimeShot at main](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShot/blob/main/v0.1/ComfyUI_00024_.webp)\n\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Knvl/mybad", "base_model_relation": "base" }, { "model_id": "gj3ka1/animaengine", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-anime-stills\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: 'anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap'\n output:\n url: samples/ComfyUI_00017_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00024_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00068_.webp\n- text: 'anime scene of a vibrant carnival with colorful rides, games, and food stalls, and a clown handing balloons to a group of laughing children.'\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **AnimaEngine**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style lora trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "gj3ka1/animaengine", "base_model_relation": "base" }, { "model_id": "pablerdo/hunyuan-lora-f50cleat", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n---\n\n# Hunyuan Lora F50Cleat\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "pablerdo/hunyuan-lora-f50cleat", "base_model_relation": "base" }, { "model_id": "lucataco/hunyuan-steamboat-willie-10", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\nwidget:\n - text: >-\n In the style of SWR. A black and white animated scene featuring a mouse, dressed in a sailor outfit. The mouse is standing at the helm of a ship, holding the steering wheel with both hands.\n output:\n url: mickey.mp4\n---\n\n# Hunyuan Steamboat Willie 10\n\n\n\nRun on Replicate at:\n\nhttps://replicate.com/lucataco/hunyuan-steamboat-willie\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-steamboat-willie", "base_model_relation": "finetune" }, { "model_id": "ghej4u/yay", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Yay\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/yay", "base_model_relation": "base" }, { "model_id": "ghej4u/ian2", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Ian2\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/ian2", "base_model_relation": "base" }, { "model_id": "ghej4u/lol", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Lol\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/lol", "base_model_relation": "base" }, { "model_id": "ghej4u/test", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Test\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/test", "base_model_relation": "base" }, { "model_id": "lucataco/hunyuan-lora-heygen-woman-2", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\nwidget:\n - text: >-\n HGW2 woman sitting on a beige couch in a well-decorated room. She is wearing a light-colored, long-sleeved turtleneck top and has long, straight brown hair.\n output:\n url: woman2.mp4\n---\n\n# Hunyuan Lora Heygen Woman 2\n\n\n\nRun on Replicate at:\n\nhttps://replicate.com/lucataco/hunyuan-heygen-woman-2\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "lucataco/hunyuan-lora-heygen-woman", "base_model_relation": "finetune" }, { "model_id": "deepfates/hunyuan-beast", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Beast\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-beast", "base_model_relation": "base" }, { "model_id": "hashu786/cine", "gated": "False", "card": "---\ntags:\n - text-to-image\n - lora\n - diffusers\n - template:diffusion-lora\nwidget:\n- text: '-'\n output:\n url: images/81904488-4d18-47a6-986d-ec76535843a0.jpg\nbase_model: tencent/HunyuanVideo\ninstance_prompt: null\n\n---\n# cine\n\n\n\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/hashu786/cine/tree/main) them in the Files & versions tab.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hashu786/cine", "base_model_relation": "base" }, { "model_id": "AI-Anna/anime-renderer", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-anime-stills\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: anime\nwidget:\n- text: 'anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap'\n output:\n url: samples/ComfyUI_00017_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00024_.webp\n- text: ''\n output:\n url: samples/ComfyUI_00068_.webp\n- text: 'anime scene of a vibrant carnival with colorful rides, games, and food stalls, and a clown handing balloons to a group of laughing children.'\n output:\n url: samples/ComfyUI_00071_.webp\n---\n\n# **AnimaEngine**\n\n\n\n\n[v0.1 - testing version]\n\n\n\nAn anime-style lora trained on anime screencaps and illustrations, aimed to create vibrant, bright and colorful anime style motions. It's good at generating single person motions (and girls better than boys).\n\n\n\n## Usage\n\nTo use the lora (and to use HunyuanVideo in general) in ComfyUI, it's recommended to install the [VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite), and update to torch 2.5.1+cu124:\n\n```\npip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124\n```\n\n\n## Prompting\n\nUse prompts in the format of `anime ` to get the best results, use resolution `544x960` (and usually horizontal works a little bit better than vertical). for example:\n\n\n\n- anime girl with pink twin tails and green eyes, wearing a school uniform, holding a stack of books in a bustling library filled with sunlight streaming through tall windows.\n- anime boy with silver hair and blue eyes, wearing a casual hoodie, sitting on a park bench, feeding pigeons with a gentle smile.\n- anime girl 1girl, alcohol carton, blush, braid, bridge, crosswalk, dress, green dress, holding carton, long hair, long sleeves, multiple girls, night, open mouth, outdoors, pedestrian bridge, purple eyes, red hair, single braid, solo focus, spaghetti strap\n\n## Limitations\n\nIt's trained as a test model so sometimes when body movements are large it gets disconnected. Also some concepts are less anime-like compared to others. I do plan to update the model later with more training time and dataset.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "AI-Anna/anime-renderer", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-game-of-thrones", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Game Of Thrones\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-game-of-thrones", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-fargo", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Fargo\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-fargo", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-la-la-land", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan La La Land\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-la-la-land", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-blade-runner", "gated": "unknown", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Blade Runner\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "deepfates/hunyuan-pulp-fiction", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Pulp Fiction\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-pulp-fiction", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-blade-runner-2049", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Blade Runner 2049\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-blade-runner", "base_model_relation": "finetune" }, { "model_id": "deepfates/hunyuan-the-grand-budapest-hotel", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan The Grand Budapest Hotel\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-the-grand-budapest-hotel", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-twin-peaks", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Twin Peaks\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-twin-peaks", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-the-neverending-story", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan The Neverending Story\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-the-neverending-story", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-interstellar", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Interstellar\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-interstellar", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-pirates-of-the-caribbean", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Pirates Of The Caribbean\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-pirates-of-the-caribbean", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-once-upon-a-time-in-hollywood", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Once Upon A Time In Hollywood\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-once-upon-a-time-in-hollywood", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-dune", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Dune\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-dune", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-arcane", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Arcane\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-arcane", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-indiana-jones", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Indiana Jones\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-indiana-jones", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-joker", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Joker\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-joker", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-inception", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Inception\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-inception", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-her", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Her\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-her", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-westworld", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Westworld\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-westworld", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-atomic-blonde", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Atomic Blonde\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-atomic-blonde", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-avatar", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Avatar\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-avatar", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-cowboy-bebop", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Cowboy Bebop\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-cowboy-bebop", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-the-lord-of-the-rings", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan The Lord Of The Rings\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-the-lord-of-the-rings", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-mad-max-fury-road", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Mad Max Fury Road\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-mad-max-fury-road", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-pixar", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Pixar\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-pixar", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-the-matrix-trilogy", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan The Matrix Trilogy\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-the-matrix-trilogy", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-rrr", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Rrr\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-rrr", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-neon-genesis-evangelion", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Neon Genesis Evangelion\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-neon-genesis-evangelion", "base_model_relation": "base" }, { "model_id": "deepfates/hunyuan-spider-man-into-the-spider-verse", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hunyuan Spider Man Into The Spider Verse\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "deepfates/hunyuan-spider-man-into-the-spider-verse", "base_model_relation": "base" }, { "model_id": "neph1/hunyuan_night_graveyard", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\nlibrary_name: diffusers\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- template:diffusion-lora\n- finetrainers\n---\nMore info in this article: https://huggingface.co/blog/neph1/hunyuan-lora\nTrigger word: afkx\nExample prompt: \"A graveyard at night. The scene is shrouded in a thick fog, creating a dark and eerie atmosphere. Numerous tombstones are visible, their inscriptions barely discernible in the dim light. Two large trees stand tall in the foreground, their branches reaching out like skeletal arms. The sky is overcast with a full moon casting a pale glow over the scene. The overall impression is one of mystery and melancholy.\"\n\nTrained with finetrainers: https://github.com/a-r-r-o-w/finetrainers\nand https://github.com/neph1/finetrainers-ui\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/6WoSEn-bjL1cfDG4Cec4q.webp)\n\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/3eTD6JZqaoKvN5vUJ1Ld7.webp)", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "neph1/hunyuan_night_graveyard", "base_model_relation": "base" }, { "model_id": "CAWAI/celebdm", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Celebdm\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "CAWAI/celebdm", "base_model_relation": "base" }, { "model_id": "GetMonie/GawkToon", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gawktoon\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "GetMonie/GawkToon", "base_model_relation": "base" }, { "model_id": "hashu786/CineArc", "gated": "False", "card": "---\ntags:\n- text-to-image\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: '-'\n output:\n url: images/cineArc.gif\nbase_model: tencent/HunyuanVideo\ninstance_prompt: null\nlicense: apache-2.0\n---\n# CineArc\n\n\n\n## Model description \n\nAn experimental RND lora trained on 1 video to get the camera arc motion. \n\nFor best results use: \n720x480\n33 frames\nprompt includes: "camera arcs"\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/hashu786/CineArc/tree/main) them in the Files & versions tab.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hashu786/CineArc", "base_model_relation": "base" }, { "model_id": "Alched/brxdperf_hunyuan", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Brxdperf_Hunyuan\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Alched/brxdperf_hunyuan", "base_model_relation": "base" }, { "model_id": "hashu786/HYVReward", "gated": "False", "card": "---\ntags:\n- text-to-image\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: >-\n An extreme close-up of an gray-haired man with a beard in his 60s, he is\n deep in thought pondering the history of the universe as he sits at a cafe\n in Paris, his eyes focus on people offscreen as they walk as he sits mostly\n motionless, he is dressed in a wool coat suit coat with a button-down shirt\n , he wears a brown beret and glasses and has a very professorial appearance,\n and the end he offers a subtle closed-mouth smile as if he found the answer\n to the mystery of life, the lighting is very cinematic with the golden light\n and the Parisian streets and city in the background, depth of field,\n cinematic 35mm film.\n output:\n url: images/rewardComparison.gif\nbase_model: tencent/HunyuanVideo\ninstance_prompt: null\n---\n# HYVReward\n\n\n\n## Model description \n\nHPS and MPS Reward Loras:\n\nHPS should improve aesthetics while MPS should improve prompt adherence. This is a WIP and very experimental. \n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/hashu786/HYVReward/tree/main) them in the Files & versions tab.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hashu786/HYVReward", "base_model_relation": "base" }, { "model_id": "AlekseyCalvin/hyvid_YegorLetov_concert_LoRA", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hyvid_Yegorletov_Concert_Lora\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "AlekseyCalvin/hyvid_YegorLetov_concert_LoRA", "base_model_relation": "base" }, { "model_id": "blanflin/onstomach", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Onstomach\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "blanflin/onstomach", "base_model_relation": "base" }, { "model_id": "blanflin/Standingoverfemalebj", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Standingoverfemalebj\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "blanflin/Standingoverfemalebj", "base_model_relation": "base" }, { "model_id": "CCRss/hunyuan_lora_anime_akame", "gated": "False", "card": "---\nlanguage:\n- en\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- anime\n- hunyuan\n- akame\n- hunyuan-video\n- text-to-video\n- lora\n- diffusers\n- template:diffusion-lora\ninstance_prompt: Akame\nwidget:\n- text: >-\n Akame, close-up face shot with crimson eyes against red background, \n speaking brief words with neutral expression, transitioning into head \n movement leftward and slightly back, suggesting defensive motion, black \n hair following movement, high contrast lighting with red atmospheric \n glow behind, white collar visible at neck\n output:\n url: akame_v1_29short_clips/samples/id1_HunyuanVideo_00056.webp\n- text: >-\n Akame, close-up face shot against dark forest background, focused crimson eyes,\n mouth moving slightly as she speaks few words, pale skin contrasting with black hair,\n white collar visible, static shot maintaining same angle throughout brief dialogue, \n minimal animation focused only on lip movement\n output:\n url: akame_v1_29short_clips/samples/id2_HunyuanVideo_00057.webp\n- text: >-\n Akame, close-up face shot focusing on determined crimson eyes, \n speaking briefly while rotating Murasame's blade, steel catching \n moonlight creating subtle blue gleam, blade reflection mirroring her face,\n upper features gradually illuminated by blade's light, night atmosphere with \n soft lighting transition, minimal movement except for sword rotation\n output:\n url: akame_v1_29short_clips/samples/id3_HunyuanVideo_00058.webp\n---\n# Hunyuan Video Lora. Anime, Akame ga kill. Akame. v1\n\n\n\nMy first lora training. What questions I have:\n1. Which captions works the best? I followed structure like that: \"\"\"|tag|, |view|, |who + visual description|,|more precise view|\"\"\"\n2. What resolutions of video to use. I used [768, 480]. Is it better to have videos with different resolutions or unified?\n3. How to decide this value \"frame_buckets = [1, 16, 33, 65, 97, 129]\" I took this one because videos in dataset was from 0.6 sec to 4.93 sec. \n4. What is the \"video_clip_mode\"? I selected the multiple_overlapping but why this instead of others.\n5. What is more important our of those, if I want to improve quality of the lora:\n * A: collect more data;\n * B: make better captions;\n * C: only collect data for 1 task or a motion;\n6. Is it worth training lora with images and videos or only videos?\n7. It's hard to decide optimal inference parameters because there are a lot of knobs that you can change.\n\nIf someone have answers to questions above, I will be really happy to read them. \n\n## Description\nHunyuan Lora model trained on the short clips of Akame from the first episode of the anime, 29 clips in total with avg length: 2.16 sec. \n\nTrained using Diffusion-pipe repo. \n\n\n## Inference params.\n* lora_strength: 1.0\n* dtype: bfloat16 \n* resolution: [[768,480]] (width, heigth)\n* num_frames: 93\n* steps: 20 \n* embedded_guidance_scale: 9.00 *note I found that this value was good for my other lora so I used same here, I think it worth to experiment with;*\n* enhance video weight: 4.0 *note I think this parameter also can be adjusted and there are some other params in enhance video node.*\n\n## Data\n* amount: 29 clips from 0.6 to 4.93 sec.\n* avg_length: 2.16 sec\n\nData was collected manually using OpenShot program. \n\nIt took around 1 hour to collect 29 clips from 1 anime episode + 1 hour to create a captions for the clips using Sonnet 3.5 as a caption maker + manually correcting mistakes.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "CCRss/hunyuan_lora_anime_akame", "base_model_relation": "base" }, { "model_id": "boisterous/steak_hunyuan", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Steak_Hunyuan\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "boisterous/steak_hunyuan", "base_model_relation": "base" }, { "model_id": "Samsnake/LayonStomachBJ", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Layonstomachbj\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Samsnake/LayonStomachBJ", "base_model_relation": "base" }, { "model_id": "Samsnake/hqgwaktoon", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hqgwaktoon\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Samsnake/hqgwaktoon", "base_model_relation": "base" }, { "model_id": "hazc138/GL1", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gl1\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hazc138/GL1", "base_model_relation": "base" }, { "model_id": "istominvi/vswpntsbeige_16_16_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Vswpntsbeige_16_16_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/vswpntsbeige_16_16_32", "base_model_relation": "base" }, { "model_id": "istominvi/vswpntsbeige_30_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Vswpntsbeige_30_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/vswpntsbeige_30_8_32", "base_model_relation": "base" }, { "model_id": "istominvi/vswpntsbeige_50_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Vswpntsbeige_50_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/vswpntsbeige_50_8_32", "base_model_relation": "base" }, { "model_id": "hazc138/ZMGL", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Zmgl\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hazc138/ZMGL", "base_model_relation": "base" }, { "model_id": "hazc138/gl5", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gl5\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "hazc138/gl5", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/HEYGEN1-LORA", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Heygen1 Lora\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/HEYGEN1-LORA", "base_model_relation": "base" }, { "model_id": "Sergidev/IllustrationTTV", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- lora\n---\n\n### Huggingface implementation of Flat Color - Style by Motimalu on CivitAI\nhttps://civitai.com/models/1132089/flat-color-style?modelVersionId=1315010", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Sergidev/IllustrationTTV", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/heygen-epoch50", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Heygen Epoch50\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/heygen-epoch50", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/HeyGen-epoch16-autocaption", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Heygen Epoch16 Autocaption\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/HeyGen-epoch16-autocaption", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/Yash_c_epochs50_10seconds", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Yash_C_Epochs50_10Seconds\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/Yash_c_epochs50_10seconds", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/surya_10s_epoch50", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Surya_10S_Epoch50\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/surya_10s_epoch50", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/Yash_c_30seconds_epochs16", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Yash_C_30Seconds_Epochs16\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/Yash_c_30seconds_epochs16", "base_model_relation": "base" }, { "model_id": "BagOu22/Lora_HKLPAZ", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Lora_Hklpaz\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "BagOu22/Lora_HKLPAZ", "base_model_relation": "base" }, { "model_id": "Klindle/gawk_toon3000", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gawk_Toon3000\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Klindle/gawk_toon3000", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/Yash_c_16epochs_10seconds", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Yash_C_16Epochs_10Seconds\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/Yash_c_16epochs_10seconds", "base_model_relation": "base" }, { "model_id": "yashlanjewar20/16epochs_surya_10seconds", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# 16Epochs_Surya_10Seconds\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "yashlanjewar20/16epochs_surya_10seconds", "base_model_relation": "base" }, { "model_id": "ghej4u/flamingo", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Flamingo\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/flamingo", "base_model_relation": "base" }, { "model_id": "Alched/hv_dirty_panties_v1", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Hv_Dirty_Panties_V1\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "Alched/hv_dirty_panties_v1", "base_model_relation": "base" }, { "model_id": "ghej4u/oh", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Oh\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "ghej4u/oh", "base_model_relation": "base" }, { "model_id": "gulatiharsh/zinzanatrailer", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Zinzanatrailer\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "gulatiharsh/zinzanatrailer", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_16_4_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_16_4_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_16_4_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_32_4_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_32_4_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_32_4_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_64_4_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_64_4_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_64_4_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_128_4_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_128_4_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_128_4_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_256_4_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_256_4_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_256_4_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_16_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_16_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_16_8_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_32_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_32_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_32_8_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_64_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_64_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_64_8_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_128_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_128_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_128_8_32", "base_model_relation": "base" }, { "model_id": "istominvi/gocha_256_8_32", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Gocha_256_8_32\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "istominvi/gocha_256_8_32", "base_model_relation": "base" }, { "model_id": "StoyanG/lora-video-DrThompsonVet", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Lora Video Drthompsonvet\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "StoyanG/lora-video-DrThompsonVet", "base_model_relation": "base" }, { "model_id": "JoshuaMKerr/joshvideo", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: https://huggingface.co/tencent/HunyuanVideo/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- replicate\nbase_model: \"tencent/HunyuanVideo\"\npipeline_tag: text-to-video\n# widget:\n# - text: >-\n# prompt\n# output:\n# url: https://...\n---\n\n# Joshvideo\n\n\n\nTrained on Replicate using:\n\nhttps://replicate.com/zsxkib/hunyuan-video-lora/train\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "JoshuaMKerr/joshvideo", "base_model_relation": "base" }, { "model_id": "trojblue/HunyuanVideo-lora-PixelArt", "gated": "False", "card": "---\ntags:\n- text-to-image\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: >-\n pixel art, a rainy day scene of a cat-eared anime girl with a transparent\n umbrella, standing at a train crossing, her reflection shimmering in the\n puddles as a train rushes by.\n output:\n url: images/ComfyUI_00021_.webp\n- text: >-\n pixel art, a pixelated scene featuring loli white-haired girl with cat ears \n blue eyes and white cat tails. She's dancing to the rhythm of wind, with her\n tail waing.\n output:\n url: images/ComfyUI_00006_.webp\n- text: >-\n pixel art, a pixelated mountaintop at dawn with a lone boy in a scarf\n standing at the edge, arms outstretched as the wind carries cherry petals\n past him, a red kite fluttering behind.\n output:\n url: images/ComfyUI_00024_.webp\n- text: >-\n pixel art, a peaceful farm landscape with a scarecrow slightly askew in the\n middle of a golden wheat field, crows perched on its arms as the wind moves\n the grain in waves.\n output:\n url: images/ComfyUI_00034_.webp\nbase_model: tencent/HunyuanVideo\ninstance_prompt: pixel art\nlicense: mit\ndatasets:\n- trojblue/test-HunyuanVideo-pixelart-videos\n- trojblue/test-HunyuanVideo-anime-images\n---\n# Hunyuan Video Lora - PixelArt\n\n\n\n## Model description \n\n# Hunyuan Video LoRA - PixelArt\n\n\n\n**(Model card WIP; subject to updates today or tomorrow)**\n\n\n\n[v1.0]:\n\nThis LoRA brings an **anime-inspired pixel art style** to life, trained on a mix of pixel animations and still images. It\u2019s designed to generate vibrant, colorful anime-style pixel art, excelling at **character motions** and **pixelated scenery**. Think bright, dynamic visuals with that classic retro charm.\n\n\n\n## Usage\n\nA sample workflow is available in the Hugging Face repo:\n\n-[ [v0.1/ComfyUI_00024_.webp \u00b7 trojblue/HunyuanVideo-lora-AnimeShot\\]](https://huggingface.co/trojblue/HunyuanVideo-lora-AnimeShot/blob/main/v0.1/ComfyUI_00024_.webp)\n\n\\- For default node compatibility, update ComfyUI to the latest commit. More details here: [[HunyuanVideo Native Support in ComfyUI - by Jo Zhang\\]](https://blog.comfy.org/p/hunyuanvideo-native-support-in-comfyui).\n\n\n\n## Configs\n\nTrained on a dynamic aspect ratio at **~768 resolution**, this LoRA has been tested at resolutions:\n\n- 768x768\n\n- 656x880\n\n- 880x656\n\nIt\u2019s flexible with step counts, working effectively at:\n\n- 1 step\n\n- 33 steps\n\n- 65 steps\n\n- 97 steps\n\nFeel free to tweak these settings to find your sweet spot!\n\n\n\n## Prompting\n\nStart prompts with `pixel art, <description>`. The training data used **natural language captions** (1-3 sentences), so descriptive, sentence-style prompts tend to yield the best results. Here are the prompts behind the sample images:\n\n```\npixel art, a pixelated scene showing a fox-eared anime boy with fiery red hair and golden eyes, sitting on a torii gate at sunset, gently playing a bamboo flute as cherry blossoms float in the air.\n\npixel art, a pixelated image of a silver-haired shrine maiden with glowing violet eyes, sweeping the temple courtyard under the full moon, with soft lantern light and petals drifting by.\n\npixel art, a pixel scene of a pink-haired demon loli with tiny horns, roasting marshmallows over a campfire in a haunted forest, surrounded by glowing ghost friends.\n\npixel art, a rainy day scene of a cat-eared anime girl with a transparent umbrella, standing at a train crossing, her reflection shimmering in the puddles as a train rushes by.\n\npixel art, a rainy day scene of a cat-eared anime girl with a transparent umbrella, standing at a train crossing, her reflection shimmering in the puddles as a train rushes by.\n\npixel art, a pixelated mountaintop at dawn with a lone boy in a scarf standing at the edge, arms outstretched as the wind carries cherry petals past him, a red kite fluttering behind.\n\npixel art, a pixelated scene of a frog in a wizard hat stirring a bubbling cauldron in the middle of a mushroom forest, fireflies glowing around him like sparks of magic.\n\npixel art, a pixelated alleyway in a quiet neon-lit city, a boy with silver hair feeding a stray black cat from a bento box, both bathed in soft vending machine light.\n\npixel art, a peaceful farm landscape with a scarecrow slightly askew in the middle of a golden wheat field, crows perched on its arms as the wind moves the grain in waves.\n\npixel art, a quiet underwater scene where a catfish wearing a crown floats lazily above a coral throne, tiny sea creatures circling like royal attendants.\n```\n\n\n\n## Limitations\n\nThe dataset leans heavily on **anime characters and scenery**, so prompts outside this scope might produce less pixelated or lower-quality results. I may experiment with WAN to improve this later, but for now, this is the Hunyuan version as it stands.\n\nTraining wrapped up around January\u2014I just didn\u2019t get around to posting it sooner (my bad!). As a result, the inference setup hasn\u2019t been updated to the latest Hunyuan Video best practices. Checking recent guidelines could help optimize your results.\n\n\n\n## Updates\n\nfeel free to follow me on twitter for model updates and stuff: [[yada (@yada_cc) / X\\]](https://x.com/yada_cc)\n\n## Trigger words\n\nYou should use `pixel art` to trigger the image generation.\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/trojblue/HunyuanVideo-lora-PixelArt/tree/main) them in the Files & versions tab.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [ "wangyooo/studio_ghibli_hv_v03_19", "aiartxx/Dayana" ], "adapters_count": 2, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "trojblue/HunyuanVideo-lora-PixelArt", "base_model_relation": "base" }, { "model_id": "neph1/1980s_Fantasy_Movies_Hunyuan_Video_Lora", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- template:diffusion-lora\n- finetrainers\nlibrary_name: diffusers\n---\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/JNxFXxz5rccuOMOncvLAZ.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/5Qut5wgWZp-QboofkDxLr.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/KkXk9_dPh9zp-qwXTXtn1.webp)\n\nhttps://civitai.com/models/1386261/1980s-fantasy-movies-hunyuanvideo-lora\n\nTrained with https://github.com/a-r-r-o-w/finetrainers and https://github.com/neph1/finetrainers-ui", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "neph1/1980s_Fantasy_Movies_Hunyuan_Video_Lora", "base_model_relation": "base" }, { "model_id": "neph1/1920s_horror_hunyuan_video_lora", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- template:diffusion-lora\n- finetrainers\n---\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/0ZNfMREgMn04q7gabLQ6F.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/ucqeg9ka-0eQnc8txoPKD.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/p6MXe4Q9ILRqmAUPLEs9e.webp)\n\nhttps://civitai.com/models/1371819/1920s-horror-hunyuanvideo-lora\n\nTrained with https://github.com/a-r-r-o-w/finetrainers and https://github.com/neph1/finetrainers-ui", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "neph1/1920s_horror_hunyuan_video_lora", "base_model_relation": "base" }, { "model_id": "neph1/50s_scifi_hunyuan_video_lora", "gated": "False", "card": "---\nbase_model:\n- tencent/HunyuanVideo\npipeline_tag: text-to-video\ntags:\n- hunyuan\n- hunyuan-video\n- hunyuan-lora\n- lora\n- template:diffusion-lora\n- finetrainers\n---\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/zLm1IKq3kToMdwDXAR-jh.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/mOaKyJqwi1S8Clf-1bu5f.webp)\n\n![image/webp](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/wzwAgoK9mtAFI1Qxkq9xZ.webp)\n\nhttps://civitai.com/models/1359530/50s-scifi-hunyuan-video-lora\n\nTrained with https://github.com/a-r-r-o-w/finetrainers and https://github.com/neph1/finetrainers-ui", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "neph1/50s_scifi_hunyuan_video_lora", "base_model_relation": "base" }, { "model_id": "strangerzonehf/Hunyuan-t2v-Cartoon-LoRA", "gated": "unknown", "card": "---\ntags:\n- text-to-image\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: 'cartoon motion real girl with messy bun and sparkling eyes, walking joyfully down a sunny, ivy-covered alley, holding a red heart. Striped dress, matching socks and sneakers. Soft sunlight, dreamy Pixar style, warm and whimsical atmosphere, cinematic depth of field.'\n output:\n url: videos/1.mp4\nbase_model: tencent/HunyuanVideo\ninstance_prompt: cartoon motion, real\nlicense: apache-2.0\npipeline_tag: text-to-video\n---\n\n\n\n# Hunyuan-t2v-Cartoon-LoRA\n\n**Hunyuan-t2v-Cartoon-LoRA** is a text-to-video adapter fine-tuned using human preferences. It is based on the tencent/HunyuanVideo text-to-video model and has been trained with 66 cartoon-style entries over 2,100 steps across 11 epochs.\n\n### Limitations\n\n* Not suitable for complex indoor scenes\n* Not recommended for robotic or animal characters\n\n## Trigger words\n\nYou should use `cartoon motion` to trigger the image generation.\n\nYou should use `real` to trigger the image generation.\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/strangerzonehf/Hunyuan-t2v-Cartoon-LoRA/tree/main) them in the Files & versions tab.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": null, "base_model_relation": null }, { "model_id": "city96/HunyuanVideo-gguf", "gated": "False", "card": "---\nbase_model: tencent/HunyuanVideo\nlibrary_name: gguf\nquantized_by: city96\ntags:\n- text-to-video\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: LICENSE.md\n---\nThis is a direct GGUF conversion of [tencent/HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo)\n\n**It is intended to be used with the native, built-in ComfyUI HunyuanVideo nodes**\n\nAs this is a quantized model not a finetune, all the same restrictions/original license terms still apply.\n\nThe model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node.\n\nPlace model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.\n\nThe VAE can be downloaded from [this repository by Kijai](https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/hunyuan_video_vae_bf16.safetensors)\n\nPlease refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "city96/HunyuanVideo-gguf", "base_model_relation": "base" }, { "model_id": "kohya-ss/HunyuanVideo-fp8_e4m3fn-unofficial", "gated": "False", "card": "---\nlicense: other\nlicense_name: tencent-hunyuan-community\nlicense_link: LICENSE\nbase_model_relation: quantized\ntags:\n - text-to-video\nbase_model:\n- tencent/HunyuanVideo\n---\n\nThis is a direct float8_e4m3fn conversion of [tencent/HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo).\n\nThis project follows the Tencent Hunyuan Community License Agreement. Please see LICENSE for details.\n\nThis is intended to be used with [Musubi Tuner](https://github.com/kohya-ss/musubi-tuner)'s `--fp8_base` training.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "tencent/HunyuanVideo" ], "base_model": "kohya-ss/HunyuanVideo-fp8_e4m3fn-unofficial", "base_model_relation": "base" }, { "model_id": "lucataco/hunyuan-lora-melty-test-3", "gated": "False", "card": "---\ntags:\n- text-to-video\n- lora\n- diffusers\n- replicate\nbase_model:\n- hunyuanvideo-community/HunyuanVideo\ninstance_prompt: melty\nlicense: mit\n---\n# Hunyuan Video Lora - melty\nTrained on Replicate via [a-r-r-o-w/finetrainers](https://github.com/a-r-r-o-w/finetrainers):\n\n[replicate.com/lucataco/hunyuanvideo-lora-trainer](https://replicate.com/lucataco/hunyuanvideo-lora-trainer)", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "hunyuanvideo-community/HunyuanVideo" ], "base_model": "lucataco/hunyuan-lora-melty-test", "base_model_relation": "finetune" }, { "model_id": "swyne/breast-growth", "gated": "False", "card": "---\ntags:\n- text-to-image\n- lora\n- diffusers\n- template:diffusion-lora\nwidget:\n- text: breast growth, hatsune miku with growing breasts.\n output:\n url: images/videoframe_1162.png\nbase_model: hunyuanvideo-community/HunyuanVideo\ninstance_prompt: breast growth, woman with growing breasts.\nlicense: apache-2.0\n---\n# breast-growth\n\n\n\n## Model description \n\nTrying to train LoRa with https://github.com/a-r-r-o-w/finetrainers\n\nThe following keywords are used in the prompt:\nbreast growth, woman with growing breasts.\n\nExample of a prompt:\n\nbreast growth, Anime-style digital animation of a cute, busty woman with growing breasts and short brown hair, green eyes, and large wings. She wears a frilly white and pink dress with a star pendant, holding a wand. Background is pastel pink with sparkles.\n\n## Trigger words\n\nYou should use `breast growth` to trigger the image generation.\n\nYou should use `woman with growing breasts.` to trigger the image generation.\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/swyne/breast-growth/tree/main) them in the Files & versions tab.\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "hunyuanvideo-community/HunyuanVideo" ], "base_model": "swyne/breast-growth", "base_model_relation": "base" }, { "model_id": "wooyeolbaek/finetuned_models_debug2", "gated": "False", "card": "---\nbase_model: hunyuanvideo-community/HunyuanVideo\nlibrary_name: diffusers\nwidget: []\ntags:\n- text-to-video\n- diffusers-training\n- diffusers\n- finetrainers\n- template:sd-lora\n- lora\n---\n\n\n\n\n# LoRA Finetune\n\n\n\n## Model description\n\nThis is a lora finetune of model: `hunyuanvideo-community/HunyuanVideo`.\n\nThe model was trained using [`finetrainers`](https://github.com/a-r-r-o-w/finetrainers).\n\n`id_token` used: afkx (if it's not `None`, it should be used in the prompts.)\n\n## Download model\n\n[Download LoRA](wooyeolbaek/finetuned_models_debug2/tree/main) in the Files & Versions tab.\n\n## Usage\n\nRequires the [\ud83e\udde8 Diffusers library](https://github.com/huggingface/diffusers) installed.\n\n```py\nTODO\n```\n\nFor more details, including weighting, merging and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.\n\n\n## Intended uses & limitations\n\n#### How to use\n\n```python\n# TODO: add an example code snippet for running this diffusion pipeline\n```\n\n#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]\n\n## Training details\n\n[TODO: describe the data used to train the model]", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "hunyuanvideo-community/HunyuanVideo" ], "base_model": "wooyeolbaek/finetuned_models_debug2", "base_model_relation": "base" }, { "model_id": "wooyeolbaek/finetuned_models_videojam_debug2", "gated": "False", "card": "---\nbase_model: hunyuanvideo-community/HunyuanVideo\nlibrary_name: diffusers\nwidget: []\ntags:\n- text-to-video\n- diffusers-training\n- diffusers\n- finetrainers\n- template:sd-lora\n- lora\n---\n\n\n\n\n# LoRA Finetune\n\n\n\n## Model description\n\nThis is a lora finetune of model: `hunyuanvideo-community/HunyuanVideo`.\n\nThe model was trained using [`finetrainers`](https://github.com/a-r-r-o-w/finetrainers).\n\n`id_token` used: afkx (if it's not `None`, it should be used in the prompts.)\n\n## Download model\n\n[Download LoRA](wooyeolbaek/finetuned_models_videojam_debug2/tree/main) in the Files & Versions tab.\n\n## Usage\n\nRequires the [\ud83e\udde8 Diffusers library](https://github.com/huggingface/diffusers) installed.\n\n```py\nTODO\n```\n\nFor more details, including weighting, merging and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.\n\n\n## Intended uses & limitations\n\n#### How to use\n\n```python\n# TODO: add an example code snippet for running this diffusion pipeline\n```\n\n#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]\n\n## Training details\n\n[TODO: describe the data used to train the model]", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "hunyuanvideo-community/HunyuanVideo" ], "base_model": "wooyeolbaek/finetuned_models_videojam_debug2", "base_model_relation": "base" }, { "model_id": "jbilcke-hf/SkyReels-V1-Hunyuan-I2V-HFIE", "gated": "False", "card": "---\nlanguage:\n- en\nbase_model:\n- Skywork/SkyReels-V1-Hunyuan-I2V\npipeline_tag: text-to-video\nlibrary_name: diffusers\ntags:\n- SkyReels-V1-Hunyuan\n- SkyReels-V1-Hunyuan-I2V\n- Skywork\n- HunyuanVideo\n- Tencent\n- Video\nlicense: other\nlicense_link: \"https://github.com/SkyworkAI/SkyReels-V1/blob/main/LICENSE.txt\"\n---\n\nThis model is [SkyReels-V1-Hunyuan-I2V](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V) adapted to run on the Hugging Face Inference Endpoints.\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Skywork/SkyReels-V1-Hunyuan-I2V" ], "base_model": "jbilcke-hf/SkyReels-V1-Hunyuan-I2V-HFIE", "base_model_relation": "base" }, { "model_id": "wangyooo/studio_ghibli_hv_v03_19", "gated": "unknown", "card": "---\ntags:\n - text-to-image\n - lora\n - diffusers\n - template:diffusion-lora\nwidget:\n- text: '-'\n output:\n url: images/Screenshot 2025-06-03 010649.png\nbase_model: trojblue/HunyuanVideo-lora-PixelArt\ninstance_prompt: null\n\n---\n# studio_ghibli_hv_v03_19\n\n\n\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/wangyooo/studio_ghibli_hv_v03_19/tree/main) them in the Files & versions tab.\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "trojblue/HunyuanVideo-lora-PixelArt" ], "base_model": null, "base_model_relation": null }, { "model_id": "aiartxx/Dayana", "gated": "unknown", "card": "---\ntags:\n - text-to-image\n - lora\n - diffusers\n - template:diffusion-lora\nwidget:\n- text: '-'\n output:\n url: images/7338716f-fd9c-450e-a3e1-e66d61c729ca.jpg\nbase_model: trojblue/HunyuanVideo-lora-PixelArt\ninstance_prompt: null\n\n---\n# HunyuanVideo\n\n\n\n## Model description \n\nhunyuan video t2v\n\nhttps://cdn-uploads.huggingface.co/production/uploads/666bcf92b08412278c6a968b/3nOS3HT-6ja-9w9I5FUGy.mp4\nhttps://cdn-uploads.huggingface.co/production/uploads/666bcf92b08412278c6a968b/KHvp45UCbSjwV4oEqucAu.mp4\nhttps://cdn-uploads.huggingface.co/production/uploads/666bcf92b08412278c6a968b/xSQ3ixUuf7vKtaTkIkQhf.mp4\nhttps://cdn-uploads.huggingface.co/production/uploads/666bcf92b08412278c6a968b/FXs3UmtYAfcWuB7j3LLWM.mp4\n\n\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/aiartxx/Dayana/tree/main) them in the Files & versions tab.\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "trojblue/HunyuanVideo-lora-PixelArt" ], "base_model": null, "base_model_relation": null } ] }