Beauty of rain [Wan 2.1/2.2]

  • Creator: Mantissa_Hub
  • Type: LORA
  • Base Model: Wan Video 2.2 TI2V-5B
  • Version: wan 2.2 ti2v-5B
  • Trigger Words: b3@ut1f0ll_r@in

  • Civitai Model ID: 1747192
  • Civitai Version ID: 2179119

Stats (at time of fetch for this version):

  • Downloads: 81
  • Rating: 0 (0 ratings)
  • Favorites: N/A

πŸ“„ Description (Parent Model)

You can find detailed information about the versions in these articles: [Wan 2.1 T2V-14B] [Wan 2.2 TI2V-5B]

Version Notes (wan 2.2 ti2v-5B)

Training Details

dataset.toml:

  resolutions = [[1280, 704]]
  enable_ar_bucket = true
  min_ar = 0.5
  max_ar = 2.0
  num_ar_buckets = 7
  ar_buckets = [[1280, 704]]
  frame_buckets = [1, 24, 46, 81]

  [[directory]]
  path = "/home/user/beauty_of_rain_dataset/videos"
  num_repeats = 4

train.toml:

  output_dir = "/home/user/beauty_of_rain_dataset/5B"
  dataset = "/home/user/beauty_of_rain_5B.toml"
  epochs = 120
  micro_batch_size_per_gpu = 1
  pipeline_stages = 1
  gradient_accumulation_steps = 1
  gradient_clipping = 1
  warmup_steps = 100
  eval_every_n_epochs = 1
  eval_before_first_step = true
  eval_micro_batch_size_per_gpu = 1
  eval_gradient_accumulation_steps = 1
  save_every_n_epochs = 12
  activation_checkpointing = true
  partition_method = "parameters"
  save_dtype = "bfloat16"
  caching_batch_size = 1
  steps_per_print = 10
  video_clip_mode = "single_beginning"
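The aspect-ratio bucket settings above (min_ar = 0.5, max_ar = 2.0, num_ar_buckets = 7) can be illustrated with a short sketch. This is only a plausible reading of how such bucketing works, not the trainer's actual code; the spacing (log-uniform) and the helper names are assumptions.

```python
import math

def make_ar_buckets(min_ar: float, max_ar: float, n: int) -> list[float]:
    """Log-spaced aspect ratios from min_ar to max_ar (assumed spacing)."""
    lo, hi = math.log(min_ar), math.log(max_ar)
    return [math.exp(lo + i * (hi - lo) / (n - 1)) for i in range(n)]

def nearest_bucket(width: int, height: int, buckets: list[float]) -> float:
    """Snap a clip's aspect ratio to the closest bucket (in log space)."""
    ar = width / height
    return min(buckets, key=lambda b: abs(math.log(ar) - math.log(b)))

buckets = make_ar_buckets(0.5, 2.0, 7)
print([round(b, 3) for b in buckets])
# A 1280x704 clip (AR ~1.82) snaps to the widest bucket:
print(nearest_bucket(1280, 704, buckets))
```

Note that the config also pins ar_buckets = [[1280, 704]] explicitly, matching the single training resolution, so in practice every clip lands in that one bucket.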

  [model]
  type = "wan"
  ckpt_path = "/home/user/Wan2.2-TI2V-5B"
  dtype = "bfloat16"
  transformer_dtype = "float8"
  timestep_sample_method = "logit_normal"

  [adapter]
  type = "lora"
  rank = 32
  dtype = "bfloat16"
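rank = 32 means each adapted weight matrix W (shape d_out × d_in) gets a low-rank update B·A with A of shape rank × d_in and B of shape d_out × rank, so a LoRA pair adds rank · (d_in + d_out) parameters. A quick back-of-the-envelope check (the d_model = 3072 projection size is hypothetical for illustration; the actual Wan 2.2 TI2V-5B layer dimensions are not given in this card):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Parameters added by a LoRA pair (A: rank x d_in, B: d_out x rank)."""
    return rank * (d_in + d_out)

# Hypothetical square attention projection, for illustration only.
d_model = 3072
full = d_model * d_model                     # frozen base weight
lora = lora_param_count(d_model, d_model, 32)
print(lora, f"{lora / full:.2%} of the base layer")
```

This small per-layer fraction is consistent with the file weighing in at ~154 MB against a 5B-parameter base model.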

  [optimizer]
  type = "adamw_optimi"
  lr = 8e-5
  betas = [0.9, 0.99]
  weight_decay = 0.01
  eps = 1e-8
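The optimizer block is standard AdamW with decoupled weight decay. A minimal scalar sketch of one update step using these hyperparameters (this illustrates the update rule, not the adamw_optimi implementation; the toy quadratic and the larger demo learning rate are assumptions for the example):

```python
import math

def adamw_step(p, grad, m, v, t, lr=8e-5, betas=(0.9, 0.99),
               eps=1e-8, weight_decay=0.01):
    """One decoupled-weight-decay AdamW update on a scalar parameter."""
    m = betas[0] * m + (1 - betas[0]) * grad
    v = betas[1] * v + (1 - betas[1]) * grad * grad
    m_hat = m / (1 - betas[0] ** t)          # bias correction
    v_hat = v / (1 - betas[1] ** t)
    p = p - lr * weight_decay * p            # decay applied to p, not grad
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v

# Toy check: minimize f(p) = (p - 1)^2 from p = 0
# (lr raised to 0.01 so the demo converges in few steps).
p, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * (p - 1.0)
    p, m, v = adamw_step(p, grad, m, v, t, lr=0.01)
print(round(p, 3))
```

The "decoupled" part is the key design choice: weight decay shrinks the parameter directly rather than being folded into the gradient, so it is not rescaled by the adaptive second-moment term.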


File Information

  • Filename: beauty_of_rain_wan2_2_ti2v_5B.safetensors
  • Size: 153.82 MB
  • Hash (AutoV2): D1E74FE4EA
  • Hash (SHA256): D1E74FE4EA9420DA4725AFC6617D5F93C4AFC8963E83609954337A0CD3ACA1A9
Downloads last month: 5

Included in collection: UnifiedHorusRA/Beauty_of_rain__Wan_2.1_2.2