---
tags:
- lora
- diffusers
- template:diffusion-lora
- image-to-video
- i2v


widget:
- output:
    url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/wELqO6i8Hc_ZxjUbmhfqs.mp4
  text: '-'
- output:
    url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/uHMdR_l6NTjJvCujpLsBv.mp4
  text: '-'
- output:
    url: https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/ubGisPVxx6txg82hbeHNV.mp4
  text: '-'
  
base_model:
- Wan-AI/Wan2.2-I2V-A14B
instance_prompt: null
---
# Quick Cuts LoRA for Wan2.2 I2V 14B

Mirror of: https://civitai.com/models/2113025/cinematic-quick-cuts

This LoRA is trained on "quick cuts", an editing technique that tells the story of a whole scene in a couple of seconds. I figured it would suit the constrained context window that local video producers have to work with.

Consider it experimental, as the dataset is quite limited at the moment.

It's trained on shot concepts like "wide-angle shot", "mid-shot", "close-up shot", and "extreme close-up shot" (which is often used for quick cuts).

The format is:

```
A series of quick cuts:

[shot one]

[shot two]

[shot three]

...
```

Each cut gets a description of roughly one sentence. You may also specify the camera angle.
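As a small sketch, the format above can be assembled with a helper like this (the function name and the example shot descriptions are my own, not part of the training data):

```python
def quick_cuts_prompt(shots):
    """Build a quick-cuts prompt: a header line, then one short
    description per cut, separated by blank lines."""
    return "A series of quick cuts:\n\n" + "\n\n".join(shots)


# Hypothetical example with three cuts (3-5 is the trained range).
prompt = quick_cuts_prompt([
    "Wide-angle shot of a rain-soaked street at night.",
    "Mid-shot of a figure stepping out of a doorway.",
    "Extreme close-up shot of water dripping from a hat brim.",
])
```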

It was trained on 3 to 5 shots over a very short duration. Going for the full 81 frames might make the effect lose strength.

Tested with both T2V and I2V (but trained on I2V).

Only the high-noise model is required.
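A minimal loading sketch, assuming a recent `diffusers` build with Wan2.2 support; the diffusers-format base repo id, the LoRA repo id, and the weight filename below are placeholders to adjust for your setup:

```python
import torch
from diffusers import WanImageToVideoPipeline

# Wan2.2 I2V A14B base (diffusers-format repo id assumed).
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
)

# Load this LoRA; per the note above, it only needs to affect the
# high-noise expert. Repo id and filename here are placeholders.
pipe.load_lora_weights("<this-lora-repo>", weight_name="quick_cuts.safetensors")
pipe.to("cuda")
```

This is a setup fragment, not a full generation script; pass your start image and a quick-cuts prompt to `pipe(...)` as with any Wan I2V run.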