tags:
- diffusion
---
# Fine-Tuning Mochi-Sota Text-to-Video: Jinx LoRA Test
This project demonstrates the fine-tuning of the **Mochi-Sota Text-to-Video** model using a LoRA (Low-Rank Adaptation) approach, focusing on the character **Jinx** from the *League of Legends* universe. The goal was to adapt the model to generate dynamic, character-specific video sequences with consistent visual and motion styles.
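Since the whole card hinges on LoRA, a minimal sketch of the core idea may help: a frozen weight matrix `W` is augmented with a trained low-rank update `(alpha/r)·BA`, and because `B` is initialized to zero, the adapted layer initially matches the base model exactly. This is plain Python with illustrative shapes, not the actual Mochi training code:

```python
# Minimal LoRA sketch: y = W x + (alpha / r) * B (A x).
# W stays frozen; only the low-rank factors A (r x d_in) and B (d_out x r)
# are trained. With B zero-initialized, the adapted layer reproduces the
# base layer, so fine-tuning starts exactly from the pretrained model.

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """Forward pass of a LoRA-adapted linear layer."""
    r = len(A)                          # rank = number of rows of A
    base = matvec(W, x)                 # frozen base-model output
    update = matvec(B, matvec(A, x))    # low-rank correction B(Ax)
    return [b + (alpha / r) * u for b, u in zip(base, update)]

W = [[1.0, 2.0], [3.0, 4.0]]            # frozen 2x2 base weight
A = [[0.5, -0.5]]                       # rank-1 down-projection (1 x 2)
B_zero = [[0.0], [0.0]]                 # up-projection, zero-initialized
x = [1.0, 1.0]

print(lora_forward(W, A, B_zero, x))    # identical to the base layer: [3.0, 7.0]
```

Only `A` and `B` receive gradients during fine-tuning, which is why a 5-hour run on one H100 can specialize a large video model without touching its core weights.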
## Training Details
- **Model Base**: Mochi-Sota Text-to-Video
- **Fine-Tuning Dataset**: 14 short video clips of Jinx
- **Frame Selection**: 61 frames extracted from the videos
- **Training Hardware**: H100 GPU
- **Training Duration**: 5 hours

This fine-tuning process leverages LoRA to efficiently adapt the model while preserving the core capabilities of the base model.

---
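The 61 training frames were presumably sampled across the 14 clips; one way to sketch that selection is to pick evenly spaced frame indices per clip and hand them to ffmpeg's `select` filter. The helper names, clip paths, and the ffmpeg invocation below are illustrative assumptions, not the project's actual pipeline:

```python
# Sketch: choose evenly spaced frame indices from a clip, then build
# (but do not run) an ffmpeg command that extracts exactly those frames.

def evenly_spaced_indices(total_frames: int, n_samples: int) -> list[int]:
    """Return n_samples frame indices spread evenly across total_frames."""
    if n_samples >= total_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (n_samples - 1)
    return [round(i * step) for i in range(n_samples)]

def ffmpeg_select_command(clip: str, indices: list[int], out_pattern: str) -> list[str]:
    """Assemble an ffmpeg command using the select filter for given frames."""
    select = "+".join(f"eq(n\\,{i})" for i in indices)
    return [
        "ffmpeg", "-i", clip,
        "-vf", f"select='{select}'",
        "-vsync", "vfr",            # emit only the selected frames
        out_pattern,
    ]

indices = evenly_spaced_indices(total_frames=240, n_samples=5)
print(indices)                       # indices spread evenly across the clip
cmd = ffmpeg_select_command("jinx_clip_01.mp4", indices, "frame_%03d.png")
print(" ".join(cmd))
```

Sampling evenly rather than taking consecutive frames gives the LoRA more pose and lighting variety per clip, which matters with only 14 source videos.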
## Results
Below is an example of the generated video output:
### **Sample Description**
*Jinx sprints through a dimly lit alley, her vibrant blue hair trailing behind her. She clutches a small, bulging sack tightly against her chest. Dressed in a dark crop top and boots, she moves with chaotic energy, her boots thudding loudly on the pavement. Her mischievous grin flashes briefly as she glances back, her pace never faltering.*
### **Generated Sample**
[**v1/samples 0_3200.mp4**](<v1/samples 0_3200.mp4>)

---