# NEXUS-VideoModel v1.1 (Fine-tuned on Real Videos)

Fine-tuned from `amewebstudio/nexus-videomodel-v1.1` on real video data.
## Training Results

| Metric | Value |
|---|---|
| Best Loss | 1.0267 |
| Best Coherence | 0.769 |
| Epochs | 20 |
## Model Evolution (Neurogenesis)

| Component | Before | After | Change |
|---|---|---|---|
| Neurons | 149 | 156 | +7 |
| Experts | 24 | 24 | +0 |
| Parameters | 98,063,470 | 98,070,652 | +7,182 |
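The parameter growth in the table is consistent with the neuron growth. A quick sanity check of the arithmetic (assuming each new neuron contributes an equal share of the new parameters):

```python
# Sanity-check the neurogenesis numbers from the table above.
params_before, params_after = 98_063_470, 98_070_652
neurons_before, neurons_after = 149, 156

param_delta = params_after - params_before    # new parameters added
neuron_delta = neurons_after - neurons_before # new neurons added

print(param_delta)                   # 7182
print(param_delta // neuron_delta)   # 1026 parameters per new neuron
```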
## Usage

```python
from huggingface_hub import snapshot_download
import sys

# Download the model repository and make its custom modeling code importable
model_dir = snapshot_download("amewebstudio/nexus-videomodel-v1.1-finetuned")
sys.path.insert(0, model_dir)

from modeling_nexus import NexusVideoModel

# Reconstruct the (dynamically grown) model from the saved config and weights
model = NexusVideoModel.from_pretrained(model_dir)
video = model.generate(n_frames=16, temperature=0.5)
```
## Important: Dynamic Architecture

This model uses neurogenesis: it grows new neurons dynamically during training.
The `config.json` contains a `_dynamic_state` entry with the exact dimensions
`from_pretrained()` needs to reconstruct the grown model correctly.
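As a rough illustration of why `_dynamic_state` matters, a loader can read the grown dimensions back from the config before building layers. This is a hypothetical sketch: the field names inside `_dynamic_state` (`n_neurons`, `n_experts`) are illustrative assumptions, not the documented schema; only the `_dynamic_state` key itself is described above.

```python
import json

# Simulated config.json content; in practice this would be read from
# the downloaded model directory. The inner field names are assumptions.
config_json = """
{
  "model_type": "nexus_video",
  "_dynamic_state": {"n_neurons": 156, "n_experts": 24}
}
"""

config = json.loads(config_json)
state = config["_dynamic_state"]

# A loader would size its layers from these values rather than from
# static defaults, so the grown architecture is rebuilt exactly.
print(state["n_neurons"], state["n_experts"])  # 156 24
```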
## Author

Mike Amega - Ame Web Studio