# Enochiatron Training Run

## Training Configuration
```yaml
_hf_repo: timotheospaul/enochiatron-lora
_run_name: run-01
alpha: 32
audio_caption_dropout: 0.3
audio_lr_scale: 0.5
batch_size: 1
checkpoint_every: 100
compute_dtype: bfloat16
cost_guardrail_usd: 50.0
data_root: /workspace/encoded
ema: False
epochs: 1
gradient_accumulation_steps: 1
hourly_rate_usd: 1.64
keep_checkpoints: 10
learning_rate: 0.0001
lora_dropout: 0.0
max_grad_norm: 1.0
max_steps: None
mode: video-only
model_path: /workspace/models/ltx-2.3/ltx-2.3-22b-dev.safetensors
optimizer: prodigy
output_dir: /workspace/output
rank: 32
sample_every: 50
sample_prompt: enochiatron, a digital figure stands on a glowing grid
sample_steps: 10
seed: 42
tensorboard_enabled: True
wandb_enabled: True
wandb_project: enochiatron-training
weight_decay: 0.01
```
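A couple of quantities follow directly from this configuration: the standard LoRA scaling factor `alpha / rank`, and the maximum run length implied by `cost_guardrail_usd` and `hourly_rate_usd`. The sketch below is illustrative only; the helper names are hypothetical and not part of the training pipeline.

```python
# Illustrative helpers (hypothetical names, not pipeline code) that derive
# values implied by the configuration above.

def lora_scale(alpha: float, rank: int) -> float:
    """Standard LoRA scaling factor applied to the adapter output."""
    return alpha / rank

def max_hours_under_guardrail(guardrail_usd: float, hourly_rate_usd: float) -> float:
    """Longest run, in hours, before the cost guardrail would trip."""
    return guardrail_usd / hourly_rate_usd

print(lora_scale(32, 32))                     # 1.0, since alpha == rank
print(max_hours_under_guardrail(50.0, 1.64))  # roughly 30.5 hours of compute
```

With `alpha == rank == 32` the adapter output is effectively unscaled, a common default when tuning `learning_rate` directly.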
## W&B Dashboard
https://wandb.ai/timotheospaul-tasumer-maf/enochiatron-training/runs/xxci46rt
## Loss Summary
| Metric | Value |
|---|---|
| Initial Loss | 0.6339 |
| Final Loss | 0.8241 |
| Best Loss | 0.0593 |
| Total Steps | 796 |
## Run Metadata
- Generated by: Enochiatron training pipeline
- Model: LTX-2.3 22B DiT
- Rank: 32
- Optimizer: prodigy