---
license: mit
tags:
- lora
- training
- runpod
- ai-toolkit
---
# AI Trainer - RunPod Serverless
Single-endpoint multi-model LoRA training with all models cached in this repo.
## RunPod Deployment
**Set Model field to:** `Aloukik21/trainer`
This will cache all models (~240GB) for fast cold starts.
## Cached Models
| Model Key | Subfolder | Size |
|-----------|-----------|------|
| flux_dev | flux-dev/ | ~54GB |
| flux_schnell | flux-schnell/ | ~54GB |
| wan21_14b | wan21-14b/ | ~75GB |
| wan22_14b | wan22-14b/ | ~53GB |
| qwen_image | qwen-image/ | ~54GB |
| accuracy_recovery_adapters | accuracy_recovery_adapters/ | ~3GB |
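The model keys above map to cache subfolders. As an illustrative sketch of how a client or handler might resolve a key to its cached path, the mapping below is copied from the table; the cache root (`/runpod-volume/models`) is an assumed example, not confirmed by this repo:

```python
from pathlib import Path

# Mapping taken from the "Cached Models" table above.
MODEL_SUBFOLDERS = {
    "flux_dev": "flux-dev",
    "flux_schnell": "flux-schnell",
    "wan21_14b": "wan21-14b",
    "wan22_14b": "wan22-14b",
    "qwen_image": "qwen-image",
    "accuracy_recovery_adapters": "accuracy_recovery_adapters",
}

def model_path(key, cache_root="/runpod-volume/models"):
    """Resolve a model key to its cached subfolder (cache_root is an assumption)."""
    return Path(cache_root) / MODEL_SUBFOLDERS[key]
```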
## API Usage
### List Models
```json
{"input": {"action": "list_models"}}
```
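Requests like the one above are submitted as the `input` payload of a RunPod job. A minimal client-side sketch, assuming RunPod's `/runsync` HTTP API and using placeholder endpoint ID and API key:

```python
import json

RUNPOD_API_BASE = "https://api.runpod.ai/v2"

def build_request(endpoint_id, api_key, payload):
    """Return (url, headers, body) for a synchronous RunPod job submission."""
    url = f"{RUNPOD_API_BASE}/{endpoint_id}/runsync"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(payload)

# MY_ENDPOINT_ID / MY_API_KEY are placeholders for your own values.
url, headers, body = build_request(
    "MY_ENDPOINT_ID", "MY_API_KEY", {"input": {"action": "list_models"}}
)
```

Send the resulting request with any HTTP client (e.g. `curl` or `requests.post`).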
### Train LoRA
```json
{
  "input": {
    "action": "train",
    "model": "flux_dev",
    "params": {
      "dataset_path": "/workspace/dataset",
      "output_path": "/workspace/output",
      "steps": 1000
    }
  }
}
```
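A small helper can build this payload and catch an invalid model key before the job is submitted. This is a hypothetical client-side sketch (the valid keys come from the Cached Models table; `flux_dev`/`1000` defaults mirror the example above):

```python
# Model keys from the "Cached Models" table.
CACHED_MODELS = {"flux_dev", "flux_schnell", "wan21_14b", "wan22_14b", "qwen_image"}

def train_payload(model, dataset_path, output_path, steps=1000):
    """Build a train request payload, rejecting unknown model keys."""
    if model not in CACHED_MODELS:
        raise ValueError(f"unknown model key: {model}")
    return {
        "input": {
            "action": "train",
            "model": model,
            "params": {
                "dataset_path": dataset_path,
                "output_path": output_path,
                "steps": steps,
            },
        }
    }
```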
### Cleanup (between different models)
```json
{"input": {"action": "cleanup"}}
```
## Environment Variables
- `HF_TOKEN`: Hugging Face token (required for some gated models)
## Auto-Cleanup
The handler automatically frees GPU memory when a request switches to a different model type, so an explicit `cleanup` call is only needed between runs of different models.
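A minimal sketch of that switch-detection logic, assuming a PyTorch-based handler (the function name and shape are illustrative, not the repo's actual implementation):

```python
import gc

def cleanup_gpu(requested_model, loaded_model):
    """Free GPU memory if the requested model differs from the one loaded.

    Returns True when a cleanup was performed, False otherwise.
    """
    if loaded_model is not None and loaded_model != requested_model:
        gc.collect()
        try:
            # Assumes a PyTorch runtime; skipped if torch is unavailable.
            import torch
            torch.cuda.empty_cache()
        except ImportError:
            pass
        return True
    return False
```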