---
license: mit
base_model: hp-l33/aim-xlarge
tags:
- mamba
- lora
- image-generation
- simpsons
pipeline_tag: unconditional-image-generation
---

# NicolasNoya/SimpsonsMamba

A LoRA fine-tune of the Mamba-based AiM image generation model for producing Simpsons-style cartoon images.

## Model Details

- Base model: hp-l33/aim-xlarge (AiM, https://arxiv.org/pdf/2408.12245)
- Fine-tuning method: LoRA (low-rank adaptation)
- Dataset: Simpsons cartoon images
- Checkpoint step: 50500

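To illustrate the fine-tuning method named above: LoRA freezes the pretrained weights and learns a low-rank update per adapted layer. The sketch below is a toy numpy illustration of that idea, not the actual AiM or PEFT code; all sizes and names are made up for the example.

```python
import numpy as np

# Toy LoRA update (illustrative only, not the AiM training code):
# the fine-tune learns low-rank factors A (r x d_in) and B (d_out x r),
# and the effective weight at inference is W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4   # toy sizes; real dims and rank differ

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # B starts at zero, so training starts from W

W_eff = W + (alpha / r) * B @ A
assert np.allclose(W_eff, W)  # before any training, the adapter is a no-op
```

Because `B` is initialized to zero, the adapted model initially reproduces the base model exactly; training only ever has to learn the (small) low-rank correction.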

## Intended Use

This model is intended for research and educational image generation experiments.


## Example Usage

Run the generation script included in this project with the corresponding checkpoint weights.

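AiM-style models generate an image as a sequence of discrete tokens, sampled one at a time and then decoded back to pixels. A minimal sketch of that unconditional sampling loop, with a stand-in for the real backbone (the function names, vocabulary size, and sequence length here are hypothetical, not the project's actual API):

```python
import numpy as np

def dummy_logits(tokens, vocab_size=16):
    # Stand-in for the Mamba backbone: the real model returns logits over
    # the image-token vocabulary conditioned on all previously sampled tokens.
    rng = np.random.default_rng(len(tokens))
    return rng.standard_normal(vocab_size)

def sample_tokens(seq_len=64, vocab_size=16, temperature=1.0, seed=0):
    """Unconditional autoregressive sampling over discrete image tokens."""
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(seq_len):
        logits = dummy_logits(tokens, vocab_size) / temperature
        probs = np.exp(logits - logits.max())   # softmax, numerically stable
        probs /= probs.sum()
        tokens.append(int(rng.choice(vocab_size, p=probs)))
    return tokens  # a real pipeline would decode these with the VQ tokenizer

tokens = sample_tokens()
```

In the actual pipeline the stand-in model is the fine-tuned checkpoint, and the sampled token grid is decoded into an image by the tokenizer's decoder.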

## Training Notes

- Trained with periodic checkpointing and evaluation.
- This card was auto-generated; it should be updated with detailed metrics and known limitations.


## Limitations

Outputs may reflect biases or artifacts present in the training data.