Instructions for using pixologyds/xamala with libraries, inference providers, notebooks, and local apps.
How to use pixologyds/xamala with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("pixologyds/xamala")

prompt = "wide and low angle, cinematic, fashion photography. xamala sitting on floor wearing a full size light white t-shirt with big letters \"Amala Paul\", teal jeans, nice red high heels and a gracious look on her face. The background is a color gradient, her face is lit with cool white light, studio setting <lora:xamala-flux-lora:1>"
image = pipe(prompt).images[0]
```
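Note that the `<lora:xamala-flux-lora:1>` tag at the end of the prompt is AUTOMATIC1111/SD-WebUI-style syntax; Diffusers does not parse it (the adapter is applied by `load_lora_weights` instead), so it passes through as plain prompt text. If you build prompts for tools that do parse such tags, a small helper can format them. This is a minimal sketch; `lora_tag` is a hypothetical helper name, not part of any library:

```python
def lora_tag(name: str, strength: float = 1.0) -> str:
    """Format an A1111-style LoRA tag, e.g. <lora:xamala-flux-lora:1>."""
    # The "g" format drops a trailing ".0", so a strength of 1.0 renders as "1".
    return f"<lora:{name}:{strength:g}>"

print(lora_tag("xamala-flux-lora"))       # <lora:xamala-flux-lora:1>
print(lora_tag("xamala-flux-lora", 0.8))  # <lora:xamala-flux-lora:0.8>
```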
Amala Paul

- Prompt
- wide and low angle, cinematic, fashion photography. xamala sitting on floor wearing a full size light white t-shirt with big letters "Amala Paul", teal jeans, nice red high heels and a gracious look on her face. The background is a color gradient, her face is lit with cool white light, studio setting <lora:xamala-flux-lora:1>
Trigger words
You should include `xamala` in your prompt to trigger the subject during image generation.
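Since a prompt without the trigger word will not invoke the learned subject, it can help to guard against forgetting it. The sketch below prepends the trigger when it is missing; `with_trigger` is a hypothetical helper for illustration, not part of Diffusers:

```python
TRIGGER = "xamala"  # trigger word stated on this model card

def with_trigger(prompt: str, trigger: str = TRIGGER) -> str:
    """Prepend the trigger word if the prompt does not already mention it."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("cinematic fashion photography, studio setting"))
# xamala, cinematic fashion photography, studio setting
```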
Download model
Weights for this model are available in Safetensors format.
Download them from the Files & versions tab.
Model tree for pixologyds/xamala
Base model
black-forest-labs/FLUX.1-dev