```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("hanungaddi/march7th")

prompt = "This is a digital anime-style drawing featuring March 7th, a character with pale pink hair styled in twin buns with a single loose strand. March 7th has vibrant, multi-colored eyes (purple and blue) and is winking. She wears a traditional Chinese-inspired outfit in a combination of red, white, and brown with a high collar, featuring a blue and gold patterned front. Her expression is playful, and she has a small pink bow in her hair. The background is plain white with small pink flowers scattered around."
image = pipe(prompt).images[0]
```

Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words

You should use `TOK` in your prompt to trigger the image generation.
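In practice, the trigger token is just a substring of the prompt, so it can be prepended programmatically before calling the pipeline. A minimal sketch (the `with_trigger` helper is hypothetical, not part of diffusers):

```python
# Hypothetical helper: prepend the LoRA trigger token to a prompt string.
def with_trigger(prompt: str, trigger: str = "TOK") -> str:
    # Avoid duplicating the trigger if the prompt already contains it.
    if trigger in prompt:
        return prompt
    return f"{trigger}, {prompt}"

prompt = with_trigger("anime-style drawing of March 7th winking")
# The resulting string starts with "TOK, " and can be passed to the pipeline.
```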
## Use it with the 🧨 diffusers library

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("hanungaddi/march7th", weight_name="lora.safetensors")
image = pipeline("your prompt").images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
## Model tree for hanungaddi/march7th

Base model: black-forest-labs/FLUX.1-dev