How to use ozocalan/raybanmeta with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# use device_map="mps" on Apple-silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("ozocalan/raybanmeta")

prompt = "A close-up editorial studio photo of a black woman wearing raybanmeta black glasses."
image = pipe(prompt).images[0]

raybanmeta
Model description
This LoRA was trained on FLUX.1-dev using 46 campaign and real-world images from the Ray-Ban Meta campaign/collection.
Trigger words
Use the word raybanmeta in your prompt to trigger the trained style.
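As a minimal sketch of where the trigger word fits, the helper below builds a prompt containing raybanmeta; the build_prompt function and its subject text are illustrative, not part of this model card:

```python
TRIGGER = "raybanmeta"  # trigger word from the model card

def build_prompt(subject: str) -> str:
    """Illustrative helper: include the trigger word so the LoRA style activates."""
    return f"A close-up editorial studio photo of {subject} wearing {TRIGGER} black glasses."

prompt = build_prompt("a black woman")
assert TRIGGER in prompt  # the LoRA is only triggered if the word appears
```

The resulting string can be passed directly as the prompt argument to the pipeline call shown above.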
Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
Training at fal.ai
Training was done using fal.ai/models/fal-ai/flux-lora-general-training.
Downloads last month: 10
Model tree for ozocalan/raybanmeta
Base model: black-forest-labs/FLUX.1-dev