Instructions to use LHRuig/randoging with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Diffusers
- Inference
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee

How to use LHRuig/randoging with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("LHRuig/randoging")

prompt = "suit"
image = pipe(prompt).images[0]
```
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: randoging
---
# randoging

<Gallery />
## Model description

A LoRA adapter for the `black-forest-labs/FLUX.1-dev` base model.
## Trigger words

You should use `randoging` to trigger the image generation.
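Since generations only pick up the adapter's style reliably when the trigger word appears in the prompt, a minimal sketch of guaranteeing that is shown below (the `with_trigger` helper is illustrative, not part of this repository or of Diffusers):

```python
def with_trigger(description: str, trigger: str = "randoging") -> str:
    """Prepend the LoRA trigger word unless the prompt already contains it."""
    if trigger in description:
        return description
    return f"{trigger}, {description}"

# The result can be passed straight to the pipeline, e.g. pipe(prompt).
prompt = with_trigger("a man in a suit")  # "randoging, a man in a suit"
```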
## Download model

Weights for this model are available in Safetensors format.

[Download](/LHRuig/randoging/tree/main) them in the Files & versions tab.