Instructions to use codermert/tugce2-lora with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use codermert/tugce2-lora with Diffusers:
```sh
pip install -U diffusers transformers accelerate
```

```py
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("codermert/tugce2-lora")

prompt = "DHANUSH"
image = pipe(prompt).images[0]
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
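
The Diffusers snippet above hardcodes `device_map="cuda"` and notes you can switch to `"mps"` on Apple devices. A small helper (hypothetical, not part of the model card) can pick the backend automatically:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string."""
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    if torch.backends.mps.is_available():
        return "mps"   # Apple Silicon GPU
    return "cpu"       # CPU fallback

device = pick_device()
print(device)
```

You can then pass the result to the pipeline, e.g. `device_map=pick_device()` or `pipe.to(pick_device())`.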
Update README.md
README.md CHANGED:

````diff
@@ -29,7 +29,7 @@ You should use `tugce` to trigger the image generation.
 ```py
 from diffusers import AutoPipelineForText2Image
 import torch
-pipeline = AutoPipelineForText2Image.from_pretrained('
+pipeline = AutoPipelineForText2Image.from_pretrained('prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0', torch_dtype=torch.float16).to('cuda')
 pipeline.load_lora_weights('codermert/tugce2-lora', weight_name='flux_train_replicate.safetensors')
 image = pipeline('your prompt').images[0]
 ```
````
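
The README still notes that `tugce` is the trigger word for this LoRA. A tiny helper (hypothetical, not part of the repository) can make sure a prompt includes it before generation:

```python
# The trigger word comes from the model card; the helper itself is illustrative.
TRIGGER = "tugce"

def with_trigger(prompt: str, trigger: str = TRIGGER) -> str:
    """Prepend the trigger word unless the prompt already contains it."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("portrait photo, soft light"))  # -> "tugce, portrait photo, soft light"
```

Calling `pipeline(with_trigger('your prompt'))` guarantees the LoRA's trigger word is present.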