Update LoRA model to Valentin character with examples
- Changed trigger word from 'ohwx woman' to 'valentin'
- Added character_lora.safetensors model file
- Updated README with new generation examples (gen1, gen3, gen5, gen7)
- Added assets folder with dataset images and generation examples
- Updated code examples and prompts
- Configured Git LFS for image files
- .gitattributes +2 -0
- README.md +24 -54
- assets/dataset/001.jpg +3 -0
- assets/dataset/002.jpg +3 -0
- assets/dataset/003.jpg +3 -0
- assets/dataset/004.jpg +3 -0
- assets/dataset/005.jpg +3 -0
- assets/dataset/006.jpg +3 -0
- assets/dataset/007.jpg +3 -0
- assets/dataset/008.jpg +3 -0
- assets/dataset/009.jpg +3 -0
- assets/dataset/010.jpg +3 -0
- assets/dataset/011.jpg +3 -0
- assets/dataset/012.jpg +3 -0
- assets/generations/gen1.jpg +3 -0
- assets/generations/gen2.jpg +3 -0
- assets/generations/gen3.jpg +3 -0
- assets/generations/gen4.jpg +3 -0
- assets/generations/gen5.jpg +3 -0
- assets/generations/gen6.png +3 -0
- assets/generations/gen7.jpg +3 -0
- character_lora.safetensors +3 -0
.gitattributes CHANGED

```diff
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
```
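The effect of the two added rules can be sanity-checked without Git LFS installed. The sketch below is illustrative (the `matches_lfs_rule` helper is not part of the repo); note that a bare `*.jpg` pattern in `.gitattributes` matches the file name at any directory depth, which `fnmatch` approximates here:

```python
from fnmatch import fnmatch

# The patterns this commit adds to .gitattributes; files matching any of
# them are stored as Git LFS pointers rather than raw blobs.
lfs_rules = [
    "*.jpg filter=lfs diff=lfs merge=lfs -text",
    "*.png filter=lfs diff=lfs merge=lfs -text",
]

def matches_lfs_rule(path, rules=lfs_rules):
    """Return True if `path` would be handled by one of the LFS rules."""
    patterns = [rule.split()[0] for rule in rules]
    return any(fnmatch(path, pat) for pat in patterns)

print(matches_lfs_rule("assets/dataset/001.jpg"))  # True
print(matches_lfs_rule("README.md"))               # False
```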
README.md CHANGED

````diff
@@ -18,7 +18,7 @@ library_name: diffusers
 
 Lora for [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
 
-**ohwx woman** trigger word required
+**valentin** trigger word required
 
 ## 🌟 About FlyMy.AI
 
@@ -60,23 +60,23 @@ pipe = pipe.to(device)
 
 ```python
 # Load LoRA weights
-pipe.load_lora_weights('
+pipe.load_lora_weights('character_lora.safetensors', adapter_name="lora")
 ```
 
 ### 🎨 Generate Image with lora trained on person
 
 ```python
-prompt = '''
+prompt = '''valentin sitting on a chair, portrait'''
 negative_prompt = "blurry, low quality, distorted, bad anatomy"
 image = pipe(
     prompt=prompt,
     negative_prompt=negative_prompt,
     width=1024,
     height=1024,
-    num_inference_steps=
+    num_inference_steps=28,
     guidance_scale=3.5,
-    generator=torch.Generator(device="cuda").manual_seed(
-)
+    generator=torch.Generator(device="cuda").manual_seed(42)
+).images[0]
 
 # Display the image (in Jupyter or save to file)
 image.show()
@@ -86,41 +86,7 @@ image.save("output.png")
 
 ### 🖼️ Sample Output
 
-![Sam
+![Sample Output](assets/generations/gen2.jpg)
 
-   - Clone or download the latest version
-
-2. **Install ComfyUI**:
-   - Follow the installation instructions from the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#installing)
-   - Make sure all dependencies are properly installed
-
-3. **Download FLUX.1-dev model weights**:
-   - Go to [FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev)
-   - Download all the model files
-
-4. **Place FLUX.1-dev weights in ComfyUI**:
-   - Copy the downloaded FLUX.1-dev model files to the appropriate folders in `ComfyUI/models/`
-   - Follow the folder structure as specified in the model repository
-
-5. **Download our pre-trained LoRA weights**:
-   - Visit [flymy-ai/flux-anne-hathaway-lora](https://huggingface.co/flymy-ai/flux-anne-hathaway-lora)
-   - Download the LoRA `.safetensors` files
-
-6. **Place LoRA weights in ComfyUI**:
-   - Copy the LoRA file `flymy-ai/flux-anne-hathaway-lora/pytorch_lora_weights.safetensors` to `ComfyUI/models/loras/`
-
-7. **Load the workflow**:
-   - Open ComfyUI in your browser
-   - Load the workflow file `flux_anne_hathaway_lora_example.json` located in this repository
-   - The workflow is pre-configured to work with our LoRA models
 
 ### Workflow Features
 
@@ -137,27 +103,31 @@ The ComfyUI workflow provides a user-friendly interface for generating images wi
 
 ## 🎨 Generation Examples
 
-Below are examples of images generated with our
+Below are examples of images generated with our Valentin LoRA model:
+
+### Official Style Portrait
 
-**Prompt**: *"ohwx woman portrait selfie"*
+**Prompt**: *"valentin official style portrait"*
 
-![Gen
+![Gen
+
+### Cyberpunk Close-up
+
+**Prompt**: *"valentin close up in a cyberpunk aesthetic, neon lighting, holographic elements, wearing futuristic clothing, standing in a neon-lit alley, sci-fi atmosphere, digital art"*
+
+![Gen
+
+### Greek City Portrait
+
+**Prompt**: *"valentin wearing sunglasses and a hat in a Greek city"*
+
+![Gen
+
+### Fluorescent Garden Editorial
+
+**Prompt**: *"valentin, waist-up, standing in a fluorescent garden under blacklight, glowing bioluminescent plants, saturated purple and green hues, reflective sunglasses catching UV glints, surreal fashion editorial aesthetic, hyperreal depth, cinematic 85mm photography"*
+
+![Gen
 
 ## 🚀 Try it Online
 
````
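Every updated prompt in the README diff above leads with the required **valentin** trigger word. A small guard can enforce that convention before calling the pipeline; `build_prompt` is a hypothetical helper for illustration, not part of the repository:

```python
TRIGGER_WORD = "valentin"  # required by this LoRA, per the updated README

def build_prompt(description, trigger=TRIGGER_WORD):
    """Prepend the LoRA trigger word unless the description already has it.

    Illustrative helper; the repository itself just writes the trigger
    word into each prompt by hand.
    """
    if trigger.lower() in description.lower():
        return description
    return f"{trigger} {description}"

print(build_prompt("sitting on a chair, portrait"))
# valentin sitting on a chair, portrait
```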
assets/dataset/001.jpg – 012.jpg
ADDED (Git LFS pointers)

assets/generations/gen1.jpg – gen7.jpg (gen6 is a .png)
ADDED (Git LFS pointers)
character_lora.safetensors
ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:130f2744a42ee2eb1b09205401e0518345677c58ae83c5d527f4486414b1d0ac
+size 179399904
```
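The three lines above are the entire Git LFS pointer format: a spec version, a sha256 object id, and the byte size of the real blob that LFS stores out of band. A minimal sketch of reading one (the `parse_lfs_pointer` helper is illustrative):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content exactly as committed for character_lora.safetensors.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:130f2744a42ee2eb1b09205401e0518345677c58ae83c5d527f4486414b1d0ac
size 179399904
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:130f2744...
print(int(info["size"]))  # 179399904 bytes (~179 MB), the actual LoRA weights
```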