diff --git "a/Stable-diffusion/cottonnoob_v10.html" "b/Stable-diffusion/cottonnoob_v10.html"
new file mode 100644
--- /dev/null
+++ "b/Stable-diffusion/cottonnoob_v10.html"
@@ -0,0 +1,247 @@
+ + + + + +
+

CottonNoob

+

Uploaded by uraoto

+
+
+
Version
+
v1.0
+
Base Model
+
Illustrious
+
Published
+
2025-02-16
+
Availability
+
Public
+
CivitAI Tags
+
+
+ base model +
+
+
Download Link
+
https://civitai.com/api/download/models/1419841
+
+
+
+

Use the model without crediting the creator
Sell images they generate
Run on services that generate images for money
Run on Civitai
Share merges using this model
Sell this model or merges using this model
Have different permissions when sharing merges

+
+
+
+ +
+

Description

+

This is a merge model based on Noob e-pred.

It aims for an illustration-style look and focuses on skin tones.

The merged components are as follows:

1. NoobaHoshi v1.0 (Noob e-pred 1.1 + Nova Orange XL v3 + Raehoshi illust XL v3)

2. paruparu_illustrious_v4

3. My LoRA (trained on 200 AI-generated images)

Quality tags:

masterpiece, best quality

Negative prompt:

bad quality, worst quality, watermark

※ The model also works fine without the negative prompt.
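As a minimal sketch, the quality tags and negative prompt recommended above can be appended to any subject prompt programmatically. The helper name is hypothetical; the tag strings are taken verbatim from this card:

```python
# Recommended tags from this model card.
QUALITY_TAGS = "masterpiece, best quality"
NEGATIVE_TAGS = "bad quality, worst quality, watermark"

def build_prompts(subject: str) -> tuple[str, str]:
    """Append the card's quality tags to a subject prompt and
    pair it with the card's recommended negative prompt."""
    positive = f"{subject.rstrip(', ')}, {QUALITY_TAGS}"
    return positive, NEGATIVE_TAGS

pos, neg = build_prompts("1girl, school uniform,")
print(pos)  # 1girl, school uniform, masterpiece, best quality
```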

+
+ +
+
+
+
+ + + + +
+ +
+
Prompt
1girl, yanami anna, make heroine ga oo sugiru!, ahoge, blue eyes, blue hair, blue bow, yellow bow, school uniform, white shirt, double v, wink,grin, +,masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
310855507269888
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl, yanami anna, make heroine ga oo sugiru!, ahoge, blue eyes, blue hair, blue bow, yellow bow, school uniform, white shirt, double v, wink,grin,\n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 310855507269888, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": 
"VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [310855507269888, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", 
"pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl, yanami anna, make heroine ga oo sugiru!, ahoge, blue eyes, blue hair, blue bow, yellow bow, school uniform, white shirt, double v, wink,grin,\n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 3.855432894295392, "offset": [-1264.0440199680513, -750.8195795557158]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
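Each example above embeds its full ComfyUI graph in the Comfy field. As a sketch, the sampler settings (seed, steps, CFG, sampler name) can be recovered from such a `"prompt"` graph with the standard library alone; node ids are assumed to vary between workflows, so the code searches by `class_type`:

```python
import json

def sampler_settings(comfy_json: str) -> dict:
    """Find the KSampler node in a ComfyUI 'prompt' graph and
    return its generation settings."""
    graph = json.loads(comfy_json)["prompt"]
    for node in graph.values():
        if node.get("class_type") == "KSampler":
            inp = node["inputs"]
            return {
                "seed": inp["seed"],
                "steps": inp["steps"],
                "cfg": inp["cfg"],
                "sampler": inp["sampler_name"],
                "scheduler": inp["scheduler"],
            }
    raise ValueError("no KSampler node found")

# Tiny graph in the same shape as the metadata above:
example = json.dumps({"prompt": {"14": {
    "class_type": "KSampler",
    "inputs": {"seed": 310855507269888, "steps": 24, "cfg": 5.0,
               "sampler_name": "euler_ancestral", "scheduler": "normal"}}}})
print(sampler_settings(example))
```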
+
+
+ + + + +
+ +
+
Prompt
1girl,solo, ayase momo, dandadan, brown eyes, pink sweater, collared shirt, pleated skirt, choker, red bow, +,masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
839358858976082
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl,solo, ayase momo, dandadan, brown eyes, pink sweater, collared shirt, pleated skirt, choker, red bow, \n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 839358858976082, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], 
"slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [839358858976082, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl,solo, ayase momo, dandadan, brown eyes, pink sweater, collared shirt, pleated skirt, choker, red 
bow, \n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 1.0152559799477192, "offset": [393.46372752618885, -14.55888512566515]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
+
+
+ + + + +
+ +
+
Prompt
1girl,gagaga girl, yu-gi-oh! zexal, official art, large breasts, +upper body, blush,glaring, +, +,masterpiece, best quality, very aesthetic,
Negative prompt
lowres, bad quality, worst quality, bad anatomy, sketch, jpeg artifacts, signature, watermark,
Seed
449588100462188
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "lowres, bad quality, worst quality, bad anatomy, sketch, jpeg artifacts, signature, watermark,", "clip": ["17", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl,gagaga girl, yu-gi-oh! zexal, official art, large breasts, \nupper body, blush,glaring, \n,\n,masterpiece, best quality, very aesthetic, ", "clip": ["17", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 449588100462188, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["17", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}, "17": {"inputs": {"lora_name": {"content": "kyuusuigakari_Illustrious.safetensors", "image": null}, "strength_model": 0.0, "strength_clip": 0.0, "example": "[none]", "model": ["2", 0], "clip": ["2", 1]}, "class_type": "LoraLoader|pysssss", "_meta": {"title": "Lora Loader \ud83d\udc0d"}}}, "workflow": {"last_node_id": 18, "last_link_id": 35, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": 
"VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 33}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [449588100462188, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 17, "type": "LoraLoader|pysssss", "pos": [535.435546875, 2.1264538764953613], "size": [345.5870056152344, 214.0751495361328], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 24}, {"name": "clip", "type": "CLIP", "link": 26}], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [33], "slot_index": 0, "shape": 3}, {"name": "CLIP", "type": "CLIP", "links": [34, 35], "slot_index": 1, "shape": 3}, {"name": "STRING", "type": "STRING", "links": null, "shape": 3}], "properties": {"Node name for S&R": "LoraLoader|pysssss"}, "widgets_values": [{"content": "kyuusuigakari_Illustrious.safetensors", "image": null}, 0, 0, "[none]"]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 35}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["lowres, bad quality, worst quality, bad anatomy, sketch, jpeg artifacts, signature, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 15, "type": 
"EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 34}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl,gagaga girl, yu-gi-oh! zexal, official art, large breasts, \nupper body, blush,glaring, \n,\n,masterpiece, best quality, very aesthetic, "], "color": "#232", "bgcolor": "#353", "shape": 1}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 8, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 3, "type": "VAELoader", "pos": [0.6460132598876953, 211.7989501953125], "size": [360, 60], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [24], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [26], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], 
[20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [24, 2, 0, 17, 0, "MODEL"], [26, 2, 1, 17, 1, "CLIP"], [33, 17, 0, 14, 0, "MODEL"], [34, 17, 1, 8, 0, "CLIP"], [35, 17, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 0.762776844438552, "offset": [10.683238313235876, 142.74722815903195]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
+
+
+ + + + +
+ +
+
Prompt
1girl, oyama mahiro, +sundress,sunset,beach,sea,shiny,sparkle, +,masterpiece, best quality,scenery
Negative prompt
bad quality, worst quality, watermark,
Seed
123726006810994
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl, oyama mahiro, \nsundress,sunset,beach,sea,shiny,sparkle, \n,masterpiece, best quality,scenery", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 123726006810994, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node 
name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [123726006810994, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": 
{}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl, oyama mahiro, \nsundress,sunset,beach,sea,shiny,sparkle, \n,masterpiece, best quality,scenery"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 1.4864362802414661, "offset": [-353.4303708582373, -282.3008826049962]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
+
+
+ + + + +
+ +
+
Prompt
2girls, touhoku kiritan, touhoku zunko, +hug from behind, sitting, straight-on, +,masterpiece, best quality,scenery
Negative prompt
bad quality, worst quality, watermark,
Seed
1065542616343526
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "2girls, touhoku kiritan, touhoku zunko, \nhug from behind, sitting, straight-on, \n,masterpiece, best quality,scenery", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 1065542616343526, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], 
"properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [1065542616343526, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": 
[320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["2girls, touhoku kiritan, touhoku zunko, \nhug from behind, sitting, straight-on, \n,masterpiece, best quality,scenery"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 6.209213230591693, "offset": [-1329.874336927554, -808.0668512000824]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
+
+
+ + + + +
+ +
+
Prompt
1girl,solo,dark blue hair,long hair,hair bun,blue eyes, ,medium breasts, school swimsuit, +,blush, pool,portrait, name tag, swim cap removed,serious,holding cap, +partially submerged,wet hair, wet clothes, wet,smile, arched back, splashing, upper body, from side, shiny, forehead, arm up, armpits +,masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
622677107966885
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
+
+
+ + +
+
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl,solo,dark blue hair,long hair,hair bun,blue eyes, ,medium breasts, school swimsuit,\n,blush, pool,portrait, name tag, swim cap removed,serious,holding cap,\npartially submerged,wet hair, wet clothes, wet,smile, arched back, splashing, upper body, from side, shiny, forehead, arm up, armpits\n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 622677107966885, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, 
"widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [622677107966885, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", 
"links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl,solo,dark blue hair,long hair,hair bun,blue eyes, ,medium breasts, school swimsuit,\n,blush, pool,portrait, name tag, swim cap removed,serious,holding cap,\npartially submerged,wet hair, wet clothes, wet,smile, arched back, splashing, upper body, from side, shiny, forehead, arm up, armpits\n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 3.186308177103659, "offset": [-1242.4910061846842, -743.9724438067678]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
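The Comfy field above embeds the full ComfyUI node graph as JSON; the flat metadata shown alongside it (seed, steps, CFG scale, sampler, width, height) all lives inside that graph's `prompt` object. A minimal sketch of pulling those values back out, using a trimmed excerpt of the blob above (only the two nodes we need):

```python
import json

# Trimmed excerpt of the "Comfy" blob above (KSampler + latent size only).
comfy_json = '''{"prompt": {
  "14": {"inputs": {"seed": 622677107966885, "steps": 24, "cfg": 5.0,
                    "sampler_name": "euler_ancestral", "scheduler": "normal",
                    "denoise": 1.0},
         "class_type": "KSampler"},
  "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1},
         "class_type": "EmptyLatentImage"}}}'''

graph = json.loads(comfy_json)["prompt"]

def node_inputs(class_type):
    # Nodes are keyed by numeric id; look one up by its class_type tag.
    return next(n["inputs"] for n in graph.values()
                if n["class_type"] == class_type)

ks = node_inputs("KSampler")
latent = node_inputs("EmptyLatentImage")
print(ks["sampler_name"], ks["steps"], ks["cfg"])  # euler_ancestral 24 5.0
print(latent["width"], latent["height"])           # 960 1280
```

The same lookup works against any of the Comfy blobs on this page, since they all share the CheckpointLoaderSimple / KSampler / EmptyLatentImage structure.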
Prompt
1girl, gyaru, brown eyes, dark skin, smile, dynamic pose, blush, open mouth, henshin, magical girl, dutch angle, (glowing:1.2), simple background, yellow theme, black background, sparkle, light particles, glowing clothes, dissolving clothes, masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
863586488876886
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl, gyaru,brown eyes, dark skin, smile, dynamic pose, blush, open mouth, henshin, magical girl, dutch angle,(glowing:1.2),simple background, yellow theme,black background,sparkle,light particles,glowing clothes, dissolving clothes,\n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 863586488876886, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], 
"size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [863586488876886, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": 
"CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl, gyaru,brown eyes, dark skin, smile, dynamic pose, blush, open mouth, henshin, magical girl, dutch angle,(glowing:1.2),simple background, yellow theme,black background,sparkle,light particles,glowing clothes, dissolving clothes,\n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 3.1863081771036605, "offset": [-1220.4338248437525, -731.4887049199868]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
Prompt
2girls, kotonoha akane, kotonoha aoi, tongue, :p, binaural microphone, heavy breathing, masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
1055691258345775
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "2girls, kotonoha akane, kotonoha aoi, tongue, :p, binaural microphone, heavy breathing, \n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 1055691258345775, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 1280, "height": 960, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], 
"properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [1055691258345775, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1291.6605224609375, 698.4917602539062], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 2, "mode": 0, 
"inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [1280, 960, 1]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["2girls, kotonoha akane, kotonoha aoi, tongue, :p, binaural microphone, heavy breathing, \n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 1.351305709310436, "offset": [-455.13735514504197, -201.7151614659186]}}, "version": 0.4}}
Width
1280
Height
960
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
Prompt
1girl, solo, rabbit hole (vocaloid), large breasts, shiny skin, black leotard, clenched teeth, blush, sweat, ribbon bondage, body writing, arrow (symbol), "free", leg up, arms behind back, restrained, looking at viewer, male silhouette, masterpiece, best quality,
Negative prompt
bad quality, worst quality, watermark,
Seed
424305543863213
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl,solo,rabbit hole (vocaloid), large breasts, shiny skin, black leotard, \nclenched teeth,blush,sweat, ribbon bondage, body writing,arrow (symbol), \"free\",\nleg up, arms behind back, restrained,looking at viewer , male silhouette,\n,masterpiece, best quality,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 424305543863213, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, "inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], 
"size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [424305543863213, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": 
"CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1130.7520751953125, 592.1286010742188], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl,solo,rabbit hole (vocaloid), large breasts, shiny skin, black leotard, \nclenched teeth,blush,sweat, ribbon bondage, body writing,arrow (symbol), \"free\",\nleg up, arms behind back, restrained,looking at viewer , male silhouette,\n,masterpiece, best quality,"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 1.9487171000000039, "offset": [-412.0927930813626, -315.01372736170663]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
Prompt
1girl, solo, lovely labrynth of the silver castle, huge breasts, upper body, torogao, rolling eyes, nude, wet, sweat, blush, embarrassed, in heat, half-closed eyes, moaning, trembling, darkness, (transparent slime:1.5), (stationary restraints:1.1), pink slime, slime monster, monster, hug, hug from behind, sandwiched, restrained, pink theme, pink smoke, slime sandwich, slime wall, slime pit, bound, slime covering breasts, vore, breast press, arms behind back, slime covering body, slime grabbing breasts, covered nipples, nipples, unaligned breasts, masterpiece, best quality, nsfw
Negative prompt
bad quality, worst quality, watermark,
Seed
466173845090234
Model
CottonNoob_v1
Sampler
Euler a
Steps
24
CFG scale
5
Vaes
['sdxl_vae.safetensors']
Comfy
{"prompt": {"2": {"inputs": {"ckpt_name": "CottonNoob_v1.safetensors"}, "class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"}}, "3": {"inputs": {"vae_name": "sdxl_vae.safetensors"}, "class_type": "VAELoader", "_meta": {"title": "Load VAE"}}, "7": {"inputs": {"text": "bad quality, worst quality, watermark,", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "8": {"inputs": {"text": "1girl,solo,lovely labrynth of the silver castle, huge breasts, upper body,torogao,rolling eyes, \nnude,wet,sweat,blush,embarrassed,in heat,half-closed eyes, moaning,trembling, \n, darkness, (transparent slime:1.5),(stationary restraints:1.1), \npink slime, slime monster,monster,hug,hug from behind, \n, sandwiched, restrained, pink theme,pink smoke,slime sandwich, slime wall,slime pit,\nbound,slime covering breasts,vore,breast press,arms behind back,slime covering body,\n, slime grabbing breasts,covered nipples, nipples, unaligned breasts, \n,masterpiece, best quality,nsfw", "clip": ["2", 1]}, "class_type": "CLIPTextEncode", "_meta": {"title": "CLIP Text Encode (Prompt)"}}, "13": {"inputs": {"filename_prefix": "ComfyUI", "images": ["16", 0]}, "class_type": "SaveImage", "_meta": {"title": "Save Image"}}, "14": {"inputs": {"seed": 466173845090234, "steps": 24, "cfg": 5.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["2", 0], "positive": ["8", 0], "negative": ["7", 0], "latent_image": ["15", 0]}, "class_type": "KSampler", "_meta": {"title": "KSampler"}}, "15": {"inputs": {"width": 960, "height": 1280, "batch_size": 1}, "class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"}}, "16": {"inputs": {"samples": ["14", 0], "vae": ["3", 0]}, "class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}}}, "workflow": {"last_node_id": 18, "last_link_id": 38, "nodes": [{"id": 16, "type": "VAEDecode", "pos": [1570, 58], "size": [210, 46], "flags": {}, "order": 6, "mode": 0, 
"inputs": [{"name": "samples", "type": "LATENT", "link": 21}, {"name": "vae", "type": "VAE", "link": 23}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [22], "slot_index": 0}], "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []}, {"id": 3, "type": "VAELoader", "pos": [19, 200], "size": [360, 60], "flags": {}, "order": 0, "mode": 0, "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [23], "slot_index": 0}], "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sdxl_vae.safetensors"]}, {"id": 15, "type": "EmptyLatentImage", "pos": [771, 632], "size": [303.5653991699219, 173.1395721435547], "flags": {}, "order": 1, "mode": 0, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [20]}], "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [960, 1280, 1]}, {"id": 14, "type": "KSampler", "pos": [1200, 49], "size": [315, 262], "flags": {}, "order": 5, "mode": 0, "inputs": [{"name": "model", "type": "MODEL", "link": 36}, {"name": "positive", "type": "CONDITIONING", "link": 18}, {"name": "negative", "type": "CONDITIONING", "link": 19}, {"name": "latent_image", "type": "LATENT", "link": 20}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [21], "slot_index": 0}], "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [466173845090234, "randomize", 24, 5, "euler_ancestral", "normal", 1]}, {"id": 7, "type": "CLIPTextEncode", "pos": [210.20828247070312, 620.292236328125], "size": [530.4689331054688, 178.5200653076172], "flags": {}, "order": 4, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 38}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [19], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["bad quality, worst quality, watermark,"], "color": "#322", "bgcolor": "#533", "shape": 1}, {"id": 2, "type": "CheckpointLoaderSimple", "pos": [20, 50], "size": [360, 
100], "flags": {}, "order": 2, "mode": 0, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [36], "slot_index": 0}, {"name": "CLIP", "type": "CLIP", "links": [37, 38], "slot_index": 1}, {"name": "VAE", "type": "VAE", "links": null}], "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["CottonNoob_v1.safetensors"]}, {"id": 13, "type": "SaveImage", "pos": [1130.7520751953125, 592.1286010742188], "size": [320, 270], "flags": {}, "order": 7, "mode": 0, "inputs": [{"name": "images", "type": "IMAGE", "link": 22}], "outputs": [], "properties": {}, "widgets_values": ["ComfyUI"]}, {"id": 8, "type": "CLIPTextEncode", "pos": [450.7476501464844, 326.7282409667969], "size": [535.6358642578125, 241.32017517089844], "flags": {}, "order": 3, "mode": 0, "inputs": [{"name": "clip", "type": "CLIP", "link": 37}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [18], "slot_index": 0, "shape": 3}], "properties": {"Node name for S&R": "CLIPTextEncode"}, "widgets_values": ["1girl,solo,lovely labrynth of the silver castle, huge breasts, upper body,torogao,rolling eyes, \nnude,wet,sweat,blush,embarrassed,in heat,half-closed eyes, moaning,trembling, \n, darkness, (transparent slime:1.5),(stationary restraints:1.1), \npink slime, slime monster,monster,hug,hug from behind, \n, sandwiched, restrained, pink theme,pink smoke,slime sandwich, slime wall,slime pit,\nbound,slime covering breasts,vore,breast press,arms behind back,slime covering body,\n, slime grabbing breasts,covered nipples, nipples, unaligned breasts, \n,masterpiece, best quality,nsfw"], "color": "#232", "bgcolor": "#353", "shape": 1}], "links": [[18, 8, 0, 14, 1, "CONDITIONING"], [19, 7, 0, 14, 2, "CONDITIONING"], [20, 15, 0, 14, 3, "LATENT"], [21, 14, 0, 16, 0, "LATENT"], [22, 16, 0, 13, 0, "IMAGE"], [23, 3, 0, 16, 1, "VAE"], [36, 2, 0, 14, 0, "MODEL"], [37, 2, 1, 8, 0, "CLIP"], [38, 2, 1, 7, 0, "CLIP"]], "groups": [], "config": {}, "extra": {"ds": {"scale": 
2.393920493691646, "offset": [-941.1539768957533, -576.5519756665452]}}, "version": 0.4}}
Width
960
Height
1280
Models
['CottonNoob_v1.safetensors']
Denoise
1
Modelids
[]
Scheduler
normal
Upscalers
[]
Versionids
[]
Controlnets
[]
Additionalresources
[]
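Any of the `prompt` objects embedded in the Comfy blobs on this page can be re-queued against a running ComfyUI instance, which accepts a JSON body of the form {"prompt": &lt;node graph&gt;} on its /prompt endpoint. A minimal sketch, assuming a local server at ComfyUI's default address (127.0.0.1:8188):

```python
import json
import urllib.request

# Assumption: a local ComfyUI server at its default address/port.
COMFY_URL = "http://127.0.0.1:8188/prompt"

def build_payload(graph: dict) -> bytes:
    # ComfyUI's /prompt endpoint expects {"prompt": <node graph>} as JSON.
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_prompt(graph: dict) -> dict:
    req = urllib.request.Request(
        COMFY_URL,
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes a "prompt_id" for tracking
```

Usage would be `queue_prompt(comfy_blob["prompt"])`, where `comfy_blob` is any of the parsed Comfy JSON objects above; to reproduce an image exactly rather than randomize, keep the recorded seed in the KSampler node's inputs.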