diff --git a/.gitattributes b/.gitattributes index 43f8d1e56fc46b2a09e53f65fe5b0eab56d1ca50..aa8cfb73149c2d4daeb4581c3ed516f04b7d072f 100644 --- a/.gitattributes +++ b/.gitattributes @@ -287,3 +287,17 @@ Lora/in-dark.preview.png filter=lfs diff=lfs merge=lfs -text Lora/krekkov-ByChiAi_.preview.png filter=lfs diff=lfs merge=lfs -text Lora/theodyss.preview.png filter=lfs diff=lfs merge=lfs -text Lora/touching_grass_v0.2.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/AmberLightvaleIllustrious.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/Darkynsfw_Style__Pony_XL.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/KMS_RWBY_RR_IL-000005.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/Marceline_ILL.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/Marceline_The_Star__Adventure_Time_Fionna__Cake.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/Ruby_Illustrious-Copy1.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/Smooth_Booster_v3.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/WSSKX_WAI.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/contract_controller_illus01_v1.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/illustriousXLv01_stabilizer_v1.152.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/illustriousXLv01_stable_dark_v0.3.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/minimal_design_slider.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/noobai_ep11_stabilizer_v0.205_fp16.preview.png filter=lfs diff=lfs merge=lfs -text +Lora/style_strength_controller_nbep11_v1.preview.png filter=lfs diff=lfs merge=lfs -text diff --git a/Lora/3264c091187ab1237974a43a21883fb3.json b/Lora/3264c091187ab1237974a43a21883fb3.json new file mode 100644 index 0000000000000000000000000000000000000000..21f1bfbd13575ea1db1b5050eb9ff034f97c0fed --- /dev/null +++ b/Lora/3264c091187ab1237974a43a21883fb3.json @@ -0,0 +1,5 @@ +{ + "sha256": "988F82FCBE9C4E96D5D48130D09B04930C524717FEDA8A63DA7F308BA2BDEE8C", + "modelId": "Model not found", + "modelVersionId": "Model not found" +} \ No newline at end of file diff --git a/Lora/AmberLightvaleIllustrious.html b/Lora/AmberLightvaleIllustrious.html new file mode 100644 index 0000000000000000000000000000000000000000..ffcf5ae2fa5bb1bb62e5891805444640000b0763 --- /dev/null +++ b/Lora/AmberLightvaleIllustrious.html @@ -0,0 +1,107 @@ + +
+ + + + +
Use the model without crediting the creator
Sell images they generate
Run on services that generate images for money
Run on Civitai
Share merges using this model
Sell this model or merges using this model
Have different permissions when sharing merges
Works best at 0.75 weight for SD1, 1 for SDXL
A first attempt at recreating Dark artstyle

Trigger words:
gwentennyson, orange hair, short hair, green eyes, earrings
Outfit:
blue shirt, hairclip, white pants
Tested and used with WAI-SHUFFLE-NOOB:
https://civitai.com/models/989367?modelVersionId=1202076
For on-site generations, use my Tensor Art page: https://tensor.art/u/830771749061779884
Order a commission here!
https://ko-fi.com/c/38aa4a973d
Support my work by buying me a ko-fi:
Doesn't require any trigger words.
Strength : 1.0
Trigger: pyramidheadSDXL
Strength: 1 or 0.8
Use Adetailer

Tested at 0.8-1.0 strength
Activator
RWBY_RR,Body
ruby rose, 1girl, grey eyes, short hair, bangs, black hair, gradient hair, two-tone hair, huge breasts, wide hips, (thick thighs:1.2)
Fit:
black choker, shrug (clothing), scarf, hooded cloak, red cape, bandolier, short red dress, cleavage, sleeveless, bare shoulders, corset, red miniskirt, pleated skirt, belt buckle, multiple belts, sheer black pantyhose, black thighhigh boots, deep skin, skindentation, thigh strap, black elbow gloves, fingerless gloves
lingerie, see-through, lace trim, lace, black nightgown, loose nightgown, bottomless, cleavage, bare shoulders, nipples visible through clothes, covered nipples, no panties, female pubic hair, pussy under clothes
wedding dress, overflowing breasts, bursting breasts, cleavage, bridal lingerie, long sleeves, white lace gloves, white lace collar, frilled collar, plunging neckline, white lace harness, white pantyhose, zettai ryouiki
sports bra, cleavage, overflowing breasts, bursting breasts, deep skin, skindentation, sideboob, dolphin shorts, short shorts
deep skin, skindentation, volleyball uniform, track uniform, red and white croptop, sleeveless, midriff, red shorts, buruma, short shorts, black elbow pads, black knee pads, red footwear, sneakers
heart-shaped eyewear, eyewear on head, pajamas, black tank top, camisole, sleeveless, heart print, pink pants, polka dot legwear
black camisole, overflowing breasts, sideboob, deep skin, skindentation, sideboob, denim shorts, white thighhigh socks, zettai ryouiki
Illustrious is a fully trained SDXL model. This LoRA is intended to be used with Illustrious models. It may not work with other SDXL/PonyXL resources.
Trained on Illustrious-XL-v0.1 with 58 pictures.
Best result with weight between : 0.8-1.
Main prompts : karasuchan
Style prompts : greyscale with colored background, hatching \(texture\), monochrome
Reviews are really appreciated. I love to see the community use my work; that's why I share it.
If you like my work, you can tip me here.
Got a specific request? I'm open for commissions on my Ko-fi or Fiverr gig! If you provide enough data, OCs are accepted.
If you want to get updates on my projects as they go, you can follow me on X

Style of the artist "Krekkov"
Shocked, frightened eyes like the PlayStation-button eyes you see in One Piece.
It goes well with anime-style checkpoints and style LoRAs.
See below for the prompts and settings.
Thank you.

This is a concept LoRA to manipulate the camera angle. It provides a fixed perspective shot of the image. Intended for NSFW content but is not limited to that. Please do post pictures below, as I'm interested to see what everyone is able to make with this tool.
This is my first attempt at creating a concept LoRA, built from images I have been collecting for a while as I came across them. It felt like I had enough to give it a shot, and this is the result. I will need to collect more to update it in the future. It requires ADetailer and HiResFix to get good outputs. If prompting more than one subject, inpainting will likely be required to fix things up.

I have expanded the training dataset and categorized a section of the images to try to provide trigger words that manipulate the camera further. I was rough with the tagging due to the sheer quantity of images and the limited time I had for pruning and cleaning up the tags. Because of that it could be improved, but I did get part of the control that I wanted.


Removed some images and added new sources to the training dataset, trying to keep the LoRA from altering the style of the output as much as V2 or V1 did. Toyed around with tag weights to try to add further control. V3 is not strictly better than V2.


Support me on Ko-fi https://ko-fi.com/hinablue
Another awesome JJK meme lora again
This was fun, though I couldn't get it to work better on Noob VPRED.
What this LoRA includes:
Gojo's hollow purple frame from the anime ✔
Manga version ❌
Toji Fushiguro frames ❌
I recommend putting "scanlines" in the negative prompt!
Have fun!
Sharing merges using this LoRA and re-uploading it to other platforms are prohibited.
All cover images are directly from the vanilla (the original, not finetuned) base model in a1111: no upscale, no inpaint fixes, no plugins, not even a negative prompt. They demonstrate the effect of the LoRA; they are not clickbait. You can drop the images into a1111 to reproduce them yourself, as they have metadata.
(5/19/2025): illus v1.152
Continued improving lighting, textures, and details.
Added 5K more photographs. The set still contains everything except humans, covering as many lighting conditions as possible (from super bright to super dark).
Refactored my caption pipeline. All images now have natural captions from Google's latest LLM. All lighting conditions (brightness, color temperature, etc.) are properly tagged. All anime characters are tagged by wd tagger v3 and the Google LLM.
More data means more training steps and, as a result, a stronger effect.
FAQ:
If you want 100% of the texture effect, avoid base models with an AI style (trained on AI images). AI styles are heavily overfitted and will override the texture instantly. FYI, the cover images are from vanilla base models, and I only use vanilla models + artist style LoRAs.
How do you know if a model has an AI style? There is no good method. Personally, I look at hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it may have.
If you get realistic faces on anime characters, don't blame this LoRA. What it saw is what it learned: there are zero real humans in the dataset, so it has zero knowledge of realistic faces. Check whether your base model was merged with a realistic model.
.........
See more in the update log section.
.........
(3/2/2025): You can find the REAL me at TensorArt now. I've reclaimed all my models that were duped and faked by other accounts.
It's an all-in-one finetuned LoRA. If you apply it to NoobAI v1.1, then you will get my personal "finetuned" base model. (Why would you train a 6GB checkpoint if you can just train a 100MB LoRA? And you can also apply it to any model you want in no time.)
It behaves the same as fully finetuned (trained, not merged) base models.
The dataset is not small. (Compared to a normal LoRA. Can't call it big either; there are many gigachads who like to finetune their models with millions of images... Orz)
This LoRA is also trained in one go. No merging, so no conflicts (at least inside this LoRA).
The dataset only contains high-resolution images. Zero AI images. So you get texture and details down to the pixel level, instead of a weird smooth plastic feeling.
It does not focus on a very unique art style, and won't dramatically change the image composition.
Cover images are the direct outputs from the vanilla (the original, not finetuned) base model in a1111-sd-webui, no upscale, no inpaint fixes, no negative prompt. They demonstrate the effect of the LoRA, not clickbait. You can drop the images into a1111 to reproduce yourself, they have metadata.
Sharing merges using this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
Remember to leave feedback in the comment section so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find and read the reviews.
Have fun.
Just apply it. No trigger words needed. It also does not patch the text encoders, so you don't have to set a patch strength for the text encoder (in ComfyUI, etc.).
Strength 0.4~0.8.
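The strength setting corresponds to a simple linear scaling of the LoRA's weight delta when it is applied to (or merged into) the base model. A minimal sketch in numpy, with made-up layer shapes (real SDXL layers vary, and trainers often add an alpha/rank factor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for a single projection layer.
d_out, d_in, rank = 320, 320, 16
W = rng.standard_normal((d_out, d_in)) * 0.02  # base model weight
A = rng.standard_normal((rank, d_in)) * 0.01   # LoRA down-projection
B = rng.standard_normal((d_out, rank)) * 0.01  # LoRA up-projection

def apply_lora(W, A, B, strength):
    """Standard LoRA update: W' = W + strength * (B @ A)."""
    return W + strength * (B @ A)

# Strength scales the delta linearly: 0.0 recovers the base weight,
# and 0.8 applies exactly twice the change of 0.4.
assert np.allclose(apply_lora(W, A, B, 0.0), W)
d04 = apply_lora(W, A, B, 0.4) - W
d08 = apply_lora(W, A, B, 0.8) - W
assert np.allclose(d08, 2 * d04)
```

This linearity is why a 0.4~0.8 range behaves predictably: halving the strength halves the change to every weight.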
Version prefix:
illus01 = Trained on Illustrious v0.1.
nbep11 = Trained on NoobAI e-pred v1.1
Which version to use?
Hard to tell. You should try both versions. Models nowadays are just merges of merges of merges; you would never know what's truly inside your base model. Most model creators don't know either.
Fun fact (5/10/2025): 90% of models labeled as "Illustrious" are actually NoobAI, if you calculate their weight similarities.
You can also just use both, with a low strength each; many users reported noticeably better results this way.
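The "weight similarity" check mentioned above can be sketched as a cosine similarity between the flattened tensors two checkpoints share; the function name, toy state dicts, and threshold here are illustrative, not the author's actual tooling:

```python
import numpy as np

def weight_similarity(sd_a, sd_b):
    """Cosine similarity over the tensors two state dicts have in common."""
    keys = sorted(set(sd_a) & set(sd_b))
    va = np.concatenate([np.ravel(sd_a[k]) for k in keys])
    vb = np.concatenate([np.ravel(sd_b[k]) for k in keys])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Toy state dicts standing in for real checkpoints.
base = {"unet.w": np.array([[1.0, 2.0], [3.0, 4.0]])}
tweak = {"unet.w": np.array([[1.0, 2.1], [3.0, 3.9]])}

assert abs(weight_similarity(base, base) - 1.0) < 1e-9
assert weight_similarity(base, tweak) > 0.99  # near 1.0: likely shared lineage
```

A merge or light finetune of the same base typically stays very close to 1.0, while independently trained models diverge much further.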
Every image is hand-picked by me.
Only normal good looking things. No crazy art style.
No AI images, no watermarks, etc.
Only high resolution images. Avg pixels 3.37 MP, ~1800x1800.
2 main datasets:
a 2D/anime dataset with ~1k images. Character-focused. Natural poses. Natural body proportions. No exaggerated art, chibi, jojo poses, etc.
a real-world photographs dataset with ~1k images. Contains nature, indoor scenes, animals, buildings... many things, except humans.
Why real-world images? You get better backgrounds, lighting, and pixel-level details/textures. There are no humans in the dataset, so it won't affect characters.
I named the dataset Touching Grass. There is also a LoRA that was only trained on this photograph dataset. If you want something pure.
But I got realistic faces on my anime characters.
Well, don't blame this LoRA. What it saw is what it learned. It has zero knowledge of realistic faces. Most likely your base model was mixed with other realistic models.
Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: Trained on and only on the photographs dataset (no anime dataset). Has a stronger effect. Useful for gigachad users who like pure concepts and like to balance weights themselves.
Dark: It fixes the bias of anime models towards high brightness. Trained on the low-brightness images in the Touching Grass dataset. Again, no humans in the dataset, so it does not affect style.
Example on WAI v13.

Contrast Controller: Control the contrast like using a slider in your monitor. Unlike other trained "contrast enhancer", the effect of this LoRA is stable, linear, and has zero side effect on style. (Not an exaggeration, it's really mathematically zero and linear. It was not from training.) Example on WAI v13.
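To illustrate what a "stable, linear" contrast control means, here is a pixel-space analogy (the actual LoRA operates on model weights, not pixels; this sketch only shows the linear behavior being claimed):

```python
import numpy as np

def adjust_contrast(img, strength):
    """Linear contrast about mid-gray: out = 0.5 + (1 + strength) * (img - 0.5)."""
    return np.clip(0.5 + (1.0 + strength) * (img - 0.5), 0.0, 1.0)

x = np.array([0.2, 0.5, 0.8])
assert np.allclose(adjust_contrast(x, 0.0), x)                   # strength 0 is identity
assert np.allclose(adjust_contrast(x, 0.5), [0.05, 0.5, 0.95])   # positive spreads values
assert np.allclose(adjust_contrast(x, -0.5), [0.35, 0.5, 0.65])  # negative flattens them
```

Like a monitor slider, mid-gray is a fixed point and the effect scales linearly with strength in both directions.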

Style Strength Controller: an overfitting-effect reducer. Also not from training, so it has zero side effects on style and a mathematically linear effect. It can reduce all kinds of overfitting effects (biases on objects, brightness, etc.).
Effect test on Hassaku XL: the prompt has the keyword "dark", but the model almost ignored it. Notice that at strength 0.25 this LoRA reduces both the bias towards high brightness and the weird smooth feeling on every surface, so the image feels more natural.
Differences from the Stabilizer:
The Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects on texture, details, and backgrounds by adding them back.
The Style Controller is not from training. It is more like "undoing" the training of the base model, so the model becomes less overfitted. It does not affect style, and it can reduce all kinds of overfitting effects, such as biases on brightness or objects.
A new version means new stuff and a new attempt, not necessarily a better version for your base model.
You can check the "Update log" section to find old versions. It's OK to use different versions together, just like mixing base models, as long as the sum of strengths does not exceed 1.
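Mixing versions works because applied LoRAs simply sum their scaled weight deltas; keeping the sum of strengths at or below 1 keeps the total change within the range a single full-strength LoRA would make. A hypothetical sketch (function name and shapes are illustrative):

```python
import numpy as np

def stack_loras(W, deltas, strengths):
    """Apply several LoRA deltas at once: W' = W + sum(s_i * delta_i)."""
    assert sum(strengths) <= 1.0, "keep the sum of strengths at or below 1"
    out = W.astype(float).copy()
    for delta, s in zip(deltas, strengths):
        out += s * delta
    return out

# Two toy deltas mixed at 0.4 and 0.3: 0.4*1 + 0.3*2 = 1.0 everywhere.
W = np.zeros((2, 2))
d1 = np.ones((2, 2))
d2 = np.full((2, 2), 2.0)
mixed = stack_loras(W, [d1, d2], [0.4, 0.3])
assert np.allclose(mixed, 1.0)
```

Because the deltas add linearly, two versions at strength 0.5 each perturb the weights about as much as one version at strength 1.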
(5/19/2025): illus01 v1.152
Continued improving lighting, textures, and details.
Added 5K more photographs. The set still contains everything except humans, covering as many lighting conditions as possible (from super bright to super dark).
Refactored my caption pipeline. All images now have natural captions from Google's latest LLM. All lighting conditions (brightness, color temperature, etc.) are properly tagged. All anime characters are tagged by wd tagger v3 and the Google LLM.
More data means more training steps and, as a result, a stronger effect.
(5/9/2025): nbep11 v0.205:
A quick fix of brightness and color issues in v0.198. Now it should not change brightness and colors so dramatically like a real photograph. v0.198 isn't bad, just creative, but too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Less deformed body, background in dark environment.
Removed color and contrast enhancement. Because it's not needed anymore. Use Contrast Controller instead.
(4/25/2025): nbep11 v0.172.
Same new things as in illus01 v1.93 ~ v1.121. Summary: new photographs dataset "Touching Grass". Better natural texture, background, and lighting. Weaker character effects for better compatibility.
Better color accuracy and stability. (Compared to nbep11 v0.160)
(4/17/2025): illus01 v1.121.
Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character style effect. Back to v1.23 level. Characters will have less details from this LoRA, but should have better compatibility. This is a trade-off.
Other things just same as below (v1.113).
(4/10/2025): illus11 v1.113 ❌.
Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New dataset "Touching Grass" added. Better natural texture, lighting and depth of field effect. Better background structural stability. Less deformed background, like deformed rooms, buildings.
Full natural language captions from LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. Should have better compatibility.
(3/22/2025): nbep11 v0.160.
Same stuff as in illus v1.72.
(3/15/2025): illus01 v1.72
Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and natural textures.
Added a small ~100 images dataset for hand enhancement, focusing on hand(s) with different tasks, like holding a glass or cup or something.
Removed all "simple background" images from dataset. -200 images.
Switched training tool from kohya to onetrainer. Changed LoRA architecture to DoRA.
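The switch to DoRA mentioned here changes how the low-rank delta is applied: DoRA decomposes each weight matrix into a per-column magnitude and a direction, learns the magnitude vector separately, and applies the B·A update only to the direction. A rough sketch (shapes and names are illustrative, not the trainer's actual code):

```python
import numpy as np

def dora_update(W, A, B, m):
    """DoRA-style merge: W' = m * (W + B @ A) / ||W + B @ A||_column."""
    V = W + B @ A
    col_norms = np.linalg.norm(V, axis=0, keepdims=True)  # per-column magnitude
    return m * V / col_norms

# With a zero low-rank delta and m set to W's own column norms,
# the update reproduces W exactly (the decomposition is lossless).
W = np.array([[3.0, 0.0], [4.0, 2.0]])
A = np.zeros((1, 2))
B = np.zeros((2, 1))
m = np.linalg.norm(W, axis=0, keepdims=True)
assert np.allclose(dora_update(W, A, B, m), W)
```

Training then adjusts m and the A/B factors separately, which is the claimed advantage of DoRA over plain LoRA.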
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added ~1k dataset focusing on natural dynamic lighting and real world texture.
More natural lighting and natural textures.
Above: Added more real world images. More natural texture and details.
ani04 v0.1
Init version for Animagine XL 4.0. Mainly to fix Animagine 4.0 brightness issues. Better and higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
bad version, effect is too weak, just ignore it
nbep11 v0.114
Implemented "full range colors". It automatically balances things towards "normal and good-looking". Think of it as the one-click photo auto-enhance button in most photo-editing tools. One downside of this optimization: it prevents high bias, for example when you want 95% of the image to be black and 5% bright instead of 50/50.
Added a little realistic data. More vivid details and lighting, less flat colors.
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find, ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
Above: Has a weak default style.
nbep11 v0.58
More images. Changed the training parameters to match the NoobAI base model as closely as possible.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on illustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.
+
+
+
+ 
Use the model without crediting the creator
Sell images they generate
Run on services that generate images for money
Run on Civitai
Share merges using this model
Sell this model or merges using this model
Have different permissions when sharing merges
Sharing merges using this LoRA, re-printing it to other platforms, are prohibited.
All cover images are directly from the vanilla (the original, not finetuned) base model in a1111, no upscale, no inpaint fixes, no any plugin, even no negative prompt. They demonstrate the effect of the LoRA, not clickbait. You can drop the images into a1111 to reproduce yourself, they have metadata.
(5/19/2025): illus v1.152
Continual to improve lighting and textures and details.
Added 5K more photographs. Still, contains everything, except human. Covering all lighting conditions as much as possible. (From super bright to super dark)
Refactored my caption pipeline. All images now have natural captions from Google latest LLM. All lighting (brightness, color temperature, etc.) conditions are properly tagged. All anime characters are tagged by wd tagger v3 and Google LLM.
More data, so more training steps, as a result, stronger effect.
FAQ:
If you want 100% effects of texture, avoid base models with AI style (trained on AI images). Because what AI styles are super overfitted style, and will overlap the texture instantly. FYI. Cover images are from vanilla base model. And I only use vanilla models + artist style LoRAs.
How to know if it is AI style. No good method. Personally I look at hair (or other surfaces). The more plastic it feels (no texture, weird shiny reflections), the more AI style it may have.
If you got realistic faces on anime characters. Don't blame this LoRA. What it saw is what it learned. There is zero real human in dataset, so it has zero knowledge of realistic faces. Check whether your base model was merged with other realistic model.
.........
See more in the update log section.
.........
(3/2/2025): You can find the REAL me at TensorArt now. I've reclaimed all my models that were duped and faked by other accounts.
It's an all-in-one finetuned LoRA. If you apply it to NoobAI v1.1, then you will get my personal "finetuned" base model. (Why would you train a 6GB checkpoint if you can just train a 100MB LoRA? And you can also apply it to any model you want in no time.)
Same as full finetuned (trained, not merged) base models
Dataset is not small. (Comparing to a normal LoRA. Can't say big, there are many gigachads who like to finetune their models with millions images... Orz )
This LoRA is also trained in one go. No merging, so no confliction (at least inside this LoRA).
The dataset only contains high resolution images. Zero AI image. So you can get texture and details beyond pixel level. Instead of a weird smooth plastic feeling.
It does not focus on a very unique art style, and won't dramatically change the image composition.
Cover images are the direct outputs from the vanilla (the original, not finetuned) base model in a1111-sd-webui, no upscale, no inpaint fixes, no negative prompt. They demonstrate the effect of the LoRA, not clickbait. You can drop the images into a1111 to reproduce yourself, they have metadata.
Share merges using this LoRA is prohibited. FYI, there are hidden trigger words to print invisible watermark. It works well even if the merge strength is 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
Remember to leave feedback in comment section. So everyone can see it. Don't write feedback in Civitai review system, it was so poorly designed, literally nobody can find and see the review.
Have fun.
Just apply it. No trigger words needed. Also it does not patch text encoders. So you don't have to set the patch strength for text encoder (in comfyui, etc.).
Strength 0.4~0.8.
Version prefix:
illus01 = Trained on Illustrious v0.1.
nbep11 = Trained on NoobAI e-pred v1.1
Which version to use?
Hard to tell. You should try both version. Models nowadays are just merges of merges and merges. You would never know what's truly inside your base model. Most model creators don't know either.
Fun fact (5/10/2025): 90% models that labeled as "illustrious" are actually NoobAi, if you calculate their weight similarities.
You can also just use both, with low strength each, many users reported this has noticeable better result.
Every image is hand-picked by me.
Only normal good looking things. No crazy art style.
No AI images, no watermarks, etc.
Only high resolution images. Avg pixels 3.37 MP, ~1800x1800.
2 main dataset:
a 2D/anime dataset with ~1k images. Character-focus. Natural poses. Natural body proportions. No exaggerated art, chibi, jojo pose, etc.
a real world photographs dataset with ~1k images. Contains nature, indoors, animals, buildings...many things, except human.
Why real world images? You can get better background, lighting, pixel level details/textures. There is no human in dataset so it won't affect characters.
I named the dataset Touching Grass. There is also a LoRA that was only trained on this photograph dataset. If you want something pure.
But I got realistic faces on my anime characters.
Well, don't blame this LoRA. What it saw is what it learned. It has zero knowledge of realistic faces. Most likely your base model was mixed with other realistic models.
Some ideas that was going to, or used to, be part of the Stabilizer. Now they are separated LoRAs. For better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: Trained on and only on the photographs dataset (No anime dataset). Has stronger effect. Useful for gigachad users who like pure concepts and like to balance weights themselves.
Dark: It can fix the high bias in anime models that towards high brightness. Trained on low brightness images in the Touching Grass dataset. Also, no human in dataset. So does not affect style.
Example on WAI v13.

Contrast Controller: Control the contrast like using a slider in your monitor. Unlike other trained "contrast enhancer", the effect of this LoRA is stable, linear, and has zero side effect on style. (Not an exaggeration, it's really mathematically zero and linear. It was not from training.) Example on WAI v13.

Style Strength Controller: Or overfitting effect reducer. Also not from training, so zero side effect on style and mathematically linear effects. Can reduce all kinds of overfitting effects (bias on objects, brightness, etc.).
Effect test on Hassaku XL: The prompt has keyword "dark", but the model almost ignored it. Notice that: at strength 0.25 this LoRA reduces the bias of high brightness, and a weird smooth feeling on every surfaces, so the image feels more natural.
Differences between Stabilizer:
Stabilizer affects style. Because it was trained on real world data. It can "reduce" overfitting effects about texture, details and backgrounds, by adding them back.
Style Controller was not from training. It is more like "undo" the training for base model, so it will less-overfitted. It does not affect style. And can reduce all overfitting effects, like bias on brightness, objects.
New version == new stuffs and new attempt != better version for you base model.
You can check the "Update log" section to find old versions. It's ok to use different versions together just like mixing base models. As long as the sum of strengths does not > 1.
(5/19/2025): illus01 v1.152
Continual to improve lighting and textures and details.
Added 5K more photographs. Still, contains everything, except human. Covering all lighting conditions as much as possible. (From super bright to super dark)
Refactored my caption pipeline. All images now have natural captions from Google latest LLM. All lighting (brightness, color temperature, etc.) conditions are properly tagged. All anime characters are tagged by wd tagger v3 and Google LLM.
More data, so more training steps, as a result, stronger effect.
(5/9/2025): nbep11 v0.205:
A quick fix of brightness and color issues in v0.198. Now it should not change brightness and colors so dramatically like a real photograph. v0.198 isn't bad, just creative, but too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Less deformed body, background in dark environment.
Removed color and contrast enhancement. Because it's not needed anymore. Use Contrast Controller instead.
(4/25/2025): nbep11 v0.172.
Same new things in illus01 v1.93 ~ v1.121. Summary: New photographs dataset "Touching Grass". Better natural texture, background, lighting. Weaker character effects for better compatibility.
Better color accuracy and stability. (Comparing to nbep11 v0.160)
(4/17/2025): illus01 v1.121.
Rolled back to illustrious v0.1. illustrious v1.0 and newer versions were trained with AI images deliberately (maybe 30% of its dataset). Which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character style effect. Back to v1.23 level. Characters will have less details from this LoRA, but should have better compatibility. This is a trade-off.
Other things just same as below (v1.113).
(4/10/2025): illus11 v1.113 ❌.
Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New dataset "Touching Grass" added. Better natural texture, lighting and depth of field effect. Better background structural stability. Less deformed background, like deformed rooms, buildings.
Full natural language captions from LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.
(3/22/2025): nbep11 v0.160.
Same changes as in illus01 v1.72.
(3/15/2025): illus01 v1.72
Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and textures.
Added a small ~100-image dataset for hand enhancement, focusing on hands doing different tasks, like holding a glass or a cup.
Removed all "simple background" images from the dataset (-200 images).
Switched training tool from kohya to onetrainer. Changed LoRA architecture to DoRA.
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added ~1k dataset focusing on natural dynamic lighting and real world texture.
More natural lighting and natural textures.
Above: Added more real world images. More natural texture and details.
ani04 v0.1
Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better and higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
Bad version, the effect is too weak; just ignore it.
nbep11 v0.114
Implemented "full range colors". It automatically balances things towards "normal and good looking". Think of it as the one-click "auto enhance" button in most photo editing tools. One downside of this optimization: it prevents strong bias, for example when you want 95% of the image to be black and 5% bright instead of a 50/50 split.
Added a little realistic data. More vivid details and lighting, less flat colors.
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
Above: Has a weak default style.
nbep11 v0.58
More images. Changed the training parameters to be as close as possible to those of the NoobAI base model.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on illustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.
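The "full range colors" balancing mentioned in the v0.114 notes is analogous to an auto-levels stretch. A toy numpy illustration of that photo-editing operation (my analogy, not the LoRA's actual mechanism):

```python
import numpy as np

def full_range_stretch(img, low_pct=1.0, high_pct=99.0):
    """Auto-levels: map the low/high percentiles of the pixel values
    to 0 and 1, like a one-click "auto enhance" button."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:  # flat image, nothing to stretch
        return img.copy()
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# A murky image occupying only 0.4..0.6 gets stretched to the full range,
# which is also why a deliberate 95%-black composition would get "fixed" away.
murky = np.linspace(0.4, 0.6, 101)
stretched = full_range_stretch(murky)
```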
Lowers the brightness and makes images more photography-like.
I trained this LoRA because anime models have a very strong bias towards bright images: even if you prompt something dark, it's still not dark enough, and most of the image stays bright.
However, I couldn't find a "dark" LoRA that produces natural darkness, because existing LoRAs are
either trained on bright images, so you have to use negative strength to go dark, which is doable, but the quality is really bad;
or focused on pure black levels, trained on pure black anime images, which causes style shifting, loses details, and sometimes deforms faces and backgrounds.
This LoRA is trained on a sub-dataset of "Touching Grass": only low-brightness, real-world images.
Only environments, no humans in the dataset, so it will not "pollute" your base model's style. It can be applied to both pure anime and realistic models.
It's dark, not black. The training images still span a fairly wide brightness range and are full of details, e.g. a cityscape at night full of small building lights. So the model knows what to do: go dark, not crazily black. You will not get deformed bodies, faces, or backgrounds. Instead, it makes dark environments more stable and adds more details.
Useful if you:
want something very dark, darker than prompts can achieve.
want to lower the overall brightness and create a photographic feeling.
Trained on Illustrious v0.1, but I tested it on NoobAI as well and the effects are very good, so I don't think we need a separate NoobAI version.
All cover images come directly from a1111: zero modification or fixes, no upscaling, not even a negative prompt.
Sharing merges using this LoRA and re-posting it to other platforms are prohibited.
All cover images come directly from the vanilla (original, not finetuned) base model in a1111: no upscaling, no inpaint fixes, no plugins, not even a negative prompt. They demonstrate the effect of the LoRA, not clickbait. You can drop the images into a1111 to reproduce them yourself; they contain metadata.
(5/19/2025): illus01 v1.152
Continued improving lighting, textures, and details.
Added 5K more photographs. The dataset still contains everything except humans, covering as many lighting conditions as possible (from super bright to super dark).
Refactored my caption pipeline. All images now have natural captions from Google's latest LLM. All lighting conditions (brightness, color temperature, etc.) are properly tagged. All anime characters are tagged by wd tagger v3 and a Google LLM.
More data means more training steps and, as a result, a stronger effect.
FAQ:
If you want 100% of the texture effect, avoid base models with an AI style (i.e. trained on AI images). AI styles are super overfitted and will override the texture instantly. FYI, the cover images are from vanilla base models, and I only use vanilla models + artist style LoRAs.
How do you know if a model has an AI style? There is no good method. Personally, I look at hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it probably has.
If you get realistic faces on anime characters, don't blame this LoRA. What it saw is what it learned: there are zero real humans in the dataset, so it has zero knowledge of realistic faces. Check whether your base model was merged with a realistic model.
.........
See more in the update log section.
.........
(3/2/2025): You can find the REAL me at TensorArt now. I've reclaimed all my models that were duped and faked by other accounts.
It's an all-in-one finetuning LoRA. If you apply it to NoobAI v1.1, you get my personal "finetuned" base model. (Why train a 6GB checkpoint when you can just train a 100MB LoRA? You can also apply it to any model you want in no time.)
Same as fully finetuned (trained, not merged) base models.
The dataset is not small (compared to a normal LoRA; I can't say big, since there are many gigachads who finetune their models with millions of images... Orz).
This LoRA is also trained in one go. No merging, so no conflicts (at least inside this LoRA).
The dataset only contains high-resolution images and zero AI images, so you get texture and details down to the pixel level instead of a weird smooth plastic feeling.
It does not focus on a very unique art style and won't dramatically change the image composition.
Cover images are the direct outputs from the vanilla (the original, not finetuned) base model in a1111-sd-webui, no upscale, no inpaint fixes, no negative prompt. They demonstrate the effect of the LoRA, not clickbait. You can drop the images into a1111 to reproduce yourself, they have metadata.
Sharing merges using this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
Remember to leave feedback in the comment section so everyone can see it. Don't write feedback in Civitai's review system; it is so poorly designed that literally nobody can find and see the reviews.
Have fun.
Just apply it. No trigger words needed. It also does not patch the text encoder, so you don't have to set a text encoder patch strength (in ComfyUI, etc.).
Strength: 0.4~0.8.
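Mechanically, applying a UNet-only LoRA at a given strength just adds a scaled low-rank delta to each matching weight matrix; the text encoder has no factors to patch. A minimal numpy sketch with toy, hypothetical shapes:

```python
import numpy as np

def apply_lora(W, A, B, strength):
    """Patch one weight matrix: W' = W + strength * (B @ A).
    A and B are the LoRA's low-rank down/up projection factors."""
    return W + strength * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # toy base weight
A = rng.standard_normal((2, 8))   # rank-2 down-projection
B = rng.standard_normal((8, 2))   # rank-2 up-projection
W_patched = apply_lora(W, A, B, strength=0.6)
```

At strength 0, the model is untouched; the effect scales linearly with the strength slider.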
Version prefix:
illus01 = Trained on Illustrious v0.1.
nbep11 = Trained on NoobAI e-pred v1.1.
Which version to use?
Hard to tell. You should try both versions. Models nowadays are just merges of merges of merges; you never truly know what's inside your base model. Most model creators don't know either.
Fun fact (5/10/2025): 90% of models labeled "Illustrious" are actually NoobAI, if you calculate their weight similarities.
You can also use both at a low strength each; many users have reported noticeably better results this way.
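The "weight similarity" in the fun fact above can be sketched as a cosine similarity over the shared weight tensors of two checkpoints (a toy version; real comparisons are typically done per layer on the UNet):

```python
import numpy as np

def weight_similarity(sd_a, sd_b):
    """Cosine similarity between two state dicts over their shared keys.
    1.0 = identical direction, 0.0 = unrelated."""
    keys = sorted(set(sd_a) & set(sd_b))
    va = np.concatenate([np.ravel(sd_a[k]) for k in keys])
    vb = np.concatenate([np.ravel(sd_b[k]) for k in keys])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

A model that is mostly NoobAI under the hood would score much closer to NoobAI than to Illustrious with this kind of measure.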
Every image is hand-picked by me.
Only normal, good-looking things. No crazy art styles.
No AI images, no watermarks, etc.
Only high-resolution images. Average 3.37 MP per image, ~1800x1800.
2 main datasets:
a 2D/anime dataset with ~1k images. Character-focused. Natural poses, natural body proportions. No exaggerated art, chibi, JoJo poses, etc.
a real-world photograph dataset with ~1k images. Contains nature, indoors, animals, buildings... many things, except humans.
Why real-world images? You get better backgrounds, lighting, and pixel-level details/textures. There are no humans in the dataset, so it won't affect characters.
I named this dataset Touching Grass. There is also a LoRA trained only on this photograph dataset, if you want something pure.
But I got realistic faces on my anime characters!
Well, don't blame this LoRA. What it saw is what it learned; it has zero knowledge of realistic faces. Most likely your base model was mixed with realistic models.
Some ideas that were going to be, or used to be, part of the Stabilizer are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: Trained on and only on the photograph dataset (no anime dataset). Has a stronger effect. Useful for gigachad users who like pure concepts and like to balance weights themselves.
Dark: Fixes the strong bias towards high brightness in anime models. Trained on the low-brightness images in the Touching Grass dataset. Again, no humans in the dataset, so it does not affect style.
Example on WAI v13.

Contrast Controller: Controls contrast like a slider on your monitor. Unlike other trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. (Not an exaggeration; it's really mathematically zero and linear. It was not produced by training.) Example on WAI v13.

Style Strength Controller: Or "overfitting effect reducer". Also not produced by training, so it has zero side effects on style and mathematically linear effects. It can reduce all kinds of overfitting effects (bias on objects, brightness, etc.).
Effect test on Hassaku XL: The prompt has the keyword "dark", but the model almost ignored it. Notice that at strength 0.25 this LoRA reduces the bias towards high brightness and the weird smooth feeling on every surface, so the image feels more natural.
Differences from the Stabilizer:
The Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects related to texture, details, and backgrounds by adding them back.
The Style Controller was not produced by training. It is more like "undoing" the base model's training so it is less overfitted. It does not affect style, and it can reduce all overfitting effects, like biases on brightness or objects.
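For intuition, the Contrast Controller's "mathematically linear" behavior corresponds to the standard mapping a monitor contrast slider applies, scaling pixel distances from mid-gray (an illustration of the math, not the LoRA's internals):

```python
import numpy as np

def adjust_contrast(img, c):
    """Linear contrast around mid-gray: c > 0 boosts, c < 0 reduces,
    c == 0 is the identity. Output is clipped to [0, 1]."""
    return np.clip(0.5 + (1.0 + c) * (img - 0.5), 0.0, 1.0)

img = np.array([0.25, 0.5, 0.75])
```

Because the mapping is linear and centered, mid-gray never moves and nothing else about the image (its "style") is touched.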
A new version == new stuff and a new attempt != a better version for your base model.
Check the "Update log" section to find old versions. It's fine to use different versions together, just like mixing base models, as long as the sum of strengths does not exceed 1.
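The sum-of-strengths rule of thumb can be enforced automatically. A small hypothetical helper, just to illustrate the rule:

```python
def mix_strengths(strengths, cap=1.0):
    """Scale per-version LoRA strengths down proportionally so that
    their sum never exceeds `cap` (the rule of thumb: sum <= 1)."""
    total = sum(strengths)
    if total <= cap:
        return list(strengths)
    return [s * cap / total for s in strengths]
```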
This LoRA enhances the depiction of light in darkness.
Image style slider.
Or "overfitting effect reducer"; I really don't know what to name it.
NOTE: Forget about v0.1; it is an alpha version.
Use v1. It's much better and more stable, and it supports all tools. All info on this page has been updated for v1.
What can this LoRA do?
This LoRA can reduce the style of your base model, or amplify it at strength < 0 if you want something crazy.
This LoRA will not bring any side effects to the style.
I like the style, why would I reduce its strength?
The real purpose of this LoRA is to reduce overfitting effects and bring creativity back by reducing the style just a little bit.
Overfitting effects?
They appear because the model was trained too hard and the dataset has biases.
E.g.:
noticeable bias, e.g. always too bright/dark, or generating the same faces/things/backgrounds.
being too sensitive to some prompt words.
What's the effect of this LoRA?
The effect mainly depends on what your base model looks like. You should test it and judge for yourself.
Here is an example on Hassaku XL v2.1fix. Notice that:
This base model has a noticeable bias towards high brightness (and signs/paintings on walls, shiny reflections...).
So at strength -0.3, the model completely ignores the prompt word "dark", because you amplified the style, and the bias along with it.
At strength 0.25, the model has much less brightness bias and feels more natural. Notice the wooden textures on the table and wall, and the fewer weird reflections. The style doesn't noticeably change.
Strength 0.5 is for reference: weaker style and less bias (looking at viewer, signs/paintings on walls, etc.). More natural.
This LoRA can also stabilize other LoRAs, avoiding the "burn" effect caused by super overfitted LoRAs.
How to use?
Just apply it as a normal LoRA.
Find the best strength for your model. Start around 0.2. A super overfitted model may need > 0.5.
The working strength range is about -0.5~1.
You don't have to set a patch strength for the text encoder; this LoRA does not patch it.
Some styles heavily affect the CFG scale, so you may also need to adjust the CFG scale after the style strength changes.
What's the training data? Why does it have zero side effects on style?
This LoRA is "calculated/calibrated" directly from SDXL and Illustrious v0.1/NoobAI ep11. There is no training process.
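The page doesn't explain how that calibration is done. One common no-training approach that fits the description is extracting low-rank LoRA factors from the weight difference between a finetuned model and its base via truncated SVD; applying them at negative strength then pushes the model back toward the base. A sketch on toy matrices (my assumption, not the author's confirmed method):

```python
import numpy as np

def extract_lora(W_base, W_tuned, rank):
    """Approximate the finetune delta with rank-`rank` factors:
    B @ A ~= W_tuned - W_base (truncated SVD)."""
    delta = W_tuned - W_base
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]  # up-projection, scaled by singular values
    A = Vt[:rank, :]            # down-projection
    return A, B

rng = np.random.default_rng(1)
W_base = rng.standard_normal((6, 6))
W_tuned = W_base + 0.1 * rng.standard_normal((6, 6))
A, B = extract_lora(W_base, W_tuned, rank=6)  # full rank: exact delta
```

Since the factors are derived purely from the weight delta, their effect is linear in the applied strength, which matches the "mathematically linear, zero side effect" claim.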
+
+
+
+
+ 
Use the model without crediting the creator
Sell images they generate
Run on services that generate images for money
Run on Civitai
Share merges using this model
Sell this model or merges using this model
Have different permissions when sharing merges
Or overfitting effect reducer. I really don't know how to name.
NOTE: Forget about v0.1. It is alpha version.
Use v1. It's much better and stabler. Supports all tools. All info in this page has been updated for v1.
What can this LoRA do?
This LoRA can reduce the style of your base model. Or amplify it if strength < 0 and you want something crazy.
This lora will not bring any side effects to the style.
I like the style, why would I reduce its strength?
The real purpose of this LoRA is to reduce overfitting effects and bring creativity back, by just reducing the style a little bit.
Overfitting effects?
They happen because the model was trained too hard and the dataset has bias.
E.g.:
Noticeable bias, e.g. images that are always too bright/dark, or the same faces/things/backgrounds generated over and over.
Oversensitivity to certain prompt words.
What's the effect of this LoRA?
The effect mainly depends on what your base model looks like; you should test it and judge for yourself.
Here is an example on Hassaku XL v2.1fix. Notice that:
This base model has a noticeable bias towards high brightness (plus signs/paintings on the wall, shiny reflections, ...).
So at strength -0.3, the model completely ignores the prompt word "dark", because you amplified the style, and the bias along with it.
At strength 0.25, the model has much less brightness bias and feels more natural. Notice the wooden textures of the table and wall, and fewer weird reflections. The style doesn't noticeably change.
Strength 0.5 is for reference: weaker style and less bias (looking at viewer, signs/paintings on the wall, etc.). More natural.
This LoRA can also stabilize other LoRAs, avoiding the "burn" effect caused by severely overfitted LoRAs.
How to use?
Just apply it as a normal LoRA.
Find the best strength for your model. Start around 0.2; a severely overfitted model may need > 0.5.
The working strength range is about -0.5 to 1.
You don't have to set the patch strength for the text encoder; this LoRA does not patch it.
Some styles heavily affect the CFG scale, so you may also need to adjust it once the style strength changes.
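Mechanically, applying a LoRA at a given strength just adds a scaled low-rank delta to each patched weight, which is why small positive strengths dampen the style and negative strengths amplify it. A minimal numpy sketch of this scaling (illustrative shapes and random values, not the real model weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in base weight and a rank-4 LoRA delta (delta_W = up @ down)
W = rng.standard_normal((64, 64))
down = rng.standard_normal((4, 64)) * 0.01
up = rng.standard_normal((64, 4)) * 0.01

def apply_lora(W, up, down, strength):
    """Effective weight with the LoRA applied at the given strength."""
    return W + strength * (up @ down)

# Strength 0 leaves the model untouched...
assert np.allclose(apply_lora(W, up, down, 0.0), W)

# ...and the delta scales linearly, so -0.3 pushes the weights
# in the exact opposite direction of +0.3.
d_pos = apply_lora(W, up, down, 0.3) - W
d_neg = apply_lora(W, up, down, -0.3) - W
assert np.allclose(d_pos, -d_neg)
```

This linearity is also why you can sweep strengths smoothly between -0.5 and 1 to find the sweet spot for a given base model.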
What's the training data? Why does it have zero side effects on style?
This LoRA is "calculated/calibrated" directly from SDXL and Illustrious v0.1/NoobAI ep11; there is no training process.
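For context, the usual way to obtain a LoRA "by calculation" rather than by training is to take the weight difference between two checkpoints and keep its best low-rank approximation via SVD. This is a generic sketch of that technique under toy shapes, not necessarily the author's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for one layer's weights in two checkpoints
# (e.g. a base model and a finetune derived from it).
W_base = rng.standard_normal((64, 64))
W_tuned = W_base + rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))

def extract_lora(W_a, W_b, rank):
    """Rank-`rank` SVD approximation of the weight difference W_b - W_a."""
    U, S, Vt = np.linalg.svd(W_b - W_a)
    up = U[:, :rank] * S[:rank]   # (out, rank), columns scaled by singular values
    down = Vt[:rank, :]           # (rank, in)
    return up, down

up, down = extract_lora(W_base, W_tuned, rank=4)

# The true difference here is exactly rank 4, so the
# extracted LoRA reconstructs it almost perfectly.
assert np.allclose(up @ down, W_tuned - W_base)
```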
SDXL LoRA/LyCORIS works best on the model it was trained on, so I will release multiple versions for a few popular models. Feel free to request an artist or model.
Version naming convention: MajorVersion.MinorVersion [BaseModel]
MajorVersion: for big updates that can apply to any style LoRA, e.g. a training-parameter update.
MinorVersion: for small updates that only apply to this LoRA, e.g. epoch selection or removing a certain image from the dataset.
BaseModel: the model this version was trained on (again, LoRA/LyCORIS works best there).
Change notes:
v2.233 -> v2.233-2: switched to a new epoch-selection strategy, reducing overfitting and improving hands.

Will remove if requested
The best LoRA for grass details.
Terms of Use: if you merge this model into your own and sell the result without crediting the original author, you acknowledge that 100% of the proceeds will be used to purchase your own coffin, for your own future use.
What's this LoRA?
This LoRA is for users who like raw, pure things and like to balance weights themselves.
It is useful for fixing poorly trained models that were trained on only dozens of AI images but for thousands of steps (a.k.a. severely overfitted models that can only generate the same things/faces/backgrounds over and over again).
What is the difference between this and the Stabilizer LoRA?
Stabilizer has very weak effects, because its rule is "don't break things", so it may do nothing on severely overfitted models. Stabilizer also includes an anime dataset to make anime characters look better.
This LoRA is trained very hard and has much stronger effects, so it can "overwrite" those severely overfitted models if you want. 100% real-world images.
What's in the dataset?
~1K real-world photographs of objects and environments.
No humans, so it will not "pollute" your characters; it can be used on both anime and realistic models.
Very diverse and creative, highest-quality images: high contrast, full of details. (That's why they are photographs.)
Paired with natural-language captions from an LLM, mainly because WD tagger v3 is really bad at real-world images, and also because natural captions have a more diverse vocabulary and help avoid overfitting.
What's the effect?
It really depends on your base model. Here is a quick comparison on WAI v13, a model with a very strong AI style (trained on AI images).
With/without.

Pixel-level natural details. A so-called "detailer", but instead of training on AI images to amplify fake details from noise and generate more fake objects, this LoRA focuses on natural texture: less of that flat, smooth feeling. Notice the food, clothes, light reflections on the table, depth of field, and blurry background.
Significantly improves background structural stability for anime models. Anime datasets don't contain much background knowledge; most images are just "simple background". Even when some do have a background, it may be abstract art lacking proper tags, so the base model forgets it or learns weird things during training. This LoRA was trained with tons of background/environment images with strong structural features.
How to use?
No trigger word needed.
You don't have to set the patch strength for the text encoder; this LoRA does not patch it.
Lower your CFG scale (about -30%) for better details.
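The CFG advice follows from how classifier-free guidance combines the two denoiser predictions: the scale multiplies the (conditional - unconditional) direction, so anything that strengthens that direction gets over-driven at your old scale. A toy sketch of the standard formula:

```python
import numpy as np

def cfg(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: push the unconditional prediction
    toward the conditional one, `scale` times as far."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

eps_u = np.array([0.0, 0.0])
eps_c = np.array([1.0, -1.0])

# Scale 1.0 is just the conditional prediction;
# higher scales overshoot it, which can "burn" details.
assert np.allclose(cfg(eps_u, eps_c, 1.0), eps_c)
assert np.allclose(cfg(eps_u, eps_c, 7.0), 7.0 * eps_c)

# Cutting the scale by ~30% proportionally softens the push.
assert np.allclose(cfg(eps_u, eps_c, 7.0 * 0.7), 4.9 * eps_c)
```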
I got realistic faces on my anime characters.
Don't blame this LoRA; it has zero knowledge of realistic faces. Most likely your base model was mixed with other realistic models (probably for better texture and lighting) and was already polluted. This LoRA may simply activate the polluted part, because the training datasets are similar (both come from the real world).
Sharing merges using this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark; it works even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
(4/15/2025) v0.2:
+30% images: a bug caused all AVIF files (30% of the dataset) to go unused in v0.1. lol.
Changed some parameters; stronger, cleaner, and more stable effect.
(4/02/2025) v0.1: initial release.