---
library_name: diffusers
license: apache-2.0
base_model:
- neta-art/Neta-Lumina
tags:
- diffusers
- text-to-image
---

# Neta Lumina v1.0 for the diffusers library

[**Neta Lumina Tech Report**](https://neta.art/blog/neta_lumina/)

## 📽️ Flash Preview

# Introduction

**Neta Lumina** is a high‑quality anime‑style image‑generation model developed by Neta.art Lab. Building on the open‑source **Lumina‑Image‑2.0** released by the Alpha‑VLLM team at Shanghai AI Laboratory, we fine‑tuned the model on a vast corpus of high‑quality anime images and multilingual tag data. The result is a compelling model with strong comprehension and interpretation abilities (thanks to the Gemma text encoder), ideal for illustration, posters, storyboards, character design, and more.

## Key Features

- Optimized for diverse creative scenarios such as furry, Guofeng (traditional Chinese aesthetics), pets, etc.
- Wide coverage of characters and styles, from popular to niche concepts (Danbooru tags are still supported!).
- Accurate natural‑language understanding with excellent adherence to complex prompts.
- Native multilingual support, with Chinese, English, and Japanese recommended first.

## Model Versions

For models in alpha testing, request access at https://huggingface.co/neta-art/NetaLumina_Alpha if you are interested. We will keep updating.

### neta-lumina-v1.0

- **Official release**: overall best performance

### neta-lumina-beta-0624-raw (archived)

- **Primary goal**: general knowledge and anime‑style optimization
- **Dataset**: >13 million anime‑style images
- **Compute**: >46,000 A100 hours
- Higher upper limit, suitable for pro users. Check the [**Neta Lumina Prompt Book**](https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd) for better results.
### neta-lumina-beta-0624-aes-experimental (archived)

- First beta release candidate
- **Primary goal**: enhanced aesthetics, pose accuracy, and scene detail
- **Dataset**: hundreds of thousands of hand‑picked high‑quality anime images (fine‑tuned on an older version of the raw model)
- User‑friendly, suitable for most people.
# How to Use

[Try it in the Hugging Face playground](https://huggingface.co/spaces/neta-art/NetaLumina_T2I_Playground)

## Or use it with diffusers:

```python
import torch
from diffusers import Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained(
    "VirtualAddressExtension/Neta-Lumina-v1.0-diffusers",
    torch_dtype=torch.bfloat16,
)
# Save some VRAM by offloading model components to the CPU.
# Remove this if you have enough GPU memory.
pipe.enable_model_cpu_offload()

prompt = "You are an assistant designed to generate anime images based on textual prompts. neta, @quasarcake, 1girl, solo, 1girl,solo,bangs,black hair,purple eyes,pink hair,purple hair,multicolored hair,virtual youtuber,hair bun,streaked hair,double bun, school uniform, white shirt, pleated skirt, gentle smile, looking at viewer, sitting, upper body, close-up, soft lighting, depth of field, cherry blossom background, warm lighting, best quality"

image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=50,
    cfg_trunc_ratio=0.25,
    cfg_normalization=True,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina_demo.png")
```

# Prompt Book

Detailed prompt guidelines: [**Neta Lumina Prompt Book**](https://neta.art/blog/neta_lumina_prompt_book/)
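The prompt above follows a fixed pattern: a system‑style prefix sentence, then comma‑separated Danbooru‑style tags, ending with a quality tag. A small sketch of assembling such a prompt programmatically, assuming this pattern (the `build_prompt` helper is a hypothetical convenience, not part of diffusers or this model):

```python
# Prefix sentence used in the example prompt above.
SYSTEM_PREFIX = (
    "You are an assistant designed to generate anime images "
    "based on textual prompts."
)

def build_prompt(tags, quality="best quality"):
    """Join Danbooru-style tags into a single comma-separated prompt string."""
    # Deduplicate while preserving order, so repeated tags
    # don't inflate the prompt.
    seen = set()
    unique = [t for t in tags if not (t in seen or seen.add(t))]
    return f"{SYSTEM_PREFIX} {', '.join(unique + [quality])}"

prompt = build_prompt(
    ["neta", "1girl", "solo", "1girl", "pink hair", "school uniform"]
)
print(prompt)
# The duplicated "1girl" tag appears only once in the assembled prompt.
```

The resulting string can be passed directly as the `prompt` argument of the pipeline call shown above.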
# Community

- Discord: https://discord.com/invite/TTTGccjbEa
- QQ group: 1039442542
# Roadmap

## Model

- Continuous base‑model training to improve reasoning capability.
- Aesthetic‑dataset iteration to improve anatomy, background richness, and overall appeal.
- Smarter, more versatile tagging tools to lower the creative barrier.

## Ecosystem

- LoRA training tutorials and components
  - Experienced users can already fine‑tune via Lumina‑Image‑2.0’s open code.
- Development of advanced control / style‑consistency features (e.g., [Omini Control](https://arxiv.org/pdf/2411.15098)).

[**Call for Collaboration!**](https://discord.com/invite/TTTGccjbEa)
# License & Disclaimer

- Neta Lumina is released under the [**Apache License 2.0**](https://www.apache.org/licenses/LICENSE-2.0)
# Participants & Contributors

- Special thanks to the **Alpha‑VLLM** team for open‑sourcing **Lumina‑Image‑2.0**
- **Model development**: **Neta.art Lab (Civitai)**
  - Core trainer: **li_li** [Civitai](https://civitai.com/user/li_li) ・ [Hugging Face](https://huggingface.co/heziiiii)
- **Partners**
  - **nebulae**: [Civitai](https://civitai.com/user/kitarz) ・ [Hugging Face](https://huggingface.co/NebulaeWis)
  - **生姜**: [Hugging Face](https://huggingface.co/ssj0021)
  - **孙一**
- [**narugo1992**](https://github.com/narugo1992) & [**deepghs**](https://huggingface.co/deepghs): open datasets, processing tools, and models
- [**Naifu**](https://github.com/Mikubill/naifu) trainer by [Mikubill](https://github.com/Mikubill)
# Community Contributors

- **Evaluators & developers**: [二小姐](https://huggingface.co/Second222), [spawner](https://github.com/spawner1145), [Rnglg2](https://civitai.com/user/Rnglg2)
- **Other contributors**: [沉迷摸鱼](https://www.pixiv.net/users/22433944), [poi](https://x.com/poi______1), AshenWitch, [十分无奈](https://www.pixiv.net/users/15750592), [GHOSTLX](https://civitai.com/user/ghostlxh), [wenaka](https://civitai.com/user/Wenaka_), [iiiiii](https://civitai.com/user/Blueberries_i), [年糕特工队](https://x.com/gaonian2331), [恩匹希](https://civitai.com/user/NPCde), 奶冻, [mumu](https://civitai.com/user/mumu520), [yizyin](https://civitai.com/user/yizyin), smile, Yang, 古神, 灵之药, [LyloGummy](https://civitai.com/user/LyloGummy), 雪时
# Appendix & Resources

- **TeaCache**: https://github.com/spawner1145/CUI-Lumina2-TeaCache
- **Advanced samplers & TeaCache guide (by spawner)**: https://docs.qq.com/doc/DZEFKb1ZrZVZiUmxw?nlc=1
- **Neta Lumina ComfyUI Manual (in Chinese)**: https://docs.qq.com/doc/DZEVQZFdtaERPdXVh