BlackList 4.2 PRO (Next-Gen 1B)
BlackList 4.2 PRO is a next-generation 1B-class Prompt Enhancer designed to transform short visual concepts into elite, production-grade prompts for modern text-to-image systems such as Stable Diffusion, Midjourney, and Flux.
Built on a custom LLaMA-based architecture with Grouped-Query Attention (GQA), this version represents the largest and most advanced evolution of the BlackList series.
Model Details
Model Description
BlackList 4.2 PRO is a domain-specialized text-to-text generative model engineered exclusively for aesthetic prompt enhancement.
It consumes short input phrases (1–5 words) and outputs:
- Structured masterpiece-grade prompts
- Balanced composition layers
- Cinematic lighting injection
- Technical rendering descriptors
- Artist-level stylistic blending
- Resolution & detail optimization
Example:
Input: [SIMPLE] cyberpunk sniper
Output: [ENHANCED] cyberpunk sniper, full body shot, neon megacity background, cinematic rim lighting, ultra detailed armor, dynamic perspective, volumetric fog, sharp focus, 4k resolution, concept art, highly detailed
Core Identity
- Model Name: BlackList 4.2 PRO
- Creator: Bl4ckSpaces
- Category: Prompt Enhancer (Text-to-Text)
- Target Systems: Stable Diffusion, Midjourney, Flux, and similar T2I engines
- License: Apache 2.0 (Open-Source)
Architecture & Engine (The Brain)
BlackList 4.2 PRO is built on a custom LLaMA-inspired Transformer architecture optimized for high-efficiency aesthetic reasoning.
- Base Architecture: Custom LLaMA
- Total Parameters: 950 Million (1B class)
- Hidden Layers: 32
- Attention Heads: 24
- Key-Value Heads: 8 (Grouped-Query Attention / GQA)
- Maximum Context Length: 256 tokens
- Vocabulary Size: 32,000 (Custom BPE)
Why GQA?
Grouped-Query Attention enables:
- Large-model reasoning behavior
- ~30% lower VRAM consumption
- Faster inference
- Stable deployment on consumer hardware
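The VRAM saving above can be illustrated with a quick back-of-the-envelope sketch of the KV cache. With 8 key-value heads instead of 24 full attention heads, the cache shrinks to one third of its multi-head size; the head dimension below is a hypothetical value (the card does not state it), but the ratio is independent of it.

```python
# KV-cache comparison: standard multi-head attention (MHA) vs the
# grouped-query attention (GQA) layout listed above.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for the separate key and value tensors; fp16 = 2 bytes/element
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

N_LAYERS, N_HEADS, N_KV_HEADS = 32, 24, 8   # from the card
HEAD_DIM, SEQ_LEN = 64, 256                 # HEAD_DIM is an assumption

mha = kv_cache_bytes(N_LAYERS, N_HEADS, HEAD_DIM, SEQ_LEN)
gqa = kv_cache_bytes(N_LAYERS, N_KV_HEADS, HEAD_DIM, SEQ_LEN)
print(f"GQA cache is {gqa / mha:.2f}x the MHA cache")
```

The ~30% figure quoted above refers to total VRAM; the KV cache itself drops to roughly 0.33x, with weights and activations unchanged.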
Tokenizer & Vocabulary
- Tokenizer: Custom Byte-Pair Encoding (BPE)
- Training Scope: Exclusively visual-art domain
- Vocabulary Size: 32,000 curated tokens
Optimized for recognizing:
- Lighting systems
- Rendering engines
- Camera angles
- Artistic mediums
- Resolution scaling
- Style descriptors
Training Data (The Soul)
- Total Dataset: ~788,000 high-quality prompts
- Data Quality: Strictly filtered and cleaned
- Sources:
- High-tier Stable Diffusion prompt collections
- Human-curated elite prompt datasets
Training Configuration
- Training Hardware: TPU v5e (8-core parallel processing)
- Epochs: 2
- Total Optimization Steps: 6,161
- Weight Format: FP16 / BF16
Native Performance Settings (Recommended)
For optimal inference stability:
- Temperature: 0.65
- Top-K: 45
- Top-P: 0.90
- Repetition Penalty: 1.15
These settings:
- Prevent keyword hallucination
- Reduce repetition loops
- Preserve structured enhancement
- Maintain high aesthetic density
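To make the Top-K/Top-P interaction concrete, here is a framework-free sketch of how the two filters prune the candidate pool at each decoding step, using the recommended values. Real inference goes through `model.generate(...)`; the toy distribution below is invented for illustration only.

```python
# Minimal sketch of top-k + top-p (nucleus) filtering.
# probs: dict of token -> probability for one decoding step.

def filter_candidates(probs, top_k=45, top_p=0.90):
    # Keep the top_k most likely tokens, then keep the smallest
    # prefix whose cumulative probability reaches top_p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, mass = [], 0.0
    for token, p in ranked:
        kept.append(token)
        mass += p
        if mass >= top_p:
            break
    return kept

toy = {"neon": 0.5, "cinematic": 0.3, "volumetric": 0.15, "blurry": 0.05}
print(filter_candidates(toy))  # ['neon', 'cinematic', 'volumetric']
```

Here the low-probability tail ("blurry") never survives the nucleus cutoff, which is how these settings suppress keyword hallucination in practice.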
Capabilities
1B-Class Aesthetic Intelligence
Compared to earlier BlackList versions:
- Dramatically stronger context control
- More proportional prompt layering
- Reduced random style injection
- Cleaner technical structuring
Structured Enhancement Behavior
The model intelligently organizes prompts into:
- Core Subject
- Pose / Shot Type
- Environment
- Lighting
- Detail Layer
- Rendering Quality
- Final Resolution
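The layer ordering above can be sketched as a simple assembly step. The layer names and example values below are illustrative only, not actual model output:

```python
# Assembling the documented layer order into a final comma-separated prompt.

LAYER_ORDER = [
    "core_subject", "pose_shot", "environment", "lighting",
    "detail", "rendering", "resolution",
]

def assemble_prompt(layers: dict) -> str:
    # Keep only the layers that are present, in the documented order.
    return ", ".join(layers[k] for k in LAYER_ORDER if k in layers)

example = {
    "core_subject": "cyberpunk sniper",
    "pose_shot": "full body shot",
    "environment": "neon megacity background",
    "lighting": "cinematic rim lighting",
    "detail": "ultra detailed armor",
    "rendering": "concept art, highly detailed",
    "resolution": "4k resolution",
}
print(assemble_prompt(example))
```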
Intended Use
Direct Use
- Prompt enhancement before diffusion sampling
- API backend for creative AI apps
- Automated aesthetic upscaling systems
Downstream Use
- Integration into T2I pipelines
- SaaS creative platforms
- AI-assisted art tooling
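A typical integration places the enhancer as a preprocessing stage in front of the diffusion engine. The sketch below stubs both stages with hypothetical functions (`enhance` would call BlackList 4.2 PRO, `render` a T2I backend); only the wiring is the point:

```python
# Pipeline sketch: enhance the short user phrase first, then hand the
# enhanced prompt to the diffusion engine. Both functions are stand-ins.

def enhance(short_prompt: str) -> str:
    # In production this would run BlackList 4.2 PRO; stubbed here.
    return f"{short_prompt}, cinematic lighting, highly detailed, 4k resolution"

def render(prompt: str) -> str:
    # Stand-in for a diffusion call (Stable Diffusion, Flux, ...).
    return f"<image generated from: {prompt}>"

print(render(enhance("fantasy knight")))
```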
Out-of-Scope
- Conversational AI
- Factual reasoning
- Long-form writing
- Legal, medical, or critical decision systems
BlackList 4.2 PRO is strictly a domain-specialized enhancer.
Evaluation
Evaluation focused on:
- Repetition resistance
- Aesthetic richness density
- Structural discipline
- Prompt hierarchy coherence
Results show:
- Stable non-repetitive expansion
- Clean formatting
- Balanced keyword stacking
- High compatibility with diffusion engines
Bias & Limitations
- Model inherits stylistic bias from curated visual-art datasets
- Prefers high-detail, cinematic aesthetics
- Limited context window (256 tokens) by design
- Not optimized for general NLP tasks
Users should validate outputs against their own diffusion sampler and configuration.
Environmental Impact
- Hardware: TPU v5e (8-core)
- Epochs: 2
- Training Steps: 6,161
- Precision: FP16 / BF16
- Efficient large-model training via parallel TPU architecture
How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Bl4ckSpaces/BlackList-4.2-PRO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_text = "[SIMPLE] fantasy knight"
inputs = tokenizer(input_text, return_tensors="pt")

# do_sample=True is required for temperature / top-k / top-p to take effect
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.65,
    top_k=45,
    top_p=0.90,
    repetition_penalty=1.15,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Citation
If you use BlackList 4.2 PRO in your project:
Bl4ckSpaces – BlackList 4.2 PRO (Next-Gen 1B)
Model Card Contact
Creator: Bl4ckSpaces
Hugging Face: https://huggingface.co/Bl4ckSpaces