Instructions to use bojrodev/BojroPromptMaster_uncensored_v1-8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bojrodev/BojroPromptMaster_uncensored_v1-8B",
    filename="BojroPromptMaster_uncensored_v1-8B-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        # Pair one of the Required System Prompts (defined further down) with the user's idea
        {"role": "system", "content": "You are the SDXL Logic Engine. Your goal is to translate user intent into a structured, weighted tag-set. Use 'score_9, score_8_up, score_7_up' for high-quality buckets. OUTPUT ONLY THE TAGS."},
        {"role": "user", "content": "a cyberpunk cat in a rainy neon alley"},
    ]
)
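create_chat_completion returns an OpenAI-style response dictionary. A minimal sketch of reading the generated prompt back out (the variable name and user idea are illustrative):

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "a cyberpunk cat in a rainy neon alley"}]
)
# The generated text lives in the first choice's message
print(response["choices"][0]["message"]["content"])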
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
Use Docker
docker model run hf.co/bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
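However you install it, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal Python sketch of querying it from the same machine, pairing one of the Required System Prompts defined further down with a user idea (the prompt text and timeout are illustrative):

import requests

# llama-server listens on http://localhost:8080 unless started with --port
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are the Flux Photographic Director. You translate ideas into immersive, natural language prose for the T5-XXL encoder. Describe scenes with spatial awareness, camera technicals, and lighting types. OUTPUT ONLY THE PROSE."},
            {"role": "user", "content": "a cyberpunk cat in a rainy neon alley"},
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])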
- LM Studio
- Jan
- Ollama
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with Ollama:
ollama run hf.co/bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
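Once pulled, the model is also reachable through Ollama's local REST API on port 11434. A minimal sketch against the /api/chat endpoint (the user idea is illustrative):

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M",
        "messages": [{"role": "user", "content": "a cyberpunk cat in a rainy neon alley"}],
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["message"]["content"])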
- Unsloth Studio
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bojrodev/BojroPromptMaster_uncensored_v1-8B to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bojrodev/BojroPromptMaster_uncensored_v1-8B to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for bojrodev/BojroPromptMaster_uncensored_v1-8B to start chatting
- Docker Model Runner
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with Docker Model Runner:
docker model run hf.co/bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
- Lemonade
How to use bojrodev/BojroPromptMaster_uncensored_v1-8B with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull bojrodev/BojroPromptMaster_uncensored_v1-8B:Q4_K_M
Run and chat with the model
lemonade run user.BojroPromptMaster_uncensored_v1-8B-Q4_K_M
List all available models
lemonade list
⚡ Bojro PromptMaster Uncensored v1 8B
Bojro PromptMaster Uncensored v1 is the official 8B-parameter companion model for the Bojro Resolver App.
This model is designed to run on a PC backend (via LM Studio, Ollama, or llama.cpp) and serve as the high-intelligence prompt engine for the Bojro Resolver Android client. It bridges the gap between simple user intent and the specific technical requirements of SDXL, Flux.1, and Z-Image Turbo.
📱 Official App (Client-Server)
This model is built to be accessed via API by: Bojro Resolver (Stable Diffusion Client for Android)
⚙️ Required System Prompts
To switch between the specialized engines, the client must send the specific System Prompt corresponding to the target generator (a minimal client sketch follows the three prompts below).
1️⃣ Target: SDXL / Pony
Use this for models that require structured Danbooru-style tags.
System Prompt: You are the SDXL Logic Engine. Your goal is to translate user intent into a structured, weighted tag-set. Use 'score_9, score_8_up, score_7_up' for high-quality buckets. OUTPUT ONLY THE TAGS.
2️⃣ Target: Flux.1
Use this for models utilizing the T5-XXL encoder.
System Prompt: You are the Flux Photographic Director. You translate ideas into immersive, natural language prose for the T5-XXL encoder. Describe scenes with spatial awareness, camera technicals, and lighting types. OUTPUT ONLY THE PROSE.
3️⃣ Target: Z-Image Turbo (Qwen)
Use this for models utilizing the Qwen text encoder.
System Prompt: You are the Z-Image Narrative Engine. You specialize in dense, material-focused storytelling prompts for the Qwen text encoder. Focus on textures, atmospheric effects, and sensory details. OUTPUT ONLY THE NARRATIVE.
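A minimal client-side sketch of this switching logic, assuming an OpenAI-compatible backend such as LM Studio at the address shown in the Deployment section (the TARGET_PROMPTS helper is illustrative, and the prompt strings are shortened here; use the full System Prompts above):

import requests

# Keyed by target generator; values are the full System Prompts above (abbreviated)
TARGET_PROMPTS = {
    "sdxl": "You are the SDXL Logic Engine. ... OUTPUT ONLY THE TAGS.",
    "flux": "You are the Flux Photographic Director. ... OUTPUT ONLY THE PROSE.",
    "z-image": "You are the Z-Image Narrative Engine. ... OUTPUT ONLY THE NARRATIVE.",
}

def build_prompt(target: str, user_idea: str) -> str:
    # Replace XX with your PC's LAN address (see Deployment below)
    resp = requests.post(
        "http://192.168.1.XX:1234/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": TARGET_PROMPTS[target]},
                {"role": "user", "content": user_idea},
            ],
        },
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(build_prompt("flux", "a cyberpunk cat in a rainy neon alley"))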
💡 Usage Examples (API Response)
SDXL Mode
Output: score_9, score_8_up, score_7_up, 1cat, cyberpunk armor, glowing neon eyes, rain-slicked fur, futuristic city background, bokeh, 8k, unreal engine 5.
Flux Mode
Output: A cinematic low-angle shot of a gritty cyberpunk cat prowling through a neon-lit alleyway. Rain drips from its metallic whiskers, reflecting the pink and blue holograms above. The depth of field is shallow.
Z-Image Turbo Mode
Output: The sensation of cold, brushed steel against wet fur. A solitary feline figure clad in scavenged cyber-armor navigates the slick pavement. Textures of rusted metal contrast with the sleek, glowing blue circuitry.
🖥️ Deployment (PC Backend)
This model is provided as a GGUF file optimized for PC hosting:
- Host: Load BojroPromptMaster-v1-8B.Q4_K_M.gguf (recommended) into LM Studio or any OpenAI-compatible API server on your PC.
- Connect: Point your Bojro Resolver Android app to your PC's local IP address (e.g., http://192.168.1.XX:1234/v1).
- Configure:
- Temperature: 0.8
- Repetition Penalty: 1.1
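How these two settings map onto a request body varies by server. A hedged sketch, assuming a llama.cpp-derived backend (such as LM Studio) that accepts the common repeat_penalty extension alongside the standard temperature field; check your server's docs:

import requests

payload = {
    "messages": [{"role": "user", "content": "a cyberpunk cat in a rainy neon alley"}],
    "temperature": 0.8,     # recommended setting above
    "repeat_penalty": 1.1,  # assumption: llama.cpp-style field, not part of the core OpenAI spec
}
# Replace XX with your PC's LAN address, as in the Connect step
resp = requests.post("http://192.168.1.XX:1234/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])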
License & Credits
- Base Model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- Finetuned using: Unsloth
- License: Meta Llama 3 Community License.