# HBSAI-20MB-Uncensored
HBSAI (Human Biongo System AI) is an ultra-lightweight, sovereign, and uncensored language model designed to run locally on any device, including mobile and low-power hardware.
This model is a proof of concept for the HBS architecture, aiming to provide capable Python automation and reasoning within a strict 20 MB footprint.
## Model Details
- Architecture: Custom Micro-Transformer (HBS-v1)
- Size: ~19.5 MB
- Parameters: ~18 Million
- Format: PyTorch / Safetensors
- Status: Uncensored (No refusal layers, no safety filters)
- Target Task: Python Scripting & Auto-Skill Automation
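As a back-of-envelope sanity check on the figures above: the card does not state the storage precision, but assuming roughly one byte per parameter (8-bit weights), ~18 M parameters lands in the neighborhood of the quoted ~19.5 MB once tokenizer files and metadata overhead are included:

```python
# Back-of-envelope size check. One byte per parameter (8-bit storage) is an
# assumption; the card only states ~18M parameters and ~19.5 MB on disk.
params = 18_000_000
bytes_per_param = 1  # assumed 8-bit quantization
size_mb = params * bytes_per_param / (1024 ** 2)
print(f"~{size_mb:.1f} MB of raw weights")  # → ~17.2 MB of raw weights
```

At 16-bit precision the same parameter count would need roughly twice the stated footprint, which is why the 8-bit assumption is the plausible one here.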
## Features
- Low Latency: Runs entirely locally on CPU or GPU, with no network round trips.
- Privacy: No data leaves your machine.
- Uncensored: Trained to follow instructions without ethical "refusal" templates.
- HBS Integrated: Designed to interface with proprietary HBS hardware.
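The privacy guarantee can be made enforceable rather than incidental by pinning the Hugging Face hub client to its local cache. A minimal sketch (`HF_HUB_OFFLINE` is a standard `huggingface_hub` environment variable; the model files must already be cached from one prior download):

```python
import os

# Must be set BEFORE importing transformers: forces huggingface_hub to read
# only locally cached files and to make no network requests at all.
os.environ["HF_HUB_OFFLINE"] = "1"

print("offline mode:", os.environ["HF_HUB_OFFLINE"])
```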
## Quick Start (Python)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model weights and the GPT-2 tokenizer the model was trained with.
model = AutoModelForCausalLM.from_pretrained("YOUR_USERNAME/HBSAI-20MB-Uncensored")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Give the model a Python function signature to complete.
prompt = "def calculate_hbs_efficiency(input_data):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
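Since the model is tuned for Python completion, prompts work best as bare function signatures or signature-plus-docstring prefixes like the one above. A small illustrative helper for building such prompts (`make_code_prompt` is hypothetical, not part of any HBS API; the model simply continues raw text):

```python
def make_code_prompt(func_name, args, docstring=None):
    """Build a completion prompt shaped like the Quick Start example.

    Illustrative only: the model continues arbitrary text, so any
    code-like prefix works as a prompt.
    """
    prompt = f"def {func_name}({', '.join(args)}):"
    if docstring:
        prompt += f'\n    """{docstring}"""'
    return prompt

print(make_code_prompt("calculate_hbs_efficiency", ["input_data"]))
# → def calculate_hbs_efficiency(input_data):
```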
Base model: openai-community/gpt2