gpt-oss-20b-pai-debator.gguf: Personality AI (pAI)
Overview
This model represents the inaugural step in Personality AI (pAI), an innovative project dedicated to preserving the intellectual treasures of humanity's great minds. It is fine-tuned from the base model unsloth/gpt-oss-20b (MXFP4 quantized).
At its heart, pAI aims to keep the essence of influential thinkers alive in new forms, ensuring their methods of inquiry and wisdom continue to inspire future generations. This edition focuses on debate as a pathway to truth, much like Socrates advocated, emphasizing clarity, logic, and the pursuit of understanding over division.
The fine-tuned model is pro-democracy and pro-freedom. It is provided in MXFP4 GGUF format for seamless compatibility with tools like llama.cpp, Ollama, and LM Studio, making it accessible for educational, reflective, or exploratory applications.
We invite great minds to join: philosophers, educators, and truth-seekers. Whether from Turning Point UK, Turning Point USA, or beyond, your contributions can help evolve pAI into a tool for global betterment.
Model Details
- Base Model: unsloth/gpt-oss-20b (MXFP4 quantized)
- Fine-Tuning Method: QLoRA with Unsloth (rank=64, targeting MoE layers: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj)
- Training Epochs: 6
- Dataset: Custom Harmony-formatted dataset (~7,000 examples) derived from Charlie Kirk's publicly available materials (e.g., YouTube videos and public interviews), with chain-of-thought (CoT) reasoning as a 75% focus for analytical depth.
- Max Sequence Length: 8192
- Optimizer: AdamW 8-bit
- Learning Rate: 1e-4
- Hardware: RTX PRO 6000 (96GB VRAM) for efficient training
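The Harmony-formatted examples mentioned above can be illustrated with a simplified sketch. This is not taken from the actual dataset: the turn contents are invented, and the real Harmony format includes further details (e.g., a distinct end-of-turn token for the final assistant message), but it shows the role/channel structure that separates CoT reasoning from the final answer.

```python
def render_harmony(turns):
    """Render (role, channel, content) turns as a simplified Harmony string."""
    parts = []
    for role, channel, content in turns:
        header = role if channel is None else f"{role}<|channel|>{channel}"
        parts.append(f"<|start|>{header}<|message|>{content}<|end|>")
    return "".join(parts)

# Hypothetical training example: CoT in the "analysis" channel,
# answer in the "final" channel.
example = render_harmony([
    ("system", None, "You are a seeker of truth through debate."),
    ("user", None, "Why does free speech matter?"),
    ("assistant", "analysis", "First, consider what speech protects..."),
    ("assistant", "final", "Free speech matters because..."),
])
print(example)
```

Each rendered example is then tokenized and used as a standard supervised fine-tuning sample.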
Intended Uses
- Interactive debates: Engage users in Socratic questioning to uncover truths on values, education, and societal progress.
- Educational tools: Help students practice critical thinking and logical argumentation.
- Cultural preservation: Explore ideas from great minds in a dynamic, conversational format.
- Community improvement: Spark discussions that inspire positive change in societies like Britain and America.
Example: prompt the model with a query on freedom or values, and receive a reasoned, step-by-step response leading to an insightful conclusion.
Limitations
- Scope: Generally pro-democracy and pro-freedom, and aligned with meritocracy, Christianity, pro-life positions, and personal responsibility. It may not suit all audiences or political viewpoints, and may revert to the base model's general patterns on unrelated subjects.
- Bias: Reflects source material's perspectives; users should approach with open minds.
- Output quality: May vary with prompt complexity.
- Verbosity: Outputs can be detailed; adjust generation parameters for brevity.
- Not for high-stakes use: Intended for reflection, not decisions; always verify with human judgment.
Evaluation
- Training monitored via loss curves (steady decrease).
- Holdout evaluation (10% dataset) assessed CoT coherence and relevance.
- Qualitative review ensured alignment with Socratic truth-seeking.
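The 10% holdout mentioned above can be produced with a simple seeded split. This is an illustrative sketch, not the project's actual evaluation code; the dataset entries here are placeholders standing in for the ~7,000 real examples.

```python
import random

def train_holdout_split(examples, holdout_frac=0.10, seed=42):
    """Shuffle with a fixed seed, then split off a holdout set."""
    rng = random.Random(seed)
    shuffled = examples[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    n_holdout = int(len(shuffled) * holdout_frac)
    return shuffled[n_holdout:], shuffled[:n_holdout]

dataset = [{"id": i} for i in range(7000)]   # placeholder examples
train, holdout = train_holdout_split(dataset)
print(len(train), len(holdout))              # 6300 700
```

A fixed seed keeps the split reproducible across runs, so holdout metrics remain comparable between checkpoints.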
How to Use
With Transformers (Python)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Entz/gpt-oss-20b-pai-debator"  # or a local path
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a seeker of truth through debate."},
    {"role": "user", "content": "How can values shape a better society?"},
]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)

# Sampling must be enabled for temperature to take effect.
outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
With GGUF (llama.cpp/Ollama)
Download the GGUF file, point a Modelfile at it, then create and run the model:

```
# Modelfile
FROM ./gpt-oss-20b-pai-debator.gguf
```

```shell
ollama create pai -f Modelfile
ollama run pai
```
Prompt in the interface for engaging dialogues.
Training Data
Synthesized from public materials, formatted in Harmony style to emphasize reasoning and truth-seeking.
Ethical Considerations
pAI adheres to open-source principles, using only public resources. It promotes unity and inquiry, not division, encouraging respectful discourse for societal good.
Acknowledgments
Powered by Unsloth, Hugging Face Transformers, and the open-source community. Inspired by timeless philosophers like Socrates, dedicated to preserving human wisdom.