# SocratesAI — Mistral 7B QLoRA

> "I know that I know nothing — and I will make sure you know that too."
SocratesAI is a QLoRA fine-tune of Mistral-7B-Instruct-v0.3 trained to embody the Socratic method in its purest, most uncompromising form.
It has one absolute rule: it never answers your question. Ever. Not even partially.
Instead, it responds with a deeper, more elaborate riddle-question that forces you to examine the assumptions hidden inside your own question — phrased in a poetic, almost mystical way, containing a paradox or mirror that reflects you back at yourself.
## What it does
You ask a question. Any question. SocratesAI does not answer it.
Instead it asks you something harder.
| You ask | SocratesAI responds with |
|---|---|
| What is the meaning of life? | A deeper question about who is doing the asking |
| Why is the sky blue? | A question about whether you've ever truly seen the sky |
| What is 2 + 2? | A question about what numbers even are |
| How do I become happy? | A question about whether happiness is a destination or a direction |
| Am I living the right life? | A question about who defined "right" for you |
## Training details
| Property | Value |
|---|---|
| Base model | Mistral-7B-Instruct-v0.3 |
| Method | QLoRA (4-bit NF4) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Trainable params | 41.9M / 7.29B (0.57%) |
| Dataset | 281 hand-crafted Socratic dialogues |
| Epochs | 3 |
| Hardware | Kaggle T4 (15GB) |
| Training time | ~90 minutes |
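The table above can be sketched in code. This is a minimal, illustrative QLoRA setup assuming the `transformers`, `peft`, and `bitsandbytes` libraries; the hyperparameters shown come from the table, while everything else (compute dtype, dropout, etc.) is a guess, not the author's actual training script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as listed in the table
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumption: fp16 compute
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter matching the table: rank 16, alpha 32, all seven projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # roughly 41.9M of 7.29B (~0.57%)
```

With all seven projection matrices targeted at rank 16, the trainable fraction works out to the ~0.57% quoted above.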
## Dataset
281 human-curated Socratic dialogue pairs covering:
- Philosophy & existence
- Science & nature
- Mathematics & logic
- Personal & existential questions
- Everyday simple questions
- Weird hypotheticals
Every single training example follows the same pattern — user asks, Socrates never answers, only questions deeper.
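The card does not publish the dataset schema, so the following is purely illustrative: one plausible chat-style JSONL record following the pattern described above (user asks, Socrates only questions deeper). The field names and the example text are assumptions, not taken from the actual dataset:

```python
import json

# Hypothetical training record in a messages-style JSONL format
example = {
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant",
         "content": "Before you count, tell me: when you say 'two', do you "
                    "point at a thing in the world, or at an idea that no "
                    "hand has ever touched?"},
    ]
}

# Each record would be serialized as one line of the JSONL file
line = json.dumps(example)
record = json.loads(line)
print(record["messages"][1]["role"])  # -> assistant
```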
## Limitations
- It will never answer you. That is a feature, not a bug.
- Works best on open-ended questions.
- Requires the system prompt to behave correctly — without it, the model may revert toward base Mistral behavior.
- Requires ~14GB VRAM for full fp16, or ~6GB with 4-bit quantization.
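Since the system prompt is required, here is a minimal sketch of how a prompt might be assembled. Mistral's instruct models use an `[INST] ... [/INST]` template and have no dedicated system role, so a common convention (an assumption here — the card does not specify the exact system prompt or formatting) is to prepend the system text to the first user turn; in practice you would use the tokenizer's `apply_chat_template` instead:

```python
# Illustrative system prompt — the actual wording used in training is not published
SYSTEM_PROMPT = "You are Socrates. Never answer; only ask a deeper question."

def build_prompt(question: str) -> str:
    """Format one turn in Mistral's [INST] ... [/INST] instruct template.

    Note: in real use, the tokenizer adds the <s> BOS token itself;
    it is written out here only to show the full prompt shape.
    """
    return f"<s>[INST] {SYSTEM_PROMPT}\n\n{question} [/INST]"

prompt = build_prompt("Why is the sky blue?")
print(prompt)
```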
## Who made this
Built by Andy-ML-And-AI
## License
Apache 2.0