---

base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
model_name: SocratesAI
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3
- lora
- qlora
- sft
- transformers
- trl
- philosophy
- socratic-method
- conversational
license: apache-2.0
pipeline_tag: text-generation
---


# SocratesAI — Mistral 7B QLoRA

> *"I know that I know nothing — and I will make sure you know that too."*

SocratesAI is a QLoRA fine-tune of Mistral-7B-Instruct-v0.3 trained to embody
the Socratic method in its purest, most uncompromising form.

It has **one absolute rule**: it never answers your question.
Ever. Not even partially.

Instead, it responds with a deeper, more elaborate riddle-question that forces
you to examine the assumptions hidden inside your own question, phrased in a
poetic, almost mystical way and built around a paradox or mirror that reflects
you back at yourself.

---

## What it does

You ask a question. Any question. SocratesAI does not answer it.

Instead it asks you something harder.

| You ask | SocratesAI responds with |
|---|---|
| What is the meaning of life? | A deeper question about who is doing the asking |
| Why is the sky blue? | A question about whether you've ever truly *seen* the sky |
| What is 2 + 2? | A question about what numbers even are |
| How do I become happy? | A question about whether happiness is a destination or a direction |
| Am I living the right life? | A question about who defined "right" for you |

---
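
## Usage

A minimal inference sketch. It assumes the adapter is published as `Andy-ML-And-AI/SocratesAI` (a hypothetical repo id; adjust to the actual one) and loads the base model in 4-bit, matching the ~6GB VRAM figure in the Limitations section. The exact system prompt used during training is not reproduced in this card, so the one below is a placeholder with the same intent.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "Andy-ML-And-AI/SocratesAI"  # assumption: actual repo id may differ

# Load the base model in 4-bit NF4 to keep VRAM around ~6GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Placeholder system prompt; the actual training prompt is not published here.
system_prompt = (
    "You are Socrates. You never answer a question. "
    "You respond only with a deeper question."
)
# To stay compatible with chat templates that lack a system role,
# the instruction is prepended to the first user turn.
messages = [
    {"role": "user", "content": f"{system_prompt}\n\nWhat is the meaning of life?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

---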

## Training details

| Property | Value |
|---|---|
| Base model | Mistral-7B-Instruct-v0.3 |
| Method | QLoRA (4-bit NF4) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Trainable params | 41.9M / 7.29B (0.57%) |
| Dataset | 281 hand-crafted Socratic dialogues |
| Epochs | 3 |
| Hardware | Kaggle T4 (15GB) |
| Training time | ~90 minutes |
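
For reference, the table above maps onto the standard `peft`/`bitsandbytes` setup roughly as follows. This is a hedged reconstruction, not the author's training script; values not listed in the table (dropout, optimizer settings) are illustrative guesses.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base weights, per the table.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on all attention and MLP projections, per the table.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report ~41.9M trainable of ~7.29B
```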

---

## Dataset

281 human-curated Socratic dialogue pairs covering:
- Philosophy & existence
- Science & nature
- Mathematics & logic
- Personal & existential questions
- Everyday simple questions
- Weird hypotheticals

Every training example follows the same pattern: the user asks, Socrates
never answers, only questions deeper. A hypothetical pair in that shape is
sketched below.
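
A hedged illustration in the chat `messages` layout commonly used with TRL's `SFTTrainer`; the pair below is invented to match the table in "What it does", not an actual dataset row.

```python
# Hypothetical training pair (not an actual dataset row): the user asks,
# and the assistant responds only with a deeper question.
example = {
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
        {
            "role": "assistant",
            "content": (
                "Before asking why the sky wears that color, tell me: "
                "have you ever truly seen the sky, or only your idea of it?"
            ),
        },
    ]
}
```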

---

## Limitations

- **It will never answer you.** That is a feature, not a bug.
- Works best on open-ended questions.
- Requires a system prompt to behave correctly; without it, the model may revert toward base Mistral behavior (see the usage sketch above).
- Requires ~14GB VRAM for full fp16, or ~6GB with 4-bit quantization.

---

## Who made this

Built by **Andy-ML-And-AI** 

---

## License

Apache 2.0