---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct
- lora
- sft
- transformers
- trl
- unsloth 
license: mit
datasets:
- Heigke/stanford-enigma-philosophy-chat
language:
- en
---

# Model Card for Philosophy-chat

Philosophy-chat is a fine-tuned version of unsloth/Qwen2.5-1.5B-Instruct, trained on philosophical question-and-answer data. The model specializes in understanding and generating responses about complex philosophical concepts, arguments, and debates.


## Model Details

### Model Description

- **Language:** English
- **License:** MIT
- **Fine-tuned from model:** unsloth/Qwen2.5-1.5B-Instruct
- **Fine-tuning method:** Supervised fine-tuning (SFT) with LoRA
- **Domain:** Philosophy
- **Dataset:** Heigke/stanford-enigma-philosophy-chat
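
LoRA fine-tuning freezes the pretrained weights and trains only a pair of low-rank matrices per adapted layer, which is why the adapter above is small enough to publish separately from the base model. A minimal NumPy sketch of the idea (the shapes, rank, and scaling factor here are illustrative, not this model's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2                  # weight shape d x k, LoRA rank r (r << min(d, k))
W = rng.standard_normal((d, k))    # frozen pretrained weight, never updated

# Only A and B are trained. B starts at zero, so before any training
# the adapted model behaves exactly like the base model.
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))

def adapted_forward(x, alpha=16):
    """Forward pass with the low-rank update: W x + (alpha/r) * B A x."""
    scale = alpha / r
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(k)
# With B = 0 the adapter is a no-op:
assert np.allclose(adapted_forward(x), W @ x)

# The adapter trains r*(d+k) parameters instead of d*k per layer.
print(r * (d + k), "trainable vs", d * k, "full")
```

At realistic transformer dimensions (d and k in the thousands, r around 8 to 64) the parameter savings are several orders of magnitude.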

 
## Uses

### Direct Use

- Generating clear and concise explanations of philosophical concepts. 
- Providing structured responses to philosophical questions. 
- Assisting students, researchers, and enthusiasts in exploring philosophical arguments.

## Bias, Risks, and Limitations

- Although fine-tuned on philosophy, the model may still hallucinate or produce imprecise interpretations of highly nuanced philosophical arguments.
- The model does not replace expert human philosophical judgment.


## How to Get Started with the Model

```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

login(token="")  # your Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)

# Load the LoRA adapter on top of the frozen base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/Philosophy-chat")

question = "According to William Whewell, what is necessary for gaining knowledge?"
system = "You are an expert in philosophy."

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # open the assistant turn so the model answers
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1024,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

## Training Details

### Training Data

Roughly 27k question-and-answer pairs inspired by articles from the Stanford Encyclopedia of Philosophy.
Topics range from zombies to abduction and from metaphysics to neuroethics, covering much of the core of mathematics, logic, and philosophy.
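
For supervised fine-tuning, each question-and-answer pair is typically rendered into the model's chat format before tokenization. Qwen2.5 models use the ChatML template; the sketch below is a hypothetical illustration of how one training example could be serialized (the exact preprocessing used for this model is not documented here):

```python
def to_chatml(system: str, question: str, answer: str) -> str:
    """Render one Q&A pair in the ChatML format used by Qwen2.5 models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n{answer}<|im_end|>\n"
    )

example = to_chatml(
    "You are an expert in philosophy.",
    "What is abduction?",
    "Abduction is inference to the best explanation.",
)
print(example)
```

In practice this formatting is handled by `tokenizer.apply_chat_template`, which guarantees the string matches the template the base model was trained with.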


### Framework versions

- PEFT 0.17.0