---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
license: mit
datasets:
- Heigke/stanford-enigma-philosophy-chat
language:
- en
---

# Model Card for Philosophy-chat

Philosophy-chat is a LoRA fine-tune of Qwen2.5-1.5B-Instruct, trained on philosophical question–answer data. The model specializes in understanding and generating responses about philosophical concepts, arguments, and debates.
## Model Details

### Model Description

- **Language:** English
- **License:** MIT
- **Fine-tuned from model:** unsloth/Qwen2.5-1.5B-Instruct
- **Fine-tuning method:** Supervised fine-tuning (SFT) with LoRA
- **Domain:** Philosophy
- **Dataset:** Heigke/stanford-enigma-philosophy-chat
## Uses

### Direct Use

- Generating clear and concise explanations of philosophical concepts.
- Providing structured responses to philosophical questions.
- Assisting students, researchers, and enthusiasts in exploring philosophical arguments.
## Bias, Risks, and Limitations

- Although fine-tuned on philosophy, the model may still hallucinate or misread highly nuanced philosophical arguments.
- The model does not replace expert human philosophical judgment.
## How to Get Started with the Model

```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Authenticate with the Hugging Face Hub (insert your access token).
login(token="")

# Load the base model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)

# Attach the LoRA adapter.
model = PeftModel.from_pretrained(base_model, "Rustamshry/Philosophy-chat")

question = "According to William Whewell, what is necessary for gaining knowledge?"
system = "You are an expert in philosophy."

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": question},
]

# Format the conversation with the model's chat template, adding the
# generation prompt so the model responds as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stream the generated answer token by token.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1024,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
## Training Details

### Training Data

The training set contains roughly 27k question–answer pairs inspired by articles from the Stanford Encyclopedia of Philosophy. The questions range from zombies to the concept of abduction, and from metaphysics to neuroethics, covering some of the essence of mathematics, logic, and philosophy.

### Framework versions

- PEFT 0.17.0