---
base_model: unsloth/Qwen3-0.6B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen3-0.6B
- lora
- sft
- transformers
- trl
- unsloth
license: mit
datasets:
- musaoc/Quran-reasoning-SFT
language:
- en
---
# Model Card for Quran-R1
## Model Details
This model is a fine-tuned version of unsloth/Qwen3-0.6B on the musaoc/Quran-reasoning-SFT dataset.
It is designed to perform reasoning and question-answering tasks related to the Quran, providing structured reasoning steps along with the final answer.
### Model Description
- **Language(s) (NLP):** English
- **License:** MIT
- **Fine-tuning method**: Supervised fine-tuning (SFT)
- **Finetuned from model:** Qwen3-0.6B
- **Dataset:** musaoc/Quran-reasoning-SFT
## Uses
The model is intended for:
- Educational purposes: Assisting with structured reasoning about Quranic content.
- Research: Exploring reasoning capabilities of small LLMs fine-tuned on religious text.
- QA Systems: Providing answers with reasoning traces.
Not intended for:
- Authoritative religious rulings (fatwas)
- Sensitive or controversial theological debates
- High-stakes decision making
### Out-of-Scope Use
- The model is limited to the reasoning dataset it was trained on and may not generalize to broader Quranic studies.
## Bias, Risks, and Limitations
- Bias: Outputs reflect dataset biases and may not represent all scholarly interpretations.
- Hallucination risk: Like all LLMs, it may generate incorrect or fabricated reasoning.
- Religious sensitivity: Responses may not align with every sect, school, or interpretation. Use with caution in sensitive contexts.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-0.6B",
    device_map={"": 0},  # place the model on GPU 0; requires CUDA
)
model = PeftModel.from_pretrained(base_model, "khazarai/Quran-R1")

question = "How does the Quran address the issue of parental authority and children’s rights?"
messages = [
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # include the model's reasoning trace in the output
)

# Stream the generated tokens to stdout as they are produced
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=512,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
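With `enable_thinking=True`, Qwen3-style chat templates let the model emit its reasoning inside `<think>...</think>` tags before the final answer. A minimal sketch, assuming that tag format, for separating the reasoning trace from the answer in a decoded completion (the `demo` string below is made up for illustration):

```python
def split_reasoning(output_text: str) -> tuple[str, str]:
    """Split a Qwen3-style completion into (reasoning, answer).

    Assumes the reasoning trace is wrapped in <think>...</think>;
    if no trace is present, the reasoning part is empty.
    """
    start, end = "<think>", "</think>"
    if start in output_text and end in output_text:
        head, _, rest = output_text.partition(start)
        reasoning, _, answer = rest.partition(end)
        return reasoning.strip(), (head + answer).strip()
    return "", output_text.strip()

# Made-up completion string for illustration only
demo = "<think>Verse 17:23 pairs worship of God with kindness to parents.</think>Kindness to parents is commanded."
reasoning, answer = split_reasoning(demo)
```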
**For pipeline:**
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B")
model = PeftModel.from_pretrained(base_model, "khazarai/Quran-R1")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

question = "How does the Quran address the issue of parental authority and children’s rights?"
messages = [
    {"role": "user", "content": question},
]
pipe(messages)
```
## Training Data
**Dataset**: musaoc/Quran-reasoning-SFT
The Quranic Reasoning Question Answering (QRQA) dataset is a synthetic dataset designed for experimentation and for training and evaluating models that answer complex, knowledge-intensive questions about the Quran, with a strong emphasis on reasoning.
It is particularly well-suited for supervised fine-tuning (SFT) of large language models to enhance their understanding of Islamic scripture and their ability to provide thoughtful, reasoned responses.
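For reproducing the SFT setup, each dataset record can be mapped into the chat format the model expects. A minimal sketch, assuming hypothetical `question` and `answer` field names (check the dataset card on the Hub for the actual schema):

```python
# Hypothetical record shape; the real field names may differ --
# inspect musaoc/Quran-reasoning-SFT before adapting this.
record = {
    "question": "What does the Quran say about honesty in trade?",
    "answer": "It commands giving full measure and weight.",
}

def to_chat_messages(record: dict) -> list[dict]:
    """Convert one SFT record into the role/content message list
    consumed by tokenizer.apply_chat_template during training."""
    return [
        {"role": "user", "content": record["question"]},
        {"role": "assistant", "content": record["answer"]},
    ]

messages = to_chat_messages(record)
```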
### Framework versions
- PEFT 0.17.0