justdeen committed on
Commit a78e602 · verified · 1 Parent(s): bfc32fd

Add pipeline_tag and proper model card for Inference API

Files changed (1):
  1. README.md +69 -16

README.md CHANGED
@@ -1,22 +1,75 @@
- # Quran Q&A Model (Interrupted Training)
 
- ## Status
- Training was interrupted but the model achieved excellent performance:
- - **ROUGE-1: 0.9989 (99.89%)**
- - Model is fully functional for Q&A tasks
 
- ## Model Details
- - Base: google/flan-t5-small
- - Date: 2025-07-11
- - Training interrupted at epoch Unknown
 
  ## Usage
- Run test_model.py to test the model:
- ```bash
- python test_model.py
  ```
 
- ## Files
- - final_model/: The trained model and tokenizer
- - training_summary.json: Training metrics and configuration
- - test_model.py: Example usage script with test cases
+ ---
+ license: apache-2.0
+ pipeline_tag: text-generation
+ tags:
+ - transformers
+ - text-generation
+ - arabic
+ - quran
+ - islamic
+ - safetensors
+ language:
+ - ar
+ library_name: transformers
+ ---
 
+ # QuranPlus
+
+ QuranPlus is a language model trained for Islamic and Quranic text generation.
+
+ ## Model Description
+
+ This model is designed to generate text related to Islamic teachings and Quranic content in Arabic.
 
  ## Usage
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("justdeen/QuranPlus")
+ model = AutoModelForCausalLM.from_pretrained("justdeen/QuranPlus")
+
+ # Generate text
+ input_text = "ما هو الإسلام؟"
+ inputs = tokenizer(input_text, return_tensors="pt")
+
+ with torch.no_grad():
+     outputs = model.generate(**inputs, max_length=100, do_sample=True)
+
+ generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(generated_text)
  ```
 
+ ## Inference API
+
+ You can also use this model via the Hugging Face Inference API:
+
+ ```python
+ import requests
+
+ API_URL = "https://api-inference.huggingface.co/models/justdeen/QuranPlus"
+ headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # replace with your token
+
+ def query(payload):
+     response = requests.post(API_URL, headers=headers, json=payload)
+     return response.json()
+
+ output = query({"inputs": "ما هو الإسلام؟"})
+ ```
+
+ ## Training Details
+
+ This model was trained on Islamic and Quranic texts to provide accurate and contextually appropriate responses about Islamic teachings.
+
+ ## Limitations
+
+ - The model is specifically trained for Islamic content
+ - Responses should be verified by Islamic scholars for religious accuracy
+ - May not perform well on non-Islamic topics
+
+ ## License
+
+ Apache 2.0
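A side note on the Inference API snippet the new card adds: the exact request the `query()` helper would transmit can be inspected without contacting the endpoint, which helps verify the token header and JSON payload before spending API calls. A minimal sketch using `requests.Request(...).prepare()` (the token string is a placeholder, not a real credential):

```python
import json
import requests

API_URL = "https://api-inference.huggingface.co/models/justdeen/QuranPlus"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token

# Build and prepare the POST request without sending it, to see
# exactly what the card's query() helper would put on the wire.
req = requests.Request("POST", API_URL, headers=headers,
                       json={"inputs": "ما هو الإسلام؟"})
prepared = req.prepare()

print(prepared.method)                    # POST
print(prepared.headers["Authorization"])  # Bearer YOUR_HF_TOKEN
print(json.loads(prepared.body))          # the JSON payload the API receives
```

Because `json=` is used, `requests` serializes the payload and sets the `Content-Type: application/json` header automatically, so Arabic text is transmitted as UTF-8 encoded JSON.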