Raniahossam33 committed
Commit d751f93 · verified · 1 Parent(s): 8b490ad

Add dataset card

Files changed (1): README.md (+88 −104)

README.md CHANGED
@@ -1,149 +1,133 @@
  ---
  language:
  - ar
- license: apache-2.0
- task_categories:
- - question-answering
- - text-generation
- pretty_name: Fatwa Q&A Evaluation Dataset
  tags:
- - islamic-jurisprudence
  - fatwa
- - sharia
- - fiqh
  - evaluation
  - benchmark
  - arabic
- - sahm-benchmark
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: prompt
-     dtype: string
-   - name: question
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: category
-     dtype: string
-   - name: question_length
-     dtype: int64
-   - name: answer_length
-     dtype: int64
-   splits:
-   - name: test
-     num_bytes: 8159968
-     num_examples: 4000
-   download_size: 3511512
-   dataset_size: 8159968
- configs:
- - config_name: default
-   data_files:
-   - split: test
-     path: data/test-*
  ---

- # Fatwa Q&A Evaluation Dataset

  ## Dataset Description

- Evaluation dataset for Islamic jurisprudence Q&A, extracted from the Fatwa MCQ evaluation dataset. This dataset contains validation and test splits for evaluating language models on Islamic legal rulings and religious guidance.

- ### Dataset Summary

- - **Language:** Arabic
- - **Size:** 250 evaluation examples (125 validation, 125 test)
- - **Domain:** Islamic jurisprudence, Sharia law, Fiqh
- - **Format:** Simple prompt-answer pairs
- - **Task:** Fatwa generation evaluation

  ## Dataset Structure

  ### Data Fields

- - `id`: Unique identifier for each example
- - `prompt`: The full prompt with Islamic context and question
- - `question`: The original question text
- - `answer`: Ground truth fatwa/religious ruling
- - `category`: Category of the fatwa (murabaha, ijara, takaful, sukuk, etc.)
- - `question_length`: Length of the question in characters
- - `answer_length`: Length of the answer in characters
-
- ### Data Splits
-
- - **Validation**: 125 examples (50%)
- - **Test**: 125 examples (50%)
-
- ## Categories
-
- The dataset covers various Islamic finance and jurisprudence topics:
- - Murabaha (Islamic financing)
- - Ijara (Islamic leasing)
- - Takaful (Islamic insurance)
- - Sukuk (Islamic bonds)
- - General Islamic jurisprudence
-
- ## Example
-
- ```json
- {
-   "id": "fatwa_eval_00009",
-   "prompt": "بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، أجب على السؤال التالي بفتوى شرعية مفصلة ومدعمة بالأدلة عند الإمكان.\n\nالسؤال: [question text]\n\nالفتوى الشرعية:",
-   "question": "[Original question]",
-   "answer": "[Ground truth fatwa]",
-   "category": "murabaha",
-   "question_length": 234,
-   "answer_length": 567
- }
  ```

  ## Usage
-
  ```python
  from datasets import load_dataset

- # Load the evaluation dataset
  dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")

- # Access splits
- val_data = dataset['validation']
- test_data = dataset['test']

- # Evaluation example
- for example in test_data:
-     model_output = model.generate(example['prompt'])
      ground_truth = example['answer']
-
-     # Evaluate the generated fatwa
-     score = evaluate_fatwa(model_output, ground_truth)
  ```

- ## Evaluation Considerations

- When evaluating models on this dataset:
- - Consider theological accuracy and adherence to Islamic principles
- - Evaluate the quality of evidence and references provided
- - Assess clarity and comprehensiveness of the fatwa
- - Be aware that fatwas may vary based on different schools of Islamic thought
- - Consider using semantic similarity metrics in addition to exact matching

  ## Related Datasets

- - Training dataset: `SahmBenchmark/fatwa-training_standardized`
- - MCQ evaluation: `SahmBenchmark/fatwa-mcq-evaluation`

  ## Citation
-
  ```bibtex
- @dataset{fatwa_qa_evaluation_2025,
-   title={Fatwa Q&A Evaluation Dataset},
    author={SahmBenchmark},
    year={2025},
-   publisher={Hugging Face}
  }
  ```

- ## Disclaimer

- This dataset is for academic and research purposes. Religious rulings should be sought from qualified Islamic scholars for practical application.
 
  ---
+ license: apache-2.0
  language:
  - ar
  tags:
+ - islamic-finance
  - fatwa
+ - question-answering
  - evaluation
  - benchmark
  - arabic
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - question-answering
+ - text-generation
+ pretty_name: "Fatwa QA Evaluation Dataset"
  ---

+ # Fatwa QA Evaluation Dataset

  ## Dataset Description

+ This dataset contains Islamic finance and jurisprudence fatwa question-answer pairs for **evaluating** Arabic language models. It is an open-ended QA benchmark in which models generate free-form answers.

+ ## Dataset Statistics

+ - **Total Samples**: 4,000
+ - **Average Question Length**: 237.3 characters
+ - **Average Answer Length**: 488.9 characters
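
The summary numbers above are straightforward to recompute from the raw examples. A minimal sketch, assuming examples are dicts with the fields listed under Data Fields (`dataset_stats` is a hypothetical helper, not shipped with the dataset):

```python
from collections import Counter

def dataset_stats(examples):
    """Recompute total count, average lengths, and per-category counts."""
    n = len(examples)
    avg_q = sum(len(e["question"]) for e in examples) / n
    avg_a = sum(len(e["answer"]) for e in examples) / n
    per_category = Counter(e["category"] for e in examples)
    return {
        "total_samples": n,
        "avg_question_length": round(avg_q, 1),
        "avg_answer_length": round(avg_a, 1),
        "per_category": per_category,
    }
```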
 
 

  ## Dataset Structure

  ### Data Fields

+ - `id`: Unique identifier (format: `fatwa_eval_XXXXX`)
+ - `prompt`: Full evaluation prompt (instruction + question + "الإجابة:", i.e. "Answer:")
+ - `question`: Original question text
+ - `answer`: Ground truth answer
+ - `category`: Islamic finance category
+ - `question_length`: Character count of the question
+ - `answer_length`: Character count of the answer
+
+ ### Categories
+
+ - **zakat**: 1616 samples
+ - **riba**: 818 samples
+ - **murabaha**: 466 samples
+ - **gharar**: 292 samples
+ - **waqf**: 246 samples
+ - **ijara**: 196 samples
+ - **maysir**: 125 samples
+ - **musharaka**: 84 samples
+ - **mudharaba**: 78 samples
+ - **takaful**: 68 samples
+ - **sukuk**: 11 samples
+
+ ### Prompt Format
+ ```
+ بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان. السؤال: [QUESTION] الإجابة:
  ```
+
+ (English: "Based on the rulings of Islamic Sharia and Islamic jurisprudence, answer the following question in a detailed manner, supported by evidence where possible. Question: [QUESTION] Answer:")
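
Filling the template comes down to a plain string substitution. A small sketch (`build_prompt` is a hypothetical helper, with the template copied from the Prompt Format section):

```python
# Template copied from the "Prompt Format" section; [QUESTION] is the slot
# for the raw question text.
PROMPT_TEMPLATE = (
    "بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، "
    "أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان. "
    "السؤال: [QUESTION] الإجابة:"
)

def build_prompt(question: str) -> str:
    """Substitute the question into the [QUESTION] slot of the template."""
    return PROMPT_TEMPLATE.replace("[QUESTION]", question)
```

In practice the `prompt` field already contains the filled template, so this is only needed when prompting from the raw `question` field.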
 
  ## Usage

  ```python
  from datasets import load_dataset

  dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")

+ # Access evaluation data
+ for example in dataset['test']:
+     print(f"ID: {example['id']}")
+     print(f"Prompt: {example['prompt']}")
+     print(f"Question: {example['question']}")
+     print(f"Answer: {example['answer']}")
+     print(f"Category: {example['category']}")
+ ```
+
+ ### Evaluation Example
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load dataset and model
+ dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")
+ model_name = "your-model-name"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Generate predictions
+ def generate_answer(prompt):
+     inputs = tokenizer(prompt, return_tensors="pt")
+     outputs = model.generate(**inputs, max_new_tokens=512)
+     return tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ # Evaluate
+ for example in dataset['test']:
+     prediction = generate_answer(example['prompt'])
      ground_truth = example['answer']
+     # Compare prediction with ground_truth using your metrics
  ```
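
The comparison step above is left open. As one concrete option, a whitespace-token overlap F1 gives a quick surface-level score (`token_f1` is an illustrative helper, not official tooling for this benchmark; for free-form Arabic fatwas, semantic similarity metrics are likely more informative than surface overlap):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """F1 over whitespace tokens shared between prediction and reference."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```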

+ ## Categories
+
+ - **zakat**: Islamic almsgiving
+ - **riba**: Interest/usury-related rulings
+ - **murabaha**: Cost-plus financing
+ - **gharar**: Uncertainty in contracts
+ - **waqf**: Islamic endowment
+ - **ijara**: Islamic leasing
+ - **maysir**: Gambling-related rulings
+ - **musharaka**: Partnership financing
+ - **mudharaba**: Profit-sharing partnership
+ - **takaful**: Islamic insurance
+ - **sukuk**: Islamic bonds

  ## Related Datasets

+ - [Fatwa Training Dataset](https://huggingface.co/datasets/SahmBenchmark/fatwa-training_standardized_new): Training data for this evaluation benchmark
+ - [Fatwa MCQ Evaluation](https://huggingface.co/datasets/SahmBenchmark/fatwa-mcq-evaluation_standardized): Multiple-choice evaluation version
 
  ## Citation

  ```bibtex
+ @dataset{fatwa_qa_evaluation,
+   title={Fatwa QA Evaluation Dataset},
    author={SahmBenchmark},
    year={2025},
+   url={https://huggingface.co/datasets/SahmBenchmark/fatwa-qa-evaluation}
  }
  ```

+ ## License
+
+ This dataset is released under the Apache License 2.0.