Update README.md
README.md CHANGED

@@ -60,7 +60,7 @@ The dataset is provided in a structured format:
 ```python
 from datasets import load_dataset
 
-dataset = load_dataset("
+dataset = load_dataset("mbayan/Arabic-LJP")
 print(dataset)
 ```
 
@@ -76,18 +76,21 @@ Each sample includes:
 
 We evaluated the dataset using **LLaMA-based models** with different configurations. Below is a summary of our findings:
 
-| Metric
-|---------------|-------------|-------------|----------------|----------------|
-| Coherence
-| Brevity
-| Legal Language
-| Faithfulness
-| Clarity
-| Consistency
-| **Avg. Score**
-| ROUGE-1
-| ROUGE-2
-| BLEU
+| **Metric**                 | **LLaMA-3.2-3B** | **LLaMA-3.1-8B** | **LLaMA-3.2-3B-1S** | **LLaMA-3.2-3B-FT** | **LLaMA-3.1-8B-FT** |
+|----------------------------|------------------|------------------|---------------------|---------------------|---------------------|
+| **Coherence**              | 2.69             | 5.49             | 4.52                | *6.60*              | **6.94**            |
+| **Brevity**                | 1.99             | 4.30             | 3.76                | *5.87*              | **6.27**            |
+| **Legal Language**         | 3.66             | 6.69             | 5.18                | *7.48*              | **7.73**            |
+| **Faithfulness**           | 3.00             | 5.99             | 4.00                | *6.08*              | **6.42**            |
+| **Clarity**                | 2.90             | 5.79             | 4.99                | *7.90*              | **8.17**            |
+| **Consistency**            | 3.04             | 5.93             | 5.14                | *8.47*              | **8.65**            |
+| **Avg. Qualitative Score** | 3.01             | 5.89             | 4.66                | *7.13*              | **7.44**            |
+| **ROUGE-1**                | 0.08             | 0.12             | 0.29                | *0.50*              | **0.53**            |
+| **ROUGE-2**                | 0.02             | 0.04             | 0.19                | *0.39*              | **0.41**            |
+| **BLEU**                   | 0.01             | 0.02             | 0.11                | *0.24*              | **0.26**            |
+| **BERT**                   | 0.54             | 0.58             | 0.64                | *0.74*              | **0.76**            |
+
+**Caption**: A comparative analysis of performance across different LLaMA models. The model names have been abbreviated for simplicity: **LLaMA-3.2-3B-Instruct** is represented as LLaMA-3.2-3B, **LLaMA-3.1-8B-Instruct** as LLaMA-3.1-8B, **LLaMA-3.2-3B-Instruct-1-Shot** as LLaMA-3.2-3B-1S, **LLaMA-3.2-3B-Instruct-Finetuned** as LLaMA-3.2-3B-FT, and **LLaMA-3.1-8B-Finetuned** as LLaMA-3.1-8B-FT.
 
 ### **Key Findings**
 - Fine-tuned smaller models (**LLaMA-3.2-3B-FT**) achieve performance **comparable to larger models** (LLaMA-3.1-8B).
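For context on the automatic rows in the table above: ROUGE, BLEU, and BERT-style metrics compare each generated judgment against its gold reference. Below is a minimal sketch using the Hugging Face `evaluate` library, assuming `predictions` and `references` are parallel lists of strings and that the **BERT** row reports BERTScore F1; the repository's actual evaluation scripts may differ.

```python
# Sketch of the automatic evaluation, not the repository's exact script.
import evaluate

# Placeholder data: model outputs and gold judgments (parallel lists).
predictions = ["...generated judgment..."]
references = ["...reference judgment..."]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

rouge_scores = rouge.compute(predictions=predictions, references=references)
bleu_scores = bleu.compute(predictions=predictions, references=references)
# Assumption: the BERT column is BERTScore F1 computed on Arabic text.
bert_scores = bertscore.compute(predictions=predictions,
                                references=references, lang="ar")

print(rouge_scores["rouge1"], rouge_scores["rouge2"])   # ROUGE-1 / ROUGE-2
print(bleu_scores["bleu"])                              # corpus BLEU
print(sum(bert_scores["f1"]) / len(bert_scores["f1"]))  # mean BERTScore F1
```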
@@ -101,7 +104,7 @@ To use the dataset in your research, load it as follows:
 ```python
 from datasets import load_dataset
 
-dataset = load_dataset("
+dataset = load_dataset("mbayan/Arabic-LJP")
 
 # Access train and test splits
 train_data = dataset["train"]
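Once loaded, each split is a standard `datasets.Dataset`, so its schema can be inspected directly. A short usage sketch (the exact column names are those defined on the dataset card, not assumed here):

```python
from datasets import load_dataset

dataset = load_dataset("mbayan/Arabic-LJP")
train_data = dataset["train"]

print(train_data.column_names)  # field names as defined by the dataset card
print(len(train_data))          # number of training samples
print(train_data[0])            # first sample as a plain Python dict
```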
@@ -112,7 +115,7 @@ test_data = dataset["test"]
 
 The full implementation, including preprocessing scripts and model training code, is available in our GitHub repository:
 
-🔗 **[GitHub: MohamedBayan/to_be_released](https://github.com/MohamedBayan/
+🔗 **[GitHub: MohamedBayan/Arabic-Legal-Judgment-Prediction](https://github.com/MohamedBayan/Arabic-Legal-Judgment-Prediction)**
 
 ## Citation
 