tags:
- legal
pretty_name: Prediction of Chinese Judicial Documents
---

## Summary
Recent research on large language models (LLMs) has shown that general-purpose LLMs can retain considerable capability in vertical domains, partly owing to advances in reasoning techniques for large models. When LLMs are applied to legal judgment prediction, the following phenomena have been observed:

1. LLMs exhibit biases when predicting criminal charges, tending to favor common, frequently occurring offenses.

2. When the correct charge is one the model is only weakly inclined toward, it may disregard the instructions and instead choose a charge it is more strongly inclined toward.

To evaluate the legal judgment prediction capabilities of large models fairly, we designed the Prediction of Chinese Judicial Documents (PCJD) benchmark and developed a sampling-based reasoning method called Elements Reward Guided Inference (ERGI). PCJD consists of two components: the "Original Set" (ori) and the "Adversarial Set" (adv). The data can be accessed by running the following code:
```python
from datasets import load_dataset

# Load the full benchmark ("Original Set" + "Adversarial Set")
dataset = load_dataset("knockknock404/PCJD", "all", split="test")

# Load the "Original Set" (ori)
dataset_ori = load_dataset("knockknock404/PCJD", "ori", split="test")

# Load the "Adversarial Set" (adv)
dataset_adv = load_dataset("knockknock404/PCJD", "adv", split="test")
```
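Since ERGI is described as a sampling-based reasoning method guided by an elements reward, a minimal sketch of that idea is best-of-n selection: sample several candidate judgments from the model, score each with a reward over key legal elements, and keep the highest-scoring one. The reward function, the candidate texts, and all names below are illustrative assumptions, not the released ERGI implementation.

```python
def elements_reward(prediction: str, key_elements: set[str]) -> float:
    """Toy reward: fraction of key legal elements mentioned in the prediction.

    Illustrative only -- the actual ERGI reward is not specified here.
    """
    mentioned = sum(1 for element in key_elements if element in prediction)
    return mentioned / max(len(key_elements), 1)

def ergi_select(candidates: list[str], key_elements: set[str]) -> str:
    """Best-of-n selection: return the sampled candidate with the highest reward."""
    return max(candidates, key=lambda c: elements_reward(c, key_elements))

# Hypothetical candidates sampled from an LLM for a single theft case
candidates = [
    "The defendant committed theft.",
    "The defendant secretly took property of others for illegal possession, "
    "constituting theft.",
]
key_elements = {"secretly took", "illegal possession", "property of others"}
best = ergi_select(candidates, key_elements)  # the element-grounded candidate wins
```

In this sketch the second candidate is selected because it covers all three elements, while the bare conclusion covers none; the real method would sample candidates from the model and use the benchmark's own element annotations.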