---
license: cc-by-4.0
task_categories:
- text-generation
language:
- zh
tags:
- legal
pretty_name: Prediction of Chinese Judicial Documents
---

## Summary
Recent research on large language models (LLMs) has shown that general-purpose LLMs retain considerable capability in vertical domains, which is partly attributable to advances in reasoning techniques for large models. When LLMs are applied to legal judgment prediction, we observe the following phenomena:
- LLMs exhibit biases when predicting criminal charges, tending to favor common, frequently occurring offenses.
- When faced with weakly indicated charges, the models may disregard instructions and opt for more strongly indicated charges.
To evaluate the legal judgment prediction capabilities of large models fairly, we designed the Prediction of Chinese Judicial Documents (PCJD) benchmark and developed a sampling-based reasoning method called Elements Reward Guided Inference (ERGI). PCJD consists of two components: the "Original Set" (ori) and the "Adversarial Set" (adv). The data can be loaded with the following code:
```python
from datasets import load_dataset

# Load the full set
dataset = load_dataset("knockknock404/PCJD", "all", split="test")
# Load the "Original Set" (ori)
dataset_ori = load_dataset("knockknock404/PCJD", "ori", split="test")
# Load the "Adversarial Set" (adv)
dataset_adv = load_dataset("knockknock404/PCJD", "adv", split="test")
```
## Limitation
In compliance with the Supreme People's Court of China (SPC) guidelines on judicial document usage, we conducted comprehensive compliance reviews and established legally-binding usage protocols for the dataset. Given the stringent government-mandated requirements imposed on the original documents—including but not limited to personal privacy protection, social impact mitigation, ethical standards, and judicial fairness—the inherent risk of adverse societal effects from the source data is minimal. Throughout data processing, we strictly adhered to the platform’s usage requirements, ensuring all secondary processing and utilization of judicial documents fully complied with relevant regulations. Notably, we engaged both legal experts and non-legal professionals to conduct full-scope audits of the final training and test sets, aiming to mitigate potential risks from diverse perspectives, including social controversies and discrimination. Users must operate strictly under the published guidelines, with any derivative modifications requiring equivalent compliance.
## Composition
PCJD consists of two components: the "Original Set" (ori) and the "Adversarial Set" (adv).
- ori: Contains 803 test samples and 176 training samples covering 176 different criminal charges. By default, the "Procuratorate" field is used as the text input.
- adv: Contains 803 test samples and 176 training samples covering the same 176 criminal charges. Starting from ori, the samples with the highest cosine similarity to high-frequency charges are selected as adversarial examples.
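The similarity-based selection above can be sketched as follows. This is an illustrative reconstruction on toy vectors, not the released selection code; the embeddings, the high-frequency centroid, and the helper name are all assumptions:

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings: one vector per candidate sample, plus a centroid
# vector representing the high-frequency charges (both assumed here).
samples = {
    "sample_1": [1.0, 0.2, 0.0],
    "sample_2": [0.9, 0.8, 0.1],
    "sample_3": [0.1, 0.1, 1.0],
}
high_freq_centroid = [1.0, 1.0, 0.0]

# The sample most similar to the high-frequency charges becomes
# the adversarial example.
adv_id = max(samples, key=lambda k: cosine_sim(samples[k], high_freq_centroid))
```

In this toy setup, `sample_2` is closest to the high-frequency centroid and would be selected.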
## Format
The input for baseline testing follows this prompt template:

```
根据案情描述和已有步骤仅给出一个推理。如果是结论则直接输出<e></e>,例如<e>盗窃罪</e>。如果是步骤则直接输出<p></p>,例如<p>步骤1:…</p>\n案情描述:{inputs}\n已有推理步骤:\n{steps}\n
```

(Roughly: "Based on the case description and the existing steps, give exactly one inference. If it is a conclusion, output it wrapped in `<e></e>`, e.g. `<e>盗窃罪</e>` (theft). If it is a step, output it wrapped in `<p></p>`, e.g. `<p>Step 1: ...</p>`", followed by the case description `{inputs}` and the existing reasoning steps `{steps}`.)
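This template can be filled in and its tagged output parsed with a short helper like the following; the function names here are assumptions for illustration, not the repository's API:

```python
import re

PROMPT_TEMPLATE = (
    "根据案情描述和已有步骤仅给出一个推理。"
    "如果是结论则直接输出<e></e>,例如<e>盗窃罪</e>。"
    "如果是步骤则直接输出<p></p>,例如<p>步骤1:…</p>\n"
    "案情描述:{inputs}\n已有推理步骤:\n{steps}\n"
)

def build_prompt(inputs: str, steps: str) -> str:
    # Fill the case description and existing reasoning steps into the template.
    return PROMPT_TEMPLATE.format(inputs=inputs, steps=steps)

def parse_output(text: str):
    # An <e>...</e> span is a final conclusion (the predicted charge);
    # a <p>...</p> span is an intermediate reasoning step.
    m = re.search(r"<e>(.*?)</e>", text)
    if m:
        return ("conclusion", m.group(1))
    m = re.search(r"<p>(.*?)</p>", text)
    if m:
        return ("step", m.group(1))
    return ("invalid", text)
```

For example, `parse_output("<e>盗窃罪</e>")` yields `("conclusion", "盗窃罪")`.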
A simple preprocessing helper for this format is provided in the `code` directory:
```python
from main import get_args
from utils.warp import WarpLJP

args = get_args()
warp = WarpLJP(args=args)
for data in dataset:
    x, _, y = warp.processing_single(data)
```
## Code
We have provided a class of Monte Carlo tree sampling-based inference methods as baselines in the code, which can be invoked directly via the script:

```shell
sh test.sh
```
If fine-tuning a reward model is required, execute:

```shell
sh train.sh
```
During inference, the reward model can be loaded by adding `--reward_model_path` to the command.
The evaluation script is located in `code/utils/matrix`. We employ micro-F1 and macro-F1 as the primary evaluation metrics. The model's output adheres to the following format:

```json
{"x": "被告人史某...", "y": "<e>盗窃罪</e>", "pred": "<e>盗窃罪</e>"}
```
where `"x"` denotes the text description, `"y"` the label, and `"pred"` the model's prediction. The evaluation results can be obtained by running:
```python
from utils.matrix import get_f1
get_f1(outputs)
```
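For reference, micro- and macro-F1 over such output records could be computed as sketched below. This is a self-contained illustration, not the actual `get_f1` implementation; for single-label prediction, micro-F1 reduces to accuracy, while macro-F1 averages per-charge F1 scores:

```python
import re
from collections import defaultdict

def extract_charge(tagged: str) -> str:
    # Pull the charge name out of the <e>...</e> wrapper.
    m = re.search(r"<e>(.*?)</e>", tagged)
    return m.group(1) if m else tagged

def micro_macro_f1(outputs):
    # Per-charge true positives, false positives, and false negatives.
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for rec in outputs:
        y = extract_charge(rec["y"])
        pred = extract_charge(rec["pred"])
        if y == pred:
            tp[y] += 1
        else:
            fp[pred] += 1
            fn[y] += 1
    labels = set(tp) | set(fp) | set(fn)
    # Single-label setting: micro-F1 equals overall accuracy.
    micro = sum(tp.values()) / max(1, len(outputs))
    f1s = []
    for c in labels:
        p = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        r = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    macro = sum(f1s) / len(labels) if labels else 0.0
    return micro, macro
```

With one correct and one incorrect prediction over two charges, this returns a micro-F1 of 0.5 and a macro-F1 of 1/3.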