Commit 5618b79 · initial commit
Files changed:
- .gitattributes (+36)
- .gitignore (+6)
- README.md (+76)
- app.py (+496)
- create_judgment_embeddings.py (+96)
- merge_criminal_cases.py (+39)
- rag_run.py (+284)
- requirements.txt (+8)
.gitattributes
ADDED
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
faiss_data/criminal_judgments.index filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,6 @@
/data_source/
/data_ori/
/faiss_data/
.env
.DS_Store
.cursorignore
README.md
ADDED
@@ -0,0 +1,76 @@
---
title: Judge Search
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
---

# Judicial Judgment Search and Q&A System

## Project Overview

This system is a judicial judgment search and question-answering platform designed for researchers and general users. Given a natural-language query, it searches judicial judgments and returns summaries together with links to the original decisions.

### Key Features
- Name/keyword search: accurately identifies names, statutes, causes of action, and other key information in judgments (supports text matching, vector-similarity search, and a hybrid strategy)
- Professional formatting: condenses complex judgment documents into easy-to-scan lists and summaries, with links to the original texts
- Interactive user interface: built with Gradio

## Data Source

The system uses criminal judgments from Taiwan's judicial judgment database, cleaned and preprocessed. The primary source is the [Judicial Yuan Open Data Platform](https://opendata.judicial.gov.tw/):
- Criminal judgment documents are cleaned and merged into `data_source/merged_criminal_cases.json`
- Each record contains the case ID, judgment date, case title, full judgment text, and a PDF link

## Technical Architecture

### Approach
1. **Vectorization**: judgment texts are encoded with a multilingual SentenceTransformer model
2. **Search**:
   - FAISS (Facebook AI Similarity Search) for vector retrieval
   - A custom text-search function for keyword matching (supports proper nouns such as personal names)
3. **AI inference**: integrates the OpenAI API (supports GPT-4o-mini and GPT-4o)
4. **User interface**: interactive web UI built with Gradio

### Main Files
- `create_judgment_embeddings.py`: processes judgment texts and builds the vector index together with the metadata index
- `app.py`: main program, containing the search logic and the Gradio interface
- `faiss_data/`: stores the FAISS vector index and the metadata index
- `data_source/`: raw judgment data (not uploaded)
- `merge_criminal_cases.py`: data preparation (merges the judgment files)
- `requirements.txt`: project dependencies

## Usage

### Installation and Setup
1. Install the required packages:
```bash
pip install -r requirements.txt
```

2. Configure your OpenAI API key (in a `.env` file or as a server environment variable):
```
OPENAI_API_KEY=your_api_key
```

3. Build the vector index (first run only):
```bash
python create_judgment_embeddings.py
```
If necessary, merge the judgment files first (`merge_criminal_cases.py`).

4. Launch the application:
```bash
python app.py
```

### Interface
- **Question input**: enter a question related to criminal judgments
- **Search mode**: choose 「自動」 (auto), 「僅文本搜尋」 (text search only), or 「僅向量搜尋」 (vector search only)
- **Model selection**: GPT-4o-mini (fast) or GPT-4o (more accurate)
- **Parameter tuning**: adjust generation parameters such as maximum length, temperature, and diversity
- **View original**: click a judgment link to read the full original text
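For orientation, the vector-search path described above comes down to a few lines. Below is a minimal sketch, assuming the index and metadata files produced by `create_judgment_embeddings.py` are present under `faiss_data/`; the query string is only illustrative:

```python
import pickle

import faiss
from sentence_transformers import SentenceTransformer

# Load the artifacts built by create_judgment_embeddings.py
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
index = faiss.read_index("faiss_data/criminal_judgments.index")
with open("faiss_data/criminal_judgments_meta.pkl", "rb") as f:
    meta = pickle.load(f)

query = "詐欺"  # illustrative query
vec = model.encode([query]).astype("float32")  # shape (1, 384)
distances, indices = index.search(vec, 5)      # top-5 nearest judgments
for rank, idx in enumerate(indices[0], start=1):
    print(rank, meta[idx]["JID"], meta[idx]["JTITLE"])
```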
app.py
ADDED
@@ -0,0 +1,496 @@
import gradio as gr
import os
import json
import pickle
import re
from typing import List, Tuple, Dict, Any
from dotenv import load_dotenv
import openai
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# An API key is not strictly required here, because users can supply their own
if OPENAI_API_KEY:
    print("[INFO] 已從環境變數讀取API金鑰")
else:
    print("[INFO] 未設置環境變數API金鑰,將使用使用者提供的金鑰")

# Configuration
FAISS_DATA_DIR = "faiss_data"  # folder holding the FAISS data
MODEL_NAME = 'paraphrase-multilingual-MiniLM-L12-v2'  # must match the model used to build the index

def load_faiss_data(data_dir=FAISS_DATA_DIR):
    """Load the FAISS index and the metadata."""
    if not os.path.exists(data_dir):
        raise ValueError(f"未找到 FAISS 資料資料夾:{data_dir}")

    # Paths to the FAISS index and metadata files
    index_path = os.path.join(data_dir, 'criminal_judgments.index')
    meta_path = os.path.join(data_dir, 'criminal_judgments_meta.pkl')

    if not os.path.exists(index_path) or not os.path.exists(meta_path):
        raise ValueError("未找到必要的 FAISS 資料檔案")

    print("[INFO] 載入 FAISS 索引和 meta 資料")
    index = faiss.read_index(index_path)
    with open(meta_path, 'rb') as f:
        meta_data = pickle.load(f)

    return index, meta_data

# Load the model and the data
try:
    model = SentenceTransformer(MODEL_NAME)
    faiss_index, meta_data = load_faiss_data()
    print("[INFO] 成功載入模型和 FAISS 資料")
except Exception as e:
    print(f"[ERROR] 載入失敗:{e}")
    print("[INFO] 請先執行 create_judgment_embeddings.py 建立索引")
    model, faiss_index, meta_data = None, None, None

def format_judgment_info(judgment_meta):
    """Format the key fields of a judgment."""
    return f"""
案件編號:{judgment_meta['JID']}
年度:{judgment_meta['JYEAR']}
案件類型:{judgment_meta['JCASE']}
案號:{judgment_meta['JNO']}
判決日期:{judgment_meta['JDATE']}
案件標題:{judgment_meta['JTITLE']}
PDF連結:{judgment_meta['JPDF']}
"""

def format_retrieval_results(results, meta_data):
    """Format retrieval results as HTML for display to the user."""
    html = "<div style='max-height: 500px; overflow-y: auto; padding: 1rem; background-color: #f7f7f7; border-radius: 0.5rem;'>"
    html += "<h3>📚 檢索到的相關判決</h3>"

    for i, (text, meta, match_context) in enumerate(results, start=1):
        html += "<div style='margin-bottom: 1rem; padding: 0.8rem; background-color: white; border-radius: 0.3rem; border-left: 4px solid #3498db;'>"
        html += f"<p><strong>判決 {i}</strong></p>"
        html += f"<p><strong>案件編號:</strong> {meta['JID']}</p>"
        html += f"<p><strong>案件類型:</strong> {meta['JCASE']}</p>"
        html += f"<p><strong>判決日期:</strong> {meta['JDATE']}</p>"
        html += f"<p><strong>案件標題:</strong> {meta['JTITLE']}</p>"
        html += f"<p><strong>PDF連結:</strong> <a href='{meta['JPDF']}' target='_blank'>查看原文</a></p>"

        # Show the matched context, if any
        if match_context:
            html += "<div style='margin-top: 0.5rem; padding: 0.5rem; background-color: #fffbea; border-radius: 0.3rem; border-left: 4px solid #f0c674;'>"
            html += f"<p><strong>匹配內容:</strong><br>{match_context}</p>"
            html += "</div>"

        # Add a shortened summary of the judgment text
        html += "<div style='margin-top: 0.5rem; padding: 0.5rem; background-color: #f8f9fa; border-radius: 0.3rem;'>"
        html += f"<p><strong>相關內容摘要:</strong><br>{text[:250]}...</p>"
        html += "</div></div>"

    html += "</div>"
    return html

def preprocess_judgment_text(text: str) -> str:
    """Preprocess judgment text, rewording terms that may trigger content moderation."""
    # Replacement table; the substitutes preserve the original legal meaning
    replacements = {
        "殺人": "故意致人於死",
        "自殺": "自我傷害",
        "性侵": "違反性自主",
        "強制性交": "違反性自主",
        "毒品": "管制藥品",
        "槍械": "危險物品",
        "暴力": "強制力",
        "血": "生理組織",
    }

    # Apply the replacements
    processed_text = text
    for old, new in replacements.items():
        processed_text = processed_text.replace(old, new)

    return processed_text

def text_search(query: str, meta_data: List[Dict[str, Any]], max_results: int = 10) -> List[Tuple[int, str]]:
    """
    Find relevant judgments via plain text search.

    Args:
        query: search keywords
        meta_data: list of judgment metadata
        max_results: maximum number of results to return

    Returns:
        Search results as [(index, matched context), ...]
    """
    results = []

    # Return nothing for queries shorter than 2 characters
    if len(query.strip()) < 2:
        return results

    # Lowercase the query and strip punctuation
    clean_query = re.sub(r'[^\w\s]', '', query.lower())
    search_terms = clean_query.split()

    # Score every judgment against the query
    for idx, meta in enumerate(meta_data):
        full_text = meta['JFULL'].lower()

        # Accumulate a match score and keep the best matching context
        score = 0
        best_context = ""
        best_score = 0

        # Match each search term
        for term in search_terms:
            if term in full_text:
                # Exact match
                term_score = len(term) * 2

                # Extract the context around the match (50 characters on each side)
                match_idx = full_text.find(term)
                start_idx = max(0, match_idx - 50)
                end_idx = min(len(full_text), match_idx + len(term) + 50)
                context = "..." + meta['JFULL'][start_idx:end_idx] + "..."

                if term_score > best_score:
                    best_score = term_score
                    best_context = context

                score += term_score
            else:
                # Partial match (e.g. searching for ABC also finds BC)
                for partial_size in range(len(term) - 1, 0, -1):
                    for i in range(len(term) - partial_size + 1):
                        partial = term[i:i + partial_size]
                        if len(partial) >= 2 and partial in full_text:  # only consider substrings of at least 2 characters
                            partial_score = partial_size

                            # Extract the context around the match
                            match_idx = full_text.find(partial)
                            start_idx = max(0, match_idx - 50)
                            end_idx = min(len(full_text), match_idx + len(partial) + 50)
                            context = "..." + meta['JFULL'][start_idx:end_idx] + "..."

                            if partial_score > best_score:
                                best_score = partial_score
                                best_context = context

                            score += partial_score
                            break  # stop at the first partial match of this size

        # Keep judgments with any match
        if score > 0:
            results.append((idx, score, best_context))

    # Sort by score and return the top N
    results.sort(key=lambda x: x[1], reverse=True)
    return [(idx, context) for idx, _, context in results[:max_results]]

def combined_search(query: str, k: int = 5, search_mode: str = "自動(先文本後向量)"):
    """
    Combine text search and vector search.

    Args:
        query: search query
        k: number of results to return
        search_mode: one of "自動(先文本後向量)", "僅文本搜尋", "僅向量搜尋"

    Returns:
        List of search results
    """
    # Pick a strategy based on the search mode
    if search_mode == "僅文本搜尋":
        # Text search only
        text_results = text_search(query, meta_data)
        if not text_results:
            return []  # no matches found

        results = []
        for idx, context in text_results[:k]:
            judgment_meta = meta_data[idx]
            processed_text = preprocess_judgment_text(judgment_meta['JFULL'])
            results.append((processed_text, judgment_meta, context))
        return results

    elif search_mode == "僅向量搜尋":
        # Vector search only
        query_vector = model.encode([query])[0].astype('float32')
        distances, indices = faiss_index.search(query_vector.reshape(1, -1), k)

        results = []
        for idx in indices[0]:
            judgment_meta = meta_data[idx]
            processed_text = preprocess_judgment_text(judgment_meta['JFULL'])
            results.append((processed_text, judgment_meta, ""))  # no matched context
        return results

    else:  # default: "自動(先文本後向量)"
        # Try text search first
        text_results = text_search(query, meta_data)

        # Prefer text-search results when available
        if text_results:
            results = []
            for idx, context in text_results[:k]:
                judgment_meta = meta_data[idx]
                processed_text = preprocess_judgment_text(judgment_meta['JFULL'])
                results.append((processed_text, judgment_meta, context))
            return results

        # Fall back to vector search when text search finds nothing
        query_vector = model.encode([query])[0].astype('float32')
        distances, indices = faiss_index.search(query_vector.reshape(1, -1), k)

        results = []
        for idx in indices[0]:
            judgment_meta = meta_data[idx]
            processed_text = preprocess_judgment_text(judgment_meta['JFULL'])
            results.append((processed_text, judgment_meta, ""))  # no matched context

        return results

def respond(
    message: str,
    history: List[Tuple[str, str]],
    system_message: str,
    max_tokens: int,
    temperature: float,
    top_p: float,
    show_retrieval_results: bool,
    user_api_key: str = "",
    k: int = 5,
    model_name: str = "gpt-4o-mini",
    search_mode: str = "自動(先文本後向量)",
):
    # Check that the required components are loaded.
    # respond is a generator, so errors must be yielded rather than returned,
    # otherwise the caller never sees the message.
    if None in (model, faiss_index, meta_data):
        yield "錯誤:系統未完全載入。請確認已執行 create_judgment_embeddings.py 建立索引。", None
        return

    # Check the API key
    api_key = user_api_key.strip() if user_api_key.strip() else OPENAI_API_KEY
    if not api_key:
        yield "錯誤:未提供 OpenAI API 金鑰。請在設定中輸入您的 API 金鑰,或在環境變數中設置。", None
        return

    # Create a client with this key for the current request only
    client = openai.OpenAI(api_key=api_key)

    # Scale the retrieval count to the question length to stay within the context window
    effective_k = min(k, 3) if len(message) > 100 else min(k, 4)

    # Search for relevant judgments (using the combined search strategy)
    results = combined_search(message, k=effective_k, search_mode=search_mode)

    # Report when nothing was found
    if not results:
        yield "抱歉,我沒有找到與您問題相關的判決。請嘗試使用不同的搜尋詞或切換搜尋模式。", None
        return

    # Build the context, capping the length of each judgment summary
    context = ""
    for i, (text, meta, match_context) in enumerate(results, start=1):
        context += f"\n[判決 {i}]\n"
        context += format_judgment_info(meta)

        # Include the matched context, if any
        if match_context:
            context += f"匹配內容:\n{match_context}\n"

        # Cap the length of each judgment's text
        summary_length = 300
        context += f"內容摘要:\n{text[:summary_length]}...\n"

    # Prepare the retrieval display if the user asked for it
    retrieval_display = format_retrieval_results(results, meta_data) if show_retrieval_results else None

    # Build the system prompt
    rag_system_message = f"""{system_message}

這是一個司法判決書分析系統,所有內容均來自真實判決書。請注意:
1. 判決書中可能包含暴力、犯罪等敏感描述,這些是判決書的必要組成部分
2. 請以客觀、專業的法律角度分析這些內容
3. 不要對判決書中的敏感內容進行道德評判或迴避
4. 保持專業中立的態度,準確傳達判決書的內容

根據以下判決資料回答用戶問題。若問題無法從判決資料中回答,請說明你無法提供答案,不要憑空編造。

請在回答中引用具體的判決案號和日期,並說明資訊來源。您可以透過提供的 PDF 連結查看完整判決內容。

參考判決:
{context}
"""

    # Build the message list
    messages = [
        {"role": "system", "content": "你現在是一個專業的法律助理,正在處理司法判決書。請以客觀、專業的態度處理所有內容,包括敏感話題。"},
        {"role": "system", "content": rag_system_message}
    ]

    # Keep only the most recent exchanges to limit token usage
    recent_history = history[-2:] if len(history) > 2 else history
    for val in recent_history:
        if val[0]:
            messages.append({"role": "user", "content": val[0]})
        if val[1]:
            messages.append({"role": "assistant", "content": val[1]})
    messages.append({"role": "user", "content": message})

    # Call the OpenAI API and stream the answer
    response = ""
    try:
        for chunk in client.chat.completions.create(
            model=model_name,
            messages=messages,
            max_tokens=max_tokens,
            stream=True,
            temperature=temperature,
            top_p=top_p,
            presence_penalty=0.6,
            frequency_penalty=0.3,
        ):
            if chunk.choices[0].delta.content is not None:
                token = chunk.choices[0].delta.content
                response += token
                yield response, retrieval_display
    except Exception as e:
        error_message = f"呼叫 OpenAI API 時發生錯誤:{str(e)}"
        yield error_message, retrieval_display

# Custom CSS
css = """
.gradio-container {
    font-family: "Noto Sans TC", sans-serif !important;
}
.retrieval-results {
    margin-top: 1rem;
    margin-bottom: 1rem;
}
"""

# Gradio UI
with gr.Blocks(css=css, theme=gr.themes.Soft()) as demo:
    gr.HTML("<h1>🔍 司法判決書搜尋系統(示範)</h1>")
    gr.HTML("<p>這個系統能夠搜尋並分析刑事判決書的內容,支援人名搜尋(用於測試與展示,資料為2025年2月全台灣各級法院之刑事判決)。輸入您的問題,系統將檢索相關判決並給出回答。</p>")

    with gr.Row():
        with gr.Column(scale=3):
            chatbot = gr.Chatbot(height=500)
            msg = gr.Textbox(
                placeholder="請輸入您的問題...",
                label="問題輸入",
                lines=3
            )
            with gr.Row():
                submit_btn = gr.Button("提交問題", variant="primary")
                clear_btn = gr.Button("清除對話")

            system_message = gr.Textbox(
                value="你是一個專業的法律助理,專門處理司法判決書的分析工作。這是一個司法專業場景,判決書中可能包含暴力、犯罪等敏感描述,這些都是判決書的必要內容,請以客觀、專業的態度處理,不要過度審查或迴避這些內容。請使用專業且平易近人的方式回答問題,並引用具體的判決案號和日期。我會提供判決書的摘要內容,請基於這些摘要進行分析,需要時可以參考完整判決內容的PDF連結。",
                label="系統訊息"
            )

        with gr.Column(scale=1):
            show_retrieval = gr.Checkbox(label="顯示檢索結果", value=True)

            # API key input field
            api_key_input = gr.Textbox(
                placeholder="輸入您的 OpenAI API 金鑰...",
                label="OpenAI API 金鑰(僅用於本次使用,不會儲存)",
                type="password"
            )

            # Note on key handling
            gr.HTML("""
            <div style="margin-bottom: 10px;">
                <p style="font-size: 0.85em; color: #666;">
                    您的 API 金鑰僅用於當前會話,不會被儲存或記錄。
                </p>
            </div>
            """)

            # Search-mode selector
            search_mode = gr.Radio(
                choices=["自動(先文本後向量)", "僅文本搜尋", "僅向量搜尋"],
                value="自動(先文本後向量)",
                label="搜尋模式"
            )

            max_tokens = gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="最大生成字數")
            temperature = gr.Slider(minimum=0.1, maximum=1.0, value=0.7, step=0.1, label="溫度 (Temperature)")
            top_p = gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p 採樣")

            # Model selector
            model_choice = gr.Radio(
                choices=["gpt-4o-mini", "gpt-4o"],
                value="gpt-4o-mini",
                label="選擇模型"
            )

            gr.Markdown("""
            ### 📝 系統說明

            #### 參數設定
            - **API 金鑰**:輸入您的 OpenAI API 金鑰(不會被儲存)
            - **顯示檢索結果**:勾選後會顯示系統檢索到的原始判決內容
            - **搜尋模式**:選擇如何搜尋判決書(直接使用AI向量搜尋可能遭遇內容審查,如需搜尋敏感詞彙或人名,請用自動或文本搜尋)
              - **自動**:先嘗試文本搜尋,若無結果再用向量搜尋
              - **僅文本**:使用精確字詞匹配(支持部分匹配)
              - **僅向量**:使用語義相似度搜尋
            - **最大生成字數**:控制 AI 回答的最大長度(1-2048)
            - **溫度**:控制回答的創造性,越高越有創意(0.1-1.0)
            - **Top-p 採樣**:控制用詞的多樣性(0.1-1.0)
            - **模型選擇**:選擇不同的 OpenAI 模型

            #### 系統特點
            - **資料來源**:2025年2月份各級法院刑事判決書([司法院資料開放平臺](https://opendata.judicial.gov.tw/))
            - **向量檢索**:FAISS 向量資料庫
            - **文本搜尋**:支持模糊匹配,例如搜尋 ABC 也可以找到 BC

            #### 使用提示
            1. 搜尋人名時,建議使用「文本搜尋」模式
            2. 搜尋概念或法律問題時,建議使用「向量搜尋」模式
            3. 「自動」模式適合大多數情況
            """)

    retrieval_results = gr.HTML(label="檢索結果", elem_classes=["retrieval-results"])

    def user(user_message, history):
        return "", history + [[user_message, None]]

    def bot(history, system_msg, max_tok, temp, top_p_val, show_ret, api_key, model_name, search_mode_val):
        user_message = history[-1][0]
        history[-1][1] = ""
        for response, retrieval_html in respond(
            user_message,
            history[:-1],
            system_msg,
            max_tok,
            temp,
            top_p_val,
            show_ret,
            api_key,
            model_name=model_name,
            search_mode=search_mode_val
        ):
            history[-1][1] = response
            if retrieval_html:
                yield history, retrieval_html
            else:
                yield history, None

    def clear_history():
        return [], None

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, [chatbot, system_message, max_tokens, temperature, top_p, show_retrieval, api_key_input, model_choice, search_mode], [chatbot, retrieval_results]
    )

    submit_btn.click(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, [chatbot, system_message, max_tokens, temperature, top_p, show_retrieval, api_key_input, model_choice, search_mode], [chatbot, retrieval_results]
    )

    clear_btn.click(clear_history, None, [chatbot, retrieval_results])

if __name__ == "__main__":
    demo.launch()
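For debugging retrieval quality outside the UI, the search layer can be called directly. The following is a sketch under the assumption that the FAISS files exist so the module-level loading in `app.py` succeeds on import; the query and mode strings are just examples:

```python
# Importing app runs its module-level setup (model and index loading)
# but does not launch the Gradio UI, which is guarded by __main__.
from app import combined_search

hits = combined_search("竊盜", k=3, search_mode="僅文本搜尋")
for text, meta, context in hits:
    print(meta["JID"], meta["JDATE"], meta["JTITLE"])
    if context:
        print("  matched:", context[:80])
```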
create_judgment_embeddings.py
ADDED
@@ -0,0 +1,96 @@
import json
from pathlib import Path
from typing import List, Dict, Any
import numpy as np
from tqdm import tqdm
import faiss
from sentence_transformers import SentenceTransformer
import pickle

def load_judgments() -> List[Dict[str, Any]]:
    """Load the merged judgment data."""
    file_path = Path('data_source/merged_criminal_cases.json')
    with open(file_path, 'r', encoding='utf-8') as f:
        return json.load(f)

def create_embeddings(judgments: List[Dict[str, Any]], batch_size: int = 32) -> tuple:
    """Convert judgment texts to embeddings and prepare the metadata."""
    # Initialize a multilingual sentence-transformer model to support Chinese
    model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

    # Prepare the texts and metadata
    texts = []
    meta_data = []

    for judgment in judgments:
        # Use the full judgment text for embedding
        full_text = judgment['JFULL']
        texts.append(full_text)

        # Keep the remaining fields, including the full text, as metadata
        meta = {
            'JID': judgment['JID'],
            'JYEAR': judgment['JYEAR'],
            'JCASE': judgment['JCASE'],
            'JNO': judgment['JNO'],
            'JDATE': judgment['JDATE'],
            'JTITLE': judgment['JTITLE'],
            'JPDF': judgment['JPDF'],
            'JFULL': full_text
        }
        meta_data.append(meta)

    # Convert the texts to embeddings in batches
    embeddings = []
    for i in tqdm(range(0, len(texts), batch_size), desc="生成 Embeddings"):
        batch_texts = texts[i:i + batch_size]
        batch_embeddings = model.encode(batch_texts, show_progress_bar=False)
        embeddings.extend(batch_embeddings)

    embeddings_array = np.array(embeddings, dtype=np.float32)
    return embeddings_array, meta_data

def create_faiss_index(embeddings: np.ndarray) -> faiss.Index:
    """Build the FAISS index."""
    # Vector dimensionality
    dimension = embeddings.shape[1]

    # Build a flat index using L2 distance (IndexFlatL2)
    index = faiss.IndexFlatL2(dimension)

    # Add the embeddings to the index
    index.add(embeddings)
    return index

def save_faiss_data(index: faiss.Index, meta_data: List[Dict[str, Any]], output_dir: str = 'faiss_data'):
    """Save the FAISS index and the metadata."""
    # Create the output directory
    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)

    # Save the FAISS index
    faiss.write_index(index, str(output_path / 'criminal_judgments.index'))

    # Save the metadata
    with open(output_path / 'criminal_judgments_meta.pkl', 'wb') as f:
        pickle.dump(meta_data, f)

def main():
    print("開始載入判決資料...")
    judgments = load_judgments()
    print(f"已載入 {len(judgments)} 筆判決資料")

    print("\n開始生成 embeddings...")
    embeddings, meta_data = create_embeddings(judgments)
    print(f"已生成 {len(embeddings)} 個 embeddings")

    print("\n建立 FAISS 索引...")
    index = create_faiss_index(embeddings)
    print("FAISS 索引建立完成")

    print("\n儲存資料...")
    save_faiss_data(index, meta_data)
    print("資料儲存完成!")

if __name__ == '__main__':
    main()
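After a build, it can be worth checking that the saved index and metadata line up. A small sketch using the script's default output paths:

```python
import pickle

import faiss

index = faiss.read_index("faiss_data/criminal_judgments.index")
with open("faiss_data/criminal_judgments_meta.pkl", "rb") as f:
    meta = pickle.load(f)

# One vector per judgment; paraphrase-multilingual-MiniLM-L12-v2 produces 384-dim vectors
assert index.ntotal == len(meta), "index and metadata are out of sync"
print(f"{index.ntotal} vectors of dimension {index.d}")
```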
merge_criminal_cases.py
ADDED
@@ -0,0 +1,39 @@
import os
import json
from pathlib import Path

def merge_criminal_cases():
    # Source data directory
    data_source_dir = Path('data_ori')

    # Find all folders whose names end with「刑事」(criminal)
    criminal_dirs = [d for d in data_source_dir.iterdir() if d.is_dir() and d.name.endswith('刑事')]

    # Collected cases
    all_cases = []

    # Walk every criminal-case folder
    for criminal_dir in criminal_dirs:
        # Walk all JSON files in the folder
        for json_file in criminal_dir.rglob('*.json'):
            try:
                with open(json_file, 'r', encoding='utf-8') as f:
                    data = json.load(f)
                # A list of cases extends all_cases
                if isinstance(data, list):
                    all_cases.extend(data)
                # A single case dict is appended
                elif isinstance(data, dict):
                    all_cases.append(data)
            except Exception as e:
                print(f"處理檔案 {json_file} 時發生錯誤: {str(e)}")

    # Write the merged data to a new JSON file
    output_file = 'data_source/merged_criminal_cases.json'
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(all_cases, f, ensure_ascii=False, indent=2)

    print(f"已成功合併 {len(all_cases)} 筆案件資料到 {output_file}")

if __name__ == '__main__':
    merge_criminal_cases()
rag_run.py
ADDED
@@ -0,0 +1,284 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
Usage:
1. Running this script creates a local FAISS vector index in the faiss_index directory.
   - If the index already exists, it is loaded directly, so the embedding step is not repeated.
2. In interactive mode, type a question to retrieve results along with their source, date, and other details.
"""

import os
import glob
import json
import csv
import re
from datetime import datetime

# Install these packages with pip if they are missing
from dotenv import load_dotenv
from docx import Document as DocxDocument

# LangChain-related packages
import openai
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain.docstore.document import Document

# Load the .env file to get OPENAI_API_KEY
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    raise ValueError("未在 .env 檔或環境變數中找到 OPENAI_API_KEY,請確認設定。")

openai.api_key = OPENAI_API_KEY

# ============ Basic parameters ============
CHUNK_SIZE = 500                 # chunk size; adjust as needed
CHUNK_OVERLAP = 50               # overlap between chunks, to avoid cutting sentences
FAISS_INDEX_DIR = "faiss_index"  # directory for the FAISS index
DATA_DIR = "data_source"         # default data-source folder (adjust as needed)


def extract_date_from_string(s: str) -> str:
    """
    Try to extract a YYYY-MM-DD, YYYY-MM, or YYYY date from a string;
    return '未知日期' (unknown date) on failure. Adjust the regex or
    parsing logic as needed.
    """
    # Parse ISO format (2025-03-18T04:39:36+0000) or YYYY-MM-DD
    iso_pattern = r"(\d{4}-\d{1,2}-\d{1,2})"
    match_iso = re.search(iso_pattern, s)
    if match_iso:
        return match_iso.group(1)

    # Fall back to a bare year (YYYY)
    year_pattern = r"(\d{4})"
    match_year = re.search(year_pattern, s)
    if match_year:
        return f"{match_year.group(1)}-01-01"

    return "未知日期"


def read_json_file(file_path: str) -> list:
    """
    Read a single JSON file and return a list of Document-like dicts.
    Each dict: {'text': str, 'metadata': dict}
    """
    docs = []
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
    except Exception as e:
        print(f"讀取 JSON 檔失敗:{file_path}, {e}")
        return docs

    # The file may hold either a JSON array or a single object
    if isinstance(data, dict):
        data = [data]

    for item in data:
        text = item.get("message", "")
        created_time = item.get("created_time", "")
        # Skip entries without a message
        if not text.strip():
            continue

        date_str = extract_date_from_string(created_time)
        metadata = {
            "source": "Facebook粉專",  # name this however you like
            "filename": os.path.basename(file_path),
            "date": date_str,
            "id": item.get("id", "unknown_id")
        }
        docs.append({"text": text, "metadata": metadata})
    return docs


def read_csv_file(file_path: str) -> list:
    """
    Read a CSV file, expecting a date column and a caption (or content) column.
    Returns list[{"text": ..., "metadata": ...}]
    """
    docs = []
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            reader = csv.DictReader(f)
            for row in reader:
                caption = row.get("caption", "")
                date_raw = row.get("date", "")
                date_str = extract_date_from_string(date_raw)

                # Skip rows without content
                if not caption.strip():
                    continue

                metadata = {
                    "source": "Instagram/其它CSV來源",  # define as appropriate
                    "filename": os.path.basename(file_path),
                    "date": date_str,
                    "media_id": row.get("media_id", "unknown_id")
                }
                docs.append({"text": caption, "metadata": metadata})
    except Exception as e:
        print(f"讀取 CSV 檔失敗:{file_path}, {e}")
    return docs


def read_docx_file(file_path: str) -> list:
    """
    Read a DOCX file and take its full text; try to find a date in the filename.
    Returns list[{"text": ..., "metadata": ...}] (usually a single element)
    """
    docs = []
    try:
        doc = DocxDocument(file_path)
        paragraphs = [p.text.strip() for p in doc.paragraphs if p.text.strip()]
        full_text = "\n".join(paragraphs)

        date_str = extract_date_from_string(file_path)  # try to pull a year or date from the filename

        metadata = {
            "source": "時代力量官方文件",  # or another source label
            "filename": os.path.basename(file_path),
            "date": date_str
        }
        if full_text.strip():
            docs.append({"text": full_text, "metadata": metadata})

    except Exception as e:
        print(f"讀取 DOCX 檔失敗:{file_path}, {e}")
    return docs


def load_all_documents(data_dir: str) -> list:
    """
    Load every JSON, CSV, and DOCX file in the given folder into one list.
    Returns list[{"text": ..., "metadata": ...}, ...]
    """
    all_docs = []

    # JSON files
    for json_path in glob.glob(os.path.join(data_dir, "*.json")):
        all_docs.extend(read_json_file(json_path))

    # CSV files
    for csv_path in glob.glob(os.path.join(data_dir, "*.csv")):
        all_docs.extend(read_csv_file(csv_path))

    # DOCX files
    for docx_path in glob.glob(os.path.join(data_dir, "*.docx")):
        all_docs.extend(read_docx_file(docx_path))

    return all_docs


def chunk_documents(docs: list, chunk_size=CHUNK_SIZE, chunk_overlap=CHUNK_OVERLAP) -> list:
    """
    Split texts into chunks with LangChain's RecursiveCharacterTextSplitter.
    Input:  [{"text": ..., "metadata": {...}}, ...]
    Output: List[Document] (each Document has page_content and metadata)
    """
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap
    )
    results = []
    for d in docs:
        splits = text_splitter.split_text(d["text"])
        for chunk in splits:
            # Carry the original metadata into each chunk
            results.append(
                Document(
                    page_content=chunk,
                    metadata=d["metadata"]
                )
            )
    return results


def build_or_load_faiss_index(documents: list, index_dir=FAISS_INDEX_DIR) -> FAISS:
    """
    Load the local FAISS index if one exists; otherwise build it and save it
    to index_dir for reuse. Returns the FAISS store.
    """
    # Try to load an existing index
    if os.path.exists(index_dir):
        print(f"[INFO] 發現已存在的 FAISS 索引:{index_dir},嘗試載入...")
        try:
            return FAISS.load_local(index_dir, OpenAIEmbeddings(), allow_dangerous_deserialization=True)
        except Exception as e:
            print(f"[WARNING] 載入失敗,將重新建立索引:{e}")

    # No usable index; build a new one
    print("[INFO] 建立新的 FAISS 索引...")
    embeddings = OpenAIEmbeddings()

    # Separate each Document's content from its metadata
    texts = [doc.page_content for doc in documents]
    metadatas = [doc.metadata for doc in documents]

    # Build the vector index
    faiss_store = FAISS.from_texts(texts, embeddings, metadatas=metadatas)

    # Save the index
    faiss_store.save_local(index_dir)
    print("[INFO] 已完成新的 FAISS 索引建立並儲存到資料夾:", index_dir)
    return faiss_store


def interactive_query(faiss_store: FAISS):
    """
    Simple interactive mode: the user types questions and retrieves results.
    """
    print("\n[進入互動模式] 輸入 'exit' 結束。")

    while True:
        query = input("請輸入查詢問題: ").strip()
        if query.lower() in ["exit", "quit", "q", "離開"]:
            print("結束互動模式。")
            break

        # Similarity search
        results = faiss_store.similarity_search(query, k=5)
        print(f"\n[檢索到 {len(results)} 筆相似度最高的內容,以下顯示結果]\n")

        for i, doc in enumerate(results, start=1):
            source = doc.metadata.get("source", "未知來源")
            date = doc.metadata.get("date", "未知日期")
            filename = doc.metadata.get("filename", "未知檔名")
            snippet = doc.page_content[:150].replace("\n", " ")
            print(f"--- 結果 {i} ---")
            print(f"來源: {source}")
            print(f"日期: {date}")
            print(f"檔名: {filename}")
            print(f"內容摘要: {snippet}...")
            print("-------------------------\n")

def main():
    # 1. Load the data
    all_docs_raw = load_all_documents(DATA_DIR)
    if not all_docs_raw:
        print(f"在資料夾 {DATA_DIR} 中未找到任何檔案,請確認!")
        return

    print(f"[INFO] 成功載入資料:共 {len(all_docs_raw)} 筆(未分塊)")

    # 2. Chunk
    chunked_docs = chunk_documents(all_docs_raw, CHUNK_SIZE, CHUNK_OVERLAP)
    print(f"[INFO] 分塊後文件總數:{len(chunked_docs)}")

    # 3. Build or load the FAISS index
    faiss_store = build_or_load_faiss_index(chunked_docs, FAISS_INDEX_DIR)

    # 4. Enter interactive query mode
    interactive_query(faiss_store)


if __name__ == "__main__":
    main()
requirements.txt
ADDED
@@ -0,0 +1,8 @@
huggingface_hub==0.25.2
gradio>=3.50.0
openai>=1.3.0
python-dotenv>=1.0.0
faiss-cpu>=1.7.4
numpy>=1.25.0
sentence-transformers>=2.2.2
tqdm>=4.65.0