---
language:
- it
- en
license: apache-2.0
library_name: transformers
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- lora
- fine-tuned
- banking
- regtech
- compliance
- rag
- tool-calling
- italian
- qwen2.5
pipeline_tag: text-generation
---

# 🏦 RegTech-32B-Instruct

> **Fine-tuned for RAG-powered banking compliance — not general knowledge.**

A specialized [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) model fine-tuned to excel within a **Retrieval-Augmented Generation (RAG) pipeline** for Italian banking regulatory compliance.

This model does not try to memorize regulations — it is trained to **work with retrieved context**: follow instructions precisely, produce structured outputs, call compliance tools, and maintain the right tone and terminology when grounded in regulatory documents.

---

## 🎯 What This Model Does

This fine-tuning optimizes the model's **behavior within a RAG system**, not its factual knowledge. Specifically:

| Task | Description |
|---|---|
| 📋 **RAG Q&A** | Answer regulatory questions grounded in retrieved documents |
| 🔧 **Tool Calling** | KYC verification, risk scoring, PEP checks, SOS reporting |
| 🔍 **Query Expansion** | Rewrite user queries with regulatory terminology for better retrieval |
| 🧠 **Intent Detection** | Classify whether a message needs document search or is conversational |
| 📊 **Document Reranking** | Score candidate documents by relevance |
| 📝 **Structured JSON** | Topic extraction, metadata, and impact analysis in JSON format |
| ⚖️ **Impact Analysis** | Cross-reference external regulations against internal bank procedures |

---

## 📈 Evaluation — LLM-as-Judge

Evaluated by **Claude Opus 4.6** (Anthropic) across 11 blind test scenarios. The judge compared base and fine-tuned model outputs without knowing which was which.

### 🏆 Head-to-Head

```
┌──────────────────────────────────────────┐
│ 🟢 Tuned Wins   7/11   (63.6%)           │
│ 🔴 Base Wins    3/11   (27.3%)           │
│ ⚪ Ties          1/11    (9.1%)           │
└──────────────────────────────────────────┘
```

### 📊 Quality Scores (1–5)

| Criterion | Base | Tuned | Delta | |
|---|:---:|:---:|:---:|---|
| 🎯 Instruction Following | 4.00 | **4.82** | +0.82 | 🟢🟢🟢 |
| 📎 Context Adherence | 4.36 | **4.91** | +0.55 | 🟢🟢 |
| ✅ Accuracy | 4.27 | **4.73** | +0.46 | 🟢 |
| 📝 Format | 4.36 | **4.55** | +0.19 | ➖ |
| 🗣️ Tone | 4.82 | **5.00** | +0.18 | ➖ |
| **📊 Overall** | **4.36** | **4.80** | **+0.44** | **🟢** |

> The biggest gains are in **instruction following** (+0.82) and **context adherence** (+0.55) — exactly what matters when the model must follow retrieved regulatory context faithfully. Tone reaches a perfect 5.00.

### 📂 Results by Category

| Category | Base | Tuned | Tie |
|---|:---:|:---:|:---:|
| 🚫 Refusal Handling | 0 | **1** | 1 |
| 🎨 Style & Tone | 0 | **1** | 0 |
| 📤 Data Extraction | 0 | **1** | 0 |
| ⚠️ Edge Cases | 0 | **1** | 0 |
| 📋 JSON Output | 1 | 1 | 0 |
| 📖 RAG Q&A | 1 | 1 | 0 |
| 🔧 Tool Use | 1 | 1 | 0 |

### 🔄 Comparison with RegTech-4B-Instruct

| Metric | 4B | 32B |
|---|:---:|:---:|
| Base score (pre-tuning) | 4.11 | **4.36** |
| Tuned score | 4.68 | **4.80** |
| Best eval loss | 1.191 | **0.813** |
| Token accuracy | ~73% | **~81%** |
| Train/eval gap | 0.050 | **0.030** |

---

## 💡 Usage Examples

### 📋 RAG Q&A — Answering from Retrieved Context

The model is designed to receive **retrieved regulatory documents as context** and answer based on them:

```python
# The system prompt (in Italian) instructs: "You are a banking compliance
# assistant. Answer ONLY based on the provided context." The retrieved
# passage is Art. 92 CRR on minimum capital requirements.
messages = [
    {
        "role": "system",
        "content": """Sei un assistente per la compliance bancaria.
Rispondi SOLO basandoti sul contesto fornito.

<contesto_recuperato>
Art. 92 CRR - Gli enti soddisfano in qualsiasi momento i seguenti
requisiti: a) CET1 del 4,5%; b) Tier 1 del 6%; c) capitale totale dell'8%.
Il coefficiente è calcolato come rapporto tra i fondi propri e
l'importo complessivo dell'esposizione al rischio.
</contesto_recuperato>"""
    },
    {
        "role": "user",
        # "What are the minimum capital requirements under the CRR?"
        "content": "Quali sono i requisiti minimi di capitale secondo il CRR?"
    }
]
```

### 🔍 Query Expansion — Improving RAG Retrieval

```python
# The system prompt (in Italian) asks the model to rewrite the user query
# with technical terms and regulatory references, answering ONLY with JSON.
messages = [
    {
        "role": "system",
        "content": "Riscrivi la query dell'utente in una versione più ricca per migliorare il recupero documentale (RAG). Aggiungi termini tecnici e riferimenti normativi. Rispondi SOLO con il JSON richiesto."
    },
    {
        "role": "user",
        # "Obligations to report suspicious transactions"
        "content": "## QUERY ORIGINALE: [obblighi segnalazione operazioni sospette]"
    }
]

# Expected output:
# {"query": "obblighi segnalazione operazioni sospette SOS UIF D.Lgs. 231/2007
#  art. 35 riciclaggio finanziamento terrorismo portale RADAR tempistiche
#  invio indicatori anomalia"}
```
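
Since the expansion step must return strict JSON, a caller typically parses and validates the model's decoded output before handing the expanded query to the retriever. A minimal sketch, with an illustrative helper name (`parse_expanded_query`) and a stand-in `raw` string rather than actual model output:

```python
import json

def parse_expanded_query(raw: str, fallback: str) -> str:
    """Parse the {"query": ...} object the expansion prompt requests.

    Falls back to the original query if the output is not valid JSON
    or lacks a non-empty "query" string.
    """
    try:
        obj = json.loads(raw.strip())
        query = obj.get("query", "")
        return query if isinstance(query, str) and query else fallback
    except json.JSONDecodeError:
        return fallback

# Stand-in for a decoded model response:
raw = '{"query": "obblighi segnalazione operazioni sospette SOS UIF"}'
parse_expanded_query(raw, "obblighi segnalazione")
# -> "obblighi segnalazione operazioni sospette SOS UIF"
```

Falling back to the original query keeps the retrieval pipeline alive even when the model occasionally breaks format.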

### 🔧 Tool Calling — Compliance Workflows

```python
# The system prompt (in Italian) declares three compliance tools and a
# retrieved internal policy: procedure AML-003 requires enhanced due
# diligence (EDD) for PEPs, high-risk countries, and risk scores > 60.
messages = [
    {
        "role": "system",
        "content": """Sei un assistente operativo per la compliance.

<tools>
{"name": "calcola_scoring_rischio", "parameters": {...}}
{"name": "controlla_liste_pep", "parameters": {...}}
{"name": "verifica_kyc", "parameters": {...}}
</tools>

<contesto_recuperato>
Procedura AML-003: L'adeguata verifica rafforzata (EDD) deve essere
applicata per PEP, paesi ad alto rischio e profili con scoring > 60.
</contesto_recuperato>"""
    },
    {
        "role": "user",
        # "I need to open an account for a company based in Dubai.
        #  The legal representative is Mr. Al-Rashid."
        "content": "Devo aprire un conto per una società con sede a Dubai. Il legale rappresentante è il sig. Al-Rashid."
    }
]

# The model will:
# 1. Call controlla_liste_pep for the representative
# 2. Call calcola_scoring_rischio based on the risk factors
# 3. Recommend the EDD procedure per AML-003, grounded in the retrieved policy
```
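
Qwen2.5's chat template emits each tool invocation as a JSON object wrapped in `<tool_call>` tags, so the application layer needs to extract and parse those blocks before dispatching to the actual functions. A minimal extraction sketch (the `sample` string below illustrates the format and is not real model output):

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Pull {"name": ..., "arguments": ...} objects out of
    <tool_call>...</tool_call> blocks in the generated text."""
    calls = []
    for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        try:
            calls.append(json.loads(block))
        except json.JSONDecodeError:
            pass  # skip malformed blocks rather than crash the pipeline
    return calls

# Illustrative sample in the Qwen2.5 tool-call format:
sample = (
    '<tool_call>\n'
    '{"name": "controlla_liste_pep", "arguments": {"nominativo": "Al-Rashid"}}\n'
    '</tool_call>'
)
extract_tool_calls(sample)
# -> [{'name': 'controlla_liste_pep', 'arguments': {'nominativo': 'Al-Rashid'}}]
```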

### 📊 Document Reranking

```python
# The system prompt (in Italian): "Rate the relevance of each candidate
# against the query. Return only the relevant candidates with a 0-100
# score. Answer ONLY with the requested JSON."
messages = [
    {
        "role": "system",
        "content": "Valuta la rilevanza di ciascun candidato rispetto alla query. Restituisci solo i candidati rilevanti con score 0-100. Rispondi SOLO con il JSON richiesto."
    },
    {
        "role": "user",
        "content": '{"query": "requisiti CET1 fondi propri", "candidates": [{"id": "doc_001", "title": "Art. 92 CRR", "content": "..."}, {"id": "doc_002", "title": "DORA Art. 5", "content": "..."}]}'
    }
]

# Expected: {"matches": [{"id": "doc_001", "relevance": 95}]}
```
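
Intent detection (listed in the task table above) follows the same prompt pattern. The example below is a hypothetical sketch in the style of the other examples, not a prompt taken from the training set:

```python
# Hypothetical intent-classification prompt; the exact wording used
# during training may differ. The system prompt (in Italian) says:
# "Classify the user's message: 'rag' if it requires a document search,
# 'conversational' otherwise. Answer ONLY with the requested JSON."
intent_messages = [
    {
        "role": "system",
        "content": "Classifica il messaggio dell'utente: 'rag' se richiede una ricerca documentale, 'conversational' altrimenti. Rispondi SOLO con il JSON richiesto."
    },
    {
        "role": "user",
        # "What does Circolare 285 require for internal controls?"
        "content": "Cosa prevede la Circolare 285 sui controlli interni?"
    }
]

# Expected output shape (a regulatory question routes to retrieval):
# {"intent": "rag"}
```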

---

## ⚙️ Training Details

| | |
|---|---|
| 🧬 **Method** | LoRA — bf16 full precision (no quantization) |
| 🏗️ **Base Model** | Qwen2.5-32B-Instruct |
| 📦 **Dataset** | 923 train / 102 eval samples |
| ⏱️ **Duration** | 40.0 minutes |

### Hyperparameters

| Parameter | Value |
|---|---|
| LoRA Rank / Alpha | 16 / 32 |
| LoRA Dropout | 0.10 |
| Target Modules | q, k, v, o, gate, up, down proj |
| Learning Rate | 5e-6 (cosine scheduler) |
| Epochs | 3 |
| Effective Batch Size | 4 (1 × 4 accumulation) |
| Max Sequence Length | 4096 |
| NEFTune Alpha | 5.0 |
| Warmup Ratio | 0.05 |
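
The adapter settings in the table map to a PEFT configuration roughly like the following sketch (illustrative only; the actual training script is not published here):

```python
from peft import LoraConfig

# Mirrors the hyperparameter table above; the real training run may
# set additional options (optimizer, scheduler, NEFTune) elsewhere.
lora_config = LoraConfig(
    r=16,                 # LoRA rank
    lora_alpha=32,
    lora_dropout=0.10,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```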

### 📉 Training Metrics

| Metric | Value |
|---|---|
| Final Train Loss | 0.843 |
| Best Eval Loss | 0.813 (step 640/693) |
| Train/Eval Gap | 0.030 ✅ |

> A gap of 0.030 indicates **very stable training with no sign of overfitting**.

---

## 📚 Dataset Coverage

The training data covers the full lifecycle of a RAG-based compliance assistant:

| Category | Purpose |
|---|---|
| 🏷️ Title Generation | Generate conversation titles from user queries |
| 🔍 Query Expansion | Enrich queries with regulatory terms for better retrieval |
| 🧠 Intent Classification | Route queries to RAG vs. conversational responses |
| 📊 Document Reranking | Score retrieved documents by relevance |
| 📝 Topic Extraction | Extract main topics from regulatory text pages |
| 📖 Document Summarization | Summarize multi-page regulatory documents |
| ⚖️ Relevance Filtering | Filter regulatory text relevant to banks |
| 📅 Metadata Extraction | Find application dates and issuing authorities |
| 🔧 Impact Analysis | Cross-reference regulations against internal procedures |
| 💬 RAG Q&A + Tool Calling | Multi-turn compliance conversations with tools |

**Regulatory sources covered:** CRR/CRR3, DORA (EU 2022/2554), D.Lgs. 231/2007 (AML), D.Lgs. 385/1993 (TUB), Circolare 285, PSD2, MiFID II/MiFIR, D.P.R. 180/1950, and related Banca d'Italia provisions.

---

## 🚀 Deployment

### With vLLM
```bash
vllm serve ./models/RegTech-32B-Instruct --dtype bfloat16
```

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("YOUR_REPO_ID", torch_dtype="bfloat16", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("YOUR_REPO_ID")

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## ⚠️ Important Notes

- 🎯 **RAG-optimized** — trained to work with retrieved context, not to memorize regulations. Always provide the relevant documents in the system prompt.
- 🏦 **Domain-specific** — optimized for Italian banking compliance. General capabilities may differ from the base model.
- ⚖️ **Not legal advice** — a tool to assist compliance professionals, not a substitute for regulatory expertise.
- 🔧 **Tool schemas** — tool calling works best with the specific function signatures used during training.

---

<p align="center">
Built with ❤️ for banking RAG<br>
<em>Fine-tuned with LoRA • Evaluated by Claude Opus 4.6 • Powered by Qwen2.5</em><br>
<em>Contact for commercial use: https://landing.2sophia.ai</em>
</p>