---
tags:
- patents
- climate
- green-technology
- text-classification
- patent-classification
- human-in-the-loop
- multi-agent
- patentsberta
language:
- en
pipeline_tag: text-classification
library_name: transformers
---

# Green Patent Detection: Multi-Agent HITL + PatentSBERTa

This repository contains an advanced green patent detection workflow built for **binary classification of patent claims** into:

- **1 = Green / climate mitigation related**
- **0 = Non-green**

The project extends a baseline PatentSBERTa workflow by adding a **Human-in-the-Loop (HITL)** review stage and a **multi-agent debate system** before final fine-tuning.

## Project overview

The goal of this project is to improve green patent detection by combining:

1. **High-risk sample selection** from uncertainty sampling
2. **Multi-agent LLM review** of difficult claims
3. **Human verification** of the AI suggestions
4. **Final fine-tuning of PatentSBERTa** using silver labels + gold HITL labels

This workflow was designed to test whether a more advanced labeling pipeline produces stronger training data than a simple single-LLM labeling approach.

## Base model

The final classifier is built from:

- **Base encoder:** `AI-Growth-Lab/PatentSBERTa`
- **Task:** Binary text classification
- **Domain:** Patent claim classification for climate mitigation / green technology
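
For reference, the classifier can be initialized from the base encoder like this (a minimal sketch; `num_labels=2` matches the binary task, and a fresh classification head is added on top of the encoder):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# PatentSBERTa ships as an encoder; a new binary classification head
# is attached for fine-tuning (expect a "newly initialized" warning).
tokenizer = AutoTokenizer.from_pretrained("AI-Growth-Lab/PatentSBERTa")
model = AutoModelForSequenceClassification.from_pretrained(
    "AI-Growth-Lab/PatentSBERTa",
    num_labels=2,  # 0 = non-green, 1 = green
)
```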

## Data used in the notebook

The notebook uses the following files:

- `patents_50k_green.parquet`
- `train_meta.csv`
- `y_train.npy`
- `eval_silver.parquet`
- `hitl_green_100.csv`
- `hitl_review_progress_with_llm.csv`
- `hitl_green_gold.csv`
- `hitl_three_agents.csv`
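
A minimal loading sketch (paths are assumed relative to the working directory, and column names follow the schema described later in this card):

```python
import numpy as np
import pandas as pd

# Silver-labeled training pool and its labels
train_meta = pd.read_csv("train_meta.csv")
y_train = np.load("y_train.npy")

# Silver evaluation split and the human-reviewed gold set
eval_silver = pd.read_parquet("eval_silver.parquet")
hitl_gold = pd.read_csv("hitl_green_gold.csv")
```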

## Methodology

### 1. High-risk claim selection

A set of **100 high-risk patent claims** was selected from earlier uncertainty sampling outputs. These were the examples the baseline model found most difficult or ambiguous.
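
A sketch of how such a selection can be derived from model probabilities (the exact scoring used in the notebook is not reproduced here; this version treats probabilities near the 0.5 decision boundary as most uncertain, consistent with the `p_green` and `u` columns listed later in this card):

```python
import pandas as pd

# One row per claim; `p_green` is assumed to be the baseline model's
# predicted probability that the claim is green.
df = pd.read_parquet("patents_50k_green.parquet")

# Uncertainty peaks at p_green = 0.5 and falls to 0 at the extremes.
df["u"] = 1.0 - 2.0 * (df["p_green"] - 0.5).abs()

high_risk = df.sort_values("u", ascending=False).head(100)
high_risk.to_csv("hitl_green_100.csv", index=False)
```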

### 2. Multi-agent debate system

Three agents were created using `CrewAI` and an Ollama-hosted model (`qwen2.5:3b-instruct`):

- **Advocate Agent** – argues why the claim should be classified as green under Y02 climate mitigation logic
- **Skeptic Agent** – argues why the claim may not qualify and checks for weak evidence or greenwashing
- **Judge Agent** – reviews both sides and returns a structured final output with:
  - predicted label
  - confidence
  - rationale

This produces an AI suggestion for each difficult claim.
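
A condensed sketch of the debate setup (agent prompts are abbreviated, `claim` stands for one high-risk claim, and the wiring uses the current CrewAI `LLM` wrapper; older CrewAI versions connect to Ollama differently, and the notebook's actual prompts and task definitions may differ):

```python
from crewai import Agent, Task, Crew, Process, LLM

# Local model served by Ollama
llm = LLM(model="ollama/qwen2.5:3b-instruct", base_url="http://localhost:11434")

claim = "..."  # one high-risk patent claim

advocate = Agent(
    role="Advocate",
    goal="Argue why the claim qualifies as green under Y02 climate mitigation logic.",
    backstory="A patent analyst specialized in climate mitigation technology.",
    llm=llm,
)
skeptic = Agent(
    role="Skeptic",
    goal="Argue why the claim may not qualify; flag weak evidence or greenwashing.",
    backstory="A critical reviewer of green-technology claims.",
    llm=llm,
)
judge = Agent(
    role="Judge",
    goal="Weigh both arguments and return a label, confidence, and rationale.",
    backstory="An impartial adjudicator producing structured output.",
    llm=llm,
)

debate = Crew(
    agents=[advocate, skeptic, judge],
    tasks=[
        Task(description=f"Defend the claim as green:\n{claim}",
             expected_output="Arguments for the green label.", agent=advocate),
        Task(description=f"Challenge the green label:\n{claim}",
             expected_output="Arguments against the green label.", agent=skeptic),
        Task(description="Review both sides and decide.",
             expected_output="JSON with label (0/1), confidence, rationale.",
             agent=judge),
    ],
    process=Process.sequential,  # advocate -> skeptic -> judge
)
result = debate.kickoff()
```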

### 3. Human-in-the-Loop review

Each AI-generated suggestion was then manually reviewed by a human annotator.

The final human label was stored as:

- `is_green_human`

These human-reviewed labels form the **gold dataset** for the difficult claims.

### 4. Gold-enhanced training

The final training set combines:

- **Silver labels** from the earlier training data
- **100 gold human-reviewed claims** from the multi-agent workflow

This combined dataset was used to fine-tune PatentSBERTa.
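
A sketch of the combination step, reusing the files listed above (exact column names and merge logic in the notebook may differ):

```python
import numpy as np
import pandas as pd

# Silver pool: claim texts paired with silver labels
silver = pd.read_csv("train_meta.csv")
silver["label"] = np.load("y_train.npy")

# Gold pool: the 100 human-reviewed claims, with is_green_human as the label
gold = pd.read_csv("hitl_green_gold.csv").rename(columns={"is_green_human": "label"})

# keep="last" lets a gold label override the silver label for the same claim
train_df = (
    pd.concat([silver[["text", "label"]], gold[["text", "label"]]], ignore_index=True)
    .drop_duplicates(subset="text", keep="last")
    .reset_index(drop=True)
)
```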

## Training configuration

The notebook fine-tunes the model with the following setup:

- **Model:** `AI-Growth-Lab/PatentSBERTa`
- **Max sequence length:** `256`
- **Epochs:** `1`
- **Learning rate:** `2e-5`
- **Train batch size:** `8`
- **Eval batch size:** `8`
- **Weight decay:** `0.01`
- **Framework:** Hugging Face Transformers Trainer
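
As a sketch, the corresponding `Trainer` setup, reusing `tokenizer`, `model`, `train_df`, and `eval_silver` from the snippets above (`eval_silver` is assumed to carry `text` and `label` columns):

```python
from datasets import Dataset
from transformers import Trainer, TrainingArguments

def tokenize(batch):
    # Truncate claims to the 256-token limit used in the notebook
    return tokenizer(batch["text"], truncation=True, max_length=256)

train_ds = Dataset.from_pandas(train_df).map(tokenize, batched=True)
eval_ds = Dataset.from_pandas(eval_silver).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="patentsberta-green-hitl",
    num_train_epochs=1,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```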

## Dataset splits used during fine-tuning

From the notebook:

- **Training data:** silver training set + gold HITL labels
- **Evaluation data:** `eval_silver`
- **Additional check:** `gold_100`

The notebook text states that the final training dataset contains **35,200 claims**.

## Human vs AI agreement

According to the notebook:

- **Simple LLM from Assignment 2:** `94%` agreement with human labels
- **Agentic system from Assignment 3:** `87%` agreement with human labels

This suggests that the multi-agent system applied stricter reasoning criteria, producing more disagreement with human reviewers on borderline cases.

## Repository contents

Depending on what you upload, this repository may include:

- the processed HITL dataset
- the final trained model
- tokenizer files
- training notebook
- prediction / rationale outputs for the 100 reviewed claims

## Expected columns in the HITL dataset

The notebook shows or creates columns such as:

- `id`
- `text`
- `p_green`
- `u`
- `llm_green_suggested`
- `llm_confidence`
- `llm_rationale`
- `is_green_human`
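
A quick sanity check for an uploaded HITL file against this schema (filename assumed):

```python
import pandas as pd

expected = {
    "id", "text", "p_green", "u",
    "llm_green_suggested", "llm_confidence", "llm_rationale",
    "is_green_human",
}

hitl = pd.read_csv("hitl_review_progress_with_llm.csv")
missing = expected - set(hitl.columns)
assert not missing, f"Missing columns: {missing}"
```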

## Example use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Replace with the actual repository name once the model is uploaded
model_name = "YOUR_HF_REPO_NAME"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for inference

text = "A company develops a carbon capture system that reduces CO2 emissions from cement factories."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits
    pred = torch.argmax(logits, dim=-1).item()

print("Predicted label:", pred)  # 1 = green, 0 = non-green
```

## Intended use

This project is intended for:

- research and coursework on green patent detection
- experimentation with HITL labeling pipelines
- comparison of simple vs advanced AI-assisted annotation workflows
- climate-tech related document classification

## Limitations

- The gold set is relatively small (**100 reviewed claims**)
- The multi-agent workflow depends on LLM reasoning quality
- Agreement with humans does not automatically guarantee better downstream model performance
- Final performance metrics should be reported from the actual training run in this repository

## Notes

This README was prepared from the notebook workflow and code structure. If you are uploading the **model repo**, add the final evaluation metrics from your training output. If you are uploading the **dataset repo**, you can keep the methodology sections and remove the model inference example if not needed.

## Citation

If you use this work, please cite the repository and the base model:

- `AI-Growth-Lab/PatentSBERTa`

You may also describe the workflow as:

> Multi-Agent Human-in-the-Loop green patent detection using PatentSBERTa with gold-enhanced fine-tuning.