---
tags:
- patents
- climate
- green-technology
- text-classification
- patent-classification
- human-in-the-loop
- multi-agent
- patentsberta
language:
- en
pipeline_tag: text-classification
library_name: transformers
---
# Green Patent Detection: Multi-Agent HITL + PatentSBERTa
This repository contains an advanced green patent detection workflow built for binary classification of patent claims into:
- `1` = green / climate-mitigation related
- `0` = non-green
The project extends a baseline PatentSBERTa workflow by adding a Human-in-the-Loop (HITL) review stage and a multi-agent debate system before final fine-tuning.
## Project overview
The goal of this project is to improve green patent detection by combining:
- High-risk sample selection from uncertainty sampling
- Multi-agent LLM review of difficult claims
- Human verification of the AI suggestions
- Final fine-tuning of PatentSBERTa using silver labels + gold HITL labels
This workflow was designed to test whether a more advanced labeling pipeline produces stronger training data than a simple single-LLM labeling approach.
## Base model

The final classifier is built from:

- Base encoder: `AI-Growth-Lab/PatentSBERTa`
- Task: binary text classification
- Domain: patent claim classification for climate mitigation / green technology
## Data used in the notebook

The notebook uses the following files:

- `patents_50k_green.parquet`
- `train_meta.csv`
- `y_train.npy`
- `eval_silver.parquet`
- `hitl_green_100.csv`
- `hitl_review_progress_with_llm.csv`
- `hitl_green_gold.csv`
- `hitl_three_agents.csv`
## Methodology

### 1. High-risk claim selection
A set of 100 high-risk patent claims was selected from earlier uncertainty-sampling outputs; these were the examples on which the model was most uncertain or ambiguous.
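Selection of this kind can be sketched as least-confidence sampling: keep the claims whose predicted probability sits closest to 0.5. The `p_green` column and the toy dataframe below are assumptions for illustration, not the notebook's actual code.

```python
import pandas as pd

# Hypothetical inputs: claim ids/texts plus the model's predicted
# probability of the "green" class (column names are assumptions).
df = pd.DataFrame({
    "id": range(8),
    "text": [f"claim {i}" for i in range(8)],
    "p_green": [0.02, 0.48, 0.97, 0.55, 0.51, 0.88, 0.10, 0.45],
})

# Least-confidence uncertainty: distance of p_green from 0.5.
df["uncertainty"] = (df["p_green"] - 0.5).abs()

# Keep the k most ambiguous claims (k=100 in the notebook; 4 here).
high_risk = df.nsmallest(4, "uncertainty")
print(sorted(high_risk["id"].tolist()))  # -> [1, 3, 4, 7]
```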
### 2. Multi-agent debate system

Three agents were created using CrewAI and an Ollama-hosted model (`qwen2.5:3b-instruct`):
- Advocate Agent – argues why the claim should be classified as green under Y02 climate mitigation logic
- Skeptic Agent – argues why the claim may not qualify and checks for weak evidence or greenwashing
- Judge Agent – reviews both sides and returns a structured final output with:
  - predicted label
  - confidence
  - rationale
This produces an AI suggestion for each difficult claim.
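Downstream code only needs the judge's structured verdict, which can be consumed with a small parser along these lines. The JSON field names and the example reply are illustrative assumptions, not the notebook's actual output format.

```python
import json

def parse_judge_output(raw: str) -> dict:
    """Parse the Judge Agent's structured reply into a
    (label, confidence, rationale) record. Field names are assumed."""
    data = json.loads(raw)
    return {
        "label": int(data["label"]),             # 1 = green, 0 = non-green
        "confidence": float(data["confidence"]),
        "rationale": str(data["rationale"]),
    }

# Example judge reply for one debated claim (illustrative only).
raw_reply = (
    '{"label": 1, "confidence": 0.72, '
    '"rationale": "Claim cites CO2 capture, consistent with Y02 mitigation logic."}'
)
verdict = parse_judge_output(raw_reply)
print(verdict["label"], verdict["confidence"])
```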
### 3. Human-in-the-Loop review
Each AI-generated suggestion was then manually reviewed, and the final human label was stored as `is_green_human`.
These human-reviewed labels form the gold dataset for the difficult claims.
### 4. Gold-enhanced training
The final training set combines:
- Silver labels from the earlier training data
- 100 gold human-reviewed claims from the multi-agent workflow
This combined dataset was used to fine-tune PatentSBERTa.
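The combination step can be sketched with pandas, letting a gold label override the silver one wherever a claim was human-reviewed. Apart from `is_green_human`, the column names and toy data below are assumptions.

```python
import pandas as pd

# Silver-labeled training data (labels from the earlier pipeline).
silver = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "text": ["a", "b", "c", "d"],
    "label": [0, 1, 0, 1],
})

# Gold human-reviewed labels for the difficult claims.
gold = pd.DataFrame({"id": [2, 3], "is_green_human": [0, 1]})

# Merge, then let the gold label override the silver one where present.
train = silver.merge(gold, on="id", how="left")
train["label"] = train["is_green_human"].fillna(train["label"]).astype(int)
train = train.drop(columns="is_green_human")
print(train["label"].tolist())  # -> [0, 0, 1, 1]
```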
## Training configuration

The notebook fine-tunes the model with the following setup:

- Model: `AI-Growth-Lab/PatentSBERTa`
- Max sequence length: 256
- Epochs: 1
- Learning rate: 2e-5
- Train batch size: 8
- Eval batch size: 8
- Weight decay: 0.01
- Framework: Hugging Face Transformers `Trainer`
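These hyperparameters map onto the Transformers `TrainingArguments` roughly as follows; the output directory is an assumption, and the 256-token limit is applied at tokenization time rather than here.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="patentsberta-green",   # assumed path
    num_train_epochs=1,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
)

# The Trainer is then built with the model and tokenized datasets, e.g.:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```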
## Dataset splits used during fine-tuning
From the notebook:
- Training data: silver training set + gold HITL labels
- Evaluation data: `eval_silver`
- Additional check: `gold_100`
The notebook reports that the final training dataset contains 35,200 claims.
## Human vs AI agreement
According to the notebook:
- Simple LLM from Assignment 2: 94% agreement with human labels
- Agentic system from Assignment 3: 87% agreement with human labels
This suggests that the multi-agent system used stricter reasoning criteria, which created more disagreement on borderline cases.
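Agreement here is plain label accuracy between the human and AI labels, which can be checked in a few lines of Python:

```python
def agreement(human, system):
    """Fraction of claims where the system label matches the human label."""
    assert len(human) == len(system) and len(human) > 0
    return sum(h == s for h, s in zip(human, system)) / len(human)

# Toy example: 8 of 10 labels match -> 0.8 agreement.
human =  [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
system = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(agreement(human, system))  # -> 0.8
```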
## Repository contents
Depending on what you upload, this repository may include:
- the processed HITL dataset
- the final trained model
- tokenizer files
- training notebook
- prediction / rationale outputs for the 100 reviewed claims
## Expected columns in the HITL dataset

The notebook shows or creates columns such as:

- `id`
- `text`
- `p_green`
- `u`
- `llm_green_suggested`
- `llm_confidence`
- `llm_rationale`
- `is_green_human`
## Example use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "YOUR_HF_REPO_NAME"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "A company develops a carbon capture system that reduces CO2 emissions from cement factories."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=256)

with torch.no_grad():
    logits = model(**inputs).logits

pred = torch.argmax(logits, dim=-1).item()
print("Predicted label:", pred)
```
## Intended use
This project is intended for:
- research and coursework on green patent detection
- experimentation with HITL labeling pipelines
- comparison of simple vs advanced AI-assisted annotation workflows
- climate-tech related document classification
## Limitations
- The gold set is relatively small (100 reviewed claims)
- The multi-agent workflow depends on LLM reasoning quality
- Agreement with humans does not automatically guarantee better downstream model performance
- Final performance metrics should be reported from the actual training run in this repository
## Notes
This README was prepared from the notebook workflow and code structure. If you are uploading the model repo, add the final evaluation metrics from your training output. If you are uploading the dataset repo, you can keep the methodology sections and remove the model inference example if not needed.
## Citation
If you use this work, please cite the repository and the base model:
- `AI-Growth-Lab/PatentSBERTa`
You may also describe the workflow as:

> Multi-Agent Human-in-the-Loop green patent detection using PatentSBERTa with gold-enhanced fine-tuning.