Assignment 3 - Green Patent Detection: Advanced Architectures (Option: Agentic-CrewAI llama3:4b)
This model is a fine-tuned version of AI-Growth-Lab/PatentSBERTa for the task of Green Patent Detection using patent claim text (specifically, fine-tuned on Ailee52/PatentSBERTa-green-dataset).
The model was developed as part of Assignment 3: Advanced Architectures (Agents vs QLoRA) and incorporates labels generated by a Multi-Agent System followed by Human-in-the-Loop (HITL) verification.
Evaluation results
- eval_silver (F1): 0.8066
- eval_gold (F1): 0.4791
- The model performs well on the general patent distribution but shows lower performance on high-risk edge cases, highlighting the difficulty of green patent classification.
Agreement report (HITL)
- The agreement between the multi-agent system and the human labels was 56%: the agents matched human decisions on slightly more than half of the reviewed high-risk patent claims.
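The agreement rate above is simply the fraction of reviewed claims where the agent label matched the human label. A minimal sketch (the label lists here are hypothetical, not the project's actual annotations):

```python
def agreement_rate(agent_labels, human_labels):
    """Fraction of reviewed claims where the agent system matched the human decision."""
    assert len(agent_labels) == len(human_labels) and agent_labels
    matches = sum(a == h for a, h in zip(agent_labels, human_labels))
    return matches / len(agent_labels)

# Hypothetical toy example with 5 reviewed claims (1 = green, 0 = not green):
agent = [1, 0, 1, 1, 0]
human = [1, 1, 1, 0, 0]
print(agreement_rate(agent, human))  # 3 matches out of 5 -> 0.6
```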
Training Method
The model was fine-tuned using a combination of:
- Silver labels (50K patents)
- Gold labels (100 high-risk patents reviewed by humans)
The gold labels were generated by a Multi-Agent System (MAS) built with the CrewAI framework (llama3:4b), using the following agents:
- Advocate Agent – argues why the patent should be classified as green technology.
- Skeptic Agent – challenges the green classification and identifies possible greenwashing.
- Judge Agent – evaluates both arguments and produces a final classification.
After the agent debate, the outputs were reviewed using Human-in-the-Loop (HITL) to produce the final gold labels.
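The Advocate → Skeptic → Judge flow above can be sketched as plain Python. This is an illustrative sketch only: the actual project orchestrates LLM agents via CrewAI (llama3:4b), whereas here each agent is a hypothetical placeholder function (with a toy keyword rule standing in for the Judge's LLM call) so the control flow is runnable.

```python
def advocate(claim: str) -> str:
    # Hypothetical stand-in: argues for the "green" classification.
    return f"PRO: '{claim}' plausibly reduces emissions or resource use."

def skeptic(claim: str) -> str:
    # Hypothetical stand-in: challenges the classification, flags greenwashing.
    return f"CON: '{claim}' may only mention sustainability superficially."

def judge(claim: str, pro: str, con: str) -> dict:
    # Hypothetical stand-in: a real Judge agent would be an LLM weighing both
    # arguments; here a toy keyword rule produces the binary label.
    green_terms = ("solar", "wind", "recycl", "emission")
    label = int(any(term in claim.lower() for term in green_terms))
    return {"label": label, "pro": pro, "con": con}

def debate(claim: str) -> dict:
    # One debate round per claim; the output then goes to HITL review
    # before becoming a gold label.
    pro = advocate(claim)
    con = skeptic(claim)
    return judge(claim, pro, con)

result = debate("A photovoltaic solar cell with a recycled silicon substrate")
print(result["label"])  # -> 1 (keyword rule fires on "solar"/"recycl")
```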
Training details
- Epochs: 1
- Learning rate: 2e-5
- Max sequence length: 256
- Loss: Cross-entropy
- Evaluation metric: F1 score
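The F1 scores reported on eval_silver and eval_gold follow the standard binary definition (harmonic mean of precision and recall). A minimal dependency-free sketch, using hypothetical label lists:

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical toy example: tp=3, fp=1, fn=1 -> precision = recall = 0.75
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(f1_score(y_true, y_pred))  # -> 0.75
```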
Dataset
- Ailee52/PatentSBERTa_finetuned_green_multiagent-dataset
F1 comparative analysis on eval_silver (baseline, simple LLM fine-tuned, and MAS fine-tuned)
| Model | Training data / method | F1 (eval_silver) |
| --- | --- | --- |
| 1. Baseline | Frozen embedding (no fine-tuning) | 0.77 |
| 2. Assignment 2 model | Silver + gold (simple generic LLM + HITL) | 0.8009 |
| 3. Assignment 3 model | Silver + gold (multi-agent fine-tuning + HITL) | 0.8066 |
- Conclusion: Fine-tuning significantly improves performance compared to the frozen baseline. The Multi-Agent System slightly outperforms the Assignment 2 model, indicating that structured agent-based reasoning can provide modest gains in classification performance.
Disclaimer
- This project was developed for academic purposes only. The classification results are intended for research and educational use, and should not be interpreted as legal advice or professional patent evaluation. The Human-in-the-Loop (HITL) annotations were performed by students as part of a coursework assignment and do not represent expert legal judgment. The model may contain biases and errors inherited from both automated labeling (silver labels) and LLM-assisted human review.
Usage of Generative AI
- This project utilized Generative AI tools, including ChatGPT, Claude, and Grammarly, to assist with code implementation, some debugging, and grammatical corrections throughout the development process. All parameter selections, agent configurations, and final judgments, including HITL label decisions, were made solely by the author based on personal consideration and understanding of the project.
Video link for explanation: https://aaudk-my.sharepoint.com/:v:/g/personal/sm42zm_student_aau_dk/IQAlr-XVaSjLTpLLW0MH5LZKAUhA5aISCR6qsWVu7CR9kuE