---
library_name: peft
license: apache-2.0
tags:
- json-extraction
- modernbert
- lora
- diffuberta
metrics:
- name: train_loss
value: 4.7773
- name: eval_loss
value: 4.316555023193359
---
# DiffuBERTa: JSON Extraction Adapter
This model is a fine-tuned version of **answerdotai/ModernBERT-base** trained with LoRA. It extracts structured JSON data from unstructured text using a parallel decoding approach.
## Model Performance
- **Final Training Loss**: 4.7773
- **Final Evaluation Loss**: 4.3166
- **Training Epochs**: 5
- **Date Trained**: 2025-11-28
## 🚀 Live Demo Output
*(Generated automatically after training)*
**Input Text:**
> "We are excited to welcome Dr. Sarah to our Paris office as Senior Data Scientist."
**Template:**
> `{'name': '[1]', 'job': '[2]', 'city': '[1]'}`
**Model Output:**
```json
{
"name": "Sarah",
"job": "Data scientist",
"city": "Paris"
}
```
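The bracketed numbers in the template appear to indicate how many mask tokens each field receives (`[2]` for the two-token "Data scientist"). A minimal sketch of that expansion, with the helper name chosen here purely for illustration:

```python
import re

def expand_template(template: str, mask_token: str = "[MASK]") -> str:
    """Replace each '[n]' slot with n copies of the mask token."""
    return re.sub(
        r"\[(\d+)\]",
        lambda m: " ".join([mask_token] * int(m.group(1))),
        template,
    )

# '[1]' -> one mask, '[2]' -> two masks:
expand_template("{'name': '[1]', 'job': '[2]', 'city': '[1]'}")
# "{'name': '[MASK]', 'job': '[MASK] [MASK]', 'city': '[MASK]'}"
```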
## Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model, then apply the LoRA adapter weights.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
base_model = AutoModelForMaskedLM.from_pretrained("answerdotai/ModernBERT-base")
model = PeftModel.from_pretrained(base_model, "philipp-zettl/DiffuBERTa")
model.eval()

# ... use extract_parallel helper ...
```
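The `extract_parallel` helper is not included in this card. As a rough, self-contained sketch of how parallel decoding could work with a masked LM (the function body and its logic are assumptions, not the released implementation):

```python
import re
import torch

def extract_parallel(text: str, template: str, model, tokenizer) -> str:
    """Illustrative only: fill every template slot in a single forward pass."""
    # Expand each '[n]' slot into n mask tokens (same expansion as above).
    masked = re.sub(
        r"\[(\d+)\]",
        lambda m: " ".join([tokenizer.mask_token] * int(m.group(1))),
        template,
    )
    # Encode the source text and the masked template as a sentence pair.
    inputs = tokenizer(text, masked, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Predict all masked positions simultaneously (parallel, not autoregressive).
    mask_positions = (
        inputs["input_ids"][0] == tokenizer.mask_token_id
    ).nonzero(as_tuple=True)[0]
    filled = inputs["input_ids"].clone()
    filled[0, mask_positions] = logits[0, mask_positions].argmax(dim=-1)
    return tokenizer.decode(filled[0], skip_special_tokens=True)
```

A single forward pass fills all slots at once, which is what distinguishes this approach from token-by-token autoregressive generation.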