---
license: apache-2.0
datasets:
- jspaulsen/actionable-bert-dataset
base_model:
- answerdotai/ModernBERT-base
---
# Actionable-BERT
A ModernBERT-based binary classifier for determining whether a conversation should be routed to a larger, more capable agent. The model analyzes conversation history and predicts if the user's request requires advanced reasoning or can be handled by a simpler system.
## Model Details
- **Base Model**: [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
- **Task**: Binary sequence classification
- **Max Sequence Length**: 8192 tokens
- **Labels**:
- `0`: Not Actionable (simple request, no routing needed)
- `1`: Actionable (complex request, route to larger agent)
## Input Format
Conversations are formatted as a single string with turn markers:
```
[U] User message here. [A] Assistant response here. [U] Follow-up user message...
```
- `[U]` marks the start of a user turn
- `[A]` marks the start of an assistant turn
- Turns are concatenated into a single string
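A small helper (hypothetical, not shipped with this repo) can build the expected input string from a list of `(role, text)` turns:

```python
# Hypothetical helper: format (role, text) turns into the [U]/[A]
# string this model expects. Roles are "user" or "assistant".
def format_conversation(turns):
    markers = {"user": "[U]", "assistant": "[A]"}
    return " ".join(f"{markers[role]} {text}" for role, text in turns)

text = format_conversation([
    ("user", "Can you help me write a Python function?"),
    ("assistant", "Sure, what would you like the function to do?"),
])
print(text)
# → [U] Can you help me write a Python function? [A] Sure, what would you like the function to do?
```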
## Installation
```bash
pip install transformers torch
```
## Inference Example
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "jspaulsen/actionable-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
# Format conversation history
conversation = "[U] Can you help me write a Python function? [A] Sure, what would you like the function to do? [U] I need a recursive function to calculate fibonacci numbers with memoization."
# Tokenize, truncating to the model's 8192-token context window
inputs = tokenizer(
    conversation,
    return_tensors="pt",
    truncation=True,
    max_length=8192,
)

with torch.no_grad():
    outputs = model(**inputs)

# Class probabilities and predicted label
probs = torch.softmax(outputs.logits, dim=-1)
prediction = torch.argmax(outputs.logits, dim=-1).item()

labels = {0: "Not Actionable", 1: "Actionable"}
print(f"Prediction: {labels[prediction]} (confidence: {probs[0, prediction]:.2f})")
```
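Instead of a hard argmax, you may want to route based on the probability of the "Actionable" class and a tunable threshold. The sketch below uses a dummy logits tensor in place of `outputs.logits` so it is self-contained; the 0.5 default threshold is an assumption and should be tuned on a validation set against your routing cost trade-off.

```python
import torch

def should_route(logits: torch.Tensor, threshold: float = 0.5) -> bool:
    # Probability of class 1 ("Actionable"); route when it exceeds the threshold.
    probs = torch.softmax(logits, dim=-1)
    return bool(probs[..., 1].item() >= threshold)

# Dummy logits standing in for outputs.logits from the model:
logits = torch.tensor([[-1.2, 2.3]])
print(should_route(logits))  # True (P(actionable) ≈ 0.97)
```

Raising the threshold trades missed escalations for fewer unnecessary hand-offs to the larger agent.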
## Dataset
Trained on [jspaulsen/actionable-bert-dataset](https://huggingface.co/datasets/jspaulsen/actionable-bert-dataset).