---
datasets:
- CausalNewsCorpus
language: en
library_name: transformers
license: mit
metrics:
- accuracy
- f1
- precision
- recall
tags:
- text-classification
- roberta
- causal-narrative
- sequence-classification
---
# RoBERTa Causal Narrative Classifier
This model is a fine-tuned version of `roberta-base` for causal narrative sentence classification.
## Model Description
- **Base Model**: roberta-base
- **Task**: Binary classification (causal vs non-causal sentences)
- **Training Data**: CausalNewsCorpus V2
## Training Results
- **Accuracy**: 83.82%
- **Precision**: 84.31%
- **Recall**: 83.20%
- **F1 Score**: 83.48%
## Usage
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
import torch
# Load model and tokenizer
model_name = "causal-narrative/roberta-causal-narrative-classifier"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForSequenceClassification.from_pretrained(model_name)
# Predict
text = "The heavy rain caused flooding in the city."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=-1).item()
print(f"Is causal: {prediction == 1}")
```
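The snippet above takes the argmax of the raw logits. If you also want a confidence score for the causal label, apply a softmax over the two classes. A minimal sketch, using a dummy logits tensor in place of `outputs.logits` from the snippet above (the values shown are illustrative, not real model output):

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for outputs.logits; shape (batch_size, num_labels) = (1, 2)
logits = torch.tensor([[-1.2, 2.3]])

probs = F.softmax(logits, dim=-1)   # convert logits to class probabilities
causal_prob = probs[0, 1].item()    # probability of label 1 (causal)
prediction = int(causal_prob >= 0.5)

print(f"P(causal) = {causal_prob:.3f}, predicted label = {prediction}")
```

This is equivalent to the argmax for a two-class model, but the probability is useful when you want to threshold predictions at something other than 0.5.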
## Labels
- **0**: Non-causal sentence
- **1**: Causal narrative sentence