This model classifies communicative intentions in dialogue. It takes a single sentence or utterance as input and returns an illocutionary force label.

Below are the labels used to fine-tune a RoBERTa-large model:

```json
{
  "0": "Agreeing",
  "1": "Arguing",
  "2": "Asserting",
  "3": "Assertive Questioning",
  "4": "Challenging",
  "5": "Default Illocuting",
  "6": "Disagreeing",
  "7": "Pure Questioning",
  "8": "Restating",
  "9": "Rhetorical Questioning"
}
```
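As a sketch of how a prediction can be decoded with this label map (the repo id in the comments is taken from this card; the exact loading code is an assumption based on the standard `transformers` API, not the author's script):

```python
# Label mapping from this card (class id -> illocutionary force).
ID2LABEL = {
    0: "Agreeing",
    1: "Arguing",
    2: "Asserting",
    3: "Assertive Questioning",
    4: "Challenging",
    5: "Default Illocuting",
    6: "Disagreeing",
    7: "Pure Questioning",
    8: "Restating",
    9: "Rhetorical Questioning",
}

def decode_prediction(logits):
    """Map a 10-way score vector to its illocutionary force label (argmax)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return ID2LABEL[best]

# In practice the scores would come from the fine-tuned model, e.g. (assumed,
# standard transformers usage):
#   from transformers import AutoTokenizer, AutoModelForSequenceClassification
#   repo = "Godfrey2712/amf_illoc_force_intent_recognition"
#   tok = AutoTokenizer.from_pretrained(repo)
#   model = AutoModelForSequenceClassification.from_pretrained(repo)
#   logits = model(**tok("Is that really true?", return_tensors="pt")).logits[0].tolist()

# Dummy score vector: highest score at index 7 ("Pure Questioning").
scores = [0.1, 0.0, 0.2, 0.1, 0.0, 0.0, 0.0, 0.9, 0.1, 0.3]
print(decode_prediction(scores))  # -> Pure Questioning
```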

- F1 (macro): 0.55
- F1 (weighted): 0.67
- Accuracy: 0.67

Links to the datasets used:

- MM2012: https://corpora.aifdb.org/mm2012
- MM123: https://corpora.aifdb.org/mm123
- QT30: https://corpora.aifdb.org/qt30
- US2016: https://corpora.aifdb.org/US2016

| | precision | recall | f1-score | support |
|---|---|---|---|---|
| Agreeing | 0.63 | 0.67 | 0.65 | 18 |
| Arguing | 0.67 | 0.40 | 0.50 | 5 |
| Asserting | 0.79 | 0.89 | 0.84 | 200 |
| Assertive Questioning | 0.52 | 0.44 | 0.48 | 124 |
| Challenging | 0.48 | 0.42 | 0.44 | 24 |
| Default Illocuting | 1.00 | 0.07 | 0.13 | 14 |
| Disagreeing | 0.33 | 0.40 | 0.36 | 5 |
| Pure Questioning | 0.75 | 0.76 | 0.76 | 293 |
| Restating | 1.00 | 1.00 | 1.00 | 2 |
| Rhetorical Questioning | 0.36 | 0.40 | 0.38 | 81 |
| accuracy | | | 0.67 | 766 |
| macro avg | 0.65 | 0.54 | 0.55 | 766 |
| weighted avg | 0.67 | 0.67 | 0.67 | 766 |
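The macro and weighted averages can be reproduced from the per-class F1 scores and supports in the report (macro = unweighted mean over classes; weighted = support-weighted mean):

```python
# Per-class (f1-score, support) pairs copied from the classification report.
report = {
    "Agreeing": (0.65, 18),
    "Arguing": (0.50, 5),
    "Asserting": (0.84, 200),
    "Assertive Questioning": (0.48, 124),
    "Challenging": (0.44, 24),
    "Default Illocuting": (0.13, 14),
    "Disagreeing": (0.36, 5),
    "Pure Questioning": (0.76, 293),
    "Restating": (1.00, 2),
    "Rhetorical Questioning": (0.38, 81),
}

total = sum(n for _, n in report.values())  # 766 test utterances
macro_f1 = sum(f1 for f1, _ in report.values()) / len(report)
weighted_f1 = sum(f1 * n for f1, n in report.values()) / total

print(round(macro_f1, 2), round(weighted_f1, 2))  # -> 0.55 0.67
```

The gap between the two scores comes from rare, hard classes such as Default Illocuting (F1 0.13, 14 examples), which drag the macro average down while barely affecting the support-weighted one.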

The data preprocessing and fine-tuning procedure are described here: https://discovery.dundee.ac.uk/en/studentTheses/exploiting-illocutionary-forces-in-dialogue-structures-for-enhanc/

Citation:

Inyama, G. (2025). Exploiting Illocutionary Forces in Dialogue Structures for Enhancing Authorship Identification [Master of Philosophy thesis, University of Dundee]. Discovery - the University of Dundee Research Portal. https://doi.org/10.15132/20000713

Model: Godfrey2712/amf_illoc_force_intent_recognition (fine-tuned from RoBERTa-large)