---
language:
- en
- tl
tags:
- sarcasm-detection
- mock-politeness
- multi-task-learning
- code-mixed
license: mit
base_model:
- FacebookAI/xlm-roberta-base
---
# XLM-RoBERTa with Multi-Task Learning for Sarcasm and Mock Politeness Detection
## Model Description
This project fine-tunes **XLM-RoBERTa** to detect **sarcasm** and **mock politeness** in Filipino faculty evaluation texts written in **English, Tagalog, or code-mixed Taglish**.
Two models are included:
- **MTL model** → sarcasm detection (main task) + mock politeness detection (auxiliary task)
- **STL model** → sarcasm detection only
The models are packaged into a **desktop app (Tkinter + Python)** for easy testing.
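The exact head design used in training is not documented in this card; as a minimal sketch, the MTL setup can be pictured as two classification heads sharing one encoder representation, with the auxiliary loss weighted into the main loss. All class, function, and parameter names below (including `aux_weight=0.5`) are hypothetical illustrations, not the project's actual code:

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Two classification heads over a shared encoder representation:
    sarcasm (main task) and mock politeness (auxiliary task).
    In the full model these would sit on top of XLM-RoBERTa's pooled
    [CLS] output (hidden_size=768 for xlm-roberta-base)."""
    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.sarcasm_head = nn.Linear(hidden_size, num_labels)
        self.politeness_head = nn.Linear(hidden_size, num_labels)

    def forward(self, cls_embedding):
        # One shared representation feeds both task-specific heads
        return self.sarcasm_head(cls_embedding), self.politeness_head(cls_embedding)

def mtl_loss(sarcasm_logits, polite_logits, sarcasm_labels, polite_labels,
             aux_weight=0.5):
    """Joint loss: main-task loss plus a weighted auxiliary-task loss.
    The 0.5 weight is an illustrative placeholder."""
    ce = nn.CrossEntropyLoss()
    return (ce(sarcasm_logits, sarcasm_labels)
            + aux_weight * ce(polite_logits, polite_labels))
```

The STL variant would keep only the sarcasm head and drop the auxiliary term from the loss.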
---
## Intended Uses & Limitations
### Intended Use
- Demonstrating multi-task learning in NLP
- Exploring sarcasm and politeness detection in Taglish text
- Academic/research purposes only
### Limitations
- Trained on a domain-specific dataset (faculty evaluations)
- May not generalize well outside Taglish or academic settings
- Predictions are not guaranteed to be accurate for all contexts
---
## How to Use
1. Download the **XLM-R folder** from this repository.
2. Inside the folder, locate and open `XLM-R/XLM-R.exe`.
3. Use the GUI to input text or upload a `.csv` file (see included `INPUT_SAMPLE.csv`).
4. The app will output predictions for sarcasm (and mock politeness if using MTL).
*(No coding required — the `.exe` is standalone on Windows.)*
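For batch testing, the CSV schema is defined by the included `INPUT_SAMPLE.csv`; this card does not spell out its header. As a sketch, a batch file with one text column could be generated like this (the `text` column name is a hypothetical placeholder; match the header in `INPUT_SAMPLE.csv`):

```python
import csv

# Example Taglish evaluation comments to classify (illustrative only)
rows = [
    "Ang galing mo naman magturo, sobrang linaw ng lecture.",
    "Thank you po sa napaka-organized na syllabus.",
]

with open("batch_input.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])  # hypothetical column name; check INPUT_SAMPLE.csv
    for r in rows:
        writer.writerow([r])
```

The resulting `batch_input.csv` can then be uploaded through the GUI in step 3.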
---
## Training Data
- Collected faculty evaluation texts written in **English, Tagalog, or code-mixed Taglish**
- Annotated for sarcasm and mock politeness
---
## Evaluation
- Compared **Single-Task (STL)** vs **Multi-Task (MTL)**
- Metrics: accuracy, precision, recall, F1
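The card does not report the resulting scores. For reference, the four metrics can be computed from a model's predictions with scikit-learn; the labels below are illustrative only, not the project's results:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Illustrative sarcasm labels (1 = sarcastic), NOT actual evaluation data
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))   # 0.75
print(precision_score(y_true, y_pred))  # 0.75
print(recall_score(y_true, y_pred))     # 0.75
print(f1_score(y_true, y_pred))         # 0.75
```

When comparing STL against MTL, the same metric set would be computed on the sarcasm task for both models, using identical test splits.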