|
|
---
language:
- en
- tl
tags:
- sarcasm-detection
- mock-politeness
- multi-task-learning
- code-mixed
license: mit
base_model:
- FacebookAI/xlm-roberta-base
---
|
|
|
|
|
# XLM-RoBERTa with Multi-Task Learning for Sarcasm and Mock Politeness Detection
|
|
|
|
|
## Model Description

This project fine-tunes **XLM-RoBERTa** for detecting **sarcasm** and **mock politeness** in Filipino faculty evaluation texts written in **English, Tagalog, or code-mixed Taglish**.
|
|
|
|
|
Two models are included:

- **MTL model** → sarcasm detection (main task) + mock politeness detection (auxiliary task)
- **STL model** → sarcasm detection only
|
|
|
|
|
The models are packaged into a **desktop app (Tkinter + Python)** for easy testing.
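For readers who want to see how the multi-task setup is structured, a minimal sketch is shown below. It assumes the standard Hugging Face `transformers` API; the class name, head names, and example sentence are illustrative, not the project's exact training code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskXLMR(nn.Module):
    """Shared XLM-R encoder with a sarcasm head (main task)
    and a mock-politeness head (auxiliary task)."""

    def __init__(self, base_model="FacebookAI/xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        self.sarcasm_head = nn.Linear(hidden, 2)      # main task: sarcastic vs. not
        self.politeness_head = nn.Linear(hidden, 2)   # auxiliary task: mock polite vs. not

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]          # representation of the <s> token
        return self.sarcasm_head(pooled), self.politeness_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = MultiTaskXLMR()

enc = tokenizer("Wow, ang galing mo naman mag-explain.", return_tensors="pt")
with torch.no_grad():
    sarcasm_logits, politeness_logits = model(enc["input_ids"], enc["attention_mask"])
```

During training, the two cross-entropy losses would typically be combined as `loss = loss_sarcasm + α · loss_politeness`, with `α` weighting the auxiliary task; the STL variant keeps only the sarcasm head.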
|
|
|
|
|
---
|
|
|
|
|
## Intended Uses & Limitations
|
|
|
|
|
### Intended Use

- Demonstrating multi-task learning in NLP
- Exploring sarcasm and politeness detection in Taglish text
- Academic/research purposes only
|
|
|
|
|
### Limitations

- Trained on a domain-specific dataset (faculty evaluations)
- May not generalize well outside Taglish or academic settings
- Predictions are not guaranteed to be accurate for all contexts
|
|
|
|
|
---
|
|
|
|
|
## How to Use
|
|
|
|
|
1. Download the **XLM-R folder** from this repository.
2. Inside the folder, locate and open `XLM-R/XLM-R.exe`.
3. Use the GUI to input text or upload a `.csv` file (see the included `INPUT_SAMPLE.csv`).
4. The app will output predictions for sarcasm (and mock politeness if using MTL).
|
|
|
|
|
*(No coding required — the `.exe` is standalone on Windows. If you prefer a scripted workflow, see the sketch below.)*
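For batch scoring outside the GUI, something along these lines would work, reusing the `MultiTaskXLMR` model and tokenizer from the sketch in "Model Description". The `text` column name and file paths are assumptions; `INPUT_SAMPLE.csv` defines the actual expected format.

```python
import pandas as pd
import torch

# Reuses `model` and `tokenizer` from the earlier sketch.
df = pd.read_csv("INPUT_SAMPLE.csv")
texts = df["text"].astype(str).tolist()   # column name is an assumption; check INPUT_SAMPLE.csv

model.eval()
with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    sarcasm_logits, politeness_logits = model(enc["input_ids"], enc["attention_mask"])

df["sarcasm_pred"] = sarcasm_logits.argmax(dim=-1).tolist()
df["mock_politeness_pred"] = politeness_logits.argmax(dim=-1).tolist()
df.to_csv("predictions.csv", index=False)
```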
|
|
|
|
|
---
|
|
|
|
|
## Training Data

- Collected Filipino faculty evaluation texts written in **English, Tagalog, or code-mixed Taglish**
- Annotated for sarcasm and mock politeness
|
|
|
|
|
---
|
|
|
|
|
## Evaluation

- Compared **Single-Task (STL)** vs **Multi-Task (MTL)** models
- Metrics: accuracy, precision, recall, F1 (see the computation sketch below)
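As a reference for how these metrics are typically computed, here is a minimal `scikit-learn` sketch; the labels below are purely illustrative, not results from this project.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative gold labels and predictions for the sarcasm task (1 = sarcastic)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```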