Update README.md

---
language: en
library_name: transformers
pipeline_tag: text-classification
tags:
- text-classification
- sequence-classification
- roberta
- distilroberta
- climate-change
- logical-fallacy-detection
- nlp
license: apache-2.0
model-index:
- name: climate-fallacy-roberta
  results:
  - task:
      type: text-classification
      name: Climate logical fallacy classification
    dataset:
      name: Climate subset of Tariq60/fallacy-detection
      type: custom
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.24
    - name: Macro F1
      type: f1
      value: 0.20
    - name: Weighted F1
      type: f1
      value: 0.24
---

# Climate Logical Fallacy Classifier (DistilRoBERTa)

This model is a **DistilRoBERTa**-based text classification model fine-tuned to detect **logical fallacies in climate-related text**.
It predicts one of 11 logical fallacy labels (including `NO_FALLACY`) for a given sentence or short paragraph.

The model was trained as part of an academic NLP project on _"Automated Detection of Logical Fallacies in Climate Change Social Media Posts using Small Language Models (SLMs)"_.

## Model Details

- **Base model**: `distilroberta-base`
- **Architecture**: DistilRoBERTa (Transformer encoder, 6 layers)
- **Task**: Multi-class text classification
- **Number of classes**: 11
- **Language**: English
- **Framework**: Transformers

### Label Set

The model is trained to predict the following labels:

1. `CHERRY_PICKING`
2. `EVADING_THE_BURDEN_OF_PROOF`
3. `FALSE_ANALOGY`
4. `FALSE_AUTHORITY`
5. `FALSE_CAUSE`
6. `HASTY_GENERALISATION`
7. `NO_FALLACY`
8. `POST_HOC`
9. `RED_HERRINGS`
10. `STRAWMAN`
11. `VAGUENESS`

`id2label` / `label2id` mappings are stored in the model config and are consistent with the training code.
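
As a quick sanity check, the mapping can be read straight from the hosted config without downloading the model weights; a minimal sketch:

```python
from transformers import AutoConfig

# Fetch only the configuration (no weights) and inspect the label mapping.
config = AutoConfig.from_pretrained("SteadyHands/climate-fallacy-roberta")

# After loading, id2label keys are integer class ids.
for idx in sorted(config.id2label):
    print(idx, config.id2label[idx])
```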

## Training Data

The model was fine-tuned on the **climate subset** of the open-source dataset from:

> Tariq60 – *fallacy-detection* repository
> https://github.com/Tariq60/fallacy-detection

Only the **climate** portion of the dataset was used, with the standard split:

- `train/` – training examples
- `dev/` – validation examples
- `test/` – held-out evaluation set

Each example includes:

- The climate-related text segment
- A manually assigned fallacy label (or `No fallacy`)

### Preprocessing

- Texts were lower-cased and cleaned using a light `basic_clean` function (a sketch follows this list):
  - Stripping extra whitespace
  - Normalising some punctuation
- Some classes were **minority labels** (few examples), so basic **class balancing** was applied via up-sampling in the training set.
- NaN or empty texts were dropped before training.
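
The exact `basic_clean` implementation is not included in this card; the following is a minimal sketch of what such a function might look like, covering only the whitespace stripping and punctuation normalisation described above:

```python
import re

def basic_clean(text: str) -> str:
    """Hypothetical reconstruction of the light cleaning step;
    the actual training-time function may differ."""
    text = text.lower()
    # Normalise curly quotes and long dashes to plain ASCII equivalents.
    text = text.replace("\u2019", "'").replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2013", "-").replace("\u2014", "-")
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", text)
    return text.strip()

print(basic_clean("  Climate  has ALWAYS changed \u2013 hasn\u2019t it?  "))
# -> "climate has always changed - hasn't it?"
```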

## Training Procedure

- **Base model**: `distilroberta-base`
- **Optimizer**: AdamW (via `Trainer`)
- **Learning rate**: 2e-5
- **Batch size**: 16
- **Max sequence length**: 128–256 tokens
- **Epochs**: 10
- **Weight decay**: 0.01
- **Loss function**: Cross-entropy, optionally with class weights to mitigate class imbalance
- **Validation split**: 80/20 stratified split of the training data

Training used the standard `AutoTokenizer`, `AutoModelForSequenceClassification`, `TrainingArguments`, and `Trainer` classes from the Transformers library; a sketch of the setup follows.
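
A minimal fine-tuning sketch consistent with the hyperparameters listed above. The toy dataset, label ids, and output directory are illustrative placeholders, not the project's actual training script:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=11)

# Toy stand-in for the climate fallacy training data (text + integer label id).
train_ds = Dataset.from_dict({
    "text": [
        "It was cold today, so global warming is a hoax.",
        "CO2 concentrations have risen steadily since 1960.",
    ],
    "labels": [0, 6],  # illustrative ids only; see id2label in the model config
})

def tokenize(batch):
    # Pad/truncate to the card's stated maximum sequence length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="climate-fallacy-roberta",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=10,
    weight_decay=0.01,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```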

## Evaluation

Evaluation was done on the **held-out climate test set** from the dataset.

**Metrics (multi-class):**

- **Accuracy** ≈ 0.24
- **Macro F1** ≈ 0.20
- **Weighted F1** ≈ 0.24

These values are **baseline experimental results** on a relatively small and imbalanced dataset. They should be interpreted as *preliminary research numbers*, not as production-ready performance.

Different random seeds, data balancing strategies, or more aggressive hyperparameter tuning can change these numbers.
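
The headline numbers above can be recomputed from raw predictions with scikit-learn; a minimal sketch, where the arrays are placeholders for the true and predicted label ids on the test set:

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder arrays: true and predicted class ids for the held-out test set.
y_true = [7, 0, 4, 7, 2]
y_pred = [7, 4, 4, 0, 2]

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Macro F1:   ", f1_score(y_true, y_pred, average="macro"))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```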

## Intended Use

### Primary Use

- Research and experimentation on:
  - Automated detection of logical fallacies in climate discourse
  - Comparing traditional baselines (TF-IDF + SVM) vs. Transformer-based models
  - Building educational tools that flag potential fallacies in climate arguments

### Suitable Scenarios

- Analyzing **short climate-related social media posts**
- Demonstration / teaching examples on:
  - Argumentation quality
  - Climate misinformation
  - Explainable NLP (combined with a small language model explainer, e.g. FLAN-T5)

## Limitations & Ethical Considerations

### Limitations

- **Small dataset**: Training data is limited in size, especially for rarer fallacy types.
- **Class imbalance**: Some fallacies occur far less frequently, which affects per-class F1 scores.
- **Modest performance**: Overall accuracy and macro F1 are relatively low. The model should be treated as an exploratory research artifact, not a production system.
- **Domain specificity**: The model is trained only on **climate** discourse; performance on other topics (e.g. politics, health) is unknown and likely poor.

### Ethical Considerations

- Predictions are **probabilistic**, not definitive judgments of truth or deception.
- The model can be **wrong or over-confident**, especially on borderline or nuanced arguments.
- It should **not** be used for automated moderation, censorship, or any high-stakes decision-making without strong human oversight.

## Integration with an Explanatory SLM

In the associated project, this classifier is combined with a small language model (e.g., `google/flan-t5-small`) to generate natural-language explanations of the predicted fallacy label:

- What the fallacy means in simple terms
- Why the input text might be an example

This setup is used in a Streamlit app:

1. Users enter a climate-related argument.
2. The model predicts a fallacy label.
3. FLAN-T5 generates a short explanation (see the sketch below).
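
A minimal sketch of that classifier-plus-explainer chain, assuming `google/flan-t5-small` as the explainer; the prompt wording is illustrative, not the project's actual template:

```python
from transformers import pipeline

# Step 1: classify the argument.
clf = pipeline("text-classification", model="SteadyHands/climate-fallacy-roberta")

# Step 2: explain the predicted label with a small seq2seq model.
explainer = pipeline("text2text-generation", model="google/flan-t5-small")

text = "Temperatures dropped this winter, so global warming must be a hoax."
label = clf(text)[0]["label"]

prompt = (
    f"Explain in one sentence why the following argument may contain "
    f"the logical fallacy '{label}': {text}"
)
explanation = explainer(prompt, max_new_tokens=60)[0]["generated_text"]

print("Label:", label)
print("Explanation:", explanation)
```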

## Citation

If you use this model in academic work, you can cite it as:

> Kyeremeh, F. (2025). *Climate Logical Fallacy Classifier (DistilRoBERTa)*. Hugging Face.
> Model: SteadyHands/climate-fallacy-roberta.

Please also consider citing the original dataset author(s):

> Tariq60. *fallacy-detection* GitHub repository.
> https://github.com/Tariq60/fallacy-detection

## Acknowledgements

- **Base model**: `distilroberta-base` by Hugging Face
- **Dataset**: Climate subset from Tariq60's fallacy-detection repository
- **Libraries**: Transformers, Datasets, scikit-learn
- **Project context**: Master's-level NLP / Data Science coursework on Small Language Models and explainable NLP

## How to Use

### Python Example (Logits → Label)

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "SteadyHands/climate-fallacy-roberta"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Climate has always changed in the past, so current warming can't be caused by humans."

inputs = tokenizer(
    text,
    return_tensors="pt",
    truncation=True,
    padding="max_length",
    max_length=256,
)

with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits
probs = torch.softmax(logits, dim=-1)[0].tolist()
pred_id = int(torch.argmax(logits, dim=-1).item())

# After from_pretrained, config.id2label maps integer ids to label names.
pred_label = model.config.id2label[pred_id]

print("Text:", text)
print("Predicted label:", pred_label)
print("Probabilities:", probs)
```

### Using the Transformers Pipeline

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SteadyHands/climate-fallacy-roberta",
    top_k=None,  # return scores for all classes; set top_k=3 for the top-3 fallacies
)

text = "Temperatures dropped this winter, so global warming must be a hoax."
outputs = clf(text)

print(outputs)
```
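
With `top_k=None`, the pipeline returns, for each input, `{"label": ..., "score": ...}` entries for every class sorted by score, so the first entry is the predicted fallacy.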