---
license: mit
base_model: microsoft/deberta-v3-base
language:
  - en
pipeline_tag: text-classification
tags:
  - generated_from_trainer
  - climate
  - un-general-assembly
  - text-classification
  - fine-tuned
metrics:
  - accuracy
model-index:
  - name: unga-climate-classifier
    results: []
---

# unga-climate-classifier

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base), trained to classify climate-related sentences in English on a dataset of 5,600 annotated sentences from the United Nations General Assembly Corpus. It was developed to build the Executive Comparative Climate Attention (ECCA) indicator, introduced in a [paper](https://doi.org/10.1162/glep.a.1) published in *Global Environmental Politics*.


# How to use

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub
classifier = pipeline("text-classification", model="mljn/unga-climate-classifier")

text = "Climate change poses a fundamental threat to our future."
result = classifier(text)
print(result)
# [{'label': 'climate', 'score': 0.9988275170326233}]
```
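
Since the intended use case is sentence-level coding of full speeches, it may be convenient to pass a list of sentences; the pipeline returns one prediction per input. A minimal sketch reusing the `classifier` defined above (the example sentences are invented for illustration):

```python
# Classify several sentences in one call; one dict is returned per input.
sentences = [
    "Climate change poses a fundamental threat to our future.",
    "We congratulate the Secretary-General on his re-election.",
]
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(f"{prediction['label']} ({prediction['score']:.3f}): {sentence}")
```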



# How to cite

If you use this model or the underlying dataset or indicator, please cite:

> Grossman, Emiliano, and Malo Jan. "Executive Climate Change Attention: Toward an Indicator of Comparative Climate Change Attention." *Global Environmental Politics* (2025). https://doi.org/10.1162/glep.a.1


```bibtex
@article{grossman2025executive,
  title={Executive Climate Change Attention: Toward an Indicator of Comparative Climate Change Attention},
  author={Grossman, Emiliano and Jan, Malo},
  journal={Global Environmental Politics},
  pages={1--14},
  year={2025},
  publisher={MIT Press}
}
```

### Model evaluation

The model achieves the following results on the evaluation set:
- Loss: 0.0807
- Accuracy: 0.975
- F1 Macro: 0.9710
- Accuracy Balanced: 0.9715
- F1 Micro: 0.975
- Precision Macro: 0.9705
- Recall Macro: 0.9715
- Precision Micro: 0.975
- Recall Micro: 0.975
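
To reproduce this style of evaluation on your own labelled data, the same metrics can be computed with scikit-learn. A minimal sketch, assuming `y_true` holds your gold labels and `y_pred` the model's predicted labels (the toy lists below are illustrative only, not the original evaluation data):

```python
from sklearn.metrics import (
    accuracy_score, balanced_accuracy_score, f1_score,
    precision_score, recall_score,
)

# Toy example; replace with your gold labels and the pipeline's predictions.
y_true = ["climate", "other", "other", "climate"]
y_pred = ["climate", "other", "climate", "climate"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Accuracy Balanced:", balanced_accuracy_score(y_true, y_pred))
print("F1 Macro:", f1_score(y_true, y_pred, average="macro"))
print("F1 Micro:", f1_score(y_true, y_pred, average="micro"))
print("Precision Macro:", precision_score(y_true, y_pred, average="macro"))
print("Recall Macro:", recall_score(y_true, y_pred, average="macro"))
```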


### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 80
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
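
For reference, these settings correspond roughly to the following `TrainingArguments`. This is a sketch under the assumption of a standard `Trainer` setup, not the original training script; the Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="unga-climate-classifier",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=80,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    seed=42,
)
```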

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Accuracy Balanced | F1 Micro | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
| No log        | 1.0   | 123  | 0.1057          | 0.9726   | 0.9675   | 0.9583            | 0.9726   | 0.9783          | 0.9583       | 0.9726          | 0.9726       |
| No log        | 2.0   | 246  | 0.1102          | 0.9726   | 0.9683   | 0.9697            | 0.9726   | 0.9669          | 0.9697       | 0.9726          | 0.9726       |
| No log        | 3.0   | 369  | 0.0894          | 0.9798   | 0.9763   | 0.9729            | 0.9798   | 0.9800          | 0.9729       | 0.9798          | 0.9798       |
| No log        | 4.0   | 492  | 0.1098          | 0.9762   | 0.9723   | 0.9723            | 0.9762   | 0.9723          | 0.9723       | 0.9762          | 0.9762       |
| 0.1374        | 5.0   | 615  | 0.1026          | 0.9798   | 0.9763   | 0.9729            | 0.9798   | 0.9800          | 0.9729       | 0.9798          | 0.9798       |


### Framework versions

- Transformers 4.36.2
- Pytorch 2.5.0+cu121
- Datasets 2.6.0
- Tokenizers 0.15.2