---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- ar
- bg
- de
- el
- it
- pl
- ro
- uk
tags:
- subjectivity-detection
- news-articles
viewer: true
pretty_name: 'CLEF 2025 CheckThat! Lab - Task 1: Subjectivity in News Articles'
size_categories:
- 1K<n<10K
configs:
- config_name: arabic
  data_files:
  - split: train
    path:
    - "data/arabic/train_ar.tsv"
  - split: dev
    path:
    - "data/arabic/dev_ar.tsv"
  - split: dev_test
    path:
    - "data/arabic/dev_test_ar.tsv"
  - split: test
    path:
    - "data/arabic/test_ar_unlabeled.tsv"
  sep: "\t"
- config_name: bulgarian
  data_files:
  - split: train
    path:
    - "data/bulgarian/train_bg.tsv"
  - split: dev
    path:
    - "data/bulgarian/dev_bg.tsv"
  - split: dev_test
    path:
    - "data/bulgarian/dev_test_bg.tsv"
  sep: "\t"
- config_name: english
  data_files:
  - split: train
    path:
    - "data/english/train_en.tsv"
  - split: dev
    path:
    - "data/english/dev_en.tsv"
  - split: dev_test
    path:
    - "data/english/dev_test_en.tsv"
  - split: test
    path:
    - "data/english/test_en_unlabeled.tsv"
  sep: "\t"
- config_name: german
  data_files:
  - split: train
    path:
    - "data/german/train_de.tsv"
  - split: dev
    path:
    - "data/german/dev_de.tsv"
  - split: dev_test
    path:
    - "data/german/dev_test_de.tsv"
  - split: test
    path:
    - "data/german/test_de_unlabeled.tsv"
  sep: "\t"
- config_name: greek
  data_files:
  - split: test
    path:
    - "data/greek/test_gr_unlabeled.tsv"
  sep: "\t"
- config_name: italian
  data_files:
  - split: train
    path:
    - "data/italian/train_it.tsv"
  - split: dev
    path:
    - "data/italian/dev_it.tsv"
  - split: dev_test
    path:
    - "data/italian/dev_test_it.tsv"
  - split: test
    path:
    - "data/italian/test_it_unlabeled.tsv"
  sep: "\t"
- config_name: multilingual
  data_files:
  - split: dev_test
    path:
    - "data/multilingual/dev_test_multilingual.tsv"
  - split: test
    path:
    - "data/multilingual/test_multilingual_unlabeled.tsv"
  sep: "\t"
- config_name: polish
  data_files:
  - split: test
    path:
    - "data/polish/test_pol_unlabeled.tsv"
  sep: "\t"
- config_name: romanian
  data_files:
  - split: test
    path:
    - "data/romanian/test_ro_unlabeled.tsv"
  sep: "\t"
- config_name: ukrainian
  data_files:
  - split: test
    path:
    - "data/ukrainian/test_ukr_unlabeled.tsv"
  sep: "\t"
---

# CLEF-2025 CheckThat! Lab Task 1: Subjectivity in News Articles

Systems are challenged to distinguish whether a sentence from a news article expresses the subjective view of its author or instead presents an objective view of the covered topic.

This is a binary classification task in which systems must identify whether a text sequence (a sentence or a paragraph) is subjective (**SUBJ**) or objective (**OBJ**).

The task comprises three settings:
- **Monolingual**: train and test on data in a given language L
- **Multilingual**: train and test on data comprising several languages
- **Zero-shot**: train on several languages and test on unseen languages

## Dataset statistics

* **English**
  - train: 830 sentences, 532 OBJ, 298 SUBJ
  - dev: 462 sentences, 222 OBJ, 240 SUBJ
  - dev-test: 484 sentences, 362 OBJ, 122 SUBJ
* **Italian**
  - train: 1613 sentences, 1231 OBJ, 382 SUBJ
  - dev: 667 sentences, 490 OBJ, 177 SUBJ
  - dev-test: 513 sentences, 377 OBJ, 136 SUBJ
* **German**
  - train: 800 sentences, 492 OBJ, 308 SUBJ
  - dev: 491 sentences, 317 OBJ, 174 SUBJ
  - dev-test: 337 sentences, 226 OBJ, 111 SUBJ
* **Bulgarian**
  - train: 729 sentences, 406 OBJ, 323 SUBJ
  - dev: 467 sentences, 175 OBJ, 139 SUBJ
  - dev-test: 250 sentences, 143 OBJ, 107 SUBJ
  - test: TBA
* **Arabic**
  - train: 2446 sentences, 1391 OBJ, 1055 SUBJ
  - dev: 742 sentences, 266 OBJ, 201 SUBJ
  - dev-test: 748 sentences, 425 OBJ, 323 SUBJ

## Input Data Format

The data are provided as TSV files with three columns:
> `sentence_id <TAB> sentence <TAB> label`

Where:
* `sentence_id`: the id of the sentence within a news article
* `sentence`: the text of the sentence
* `label`: *OBJ* or *SUBJ*

**Note:** For English, the training and development (validation) sets also include a fourth column, "solved_conflict", whose boolean value reflects whether the annotators had a strong disagreement.

**Examples:**

> b9e1635a-72aa-467f-86d6-f56ef09f62c3 Gone are the days when they led the world in recession-busting SUBJ
>
> f99b5143-70d2-494a-a2f5-c68f10d09d0a The trend is expected to reverse as soon as next month. OBJ
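The splits can be parsed with Python's standard `csv` module. A minimal sketch, assuming each file carries a header row matching the column names above (the in-memory sample here is illustrative, not a real file):

```python
import csv
import io

# Hypothetical in-memory sample mirroring the three-column TSV layout,
# assuming a header row with the documented column names.
sample = (
    "sentence_id\tsentence\tlabel\n"
    "b9e1635a-72aa-467f-86d6-f56ef09f62c3\tGone are the days when they led the world in recession-busting\tSUBJ\n"
    "f99b5143-70d2-494a-a2f5-c68f10d09d0a\tThe trend is expected to reverse as soon as next month.\tOBJ\n"
)

def read_tsv(handle):
    """Parse a task TSV into a list of row dicts keyed by column name."""
    return list(csv.DictReader(handle, delimiter="\t"))

rows = read_tsv(io.StringIO(sample))
```

For a real split, pass an open file handle instead of the `StringIO` buffer, e.g. `read_tsv(open("data/english/train_en.tsv", encoding="utf-8"))`.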

## Output Data Format

The output must be a TSV file with two columns: sentence_id and label.
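A submission file can be produced with the same `csv` module. A minimal sketch; whether the scorer expects a header row is an assumption here, so check the official repository before submitting:

```python
import csv
import io

def write_predictions(handle, predictions):
    """Write (sentence_id, label) pairs as a two-column TSV,
    assuming a header row is expected."""
    writer = csv.writer(handle, delimiter="\t", lineterminator="\n")
    writer.writerow(["sentence_id", "label"])
    for sentence_id, label in predictions:
        writer.writerow([sentence_id, label])

buf = io.StringIO()
write_predictions(buf, [("b9e1635a-72aa-467f-86d6-f56ef09f62c3", "SUBJ")])
```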

## Evaluation Metrics

The task is evaluated as a classification task using the macro-averaged F1 score. Additional metrics include Precision, Recall, and F1 for the SUBJ class, along with their macro-averaged counterparts.
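As a quick sanity check, macro-averaged F1 can be computed by hand; a stdlib-only sketch (the official scorer remains authoritative):

```python
def f1(tp, fp, fn):
    """Per-class F1 from true-positive, false-positive, false-negative counts."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0

def macro_f1(gold, pred, labels=("OBJ", "SUBJ")):
    """Unweighted mean of the per-class F1 scores."""
    scores = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

score = macro_f1(["OBJ", "OBJ", "SUBJ", "SUBJ"],
                 ["OBJ", "SUBJ", "SUBJ", "SUBJ"])  # ≈ 0.7333
```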

## Scorers

The code base with the scorer script is available in the original GitLab repository: [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).

To evaluate your model's output, which must follow the required output format, run:

> python evaluate.py -g dev_truth.tsv -p dev_predicted.tsv

where dev_predicted.tsv is your model's output on the dev set and dev_truth.tsv is the gold-label file provided by the organizers.

The script can also be used to validate the format of a submission: simply use the provided test file as gold data.
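Before invoking the scorer, a lightweight local check can catch malformed rows. This is a hypothetical helper, not the official validator, and it assumes a `sentence_id`/`label` header row:

```python
import csv
import io

VALID_LABELS = {"OBJ", "SUBJ"}

def validate_submission(handle):
    """Return a list of error messages for a two-column prediction TSV."""
    errors = []
    reader = csv.reader(handle, delimiter="\t")
    header = next(reader, None)
    if header != ["sentence_id", "label"]:
        errors.append(f"unexpected header: {header}")
    for lineno, row in enumerate(reader, start=2):
        if len(row) != 2:
            errors.append(f"line {lineno}: expected 2 columns, got {len(row)}")
        elif row[1] not in VALID_LABELS:
            errors.append(f"line {lineno}: invalid label {row[1]!r}")
    return errors

good = "sentence_id\tlabel\nabc\tSUBJ\n"   # well-formed sample
bad = "sentence_id\tlabel\nabc\tMAYBE\n"   # invalid label
```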

## Baselines

The code base with the script to train the baseline model is provided in the original GitLab repository: [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).
The script can be run as follows:

> python baseline.py -trp train_data.tsv -ttp dev_data.tsv

where train_data.tsv is the file used for training and dev_data.tsv is the file on which predictions are made.

The baseline is a logistic regressor trained on a multilingual Sentence-BERT representation of the data.

## Leaderboard

The leaderboard is available in the original GitLab repository: [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).

## Related Work

Information regarding the annotation guidelines can be found in the following papers:

> Federico Ruggeri, Francesco Antici, Andrea Galassi, Aikaterini Korre, Arianna Muti, Alberto Barrón-Cedeño, _[On the Definition of Prescriptive Annotation Guidelines for Language-Agnostic Subjectivity Detection](https://ceur-ws.org/Vol-3370/paper10.pdf)_, in: Proceedings of Text2Story, the Sixth Workshop on Narrative Extraction From Texts, CEUR-WS.org, 2023, Vol. 3370, pp. 103-111

> Francesco Antici, Andrea Galassi, Federico Ruggeri, Katerina Korre, Arianna Muti, Alessandra Bardi, Alice Fedotova, Alberto Barrón-Cedeño, _[A Corpus for Sentence-level Subjectivity Detection on English News Articles](https://arxiv.org/abs/2305.18034)_, in: Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024

> Reem Suwaileh, Maram Hasanain, Fatema Hubail, Wajdi Zaghouani, Firoj Alam, _[ThatiAR: Subjectivity Detection in Arabic News Sentences](https://arxiv.org/abs/2406.05559)_, arXiv preprint arXiv:2406.05559, 2024

## Credits

### ECIR 2025

Alam, F. et al. (2025). The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15576. Springer, Cham. https://doi.org/10.1007/978-3-031-88720-8_68

```bibtex
@InProceedings{10.1007/978-3-031-88720-8_68,
  author    = "Alam, Firoj
               and Stru{\ss}, Julia Maria
               and Chakraborty, Tanmoy
               and Dietze, Stefan
               and Hafid, Salim
               and Korre, Katerina
               and Muti, Arianna
               and Nakov, Preslav
               and Ruggeri, Federico
               and Schellhammer, Sebastian
               and Setty, Vinay
               and Sundriyal, Megha
               and Todorov, Konstantin
               and V., Venktesh",
  editor    = "Hauff, Claudia
               and Macdonald, Craig
               and Jannach, Dietmar
               and Kazai, Gabriella
               and Nardini, Franco Maria
               and Pinelli, Fabio
               and Silvestri, Fabrizio
               and Tonellotto, Nicola",
  title     = "The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval",
  booktitle = "Advances in Information Retrieval",
  year      = "2025",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "467--478",
  isbn      = "978-3-031-88720-8"
}
```

### CLEF 2025 LNCS

```bibtex
@InProceedings{clef-checkthat:2025-lncs,
  author    = {Alam, Firoj
               and Stru{\ss}, Julia Maria
               and Chakraborty, Tanmoy
               and Dietze, Stefan
               and Hafid, Salim
               and Korre, Katerina
               and Muti, Arianna
               and Nakov, Preslav
               and Ruggeri, Federico
               and Schellhammer, Sebastian
               and Setty, Vinay
               and Sundriyal, Megha
               and Todorov, Konstantin
               and Venktesh, V},
  title     = {Overview of the {CLEF}-2025 {CheckThat! Lab}: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
  editor    = {Carrillo-de-Albornoz, Jorge and
               Gonzalo, Julio and
               Plaza, Laura and
               García Seco de Herrera, Alba and
               Mothe, Josiane and
               Piroi, Florina and
               Rosso, Paolo and
               Spina, Damiano and
               Faggioli, Guglielmo and
               Ferro, Nicola},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)},
  year      = {2025}
}
```

### CLEF 2025 CEUR papers

```bibtex
@proceedings{clef2025-workingnotes,
  editor    = "Faggioli, Guglielmo and
               Ferro, Nicola and
               Rosso, Paolo and
               Spina, Damiano",
  title     = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
  booktitle = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
  series    = "CLEF~2025",
  address   = "Madrid, Spain",
  year      = 2025
}
```

### Task 1 overview paper

```bibtex
@inproceedings{clef-checkthat:2025:task1,
  title    = {Overview of the {CLEF-2025 CheckThat!} Lab Task 1 on Subjectivity in News Article},
  author   = {Ruggeri, Federico and
              Muti, Arianna and
              Korre, Katerina and
              Stru{\ss}, Julia Maria and
              Siegel, Melanie and
              Wiegand, Michael and
              Alam, Firoj and
              Biswas, Rafiul and
              Zaghouani, Wajdi and
              Nawrocka, Maria and
              Ivasiuk, Bogdan and
              Razvan, Gogu and
              Mihail, Andreiana},
  crossref = {clef2025-workingnotes}
}
```