lvwerra HF Staff committed on
Commit
71781d4
·
1 Parent(s): 2575a48

Update Space (evaluate main: 041ca0e7)

Files changed (5)
  1. README.md +176 -5
  2. app.py +6 -0
  3. fever.py +148 -0
  4. requirements.txt +1 -0
  5. test_fever.py +134 -0
README.md CHANGED
@@ -1,12 +1,183 @@
  ---
- title: Fever
- emoji: 🏆
  colorFrom: blue
- colorTo: indigo
  sdk: gradio
- sdk_version: 5.49.1
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: FEVER
+ emoji: 🔥
  colorFrom: blue
+ colorTo: red
  sdk: gradio
+ sdk_version: 3.19.1
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
+ description: >-
+   The FEVER (Fact Extraction and VERification) metric evaluates the performance of systems that verify factual claims against evidence retrieved from Wikipedia.
+
+   It consists of three main components: Label accuracy (measures how often the predicted claim label matches the gold label), FEVER score (considers a prediction correct only if the label is correct and at least one complete gold evidence set is retrieved), and Evidence F1 (computes the micro-averaged precision, recall, and F1 between predicted and gold evidence sentences).
+
+   The FEVER score is the official leaderboard metric used in the FEVER shared tasks. All metrics range from 0 to 1, with higher values indicating better performance.
  ---

+ # Metric Card for FEVER
+
+ ## Metric description
+
+ The FEVER (Fact Extraction and VERification) metric evaluates the performance of systems that verify factual claims against evidence retrieved from Wikipedia. It was introduced in the FEVER shared task and has become a standard benchmark for fact verification systems.
+
+ FEVER consists of three main evaluation components:
+
+ 1. **Label accuracy**: measures how often the predicted claim label (SUPPORTED, REFUTED, or NOT ENOUGH INFO) matches the gold label
+ 2. **FEVER score**: considers a prediction correct only if the label is correct _and_ at least one complete gold evidence set is retrieved
+ 3. **Evidence F1**: computes the micro-averaged precision, recall, and F1 between predicted and gold evidence sentences
+
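The three definitions above can be made concrete with a small standalone sketch for a single claim (plain Python, independent of the `evaluate` module; the helper name `fever_components` is illustrative only):

```python
# Standalone illustration of the three FEVER components for one claim.
# `fever_components` is a hypothetical helper, not part of the metric module.

def fever_components(pred_label, pred_evidence, gold_label, gold_evidence_sets):
    pred = set(pred_evidence)
    # Label accuracy for a single claim: exact label match.
    label_correct = pred_label == gold_label
    # FEVER score: label must match AND at least one full gold set be covered.
    fever_correct = label_correct and any(set(s) <= pred for s in gold_evidence_sets)
    # Evidence precision/recall against the union of all gold sentences.
    gold = set().union(*gold_evidence_sets) if gold_evidence_sets else set()
    overlap = len(pred & gold)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return label_correct, fever_correct, precision, recall, f1

print(fever_components("SUPPORTED", ["E1"], "SUPPORTED", [["E1", "E2"]]))
# (True, False, 1.0, 0.5, 0.6666666666666666)
```

Note how the label can be correct while the FEVER score is 0, because only one of the two required gold sentences was retrieved.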
+ ## How to use
+
+ The metric takes two inputs: predictions (a list of dictionaries containing predicted labels and evidence) and references (a list of dictionaries containing gold labels and evidence sets).
+
+ ```python
+ from evaluate import load
+ fever = load("fever")
+ predictions = [{"label": "SUPPORTED", "evidence": ["E1", "E2"]}]
+ references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+ results = fever.compute(predictions=predictions, references=references)
+ ```
+
+ ## Output values
+
+ This metric outputs a dictionary containing five float values:
+
+ ```python
+ print(results)
+ {
+     'label_accuracy': 1.0,
+     'fever_score': 1.0,
+     'evidence_precision': 1.0,
+     'evidence_recall': 1.0,
+     'evidence_f1': 1.0
+ }
+ ```
+
+ - **label_accuracy**: Proportion of claims with correctly predicted labels (0-1, higher is better)
+ - **fever_score**: Proportion of claims where both the label and at least one full gold evidence set are correct (0-1, higher is better). This is the **official FEVER leaderboard metric**
+ - **evidence_precision**: Micro-averaged precision of evidence retrieval (0-1, higher is better)
+ - **evidence_recall**: Micro-averaged recall of evidence retrieval (0-1, higher is better)
+ - **evidence_f1**: Micro-averaged F1 of evidence retrieval (0-1, higher is better)
+
+ All values range from 0 to 1, with **1.0 representing perfect performance**.
+
+ ### Values from popular papers
+
+ The FEVER shared task has established performance benchmarks on the FEVER dataset:
+
+ - Human performance: FEVER score of ~0.92
+ - Top systems (2018-2019): FEVER scores ranging from 0.64 to 0.70
+ - State-of-the-art models (2020+): FEVER scores above 0.75
+
+ Performance varies significantly based on:
+
+ - Model architecture (retrieval + verification pipeline vs. end-to-end)
+ - Pre-training (BERT, RoBERTa, etc.)
+ - Evidence retrieval quality
+
+ ## Examples
+
+ Perfect prediction (label and evidence both correct):
+
+ ```python
+ from evaluate import load
+ fever = load("fever")
+ predictions = [{"label": "SUPPORTED", "evidence": ["E1", "E2"]}]
+ references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+ results = fever.compute(predictions=predictions, references=references)
+ print(results)
+ {
+     'label_accuracy': 1.0,
+     'fever_score': 1.0,
+     'evidence_precision': 1.0,
+     'evidence_recall': 1.0,
+     'evidence_f1': 1.0
+ }
+ ```
+
+ Correct label but incomplete evidence:
+
+ ```python
+ from evaluate import load
+ fever = load("fever")
+ predictions = [{"label": "SUPPORTED", "evidence": ["E1"]}]
+ references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+ results = fever.compute(predictions=predictions, references=references)
+ print(results)
+ {
+     'label_accuracy': 1.0,
+     'fever_score': 0.0,
+     'evidence_precision': 1.0,
+     'evidence_recall': 0.5,
+     'evidence_f1': 0.6666666666666666
+ }
+ ```
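The evidence F1 in the example above follows directly from the pooled counts (one predicted sentence, two gold sentences, one sentence in common); as a quick arithmetic check in plain Python:

```python
# Evidence counts for the incomplete-evidence example: one predicted sentence,
# two gold sentences, one sentence in common.
overlap, n_pred, n_gold = 1, 1, 2

precision = overlap / n_pred  # 1.0: everything predicted is gold
recall = overlap / n_gold     # 0.5: only half of the gold set was retrieved
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)  # 1.0 0.5 0.6666666666666666
```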
+
+ Incorrect label (FEVER score is 0):
+
+ ```python
+ from evaluate import load
+ fever = load("fever")
+ predictions = [{"label": "REFUTED", "evidence": ["E1", "E2"]}]
+ references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+ results = fever.compute(predictions=predictions, references=references)
+ print(results)
+ {
+     'label_accuracy': 0.0,
+     'fever_score': 0.0,
+     'evidence_precision': 1.0,
+     'evidence_recall': 1.0,
+     'evidence_f1': 1.0
+ }
+ ```
+
+ Multiple valid evidence sets (precision and recall are computed against the union of all gold sentences, so retrieving only one of the two valid sets yields full precision but half recall):
+
+ ```python
+ from evaluate import load
+ fever = load("fever")
+ predictions = [{"label": "SUPPORTED", "evidence": ["E3", "E4"]}]
+ references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"], ["E3", "E4"]]}]
+ results = fever.compute(predictions=predictions, references=references)
+ print(results)
+ {
+     'label_accuracy': 1.0,
+     'fever_score': 1.0,
+     'evidence_precision': 1.0,
+     'evidence_recall': 0.5,
+     'evidence_f1': 0.6666666666666666
+ }
+ ```
+
+ ## Limitations and bias
+
+ The FEVER metric has several important considerations:
+
+ 1. **Evidence set completeness**: The FEVER score requires retrieving _all_ sentences in at least one gold evidence set. Partial evidence retrieval (even if sufficient for verification) results in a score of 0.
+ 2. **Multiple valid evidence sets**: Some claims can be verified using different sets of evidence. The metric gives credit if any one complete set is retrieved.
+ 3. **Micro-averaging**: Evidence precision, recall, and F1 are micro-averaged across all examples, so performance on examples with longer evidence sets has more influence on the final metrics.
+ 4. **Label dependency**: The FEVER score requires both correct labeling _and_ complete evidence retrieval, making it a strict metric that penalizes either type of error.
+ 5. **Wikipedia-specific**: The metric was designed for Wikipedia-based fact verification and may not generalize directly to other knowledge sources or domains.
+
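The micro-averaging caveat (point 3) can be made concrete: pooling raw counts before dividing lets an example with a large gold evidence set dominate, where a macro average would weight examples equally. A sketch in plain Python, independent of the metric module:

```python
# Two examples: a 1-sentence gold set retrieved perfectly, and a 4-sentence
# gold set missed entirely. Tuples are (overlap, n_predicted, n_gold).
examples = [(1, 1, 1), (0, 4, 4)]

# Micro-averaging pools the raw counts across examples before dividing,
# so the 4-sentence example contributes 4x the weight of the 1-sentence one.
overlap = sum(o for o, _, _ in examples)
n_gold = sum(g for _, _, g in examples)
micro_recall = overlap / n_gold  # 1 / 5 = 0.2

# Macro-averaging (not what this metric does) would average per-example recalls,
# weighting each example equally regardless of evidence-set size.
macro_recall = sum(o / g for o, _, g in examples) / len(examples)  # 0.5

print(micro_recall, macro_recall)  # 0.2 0.5
```

The gap between 0.2 and 0.5 is exactly the "longer evidence sets have more influence" effect described above.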
+ ## Citation
+
+ ```bibtex
+ @inproceedings{thorne2018fever,
+     title={FEVER: a Large-scale Dataset for Fact Extraction and VERification},
+     author={Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+     booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
+     pages={809--819},
+     year={2018}
+ }
+ ```
+
+ ## Further References
+
+ - [FEVER Dataset Website](https://fever.ai/dataset/)
+ - [FEVER Paper on arXiv](https://arxiv.org/abs/1803.05355)
+ - [Hugging Face Tasks -- Fact Checking](https://huggingface.co/tasks/text-classification)
+ - [FEVER Shared Task Overview](https://fever.ai/task.html)
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("fever")
+ launch_gradio_widget(module)
fever.py ADDED
@@ -0,0 +1,148 @@
+ # Copyright 2021 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """FEVER (Fact Extraction and VERification) metric."""
+
+ import datasets
+
+ import evaluate
+
+
+ _CITATION = """\
+ @inproceedings{thorne2018fever,
+     title={FEVER: a Large-scale Dataset for Fact Extraction and VERification},
+     author={Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+     booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
+     pages={809--819},
+     year={2018}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The FEVER (Fact Extraction and VERification) metric evaluates the performance of systems that verify factual claims against evidence retrieved from Wikipedia.
+
+ It consists of three main components:
+ - **Label accuracy**: measures how often the predicted claim label (SUPPORTED, REFUTED, or NOT ENOUGH INFO) matches the gold label.
+ - **FEVER score**: considers a prediction correct only if the label is correct *and* at least one complete gold evidence set is retrieved.
+ - **Evidence F1**: computes the micro-averaged precision, recall, and F1 between predicted and gold evidence sentences.
+
+ The FEVER score is the official leaderboard metric used in the FEVER shared tasks.
+ """
+
+ _KWARGS_DESCRIPTION = """
+ Computes the FEVER evaluation metrics.
+
+ Args:
+     predictions (list of dict): Each prediction should be a dictionary with:
+         - "label" (str): the predicted claim label.
+         - "evidence" (list of str): the predicted evidence sentences.
+     references (list of dict): Each reference should be a dictionary with:
+         - "label" (str): the gold claim label.
+         - "evidence_sets" (list of list of str): all possible gold evidence sets.
+
+ Returns:
+     A dictionary containing:
+     - 'label_accuracy': proportion of claims with correctly predicted labels.
+     - 'fever_score': proportion of claims where both the label and at least one full gold evidence set are correct.
+     - 'evidence_precision': micro-averaged precision of evidence retrieval.
+     - 'evidence_recall': micro-averaged recall of evidence retrieval.
+     - 'evidence_f1': micro-averaged F1 of evidence retrieval.
+
+ Example:
+     >>> predictions = [{"label": "SUPPORTED", "evidence": ["E1", "E2"]}]
+     >>> references = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"], ["E3", "E4"]]}]
+     >>> fever = evaluate.load("fever")
+     >>> results = fever.compute(predictions=predictions, references=references)
+     >>> print(results["label_accuracy"])
+     1.0
+     >>> print(results["fever_score"])
+     1.0
+     >>> print(results["evidence_precision"])
+     1.0
+     >>> print(results["evidence_recall"])
+     0.5
+     >>> print(round(results["evidence_f1"], 3))
+     0.667
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class FEVER(evaluate.Metric):
+     def _info(self):
+         return evaluate.MetricInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": {
+                         "label": datasets.Value("string"),
+                         "evidence": datasets.Sequence(datasets.Value("string")),
+                     },
+                     "references": {
+                         "label": datasets.Value("string"),
+                         "evidence_sets": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
+                     },
+                 }
+             ),
+             reference_urls=[
+                 "https://fever.ai/dataset/",
+                 "https://arxiv.org/abs/1803.05355",
+             ],
+         )
+
+     def _compute(self, predictions, references):
+         """
+         Computes FEVER metrics:
+         - Label accuracy
+         - FEVER score (label + complete evidence set)
+         - Evidence precision, recall, and F1 (micro-averaged)
+         """
+         total = len(predictions)
+         label_correct, fever_correct = 0, 0
+         total_overlap, total_pred, total_gold = 0, 0, 0
+
+         for pred, ref in zip(predictions, references):
+             pred_label = pred["label"]
+             pred_evidence = set(e.strip().lower() for e in pred["evidence"])
+             gold_label = ref["label"]
+             gold_sets = []
+             for s in ref["evidence_sets"]:
+                 gold_sets.append([e.strip().lower() for e in s])
+
+             if pred_label == gold_label:
+                 label_correct += 1
+                 for g_set in gold_sets:
+                     if set(g_set).issubset(pred_evidence):
+                         fever_correct += 1
+                         break
+
+             gold_evidence = set().union(*gold_sets) if gold_sets else set()
+             overlap = len(gold_evidence.intersection(pred_evidence))
+             total_overlap += overlap
+             total_pred += len(pred_evidence)
+             total_gold += len(gold_evidence)
+
+         precision = (total_overlap / total_pred) if total_pred else 0.0
+         recall = (total_overlap / total_gold) if total_gold else 0.0
+         evidence_f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
+
+         fever_score = fever_correct / total if total else 0.0
+         label_accuracy = label_correct / total if total else 0.0
+
+         return {
+             "label_accuracy": label_accuracy,
+             "fever_score": fever_score,
+             "evidence_precision": precision,
+             "evidence_recall": recall,
+             "evidence_f1": evidence_f1,
+         }
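One detail of `_compute` worth noting: evidence strings are stripped and lowercased before set comparison, so matching is insensitive to case and surrounding whitespace. A minimal sketch of that rule (standalone, not imported from the module):

```python
# Sketch of the evidence normalization applied before set comparison in
# _compute: whitespace stripped, text lowercased, so "  E1 " matches "e1".
def normalize(evidence):
    return {e.strip().lower() for e in evidence}

pred = normalize(["  E1 ", "e2"])
gold = normalize(["e1", "E2"])
print(pred == gold)  # True
```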
requirements.txt ADDED
@@ -0,0 +1 @@
+ git+https://github.com/huggingface/evaluate@041ca0e709b3b5cf67787b150b4572fd766d9048
test_fever.py ADDED
@@ -0,0 +1,134 @@
+ # Copyright 2025 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Tests for the FEVER (Fact Extraction and VERification) metric."""
+
+ import unittest
+
+ from fever import FEVER  # assuming the metric file is named fever.py
+
+
+ fever = FEVER()
+
+
+ class TestFEVER(unittest.TestCase):
+     def test_perfect_prediction(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["E1", "E2"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["label_accuracy"], 1.0)
+         self.assertAlmostEqual(result["fever_score"], 1.0)
+         self.assertAlmostEqual(result["evidence_precision"], 1.0)
+         self.assertAlmostEqual(result["evidence_recall"], 1.0)
+         self.assertAlmostEqual(result["evidence_f1"], 1.0)
+
+     def test_label_only_correct(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["X1", "X2"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["label_accuracy"], 1.0)
+         self.assertAlmostEqual(result["fever_score"], 0.0)
+         self.assertTrue(result["evidence_f1"] < 1.0)
+
+     def test_label_incorrect(self):
+         preds = [{"label": "REFUTED", "evidence": ["E1", "E2"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["label_accuracy"], 0.0)
+         self.assertAlmostEqual(result["fever_score"], 0.0)
+
+     def test_partial_evidence_overlap(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["E1"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["label_accuracy"], 1.0)
+         self.assertAlmostEqual(result["fever_score"], 0.0)
+         self.assertAlmostEqual(result["evidence_precision"], 1.0)
+         self.assertAlmostEqual(result["evidence_recall"], 0.5)
+         self.assertTrue(0 < result["evidence_f1"] < 1.0)
+
+     def test_extra_evidence_still_correct(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["E1", "E2", "X1"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["fever_score"], 1.0)
+         self.assertTrue(result["evidence_precision"] < 1.0)
+         self.assertAlmostEqual(result["evidence_recall"], 1.0)
+
+     def test_multiple_gold_sets(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["E3", "E4"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"], ["E3", "E4"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertAlmostEqual(result["fever_score"], 1.0)
+         self.assertAlmostEqual(result["label_accuracy"], 1.0)
+
+     def test_mixed_examples(self):
+         preds = [
+             {"label": "SUPPORTED", "evidence": ["A1", "A2"]},
+             {"label": "SUPPORTED", "evidence": ["B1"]},
+             {"label": "REFUTED", "evidence": ["C1", "C2"]},
+         ]
+         refs = [
+             {"label": "SUPPORTED", "evidence_sets": [["A1", "A2"]]},
+             {"label": "SUPPORTED", "evidence_sets": [["B1", "B2"]]},
+             {"label": "SUPPORTED", "evidence_sets": [["C1", "C2"]]},
+         ]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertTrue(0 < result["label_accuracy"] < 1.0)
+         self.assertTrue(0 <= result["fever_score"] < 1.0)
+         self.assertTrue(0 <= result["evidence_f1"] <= 1.0)
+
+     def test_empty_evidence_prediction(self):
+         preds = [{"label": "SUPPORTED", "evidence": []}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertEqual(result["evidence_precision"], 0.0)
+         self.assertEqual(result["evidence_recall"], 0.0)
+         self.assertEqual(result["evidence_f1"], 0.0)
+
+     def test_empty_gold_evidence(self):
+         preds = [{"label": "SUPPORTED", "evidence": ["E1", "E2"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [[]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertEqual(result["evidence_recall"], 0.0)
+
+     def test_multiple_examples_micro_averaging(self):
+         preds = [
+             {"label": "SUPPORTED", "evidence": ["E1"]},
+             {"label": "SUPPORTED", "evidence": ["F1", "F2"]},
+         ]
+         refs = [
+             {"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]},
+             {"label": "SUPPORTED", "evidence_sets": [["F1", "F2"]]},
+         ]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertTrue(result["evidence_f1"] < 1.0)
+         self.assertAlmostEqual(result["label_accuracy"], 1.0)
+
+     def test_fever_score_requires_label_match(self):
+         preds = [{"label": "REFUTED", "evidence": ["E1", "E2"]}]
+         refs = [{"label": "SUPPORTED", "evidence_sets": [["E1", "E2"]]}]
+         result = fever.compute(predictions=preds, references=refs)
+         self.assertEqual(result["fever_score"], 0.0)
+         self.assertEqual(result["label_accuracy"], 0.0)
+
+     def test_empty_input_list(self):
+         preds, refs = [], []
+         result = fever.compute(predictions=preds, references=refs)
+         for k in result:
+             self.assertEqual(result[k], 0.0)
+
+
+ if __name__ == "__main__":
+     unittest.main()