pszemraj committed on
Commit 210f476 · verified · 0 Parent(s)

Super-squash branch 'main' using huggingface_hub

Files changed (5)
  1. .gitattributes +39 -0
  2. README.md +87 -0
  3. dev.csv +3 -0
  4. test.csv +3 -0
  5. train.csv +3 -0
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
train.tsv filter=lfs diff=lfs merge=lfs -text
*.csv filter=lfs diff=lfs merge=lfs -text
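These attributes route matching files through Git LFS. As an illustrative sketch only (not Git's actual attribute-matching code, which handles `**` and path semantics differently), `fnmatch` can approximate which filenames a subset of these simple glob patterns would catch:

```python
import fnmatch

# A subset of the simple glob-style patterns from .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.bin", "*.csv", "*.parquet", "*.zip", "train.tsv"]

def is_lfs_tracked(filename: str) -> bool:
    # Approximation: real gitattributes matching differs for ** and directory paths.
    return any(fnmatch.fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("train.csv"))   # → True
print(is_lfs_tracked("README.md"))   # → False
```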
README.md ADDED
---
license: other
tags:
- automatic claim verification
- claims
---

# multiFC

- a dataset for the task of **automatic claim verification**
- The license is currently unknown; please refer to the original paper and the [dataset site](http://www.copenlu.com/publication/2019_emnlp_augenstein/):
  - https://arxiv.org/abs/1909.03242

## Dataset contents

- **IMPORTANT:** the `label` column in the `test` set contains dummy values, as the true labels were not provided (see the Original README section below for an explanation)

```
DatasetDict({
    train: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 27871
    })
    test: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 3487
    })
    validation: Dataset({
        features: ['claimID', 'claim', 'label', 'claimURL', 'reason', 'categories', 'speaker', 'checker', 'tags', 'article title', 'publish date', 'climate', 'entities'],
        num_rows: 3484
    })
})
```
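As a minimal stdlib-only sketch of working with these columns (the inline row below is synthetic, not from the real corpus; actual usage would read `train.csv`, `dev.csv`, and `test.csv`, which are LFS-tracked):

```python
import csv
import io

# Column names as listed in the splits above.
COLUMNS = ["claimID", "claim", "label", "claimURL", "reason", "categories",
           "speaker", "checker", "tags", "article title", "publish date",
           "climate", "entities"]

# Synthetic one-row CSV standing in for train.csv (claimID format is hypothetical).
sample = io.StringIO(
    ",".join(COLUMNS) + "\n"
    "claim-00001,Example claim text.,true,http://example.com,None,None,"
    "None,None,None,None,None,None,None\n"
)

reader = csv.DictReader(sample)
row = next(reader)
assert set(row) == set(COLUMNS)
print(row["claimID"], row["label"])  # → claim-00001 true
```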

## Paper Abstract / Citation

> We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction.

```
@inproceedings{conf/emnlp2019/Augenstein,
  added-at = {2019-10-27T00:00:00.000+0200},
  author = {Augenstein, Isabelle and Lioma, Christina and Wang, Dongsheng and Chaves Lima, Lucas and Hansen, Casper and Hansen, Christian and Grue Simonsen, Jakob},
  booktitle = {EMNLP},
  crossref = {conf/emnlp/2019},
  publisher = {Association for Computational Linguistics},
  title = {MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims},
  year = 2019
}
```

## Original README

Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims

MultiFC is the largest publicly available dataset of naturally occurring factual claims for automatic claim verification. It is collected from 26 English fact-checking websites, paired with textual sources and rich metadata, and labeled for veracity by human expert journalists.

###### TRAIN and DEV #######
The train and dev files are tab-separated and contain the following metadata fields:
claimID, claim, label, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities

Fields that could not be crawled were set to "None". Please refer to Table 11 of our paper for summary statistics.

###### TEST #######
The test file follows the same structure, but the label has been removed, so it contains only 12 metadata fields:
claimID, claim, claimURL, reason, categories, speaker, checker, tags, article title, publish date, climate, entities

Fields that could not be crawled were set to "None". Please refer to Table 11 of our paper for summary statistics.

###### Snippets ######
The text of each claim is submitted verbatim as a query to the Google Search API (without quotes).
In the snippets folder, we provide the top 10 snippets retrieved; in some cases fewer are provided, since we have excluded the claimURL from the results.
Each file in the snippets folder is named after the claimID of the claim submitted as a query.
Snippets files are tab-separated and contain the following metadata fields:
rank_position, title, snippet, snippet_url
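
A snippets file with this layout can be read with the standard `csv` module (the sample row below is synthetic, not taken from the real corpus):

```python
import csv
import io

SNIPPET_FIELDS = ["rank_position", "title", "snippet", "snippet_url"]

# Synthetic stand-in for one snippets/<claimID> file (tab-separated, no header row).
sample = "1\tExample title\tAn example snippet of evidence text.\thttp://example.com/page\n"

reader = csv.DictReader(io.StringIO(sample), fieldnames=SNIPPET_FIELDS, delimiter="\t")
rows = list(reader)
print(rows[0]["rank_position"], rows[0]["snippet_url"])  # → 1 http://example.com/page
```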

For more information, please refer to our paper:

Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims. In EMNLP. Association for Computational Linguistics.

https://copenlu.github.io/publication/2019_emnlp_augenstein/
dev.csv ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f1b643306c0ac5e1d11a017d1acd9c30afc362710e80613f1e6d61355103ef4d
size 9240141
test.csv ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c9f24024ef951f4b0a45edc317ed4a878c23e12cbdfc1be4d2af2abc46f04243
size 9234920
train.csv ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a6250b1f2f29f4b302bc5238077b699acc85d5cb1e6c8fa3871b3c0534ecdb10
size 74917497
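
The three CSV entries above are Git LFS pointer files rather than the data itself. A pointer of this form can be parsed with a few lines of Python (the inline text below is copied from the dev.csv pointer):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f1b643306c0ac5e1d11a017d1acd9c30afc362710e80613f1e6d61355103ef4d
size 9240141"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 9240141
```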