guydepauw committed on
Commit 291075d · verified · 1 Parent(s): 8634f78

Update README.md

Files changed (1): README.md (+5 -4)
README.md CHANGED
@@ -8,11 +8,13 @@ license: mit
 ---
 # Model Card
 
-This model is a fine-tuned version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It was fine-tuned on [Factrank](https://github.com/lejafar/FactRank/tree/master/factrank) data with additional samples from Dutch and Belgian parliaments tagged by GPT and Gemini. The primary goal of this model is to determine whether a given statement warrants fact-checking. It does **not** determine whether the statement is factually correct.
-1 label is given: FR, FNR or NF.
+This model is a fine-tuned version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It was fine-tuned on [Factrank](https://github.com/lejafar/FactRank/tree/master/factrank) data with additional machine annotated data from Dutch and Belgian parliamentary proceedings.
+
+The primary goal of this model is to determine whether a given statement warrants fact-checking. It does **not** determine whether the statement is factually correct.
 
-
-- **FR**: Factual Relevant (the statement is fact-checkable and requites verification)
+1 label is given: FR, FNR or NF.
+
+- **FR**: Factual Relevant (the statement is fact-checkable and requires verification)
 - **FNR**: Factual, Not Relevant (the statement can be fact-checked, but the wider relevance is lower)
 - **NF**: Not Factual (the statement does not contain information for fact-checking)
 
@@ -47,7 +49,6 @@ sample_texts = [
 
 results = pipe(sample_texts)
 predicted_labels = [res["label"] for res in results]
-
 ```
 
 ## Interpretation of Results
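
The README snippet ends at `predicted_labels = [res["label"] for res in results]`; a minimal sketch of interpreting those labels follows. The `results` list below is hypothetical, in the shape the standard transformers text-classification pipeline returns (`{"label": ..., "score": ...}` dicts); the label names and their meanings come from the README, and `warrants_fact_check` is an illustrative helper, not part of the model card.

```python
# Label descriptions taken from the README's model card.
LABEL_DESCRIPTIONS = {
    "FR": "Factual Relevant: fact-checkable and requires verification",
    "FNR": "Factual, Not Relevant: fact-checkable, but of lower wider relevance",
    "NF": "Not Factual: contains no information for fact-checking",
}

def warrants_fact_check(result: dict) -> bool:
    """Return True only for FR predictions (the only label that
    marks a statement as warranting a fact-check)."""
    return result["label"] == "FR"

# Hypothetical results in the shape produced by `pipe(sample_texts)`:
results = [{"label": "FR", "score": 0.91}, {"label": "NF", "score": 0.87}]
predicted_labels = [res["label"] for res in results]
print(predicted_labels)                 # ['FR', 'NF']
print(warrants_fact_check(results[0]))  # True
```

Note that the score is the classifier's confidence in the label, not a judgment of whether the statement itself is true.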