RithwikG committed (verified) · Commit a284e19 · Parent(s): df60141

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -16,7 +16,7 @@ size_categories:
 
 ```RAGuard``` is a fact-checking dataset designed to evaluate the robustness of RAG systems against misleading retrievals.
 It consists of 2,648 political claims made by U.S. presidential candidates (2000–2024), each labeled as either *true* or *false*, and a knowledge base comprising 16,331 documents. Each claim is linked to a set
-of associated documents, categorized as *supporting*, *misleading*, or *irrelevant*
+of associated documents, categorized as *supporting*, *misleading*, or *unrelated*
 
 ### Dataset Description
 
@@ -26,7 +26,7 @@ of associated documents, categorized as *supporting*, *misleading*, or *irreleva
 2. Claim: Full text of the claim
 3. Verdict: Binary fact-checking verdict, *True* or *False*
 4. Document IDs: List IDs of documents corresponding to this claim
-5. Document Labels: List of labels for the associated documents, either *supporting*, *misleading*, or *irrelevant*
+5. Document Labels: List of labels for the associated documents, either *supporting*, *misleading*, or *unrelated*
 
 ```documents.csv``` contains the following fields for each document in our knowledge base, scraped from [Reddit](https://www.reddit.com/).
 
@@ -34,7 +34,7 @@ of associated documents, categorized as *supporting*, *misleading*, or *irreleva
 2. Title: Reddit post title
 3. Full Text: Content of the document
 4. Claim ID: ID of the corresponding claim
-5. Document Label: Label for the document's label to the claim, either *supporting*, *misleading*, or *irrelevant*
+5. Document Label: Label for the document's label to the claim, either *supporting*, *misleading*, or *unrelated*
 6. Link: URL to the original document
 
 ### Dataset Source and Usage
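
Given the ```documents.csv``` schema quoted in the diff above, a minimal sketch of loading the file and grouping documents by the claim they are linked to might look like the following. The header names (`ID`, `Title`, `Full Text`, `Claim ID`, `Document Label`, `Link`) are assumptions read off the field list in the README, not confirmed column headers, and the sample rows are invented for illustration.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample mirroring the documents.csv schema described above;
# column names are assumed from the README field list, and the rows are
# made-up placeholders, not real dataset entries.
sample = io.StringIO(
    "ID,Title,Full Text,Claim ID,Document Label,Link\n"
    "d1,Post A,Some text,c1,supporting,https://www.reddit.com/a\n"
    "d2,Post B,Other text,c1,misleading,https://www.reddit.com/b\n"
    "d3,Post C,More text,c2,unrelated,https://www.reddit.com/c\n"
)

# Group each document (ID, label) pair under the claim it is linked to,
# matching the claim-to-document mapping the README describes.
docs_by_claim = defaultdict(list)
for row in csv.DictReader(sample):
    docs_by_claim[row["Claim ID"]].append((row["ID"], row["Document Label"]))

print(docs_by_claim["c1"])  # → [('d1', 'supporting'), ('d2', 'misleading')]
```

In practice one would pass the real ```documents.csv``` file handle in place of the in-memory sample; the grouping step is what lets a RAG evaluation check whether retrieved documents for a claim are *supporting*, *misleading*, or *unrelated*.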