IMISLab committed · Commit 396306c · verified · 1 Parent(s): 12b3506

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
```diff
@@ -21,14 +21,14 @@ size_categories:
 
 # DemosQA
 
+<img src="demosqa.png" width="400"/>
+
 We introduce DemosQA, a novel Greek QA dataset, which is constructed using social media user questions and community-reviewed answers to better capture the Greek social and cultural zeitgeist.
 It comprises questions extracted from the “r/greece” subreddit, each accompanied by four candidate answers, the selected best answer and its index, the date of posting, and the corresponding Reddit post ID.
 Candidate answers are ranked based on community voting, with the highest-upvoted response designated as the reference answer.
 This community-driven ranking mechanism not only ensures that the dataset captures genuine user preferences but also establishes a meaningful benchmark for assessing how closely large language models align with human judgments of response quality.
 For information about dataset creation, limitations etc. see the cited preprint below.
 
-<img src="demosqa.png" width="400"/>
-
 ### Supported Tasks
 
 This dataset supports evaluation of LLMs for **Question Answering**.
```
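The README text in this diff describes each record as a question, four candidate answers, the index of the community-preferred answer, a posting date, and a Reddit post ID. As a minimal sketch of how the benchmark could be scored, the snippet below builds one hypothetical record and computes a model's answer-selection accuracy against the community-chosen index. The field names (`question`, `candidates`, `best_answer_index`, `date`, `post_id`) and the sample values are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical DemosQA-style record, based on the fields the README
# describes. Field names and values are illustrative assumptions.
record = {
    "question": "Ποιο είναι το καλύτερο μέρος για καφέ στην Αθήνα;",
    "candidates": ["Answer A", "Answer B", "Answer C", "Answer D"],
    "best_answer_index": 2,  # highest-upvoted candidate
    "date": "2023-05-14",
    "post_id": "abc123",
}

def selection_accuracy(predictions, records):
    """Fraction of records where the model picked the community-preferred answer."""
    correct = sum(
        pred == rec["best_answer_index"]
        for pred, rec in zip(predictions, records)
    )
    return correct / len(records)

# A model that selects candidate index 2 for this record agrees with the
# community ranking.
print(selection_accuracy([2], [record]))  # → 1.0
```

Scoring by index match against the highest-upvoted answer is what makes the dataset usable as an alignment benchmark: the "label" is the community's revealed preference rather than an annotator's judgment.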