Blanca committed on
Commit 9f30340 · verified · 1 Parent(s): 1a23095

Update content.py

Files changed (1): content.py (+4, -14)
content.py CHANGED
@@ -1,23 +1,15 @@
  TITLE = """
- <div style="display: flex; align-items: center; gap: 5px;">
+ <div style="text-align: center;">
  <h1 align="center" id="space-title">Critical Questions Generation Leaderboard</h1>
- <img src="logo.svg" alt="Logo" width="20"/>
+ <img src="logo_st1.svg" alt="Logo" width="20"/>
  </div>

  """

  INTRODUCTION_TEXT = """
  <div style="display: flex; align-items: center; gap: 10px;">
- <span style="font-size:20px;">Critical Questions Generation is the task of automatically generating questions that can unmask the assumptions held by the premises of an argumentative text.
-
- This leaderboard, aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs). That is, questions that should be asked in order to judge if an argument is acceptable or fallacious.
-
- The task consists on generating 3 Useful Critical Questions per argumentative text.
-
- All details on the task, the dataset, and the evaluation can be found in the paper [Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models](https://arxiv.org/abs/2505.11341)
- </span>
+ <span style="font-size:25px;">Critical Questions Generation is the task of automatically generating questions that can unmask the assumptions held by the premises of an argumentative text. \nThis leaderboard, aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs). That is, questions that should be asked in order to judge if an argument is acceptable or fallacious.\nThe task consists on generating 3 Useful Critical Questions per argumentative text. \nAll details on the task, the dataset, and the evaluation can be found in the paper [Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models](https://arxiv.org/abs/2505.11341)</span>
  <img src="examples.png" alt="Example" width="50"/>
-
  </div>

  ## Data
@@ -37,9 +29,7 @@ INTRODUCTION_TEXT = """

  SUBMISSION_TEXT = """
  ## Submissions
- <p style='font-size:20px;'> Results can be submitted for the test set only.
-
- We expect submissions to be json files with the following format: </p>
+ <p style='font-size:20px;'> Results can be submitted for the test set only. \nWe expect submissions to be json files with the following format: </p>

  ```json
  {
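The submission schema itself is truncated in the diff above, so its exact field names are unknown. The introduction text does state one checkable constraint: exactly 3 critical questions per argumentative text. As a sketch only, assuming a hypothetical schema that maps each argument ID to a `"cqs"` list (not the leaderboard's confirmed format), a pre-submission check might look like:

```python
import json


def validate_submission(raw: str) -> list[str]:
    """Return a list of problems found in a submission JSON string.

    Assumed (hypothetical) schema: {"<arg_id>": {"cqs": [q1, q2, q3]}, ...}.
    The only constraint taken from the leaderboard text is that each
    argumentative text needs exactly 3 critical questions.
    """
    data = json.loads(raw)
    errors = []
    for arg_id, entry in data.items():
        cqs = entry.get("cqs", [])
        if len(cqs) != 3:
            errors.append(
                f"{arg_id}: expected 3 critical questions, got {len(cqs)}"
            )
    return errors


# Made-up IDs and placeholder questions for illustration only.
sample = json.dumps({
    "arg_001": {"cqs": ["Q1?", "Q2?", "Q3?"]},
    "arg_002": {"cqs": ["Q1?"]},
})
print(validate_submission(sample))
# → ['arg_002: expected 3 critical questions, got 1']
```

Running a check like this locally before uploading catches count mismatches early; the real field names should of course be taken from the full `content.py` / task paper rather than this sketch.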