Update content.py
content.py +19 -10
@@ -1,16 +1,25 @@
-TITLE = """
+TITLE = """
+<div style="display: flex; align-items: center; gap: 5px;">
+<h1 align="center" id="space-title">Critical Questions Generation Leaderboard</h1>
+<img src="logo.svg" alt="Logo" width="20"/>
+</div>
 
-
-<p style='font-size:20px;'>Critical Questions Generation is the task of automatically generating questions that can unmask the assumptions held by the premises of an argumentative text.
-
-This leaderboard, aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs). That is, questions that should be asked in order to judge if an argument is acceptable or fallacious.
-
-The task consists on generating 3 Useful Critical Questions per argumentative text.
+"""
 
-
-
+INTRODUCTION_TEXT = """
+<div style="display: flex; align-items: center; gap: 10px;">
+<span style="font-size:20px;">Critical Questions Generation is the task of automatically generating questions that can unmask the assumptions held by the premises of an argumentative text.
+
+This leaderboard, aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs). That is, questions that should be asked in order to judge if an argument is acceptable or fallacious.
+
+The task consists on generating 3 Useful Critical Questions per argumentative text.
+
+All details on the task, the dataset, and the evaluation can be found in the paper [Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models](https://arxiv.org/abs/2505.11341)
+</span>
+<img src="examples.png" alt="Example" width="50"/>
+
+</div>
 
-DATA_TEXT = """
 ## Data
 
 <p style='font-size:20px;'> The [CQs-Gen dataset](https://huggingface.co/datasets/HiTZ/CQs-Gen) gathers 220 interventions of real debates. And contains: