---
license: apache-2.0
task_categories:
- question-answering
language:
- zh
- en
size_categories:
- n<1K
---

***

# Dataset Card for Chinese Degree Expressions for Pragmatic Reasoning (CDE-Prag)

**CDE-Prag** is a theory-driven evaluation dataset designed to probe the pragmatic competence of Large Language Models (LLMs) and Vision-Language Models (VLMs). It focuses specifically on **manner implicatures** and **ambiguity detection** through the lens of Chinese degree expressions (e.g., *Kai gao*, which is ambiguous between "Kai is tall" and "Kai is taller").

CDE-Prag tests whether models can navigate the trade-off between **production cost** (economy) and **communicative utility** (specificity). The dataset is divided into two subsets:

1. **Exploratory VLM Dataset:** A multimodal set (text + image) derived from human-subject research.
2. **Large-Scale LLM Dataset:** A text-only expansion containing 400 balanced context-utterance sets generating over 28,000 unique items.
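
For orientation, a single text-only item might carry fields like the following. This is a hypothetical sketch: the field names and values are illustrative, and the dataset's actual schema may differ.

```python
import json

# Hypothetical JSON record for one text-only item. Each of the 400
# context-utterance sets expands into many such items by crossing
# contexts with utterance alternatives and task framings.
# Field names here are illustrative, not the dataset's real schema.
record = {
    "context": "Kai is 190 cm tall; Lin is 175 cm tall.",
    "utterance": "Kai gao",              # simple, ambiguous form
    "alternatives": ["Kai bi Lin gao"],  # complex, unambiguous form
    "task": "ALT",                       # TVJ, ALT, or ALT+QUD
    "qud": None,                         # explicit QUD only in ALT+QUD
}
print(json.dumps(record, ensure_ascii=False, indent=2))
```
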

Unlike benchmarks that rely on surface-level pattern matching, this dataset operationalizes predictions from formal pragmatic theory.

The dataset supports three primary pragmatic tasks:

1. **Truth Value Judgment (TVJ):** A "test of contradiction" to determine if the model can detect ambiguity. The model must judge if an utterance is true in contexts where only one interpretation (Positive or Comparative) holds.
2. **Alternative Choice (ALT):** A pragmatic reconciliation task. The model must choose between a simple, ambiguous utterance (Economy) and complex, unambiguous alternatives (Specificity).
3. **Contextual Modulation (ALT+QUD):** A conversational task where an explicit **Question Under Discussion (QUD)** is provided to test if the model shifts its preference based on contextual salience.
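
As a sketch of how ALT responses could be scored, one can measure how often a model prefers the economical (simple, ambiguous) form over the specific (complex, unambiguous) alternatives. The field names (`choice`, `economy_option`) are hypothetical, not the dataset's actual schema.

```python
# Illustrative ALT-task scoring: the share of items on which a model
# picked the economical utterance rather than a specific alternative.
# Field names are hypothetical placeholders.

def economy_rate(items):
    """Fraction of items where the model chose the economical alternative."""
    picks = [item["choice"] == item["economy_option"] for item in items]
    return sum(picks) / len(picks)

responses = [
    {"choice": "Kai gao", "economy_option": "Kai gao"},         # economy
    {"choice": "Kai bi Lin gao", "economy_option": "Kai gao"},  # specificity
]
print(economy_rate(responses))  # 0.5
```
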

### Curation Rationale

This dataset was created to bridge the gap in **multilingual** and **multimodal** pragmatic resources. Current benchmarks often focus on literal semantics or scalar implicatures (mostly in English). CDE-Prag explicitly targets **M-implicatures** (Manner), where the choice of form (simple vs. complex) drives meaning. It allows researchers to test whether models behave as "rational agents" by optimizing the trade-off between production cost and communicative clarity.
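
The "rational agent" framing can be sketched in Rational-Speech-Acts style. The following toy model is illustrative only (it is not the paper's exact formulation, and all numbers are made up): a speaker scores each true utterance by log-informativity minus production cost and chooses via softmax.

```python
import math

# Toy RSA-style speaker model of the economy/specificity trade-off.
# "simple" is the ambiguous degree expression (e.g. "Kai gao");
# "complex" is an unambiguous but costlier alternative.
semantics = {
    "simple":  {"positive", "comparative"},  # true of two meanings
    "complex": {"comparative"},              # true of one meaning
}
cost = {"simple": 0.0, "complex": 1.0}       # the longer form costs more

def speaker_prob(meaning, alpha=1.0):
    """Softmax over the utterances that are true of `meaning`."""
    utils = {
        u: alpha * (math.log(1 / len(ms)) - cost[u])
        for u, ms in semantics.items() if meaning in ms
    }
    z = sum(math.exp(v) for v in utils.values())
    return {u: math.exp(v) / z for u, v in utils.items()}

# For the comparative meaning both forms are true, so the ambiguity
# penalty of "simple" competes with the production cost of "complex".
print(speaker_prob("comparative"))
```

With these illustrative weights the economical form still wins for the comparative meaning, which is the behavior a rational agent is expected to show; raising the cost of the complex form or the ambiguity of the simple one shifts the preference.
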

### Source Data

### Social Impact of Dataset

This dataset contributes to the development of more **culturally and linguistically inclusive AI**. It evaluates models on an understudied pragmatic phenomenon. Furthermore, by probing "rational" communication strategies, it aids in creating agents that communicate more efficiently and naturally with humans.

### Discussion of Biases

While the dataset addresses linguistic bias, the models evaluated using it may still exhibit **prompting bias**. As noted in the accompanying paper, instruction-tuned models often default to over-specification (choosing explicit but costly alternatives) rather than the economical forms preferred by rational agents. Users should be aware that high accuracy on the sanity checks (superlatives) does not guarantee pragmatic competence in ambiguous contexts.

### Other Known Limitations

* **Scale of VLM Data:** The VLM subset is small (26 items) and should be treated as a proof-of-concept. Statistical claims based on this subset rely on bootstrapping over generations rather than items.
* **Domain Specificity:** The dataset focuses on degree expressions (gradable adjectives). Generalizability to other pragmatic phenomena (e.g., irony, metaphor) remains to be tested.

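
The bootstrapping-over-generations procedure can be sketched as follows. The scores below are synthetic, and the function is a generic percentile bootstrap, not the paper's exact analysis code.

```python
import random

# With only 26 VLM items, uncertainty is estimated by resampling model
# generations (repeated samples per item/condition) rather than items.
# This is a generic percentile bootstrap over synthetic 0/1 scores.

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for mean accuracy."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(scores) for _ in scores) / len(scores)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

generations = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # 10 generations, one condition
print(bootstrap_ci(generations))
```
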
---