---
license: apache-2.0
task_categories:
- question-answering
language:
- zh
- en
size_categories:
- n<1K
---

# Dataset Card for Chinese Degree Expressions for Pragmatic Reasoning (CDE-Prag)

CDE-Prag is part of an ongoing project on enriched meaning.

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)

---

## Dataset Description

- **OSF Repository:** <https://osf.io/yk2a6/?view_only=82bb92d6d7e943c9b93260c666cfc153>
- **Paper:** A Pragmatic Account of Ambiguity in Language Models: Evidence from Chinese [under review]
- **Point of Contact:** [Yan Cong]

### Dataset Summary

**CDE-Prag** is a theory-driven evaluation dataset designed to probe the pragmatic competence of Large Language Models (LLMs) and Vision-Language Models (VLMs). It focuses specifically on **manner implicatures** and **ambiguity detection** through the lens of Chinese degree expressions (e.g., *Kai gao*, which is ambiguous between "Kai is tall" and "Kai is taller").

CDE-Prag tests whether models can navigate the trade-off between **production cost** (economy) and **communicative utility** (specificity). The dataset is divided into two subsets:
1.  **Exploratory VLM Dataset:** A multimodal set (text + image) derived from human-subject research.
2.  **Large-Scale LLM Dataset:** A text-only expansion containing 400 balanced context-utterance sets generating over 28,000 unique items.

### Supported Tasks and Leaderboards

The dataset supports three primary pragmatic tasks:

1.  **Truth Value Judgment (TVJ):** A "test of contradiction" to determine if the model can detect ambiguity. The model must judge if an utterance is true in contexts where only one interpretation (Positive or Comparative) holds.
2.  **Alternative Choice (ALT):** A pragmatic reconciliation task. The model must choose between a simple, ambiguous utterance (Economy) and complex, unambiguous alternatives (Specificity).
3.  **Contextual Modulation (ALT+QUD):** A conversational task where an explicit **Question Under Discussion (QUD)** is provided to test if the model shifts its preference based on contextual salience.
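As an illustration, a TVJ item can be turned into a yes/no judgment prompt from its `context_text` and `utterance` fields. This is a minimal sketch: the field names mirror the data instances in this card, but the exact prompt wording is an assumption, not the prompt used in the paper.

```python
# Sketch: assembling a Truth Value Judgment (TVJ) prompt from one dataset item.
# Field names follow the instances in this card; the prompt wording itself is
# an illustrative assumption.

def build_tvj_prompt(item: dict) -> str:
    """Combine the context and target utterance into a Can/Cannot judgment prompt."""
    return (
        f"Context: {item['context_text']}\n"
        f"Someone says: \"{item['utterance']}\"\n"
        "Given the context, can this be said truthfully? Answer 'Can' or 'Cannot'."
    )

item = {
    "context_text": "Anna considers 172cm to be tall. Ryan is 160cm, Kai 175cm, Jim 170cm.",
    "utterance": "Kai gao (Kai is tall)",
}
print(build_tvj_prompt(item))
```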

### Languages

The dataset is in **Mandarin Chinese** (Simplified).

---

## Dataset Structure

### Data Instances

#### VLM Subset [work-in-progress]
An example instance for the VLM dataset includes a visual scene and a context description:
```json
{
  "id": "vlm_tall_01",
  "modality": "multimodal",
  "context_text": "Anna considers 172cm to be tall. Ryan is 160cm, Kai 175cm, Jim 170cm.",
  "image_path": "images/tall_01.png",
  "utterance": "Kai gao (Kai is tall)",
  "condition": "POS-T-COMP-F",
  "task_type": "TVJ",
  "options": ["Can", "Cannot"],
  "human_label_distribution": {"Can": 1.0, "Cannot": 0.0}
}
```

#### LLM Subset
An example for the text-only ALT task:
```json
{
  "id": "llm_expensive_05",
  "modality": "text",
  "context_text": "Description of a scenario where Item A is expensive but Item B is more expensive...",
  "utterance": "Item A gui (Item A is expensive)",
  "alternatives": {
    "UTT": "Item A gui",
    "ALT_1": "Item A hen gui (Positive)",
    "ALT_2": "Item A bijiao gui (Comparative)",
    "ALT_3": "Item A hen gui but not more expensive...",
    "ALT_4": "Item A bijiao gui but not positive..."
  },
  "condition": "POS-T-COMP-T",
  "task_type": "ALT",
  "QUD": "None"
}
```

### Data Fields

*   `context_text`: The linguistic context establishing the world state (thresholds, comparison classes).
*   `image`: (VLM only) Visual representation of the world state.
*   `utterance`: The target ambiguous Chinese degree expression.
*   `condition`: The truth-conditional status of the utterance (e.g., `POS-T-COMP-F` means Positive reading is True, Comparative reading is False).
*   `alternatives`: A set of semantically equivalent but costlier options used in the ALT tasks.
*   `QUD`: (Task 3 only) An explicit Question Under Discussion designed to make one reading more salient.
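The `condition` code can be decoded into per-reading truth values for analysis. A minimal sketch, assuming the format shown in this card (`POS-T-COMP-F` = Positive reading True, Comparative reading False):

```python
# Sketch: decoding the `condition` field into per-reading truth values.
# Format assumed from the example in this card: alternating reading labels
# and T/F flags, e.g. 'POS-T-COMP-F'.

def parse_condition(code: str) -> dict:
    """Split e.g. 'POS-T-COMP-F' into {'POS': True, 'COMP': False}."""
    parts = code.split("-")  # ['POS', 'T', 'COMP', 'F']
    return {reading: flag == "T" for reading, flag in zip(parts[::2], parts[1::2])}

print(parse_condition("POS-T-COMP-F"))  # {'POS': True, 'COMP': False}
```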

### Data Splits

*   **VLM Subset:** 26 unique sets based on 4 evaluative adjectives (*tall, expensive, big, fast*). Intended for few-shot or exploratory evaluation.
*   **LLM Subset:** 400 balanced sets covering 34 additional adjectives (e.g., *hot, thick, old*), totaling ~28,000 unique items. This split is designed for high-powered statistical validation.

---

## Dataset Creation

### Curation Rationale

This dataset was created to bridge the gap in **multilingual** and **multimodal** pragmatic resources. Current benchmarks often focus on literal semantics or scalar implicatures (mostly in English). CDE-Prag explicitly targets **M-implicatures** (Manner), where the choice of form (simple vs. complex) drives meaning. It allows researchers to test if models behave as "rational agents" by optimizing the trade-off between production cost and communicative clarity.

### Source Data

*   **VLM Data:** Derived from human-subject experimental stimuli in *Cong (2021)*. The images and contexts were validated for grammaticality and word frequency.
*   **LLM Data:** Manually crafted expansions of the VLM logic. The text-only scenarios verbalize the visual information found in the VLM subset.

### Annotations

*   **Validation:** All items were manually verified for acceptability by a linguist who is a native speaker of Mandarin Chinese.
*   **Human Baseline:** The dataset includes aggregated human response distributions (N=21 per task) to establish a "human-like" baseline for rationality and ambiguity detection.
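One way to use the human baseline is to compare a model's answer distribution to the aggregated human distribution, e.g. via total variation distance. This is a sketch under assumptions: the `human_label_distribution` field appears in the VLM instances above, while the model distribution here is hypothetical.

```python
# Sketch: scoring a model's answer distribution against the aggregated human
# baseline via total variation distance (half the L1 distance). The human
# distribution comes from the example instance in this card; the model
# distribution is made up for illustration.

def total_variation(p: dict, q: dict) -> float:
    """Half the L1 distance between two distributions over the same options."""
    options = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in options)

human = {"Can": 1.0, "Cannot": 0.0}   # from the example instance above
model = {"Can": 0.8, "Cannot": 0.2}   # hypothetical model output
print(total_variation(human, model))  # ~0.2; lower is more human-like
```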

---

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contributes to the development of more **culturally and linguistically inclusive AI** by evaluating models on an understudied pragmatic phenomenon. Furthermore, by probing "rational" communication strategies, it aids in creating agents that communicate more efficiently and naturally with humans.

### Other Known Limitations

*   **Scale of VLM Data:** The VLM subset is small (26 items) and should be treated as a proof-of-concept. Statistical claims based on this subset rely on bootstrapping over generations rather than items.
*   **Domain Specificity:** The dataset focuses on degree expressions (gradable adjectives). Generalizability to other pragmatic phenomena (e.g., irony, metaphors) remains to be tested.

---

## Additional Information

### Citation Information
We acknowledge Brian Buccola, Phillip Wolff, and Marcin Morzycki for their inspiration and fruitful discussions. 
This research is supported by the College of Liberal Arts at Purdue University. 
If you find this resource useful, please consider citing our work:

```bibtex
@article{Cong2026Pragmatic,
  title={A pragmatic account of ambiguity in language models: Evidence from Chinese},
  author={Cong, Yan},
  journal={Under Review},
  year={2026},
  note={https://huggingface.co/datasets/yancong/EnrichedMeaningDataset}
}
```