Update README.md

README.md CHANGED
@@ -270,6 +270,7 @@ The Question Type and Complexity (QTC) dataset is a comprehensive resource for l
 - 6 numeric linguistic complexity metrics, all normalized using min-max scaling
 - Combined/summed complexity scores
 - Train(silver)/test(gold)/dev(mix) split using complementary data sources
+- Control datasets for evaluating probe selectivity
 
 ## Data Sources
 
@@ -315,7 +316,7 @@ Brunato, D., Cimino, A., Dell'Orletta, F., Venturi, G., & Montemagni, S. (2020).
 
 ## Preprocessing and Feature Extraction
 
-
+We normalized all linguistic features using min-max scaling per language. This approach ensures cross-linguistic comparability by mapping each feature to a 0-1 range for each language separately.
 
 For the TyDi data, we applied strategic downsampling using token-based stratified sampling. This balances the distribution across languages and question types while preserving the original sentence length distribution, resulting in a more balanced dataset without sacrificing linguistic diversity.
 
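The per-language min-max scaling described in the hunk above can be sketched as follows. This is an illustrative helper, not part of the dataset's released tooling; the `avg_max_depth` column name is borrowed from the control-task config names and stands in for any of the six metrics.

```python
import pandas as pd

def minmax_per_language(df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
    """Min-max scale each feature to [0, 1] separately within each language."""
    out = df.copy()
    for col in feature_cols:
        grouped = out.groupby("language")[col]
        lo = grouped.transform("min")
        hi = grouped.transform("max")
        # Guard against constant columns (max == min) to avoid division by zero.
        span = (hi - lo).replace(0, 1)
        out[col] = (out[col] - lo) / span
    return out
```

Scaling within each language separately, rather than over the pooled data, keeps the 0-1 range comparable across languages with very different raw feature distributions.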
@@ -327,12 +328,37 @@ The dataset is organized into three main components corresponding to the train/d
 
 ```text
 QTC-Dataset
 ├── base
 │   ├── tydi_train_base.csv
 │   ├── dev_base.csv
 │   └── ud_test_base.csv
+├── control_question_type_seed1
+│   ├── tydi_train_control_question_type_seed1.csv
+│   ├── dev_base.csv
+│   └── ud_test_base.csv
+├── control_complexity_seed1
+│   ├── tydi_train_control_complexity_seed1.csv
+│   ├── dev_base.csv
+│   └── ud_test_base.csv
+└── control_[metric]_seed[n]
+    ├── tydi_train_control_[metric]_seed[n].csv
+    ├── dev_base.csv
+    └── ud_test_base.csv
 ```
 
+## Control Tasks
+
+The dataset includes control task variants for evaluating probe selectivity, following the methodology of Hewitt & Liang (2019). Each control task preserves the structure of the original dataset but with randomized target values:
+
+- Question Type Controls: three seeds of randomly shuffled question type labels (within each language)
+- Complexity Score Controls: three seeds of randomly shuffled complexity scores (within each language)
+- Individual Metric Controls: three seeds for each of the six linguistic complexity metrics
+
+These control tasks allow researchers to assess whether a probe is truly learning linguistic structure or simply memorizing patterns in the data.
+
+**Reference:**
+Hewitt, J., & Liang, P. (2019). Designing and Interpreting Probes with Control Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (pp. 2733-2743).
+
 ## Features Description
 
 ### Core Attributes
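A control variant of the kind listed above can be generated by shuffling the target column within each language under a fixed seed. A minimal sketch, assuming a pandas DataFrame with `language` and `question_type` columns; the helper name is hypothetical, not the dataset's actual build script.

```python
import pandas as pd

def make_control_task(df: pd.DataFrame, target_col: str, seed: int) -> pd.DataFrame:
    """Build a control variant by shuffling the target column within each language."""
    out = df.copy()
    out[target_col] = (
        out.groupby("language")[target_col]
        # Sample the whole group (frac=1.0) to get a seeded permutation,
        # then drop the index so values realign positionally.
        .transform(lambda s: s.sample(frac=1.0, random_state=seed).to_numpy())
    )
    return out
```

Shuffling within each language preserves the per-language label distribution, so a probe cannot recover control accuracy from language identity alone.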
@@ -344,6 +370,7 @@ QTC-Dataset
 | `language` | string | ISO language code (ar, en, fi, id, ja, ko, ru) |
 | `question_type` | int | Binary encoding (0 = content, 1 = polar) |
 | `complexity_score` | float | Combined linguistic complexity score |
+| `lang_norm_complexity_score` | float | Language-normalized complexity score (0-1) |
 
 ### Linguistic Features
 
@@ -382,11 +409,12 @@ The Universal Dependencies component forms our gold standard test set. These que
 
 ## Usage Examples
 
+### Basic Usage
 ```python
 from datasets import load_dataset
 
-# Load the
-dataset = load_dataset("rokokot/question-type-and-complexity")
+# Load the base dataset
+dataset = load_dataset("rokokot/question-type-and-complexity-v2", name="base")
 
 # Access the training split (TyDi data)
 tydi_data = dataset["train"]
@@ -398,13 +426,31 @@ dev_data = dataset["validation"]
 ud_data = dataset["test"]
 
 # Filter for a specific language
-finnish_questions = dataset.filter(lambda x: x["language"] == "fi")
+finnish_questions = dataset["train"].filter(lambda x: x["language"] == "fi")
 
 # Filter for a specific type
-polar_questions = dataset.filter(lambda x: x["question_type"] == 1)
-content_questions = dataset.filter(lambda x: x["question_type"] == 0)
+polar_questions = dataset["train"].filter(lambda x: x["question_type"] == 1)
+content_questions = dataset["train"].filter(lambda x: x["question_type"] == 0)
 ```
+### Working with Control Tasks
+```python
+from datasets import load_dataset
+
+# Load the original dataset
+original_data = load_dataset("rokokot/question-type-and-complexity-v2", name="base")
 
+# Load question type control tasks
+question_control1 = load_dataset("rokokot/question-type-and-complexity-v2", name="control_question_type_seed1")
+question_control2 = load_dataset("rokokot/question-type-and-complexity-v2", name="control_question_type_seed2")
+question_control3 = load_dataset("rokokot/question-type-and-complexity-v2", name="control_question_type_seed3")
+
+# Load complexity score control tasks
+complexity_control1 = load_dataset("rokokot/question-type-and-complexity-v2", name="control_complexity_seed1")
+
+# Load individual metric control tasks
+links_control = load_dataset("rokokot/question-type-and-complexity-v2", name="control_avg_links_len_seed1")
+depth_control = load_dataset("rokokot/question-type-and-complexity-v2", name="control_avg_max_depth_seed2")
+```
 ## Research Applications
 
 This dataset enables various research directions:
@@ -413,7 +459,7 @@ This dataset enables various research directions:
 2. **Question answering systems**: Analyze how question complexity affects QA system performance.
 3. **Language teaching**: Develop difficulty-aware educational materials for language learners.
 4. **Psycholinguistics**: Study processing difficulty predictions for different question constructions.
-5. **Machine translation**:
+5. **Machine translation**: Evaluate translation symmetry for questions of varying complexity.
 
 ## Citation
 
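As a usage note on the control splits this change introduces: Hewitt & Liang's selectivity is the gap between a probe's accuracy on the real task and its accuracy on the matched control task, typically averaged across the three seeds. A minimal sketch; the function name and scores are illustrative, not reported results.

```python
from statistics import mean

def selectivity(real_accuracy: float, control_accuracies: list[float]) -> float:
    """Probe selectivity: real-task accuracy minus mean control-task accuracy.

    A selective probe scores well on the linguistic task but cannot fit the
    shuffled control labels; a probe that fits both is likely memorizing.
    """
    return real_accuracy - mean(control_accuracies)

# Hypothetical accuracies from a question-type probe and its three control seeds.
gap = selectivity(0.91, [0.52, 0.55, 0.50])
```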