---
license: bsd-3-clause
dataset_info:
  features:
    - name: cond_exp_y
      dtype: float64
    - name: m1
      dtype: float64
    - name: g1
      dtype: float64
    - name: l1
      dtype: float64
    - name: 'Y'
      dtype: float64
    - name: D_1
      dtype: float64
    - name: carat
      dtype: float64
    - name: depth
      dtype: float64
    - name: table
      dtype: float64
    - name: price
      dtype: float64
    - name: review
      dtype: string
    - name: sentiment
      dtype: string
    - name: label
      dtype: int64
    - name: cut_Good
      dtype: bool
    - name: cut_Ideal
      dtype: bool
    - name: cut_Premium
      dtype: bool
    - name: cut_Very Good
      dtype: bool
    - name: color_E
      dtype: bool
    - name: color_F
      dtype: bool
    - name: color_G
      dtype: bool
    - name: color_H
      dtype: bool
    - name: color_I
      dtype: bool
    - name: color_J
      dtype: bool
    - name: clarity_IF
      dtype: bool
    - name: clarity_SI1
      dtype: bool
    - name: clarity_SI2
      dtype: bool
    - name: clarity_VS1
      dtype: bool
    - name: clarity_VS2
      dtype: bool
    - name: clarity_VVS1
      dtype: bool
    - name: clarity_VVS2
      dtype: bool
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 184009908
      num_examples: 50000
  download_size: 173099846
  dataset_size: 184009908
tags:
  - Causal Inference
size_categories:
  - 10K<n<100K
---

Dataset Card

Semi-synthetic dataset with multimodal confounding. The dataset is generated as described in the paper "DoubleMLDeep: Estimation of Causal Effects with Multimodal Data".

Dataset Details

Dataset Description & Usage

The dataset contains the columns described in the Data Fields section below.

Dataset Sources

The dataset is based on three commonly used datasets:

  • Diamonds
  • IMDB movie reviews
  • CIFAR-10

All datasets are subsampled to equal size (n=50,000). The CIFAR-10 part is based on the training split, whereas the IMDB part combines the train and test splits to obtain 50,000 observations. The labels of the CIFAR-10 data are set to integer values 0 to 9. The Diamonds dataset is cleaned (rows with x, y or z equal to 0 are removed) and outliers are dropped (such that 45<depth<75, 40<table<80, x<30, y<30 and 2<z<30). The remaining 53,907 observations are downsampled to 50,000 observations. Further, price and carat are transformed with the natural logarithm, and cut, color and clarity are dummy coded (with baselines Fair, D and I1).
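The preprocessing of the Diamonds part described above (cleaning, outlier removal, log transforms, and dummy coding with baselines Fair, D and I1) can be sketched in pandas. The toy rows below are stand-ins for the raw Diamonds data; the resulting dummy column names follow the feature list in the metadata:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the raw Diamonds data (the real data has ~53,940 rows).
raw = pd.DataFrame({
    "carat": [0.23, 0.31, 0.90],
    "cut": ["Ideal", "Good", "Fair"],
    "color": ["E", "J", "D"],
    "clarity": ["SI2", "VS1", "I1"],
    "depth": [61.5, 63.3, 62.0],
    "table": [55.0, 58.0, 57.0],
    "price": [326, 335, 2800],
    "x": [3.95, 4.34, 6.10],
    "y": [3.98, 4.35, 6.20],
    "z": [2.43, 2.75, 3.80],
})

# Remove degenerate rows and outliers as described in the card.
clean = raw.query("x != 0 and y != 0 and z != 0")
clean = clean.query("45 < depth < 75 and 40 < table < 80 "
                    "and x < 30 and y < 30 and 2 < z < 30")

# Natural-log transform of price and carat.
clean = clean.assign(price=np.log(clean["price"]), carat=np.log(clean["carat"]))

# Dummy coding with baselines Fair (cut), D (color) and I1 (clarity):
# casting to categoricals with the baseline listed first lets
# drop_first=True drop exactly the baseline category.
for col, levels in [
    ("cut", ["Fair", "Good", "Ideal", "Premium", "Very Good"]),
    ("color", ["D", "E", "F", "G", "H", "I", "J"]),
    ("clarity", ["I1", "IF", "SI1", "SI2", "VS1", "VS2", "VVS1", "VVS2"]),
]:
    clean[col] = pd.Categorical(clean[col], categories=levels)
dummies = pd.get_dummies(clean, columns=["cut", "color", "clarity"], drop_first=True)
```

The baseline categories (cut_Fair, color_D, clarity_I1) are dropped, which matches the dummy columns listed in the dataset metadata.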

The exact versions used to create this dataset can be found on Kaggle:

The original citations can be found below.

Uses

The dataset should serve as a benchmark to compare different causal inference methods for observational data under multimodal confounding.
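To illustrate the kind of comparison such a benchmark targets, the sketch below contrasts a naive regression with a partialling-out (Frisch-Waugh-Lovell) estimate on synthetic confounded data — not this dataset; the coefficients and the true effect are chosen here purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 5_000
X = rng.normal(size=(n, 3))                    # observed confounders (stand-in)
D = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)
theta = 0.5                                    # true treatment effect (chosen here)
Y = theta * D + X @ np.array([0.8, 0.3, -0.4]) + rng.normal(size=n)

# Naive: regress Y on D only -- biased, because X confounds both D and Y.
naive = LinearRegression().fit(D[:, None], Y).coef_[0]

# Partialling out: residualize Y and D on the confounders,
# then regress the residuals on each other.
rY = Y - LinearRegression().fit(X, Y).predict(X)
rD = D - LinearRegression().fit(X, D).predict(X)
adjusted = LinearRegression().fit(rD[:, None], rY).coef_[0]
print(naive, adjusted)
```

With this dataset the confounders are not only tabular but also the review text and the image, so the residualization step would use text and image models instead of a linear regression.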

Dataset Structure

Data Instances

Data Fields

The data fields can be divided into several categories:

  • Outcome and Treatments

    • Y (float64): Outcome of interest
    • D_1 (float64): Treatment value
  • Tabular Features

    • price (float64): Price of the diamond (natural logarithm)
    • carat (float64): Carat weight of the diamond (natural logarithm)
    • depth (float64): Total depth percentage of the diamond
    • table (float64): Width of the top of the diamond relative to its widest point
    • cut_*, color_*, clarity_* (bool): Dummy-coded cut, color and clarity categories (baselines Fair, D and I1)
  • Text Features

    • review (string): IMDB review text
    • sentiment (string): Corresponding sentiment label (positive or negative)
  • Image Features

    • image (image): CIFAR-10 image
    • label (int64): Corresponding label from 0 to 9
  • Oracle Features

    • cond_exp_y (float64): Expected value of Y conditional on D_1 and the confounding features
    • m1, g1, l1 (float64): Oracle nuisance quantities from the data-generating process
    • D_1 (float64): Treatment value (generated)
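The oracle column makes the dataset useful for evaluation: a fitted model can be scored against the noiseless conditional expectation instead of the noisy outcome Y. A minimal sketch on a toy stand-in frame with the card's column names (illustrative values only, not drawn from the dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000

# Toy stand-in with the card's outcome/treatment/oracle columns.
theta = 0.5                          # illustrative effect, not from the dataset
D_1 = rng.normal(size=n)
cond_exp_y = theta * D_1             # stand-in for E[Y | D_1, confounders]
Y = cond_exp_y + rng.normal(scale=0.1, size=n)
df = pd.DataFrame({"Y": Y, "D_1": D_1, "cond_exp_y": cond_exp_y})

# Score some predictor against the oracle column: unlike the error
# against Y, this error is not inflated by the outcome noise.
pred = 0.4 * df["D_1"]               # a fitted model's predictions (stand-in)
oracle_mse = ((pred - df["cond_exp_y"]) ** 2).mean()
noisy_mse = ((pred - df["Y"]) ** 2).mean()
print(oracle_mse, noisy_mse)
```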

Limitations

Because the confounding is generated from the original labels, completely removing the confounding might not be possible.

Citation Information

Dataset Citation

If you use the dataset, please cite the following article:

@article{klaassen2024doublemldeep,
  title={DoubleMLDeep: Estimation of Causal Effects with Multimodal Data},
  author={Klaassen, Sven and Teichert-Kluge, Jan and Bach, Philipp and Chernozhukov, Victor and Spindler, Martin and Vijaykumar, Suhas},
  journal={arXiv preprint arXiv:2402.01785},
  year={2024}
}

Dataset Sources

The three original datasets can be cited as follows.

Diamonds dataset:

@Book{ggplot2_book,
  author = {Hadley Wickham},
  title = {ggplot2: Elegant Graphics for Data Analysis},
  publisher = {Springer-Verlag New York},
  year = {2016},
  isbn = {978-3-319-24277-4},
  url = {https://ggplot2.tidyverse.org},
}

IMDB dataset:

@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author    = {Maas, Andrew L.  and  Daly, Raymond E.  and  Pham, Peter T.  and  Huang, Dan  and  Ng, Andrew Y.  and  Potts, Christopher},
  title     = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
  month     = {June},
  year      = {2011},
  address   = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {142--150},
  url       = {http://www.aclweb.org/anthology/P11-1015}
}

CIFAR-10 dataset:

@TECHREPORT{Krizhevsky09learningmultiple,
  author = {Alex Krizhevsky},
  title = {Learning multiple layers of features from tiny images},
  institution = {},
  year = {2009}
}

Dataset Card Authors

Sven Klaassen