---
pretty_name: Dataset Featurization
language:
  - en
license:
  - mit
task_categories:
  - feature-extraction
task_ids:
  - language-modeling
configs:
  - config_name: nyt
    data_files:
      - split: train
        path: data/nyt/samples.csv
  - config_name: nyt-evaluation-0
    data_files:
      - split: train
        path: data/nyt/evaluation/evaluation_df_group_0.csv
  - config_name: nyt-evaluation-1
    data_files:
      - split: train
        path: data/nyt/evaluation/evaluation_df_group_1.csv
  - config_name: nyt-evaluation-2
    data_files:
      - split: train
        path: data/nyt/evaluation/evaluation_df_group_2.csv
  - config_name: amazon
    data_files:
      - split: train
        path: data/amazon/samples.csv
  - config_name: amazon-evaluation-0
    data_files:
      - split: train
        path: data/amazon/evaluation/evaluation_df_group_0.csv
  - config_name: amazon-evaluation-1
    data_files:
      - split: train
        path: data/amazon/evaluation/evaluation_df_group_1.csv
  - config_name: amazon-evaluation-2
    data_files:
      - split: train
        path: data/amazon/evaluation/evaluation_df_group_2.csv
  - config_name: dbpedia
    data_files:
      - split: train
        path: data/dbpedia/samples.csv
  - config_name: dbpedia-evaluation-0
    data_files:
      - split: train
        path: data/dbpedia/evaluation/evaluation_df_group_0.csv
  - config_name: dbpedia-evaluation-1
    data_files:
      - split: train
        path: data/dbpedia/evaluation/evaluation_df_group_1.csv
  - config_name: dbpedia-evaluation-2
    data_files:
      - split: train
        path: data/dbpedia/evaluation/evaluation_df_group_2.csv
---

# Dataset Featurization: Experiments

This repository contains datasets used in evaluating Dataset Featurization against the prompting baseline. For datasets used in the case studies, please refer to Compositional Preference Modeling and Compact Jailbreaks.

The evaluation covers three datasets: The New York Times Annotated Corpus (NYT), Amazon Reviews (Amazon), and DBPEDIA. From each dataset, we sample 15 distinct categories and construct three separate subsets, each containing 5 categories with 100 samples per category. We evaluate the featurization method's performance on each subset.
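As a quick sanity check, the expected subset structure (5 categories with 100 samples each) can be verified with pandas. The snippet below sketches this on a toy dataframe, since the actual column names of the hosted CSVs are not specified here and are an assumption:

```python
import pandas as pd

# Toy stand-in for one evaluation subset: 5 categories x 100 samples each.
# "category" is a hypothetical column name, not necessarily the real one.
toy_df = pd.DataFrame({
    "text": [f"sample {i}" for i in range(500)],
    "category": [f"cat_{i % 5}" for i in range(500)],
})

counts = toy_df["category"].value_counts()
assert len(counts) == 5        # 5 categories per subset
assert (counts == 100).all()   # 100 samples per category
```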

## NYT

From the NYT corpus, we use manually reviewed tags from the NYT taxonomy classifier, focusing on articles under the "Features" and "News" categories, to construct a dataset of texts with their assigned categories. The input dataset and the proposed features with their assignments from the evaluation stage can be accessed as follows:

```python
from datasets import load_dataset

text_df = load_dataset("Bravansky/dataset-featurization", "nyt", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-2", split="train").to_pandas()
```

## Amazon

Using a dataset of half a million customer reviews, we focus on identifying high-level item categories (e.g., Books, Fashion, Beauty), excluding reviews labeled "Unknown". The input datasets and the proposed features with their assignments from the evaluation stage can be accessed as follows:

```python
from datasets import load_dataset

text_df = load_dataset("Bravansky/dataset-featurization", "amazon", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-2", split="train").to_pandas()
```

## DBPEDIA

Using the pre-processed DBPEDIA dataset, we focus on reconstructing categories labeled as level `l2`:

```python
from datasets import load_dataset

text_df = load_dataset("Bravansky/dataset-featurization", "dbpedia", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-2", split="train").to_pandas()
```