---
pretty_name: Dataset Featurization
language:
- en
license:
- mit
task_categories:
- feature-extraction
task_ids:
- language-modeling
configs:
- config_name: nyt
data_files:
- split: train
path: data/nyt/samples.csv
- config_name: nyt-evaluation-0
data_files:
- split: train
path: data/nyt/evaluation/evaluation_df_group_0.csv
- config_name: nyt-evaluation-1
data_files:
- split: train
path: data/nyt/evaluation/evaluation_df_group_1.csv
- config_name: nyt-evaluation-2
data_files:
- split: train
path: data/nyt/evaluation/evaluation_df_group_2.csv
- config_name: amazon
data_files:
- split: train
path: data/amazon/samples.csv
- config_name: amazon-evaluation-0
data_files:
- split: train
path: data/amazon/evaluation/evaluation_df_group_0.csv
- config_name: amazon-evaluation-1
data_files:
- split: train
path: data/amazon/evaluation/evaluation_df_group_1.csv
- config_name: amazon-evaluation-2
data_files:
- split: train
path: data/amazon/evaluation/evaluation_df_group_2.csv
- config_name: dbpedia
data_files:
- split: train
path: data/dbpedia/samples.csv
- config_name: dbpedia-evaluation-0
data_files:
- split: train
path: data/dbpedia/evaluation/evaluation_df_group_0.csv
- config_name: dbpedia-evaluation-1
data_files:
- split: train
path: data/dbpedia/evaluation/evaluation_df_group_1.csv
- config_name: dbpedia-evaluation-2
data_files:
- split: train
path: data/dbpedia/evaluation/evaluation_df_group_2.csv
---
# Dataset Featurization: Experiments
This repository contains datasets used in evaluating **Dataset Featurization** against the prompting baseline. For datasets used in the case studies, please refer to [Compositional Preference Modeling](https://huggingface.co/datasets/Bravansky/compositional-preference-modeling) and [Compact Jailbreaks](https://huggingface.co/datasets/Bravansky/compact-jailbreaks).
The evaluation focuses on three datasets: the [New York Times Annotated Corpus (NYT)](https://catalog.ldc.upenn.edu/docs/LDC2008T19/new_york_times_annotated_corpus.pdf), [Amazon Reviews (Amazon)](https://amazon-reviews-2023.github.io/), and [DBPEDIA](https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes). From each dataset, we sample 15 distinct categories and construct three separate subsets, each containing 5 categories with 100 samples per category. We evaluate the featurization method's performance on each subset separately.
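The sampling scheme above (three subsets, each with 5 categories and 100 samples per category) can be sketched with synthetic data; the category names and texts below are placeholders for illustration only, not the actual dataset contents:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the 15 sampled categories.
categories = [f"cat_{i}" for i in range(15)]

# Split the 15 categories into three subsets of 5 categories each.
subsets = [categories[i * 5:(i + 1) * 5] for i in range(3)]

frames = []
for group_id, cats in enumerate(subsets):
    frames.append(pd.DataFrame({
        "text": [f"placeholder sample {j}" for j in range(5 * 100)],
        "category": np.repeat(cats, 100),  # 100 samples per category
        "group": group_id,
    }))

samples = pd.concat(frames, ignore_index=True)

# Each subset holds 5 categories x 100 samples = 500 rows; 1,500 in total.
assert all(len(f) == 500 for f in frames)
assert (samples.groupby(["group", "category"]).size() == 100).all()
```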
### NYT
From the NYT corpus, we use the manually reviewed tags from the NYT taxonomy classifier, focusing on articles under the "Features" and "News" categories, to construct a dataset of texts with their assigned categories. The input dataset and the proposed features with their assignments from the evaluation stage can be accessed as follows:
```python
from datasets import load_dataset
text_df = load_dataset("Bravansky/dataset-featurization", "nyt", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "nyt-evaluation-2", split="train").to_pandas()
```
### Amazon
Using a dataset of half a million customer reviews, we focus on identifying high-level item categories (e.g., Books, Fashion, Beauty), excluding reviews labeled "Unknown". The input datasets and the proposed features with their assignments from the evaluation stage can be accessed as follows:
```python
from datasets import load_dataset
text_df = load_dataset("Bravansky/dataset-featurization", "amazon", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "amazon-evaluation-2", split="train").to_pandas()
```
### DBPEDIA
Using the pre-processed DBPEDIA dataset, we focus on reconstructing categories labeled as level `l2`:
```python
from datasets import load_dataset
text_df = load_dataset("Bravansky/dataset-featurization", "dbpedia", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-0", split="train").to_pandas()
evaluation_df_1 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-1", split="train").to_pandas()
evaluation_df_2 = load_dataset("Bravansky/dataset-featurization", "dbpedia-evaluation-2", split="train").to_pandas()
```