---
license: apache-2.0
tags:
- optimized item selection
- recommender systems
- online experimentation
- multi-objective optimization
pretty_name: ISP
---
# Optimized Item Selection Datasets

We provide the datasets used to test the multi-level optimization framework ([CPAIOR'21](https://link.springer.com/chapter/10.1007/978-3-030-78230-6_27), [DSO@IJCAI'22](https://arxiv.org/abs/2112.03105)) for solving the Item Selection Problem (ISP) to boost exploration in recommender systems.
## Overview of Datasets

The datasets include:

* [**GoodReads datasets**](book_recommenders_data/) for book recommenders. Two datasets are randomly selected from the source data [GoodReads Book Reviews](https://dl.acm.org/doi/10.1145/3240323.3240369): a small version with 1,000 items and a large version with 10,000 items. For book recommendations, there are 11 different genres (e.g., fiction, non-fiction, children), 231 different publishers (e.g., Vintage, Penguin Books, Mariner Books), and genre-publisher pairs. This leads to 574 and 1,322 unique book labels for the small and large datasets, respectively.
* [**MovieLens datasets**](movie_recommenders_data/) for movie recommenders. Two datasets are randomly selected from the source data [MovieLens Movie Ratings](https://dl.acm.org/doi/10.1145/2827872): a small version with 1,000 items and a large version with 10,000 items. For movie recommendations, there are 19 different genres (e.g., action, comedy, drama, romance), 587 different producers, 34 different languages (e.g., English, French, Mandarin), and genre-language pairs. This leads to 473 and 1,011 unique movie labels for the small and large datasets, respectively.
Each dataset in GoodReads and MovieLens contains a `*_data.csv` file, which contains the text content (i.e., title + description) of the items, and a `*_label.csv` file, which contains the labels (e.g., genre or language) with a binary 0/1 flag denoting whether an item exhibits a label.

Each column in the csv files corresponds to an item, indexed by book/movie ID. The order of columns in the data and label files is the same.
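To make the file layout concrete, here is a minimal synthetic illustration (the item IDs and labels below are made up for demonstration, not taken from the datasets):

```python
import pandas as pd

# Synthetic stand-ins for *_data.csv and *_label.csv:
# columns are item IDs, in the same order in both files.
data = pd.DataFrame({"b1": ["Title One. A description."],
                     "b2": ["Title Two. Another description."]})
labels = pd.DataFrame({"label": ["fiction", "children"],
                       "b1": [1, 0],
                       "b2": [0, 1]}).set_index("label")

# Columns (item IDs) appear in the same order in both files
assert list(data.columns) == list(labels.columns)

# 0/1 entries mark whether an item exhibits a label
print(labels.loc["fiction", "b1"])  # 1
```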
[Selective](https://github.com/fidelity/selective) implements the multi-objective optimization approach from [CPAIOR'21](https://link.springer.com/chapter/10.1007/978-3-030-78230-6_27) and [DSO@IJCAI'22](https://arxiv.org/abs/2112.03105) as part of its `TextBased` selection method.

By solving the ISP with text-based selection in Selective, we select a smaller subset of items with maximum diversity in the latent embedding space of the items and maximum coverage of the labels.
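To make the coverage objective concrete, here is a rough sketch of how the label coverage of a candidate subset could be measured (synthetic data; `label_coverage` is a hypothetical helper for illustration, not part of Selective's API):

```python
import pandas as pd

# Synthetic label matrix (made-up items/labels): rows are labels, columns are items
labels = pd.DataFrame({"label": ["fiction", "children", "poetry"],
                       "b1": [1, 0, 0],
                       "b2": [0, 1, 1],
                       "b3": [1, 1, 0]}).set_index("label")

def label_coverage(label_df, items):
    """Fraction of labels exhibited by at least one selected item."""
    covered = (label_df[items].sum(axis=1) > 0).sum()
    return covered / len(label_df)

print(label_coverage(labels, ["b1", "b2"]))  # 1.0 -> all three labels covered
```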
## Usage Example

```python
# Import Selective (for text-based selection) and TextWiser (for embedding space)
import pandas as pd
from feature.selector import Selective, SelectionMethod
from textwiser import TextWiser, Embedding, Transformation

# Load text contents
data = pd.read_csv("goodreads_1k_data.csv").astype(str)

# Load labels
labels = pd.read_csv("goodreads_1k_label.csv")
labels.set_index('label', inplace=True)

# TextWiser featurization method to create text embeddings
textwiser = TextWiser(Embedding.TfIdf(), Transformation.NMF(n_components=20, random_state=1234))

# Text-based selection
selector = Selective(SelectionMethod.TextBased(num_features=30, featurization_method=textwiser))

# Result
subset = selector.fit_transform(data, labels)
print("Reduction:", list(subset.columns))
```
## Citation

If you use ISP in your research/applications, please cite as follows: