---
license: cdla-permissive-2.0
task_categories:
- text-ranking
- text-classification
language:
- en
tags:
- food
- news
- travel
- recommendations
---

# Overview
RecoReact is a novel dataset of multi-turn interactions between real users and an AI assistant whose recommendations are drawn at random, providing a unique set of feedback signals, including multi-turn natural language requests, structured item selections, item ratings, and user profiles.
The dataset spans three domains: news articles, travel destinations, and meal planning.

## Dataset Details

This dataset contains 1785 interactions from 595 users (with a median completion time of about 22 minutes). RecoReact also includes each user's (fully anonymized) profile data, as well as information on all items in each domain, including a title, high-level category, description, and a URL to a thumbnail image. Beyond natural language, the dataset also contains other types of feedback signals, such as the set of selected items and survey responses.
Other information:
- **Curated by:** Felix Leeb and Tobias Schnabel
- **Funded by:** Microsoft Research (Augmented Learning and Reasoning team)
- **Language(s) (NLP):** English
- **License:** CDLA Permissive, Version 2.0

### User and Item Features
The dataset broadly contains four types of user feedback over three user-assistant turns:
- **User Profiles**: Structured responses to intake questions on interests, goals, and habits.
- **Behavioral History**: The items selected in each turn.
- **Satisfaction**: Ratings of how satisfied the user was with the entire set of recommendations in each turn.
- **Natural Language Messages**: The user's initial request and the modifications/feedback requested in each turn.

## Uses

The dataset is intended for research on personalized recommendation systems, in particular to study how different types of feedback signals (ratings, written feedback, user profiles) can be used to increase the relevance of recommendations.

## Dataset Structure

The dataset is split into three files for each of the three domains (news, travel, and food), for a total of nine files. For each domain, there is one file containing all the information about the items that can be recommended (`*-products.csv`), one file containing all the user information (`*-users.csv`), and one file containing the interactions between users and the assistant (`*-impressions.csv`), including the user requests and selections.

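The per-domain files can be loaded with standard tooling; a minimal sketch using pandas, assuming the concrete filenames follow the `*-products.csv` pattern (e.g. `news-products.csv`) and live in a single directory:

```python
import pandas as pd

# Each domain ("news", "travel", "food") has three CSV files. The exact
# filenames used here are an assumption based on the "*-products.csv"
# naming pattern described above.
def load_domain(domain: str, root: str = "."):
    products = pd.read_csv(f"{root}/{domain}-products.csv")
    users = pd.read_csv(f"{root}/{domain}-users.csv")
    impressions = pd.read_csv(f"{root}/{domain}-impressions.csv")
    return products, users, impressions
```
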
The `*-products.csv` file contains the following columns:
- `pid`: a unique identifier for the item
- `title`: the title of the item (shown to the user)
- `description`: a longer description of the item (shown to the user)
- `thumbnail`: a URL to a thumbnail image of the item (shown to the user)
- `category`: a category label for the item (not shown to the user)

The `*-impressions.csv` file contains the following columns:
- `iid`: a unique identifier for the interaction
- `user`: the user identifier
- `round`: the round number of the interaction (1-3)
- `categories`: a list of categories the user selected for the recommendations
- `request`: the user's request to the assistant
- `selected1`, `selected2`, `selected3`: the user's selections among the recommendations
- `update1`, `update2`: the user's requested updates to the recommendations
- `rating1`, `rating2`, `rating3`: 9-point scale ratings of the recommendations
- `summary`: a free-response summary of the user's feedback on the recommendations
- `rating_summary`: a 9-point scale rating of the summary
- `good_suggestions`: a 5-point scale rating of how good the suggestions were
- `good_selections`: a 5-point scale rating of how good the selections were
- `good_request_match`: a 5-point scale rating of how well the recommendations matched the request
- `good_summary_match`: a 5-point scale rating of how well the summary matched the feedback

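As a minimal sketch of working with these columns, the per-turn satisfaction ratings can be averaged across interactions; the `rating1`..`rating3` names are taken from the column list above, and numeric dtypes are assumed:

```python
import pandas as pd

# Summarize satisfaction per round from an impressions table, assuming the
# `rating1`..`rating3` columns hold the 9-point ratings described above.
def mean_ratings(impressions: pd.DataFrame) -> dict:
    return {f"rating{i}": impressions[f"rating{i}"].mean() for i in (1, 2, 3)}
```
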
The `travel-users.csv` file contains the following columns:
- `uid`: a unique identifier for the user
- `source_types`: a list of categories selected by the user
- `desc_sources`: a free-response description of how the user plans trips
- `companions`: a list of companions the user prefers to travel with
- `personalized`: a 5-point scale rating of how much the user values personalized recommendations
- `explore`: a 5-point scale rating of how much the user values exploring new places
- `desc_selection`: a free-response description of how the user selects travel destinations
- `task_clear`: a 5-point scale rating of how clear the user found the task
- `task_difficult`: a 5-point scale rating of how difficult the user found the task
- `dialogue_helpful`: a 5-point scale rating of how helpful the user found the dialogue with the assistant
- `recs_satisfied`: a 5-point scale rating of how satisfied the user was with the recommendations
- `task_feedback`: free-response feedback from the user on the task

The `food-users.csv` file contains the following columns:
- `uid`: a unique identifier for the user
- `source_types`: a list of categories selected by the user
- `desc_sources`: a free-response description of how the user plans meals
- `companions`: a list of companions the user eats with or cooks for
- `personalized`: a 5-point scale rating of how much the user values personalized recommendations
- `explore`: a 5-point scale rating of how much the user values exploring new meals
- `desc_selection`: a free-response description of how the user selects meals
- `task_clear`: a 5-point scale rating of how clear the user found the task
- `task_difficult`: a 5-point scale rating of how difficult the user found the task
- `dialogue_helpful`: a 5-point scale rating of how helpful the user found the dialogue with the assistant
- `recs_satisfied`: a 5-point scale rating of how satisfied the user was with the recommendations
- `task_feedback`: free-response feedback from the user on the task

The `news-users.csv` file contains the following columns:
- `uid`: a unique identifier for the user
- `source_types`: a list of categories selected by the user
- `frequency`: how frequently the user reads the news
- `reasons`: a free-response description of why the user reads news
- `desc_sources`: a free-response description of how the user reads news
- `personalized`: a 5-point scale rating of how much the user values personalized recommendations
- `explore`: a 5-point scale rating of how much the user values exploring new topics
- `demand`: a 5-point scale rating of how much the user would like to use a personalized AI assistant for news
- `desc_selection`: a free-response description of how the user selects news articles
- `task_clear`: a 5-point scale rating of how clear the user found the task
- `task_difficult`: a 5-point scale rating of how difficult the user found the task
- `dialogue_helpful`: a 5-point scale rating of how helpful the user found the dialogue with the assistant
- `recs_satisfied`: a 5-point scale rating of how satisfied the user was with the recommendations
- `task_feedback`: free-response feedback from the user on the task

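Profile features can be attached to each interaction by joining the two tables; a sketch assuming the `user` column in the impressions file corresponds to the `uid` column in the users file, as the column lists above suggest:

```python
import pandas as pd

# Attach each user's profile to their interactions. The join-key
# correspondence (`user` <-> `uid`) is an assumption based on the
# column descriptions above.
def join_profiles(impressions: pd.DataFrame, users: pd.DataFrame) -> pd.DataFrame:
    return impressions.merge(users, left_on="user", right_on="uid", how="left")
```
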
## Dataset Creation

The data was collected using a web survey, with participants recruited through Prolific.
Participants were first asked to provide general information about their habits and preferences in one of the three domains; they were then asked to make a specific request to the assistant and to provide feedback on the recommendations they received.
We use a hybrid interface with both free-text and structured input: for item selection, for example, users choose from a visual layout of 12 items with thumbnails.
In each round, we draw 12 items without replacement, uniformly at random from the general inventory filtered down to the main interest categories selected during profile building, ensuring both unbiasedness and a baseline level of relevance.

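The sampling scheme above can be sketched as follows; the item and record shapes here are hypothetical, the essential steps being the category filter and the uniform draw without replacement:

```python
import random

# Illustrative sketch of the sampling scheme described above: filter the
# inventory to the user's selected interest categories, then draw k items
# uniformly at random without replacement.
def sample_items(inventory, user_categories, k=12, rng=random):
    pool = [item for item in inventory if item["category"] in user_categories]
    return rng.sample(pool, k)
```
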
### Curation Rationale
We created this dataset because no public recommendation datasets included both conventional feedback signals (ratings, selections) and natural language feedback along with profile information.

### Dataset Sources
For each domain, we created an inventory:
- News: 629 news articles sourced from [lifehacker.com](https://www.lifehacker.com/)
- Travel: 630 travel destinations sourced from [wikivoyage.org](https://www.wikivoyage.org/)
- Meals: 3904 meals sourced from [blueapron.com](https://www.blueapron.com/cookbook)

## Bias, Risks, and Limitations

Participants in the survey were recruited using Prolific, with the only requirements being that they were fluent in English and resided in the US. This may introduce biases into the dataset, as the participants may not be representative of the general population. Additionally, only a relatively small number of participants (approximately 600) were recruited, which may limit the generalizability of the dataset.