\section{Randomized controlled trial}\label{RCT} A standard approach in the recommendation systems literature is to evaluate a counterfactual policy using off-policy evaluation methods \citep{swaminathan2015batch, swaminathan2015self,gilotte2018offline}. Conceptually, this involves identifying which observed user-item interactions would have also occurred under the counterfactual policy and using observed utilities from these interactions to compute the mean utility under the counterfactual policy. In our context, this is problematic for two reasons. First, recommendation policies impact both the outcomes of user-story interactions and the number of interactions. Thus, the natural metric for evaluating a recommendation policy is the \emph{total utility}, which we cannot reliably estimate with standard off-policy metrics that do not capture the change in the number of interactions (see \cite{forbes} for a similar argument). Second, even if we abstract away from changes on the extensive margin and assume that the impact of a new recommendation policy can be summarized by additive effects across user-story interactions, we will systematically miss some of them. We can adjust for differences between the user-item interactions captured in the off-policy evaluation and those in the population at large, but this adjustment is likely to be incomplete due to data sparsity. Both of these challenges can be thought of as a problem of overlap between the data generated under the baseline policy and the data that would have been generated under the counterfactual policy.
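To make the overlap problem concrete, here is a minimal sketch of an inverse propensity scoring (IPS) off-policy estimate on synthetic data; the items, probabilities, and rewards are illustrative assumptions, not quantities from our application. IPS reweights logged interactions by the ratio of counterfactual to logging propensities; note that it estimates mean utility per interaction and is silent about changes in the number of interactions, which is exactly the extensive-margin problem discussed above.

```python
import numpy as np

def ips_estimate(rewards, logged_probs, target_probs):
    """Inverse propensity scoring: estimate mean reward per interaction
    under a counterfactual policy from interactions logged under the
    baseline policy."""
    weights = target_probs / logged_probs
    return float(np.mean(weights * rewards))

rng = np.random.default_rng(0)
# Toy log: 3 items, shown uniformly at random by the baseline policy.
n = 10_000
logged_items = rng.integers(0, 3, size=n)
logged_probs = np.full(n, 1 / 3)
true_means = np.array([0.2, 0.5, 0.8])
rewards = true_means[logged_items] + rng.normal(0, 0.1, n)

# Counterfactual policy that mostly recommends the best item.
target = np.array([0.1, 0.1, 0.8])
est = ips_estimate(rewards, logged_probs, target[logged_items])
# Expected value: 0.1*0.2 + 0.1*0.5 + 0.8*0.8 = 0.71
```

If the counterfactual policy put mass on items the baseline never showed, the corresponding propensity ratios would be undefined (no overlap), and no reweighting of the log could recover their utilities.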
Considering a general case with effects on the extensive margin and interaction effects between stories, reliable off-policy estimates can be obtained only when the baseline and counterfactual policies coincide, making the method impracticable.\footnote{An alternative is to consider a structural model.} When one is willing to consider the case of simple additive utilities, the extent of the overlap between user-story interactions under the baseline and the counterfactual policy determines how reliable this approach is; \cite{contrastingoffon} and \cite{offonecomm} show these limitations in empirical studies.\footnote{By construction this approach is more suitable for evaluating small changes in the policy. A large change that results in new user-story interactions will imply a low overlap.} An alternative approach to evaluating a new policy is an A/B experiment in which the targeted metric is the \emph{total utility} of a user. This is the method we use in this paper. This section discusses the design of the experiment and presents the results. \subsection{Design of the experiment} In the experiment, 7750 users were randomized into treatment and control. We considered only users that had at least sixty story interactions before the experiment. The treatment group received personalized recommendations in the \emph{Recommended Story} tray, while the control group remained with the baseline system of stories selected randomly from a list specified by editors. The tray's UI was consistent across the control and treatment groups; the only element exogenously varied was the set of stories displayed in the tray. Content presented in the other trays of the app was unchanged. Treated users were not aware of the change in the recommendation system. The experiment lasted for two weeks, a duration pre-determined with the partner. Based on the analysis of past data, the minimum detectable effect on total utility (per-user sum of utility over two weeks) was 0.08 standard deviations.
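The minimum detectable effect in standard-deviation units follows from the standard two-sample power formula. The sketch below assumes an even split of the 7750 randomized users and 80\% power; these are illustrative assumptions, not a reproduction of the calculation performed on past data (which also accounted for the share of users expected to be active).

```python
from scipy.stats import norm

def mde_sd_units(n_treat, n_control, alpha=0.05, power=0.8):
    """Minimum detectable effect, in standard-deviation units, for a
    two-sample comparison of means (two-sided test at level alpha)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (1 / n_treat + 1 / n_control) ** 0.5

# Illustrative even split of the 7750 randomized users.
print(round(mde_sd_units(3875, 3875), 3))  # ≈ 0.064
```

With fewer effective users per group the MDE grows; for instance, the same formula gives roughly 0.08 at about 2450 users per group.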
The experiment started on the 22nd of July 2021 and lasted until the 4th of August.\footnote{After the experiment, our system of personalized recommendations was launched for all eligible users on the \emph{Recommended Story} tray.} During the experiment, 3023 users from the experimental groups launched the app at least once, and of them, 525 viewed at least one story in the \emph{Recommended Story} tray.\footnote{The large difference between the number of randomized students and the number of students who were active during the experimental period arises because, first, a number of students were only active on other trays and, second, there is continuous churn, with students dropping off the app over time.} We report the balance of observable characteristics between the treatment and control groups in Appendix \ref{cobal}. In the evaluation of the experiment, we consider subjects that launched the app at least once during the experiment. This means that we exclude users that did not launch the app in the experiment period, but we include users who launched the app but did not click on any of the stories in the \emph{Recommended Story} tray. The reason for including the latter group is that users can see the front page of the first story in the \emph{Recommended Story} tray without starting to interact with any of the stories in the tray. Thus, we also capture the change from not interacting at all with content in the \emph{Recommended Story} tray to having some non-zero-utility interaction. \subsection{Outcome metrics} We focus on two types of outcomes: first, outcomes specific to the \emph{Recommended Story} tray and, second, overall app usage. Even though other trays in the app remained unchanged, we are interested in the impact on overall app usage to understand whether changes in one tray are compensated by altered utilization of content elsewhere, or whether the overall time spent on the app also shifts.
In this specific context, where many users are consuming content based on the recommendation of parents or teachers, understanding the overall elasticity of consumption with respect to changes in app quality is an important, strategic metric that can guide app development. We consider the following outcome metrics: (i) \emph{total utility} - the per-user sum of utility from all user-story interactions in the \emph{Recommended Story} tray during the experiment, (ii) \emph{total utility all trays} - the per-user sum of utility from all user-story interactions in all trays of the app during the experiment, (iii) \emph{total stories} - the per-user sum of completed stories in the \emph{Recommended Story} tray during the experiment, (iv) \emph{total stories all trays} - the per-user sum of completed stories in all trays, (v) \emph{total reading time} - the per-user sum of estimated reading time of stories completed in the \emph{Recommended Story} tray, (vi) \emph{total reading time all trays} - the per-user sum of estimated reading time of stories completed in all trays.\footnote{The estimates of the reading time per story are provided by \emph{S2M} as intervals, e.g., from two to four minutes. For each story we take the midpoint of the interval.} All metrics relate to total app utilization per user. This approach assigns the same weight to each user without distinguishing between users with varying consumption patterns. In \Cref{section:otherutils}, we additionally consider mean utility from user-story interactions. We constructed all variables based on raw log files provided by \emph{S2M}. These log files are internal data used by \emph{S2M} data analytics teams; they constitute the most accurate available picture of users' behavior on the platform. Nevertheless, occasional instrumentation errors occur.
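The per-user outcome totals defined above reduce to simple aggregations over the interaction logs. A minimal sketch with pandas follows; the column names and rows are a hypothetical schema for illustration, not the actual \emph{S2M} log format.

```python
import pandas as pd

# Hypothetical log schema; the real S2M logs differ.
logs = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3],
    "tray":      ["rec", "other", "rec", "rec", "other"],
    "utility":   [0.5, 0.2, 1.0, 0.3, 0.0],
    "completed": [1, 0, 1, 1, 0],
    # Midpoint of the editor-provided reading-time interval, in minutes.
    "read_time": [3.0, 0.0, 5.0, 3.0, 0.0],
})

# Per-user totals for the Recommended Story tray.
rec = logs[logs["tray"] == "rec"]
per_user = rec.groupby("user_id").agg(
    total_utility=("utility", "sum"),
    total_stories=("completed", "sum"),
    total_reading_time=("read_time", "sum"),
)

# Per-user totals across all trays.
all_trays = logs.groupby("user_id")["utility"].sum().rename("total_utility_all_trays")
```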
The type of instrumentation error that is problematic for our analysis is incorrect attribution of user-story interactions.\footnote{This can, for example, take the form of a user being assigned interactions of another user, or being assigned completions instead of views.} This results in some users having spurious, very high utilization during specific sessions. To avoid including such sessions in the analysis, we drop users that had at least one session in which they completed more than 10 stories. As a result, we drop 40 users. \Cref{tab:sum_stats_exp} provides summary statistics of variables describing utilization in the \emph{Recommended Stories} tray. \begin{table}[!htbp] \centering \caption{Summary statistics of outcome variables describing activity on the \emph{Recommended Stories} tray per group.} \label{tab:sum_stats_exp} \resizebox{\textwidth}{!}{ \begin{tabular}{>{}l|lrrrrrr} \toprule variable & group & min & mean & 75th pct. & 90th pct. & 95th pct. & max\\ \midrule {\textcolor{black}{\textbf{Total utility}}} & control & 0 & 0.28 & 0 & 0.51 & 1.6 & 13.6\\ {\textcolor{black}{\textbf{Total utility}}} & treatment & 0 & 0.45 & 0 & 1.30 & 3.0 & 23.9\\ {\textcolor{black}{\textbf{Total stories}}} & control & 0 & 0.15 & 0 & 0.00 & 1.0 & 11.0\\ {\textcolor{black}{\textbf{Total stories}}} & treatment & 0 & 0.27 & 0 & 1.00 & 2.0 & 21.0\\ {\textcolor{black}{\textbf{Total reading time}}} & control & 0 & 1.04 & 0 & 0.00 & 7.5 & 78.5\\ {\textcolor{black}{\textbf{Total reading time}}} & treatment & 0 & 1.94 & 0 & 7.25 & 11.0 & 148.5\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Summary statistics of variables measuring utilization of the \emph{Recommended Story} tray during the experiment. Sample includes only users that launched the app during the experiment period. 
}}} \end{table} Even though we consider only users that launched the app during the experiment, most of them had zero utilization of the \emph{Recommended Story} tray. Nevertheless, we still include them in the experiment evaluation, as different recommendation policies might impact the share of users consuming any content in the tray. From \Cref{tab:sum_stats_exp} we see that the treatment group has higher mean utilization and higher utilization at the 90th and 95th percentiles. We are also interested in the impact of personalized content recommendations in the \emph{Recommended Stories} tray on overall app usage. \Cref{tab:sum_stats_exp_allpaths} presents summary statistics of variables describing utilization across all trays in the app. \begin{table}[!htbp] \centering \caption{Summary statistics of outcome variables describing activity on all trays per group.} \label{tab:sum_stats_exp_allpaths} \resizebox{\textwidth}{!}{ \begin{tabular}{>{}l|lrrrrrr} \toprule variable & group & min & mean & 75th pct. & 90th pct. & 95th pct. & max\\ \midrule {\textcolor{black}{\textbf{Total utility}}} & control & 0 & 3.79 & 4.47 & 11.4 & 17.77 & 81.5\\ {\textcolor{black}{\textbf{Total utility}}} & treatment & 0 & 4.32 & 5.50 & 13.5 & 19.02 & 63.7\\ {\textcolor{black}{\textbf{Total stories}}} & control & 0 & 1.89 & 2.00 & 6.0 & 9.00 & 41.0\\ {\textcolor{black}{\textbf{Total stories}}} & treatment & 0 & 2.25 & 3.00 & 7.0 & 11.00 & 42.0\\ {\textcolor{black}{\textbf{Total reading time}}} & control & 0 & 12.65 & 14.50 & 40.0 & 65.32 & 273.0\\ {\textcolor{black}{\textbf{Total reading time}}} & treatment & 0 & 15.15 & 15.00 & 46.0 & 73.12 & 365.0\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Summary statistics of variables measuring the overall app utilization during the experiment. Sample includes only users that launched the app during the experiment period. 
}}} \end{table} In \Cref{tab:sum_stats_exp_allpaths} we see that mean outcomes are higher in the treatment group for all outcome variables. The treatment group has higher or equal outcomes at the 75th, 90th, and 95th percentiles. To compare the distributions of total utility in treatment and control, we carry out a Wilcoxon rank-sum test (one-sided alternative). Using the total utility in the \emph{Recommended Story} tray, we reject the hypothesis that the true location shift is less than zero with a p-value of 0.0007, and for all trays in the app with a p-value of 0.05. In \Cref{fig:total_utility_distribution} we present the entire distributions of total utility. Panel A shows cumulative distribution functions of total utility from the \emph{Recommended Story} tray per experimental group; panel B shows the difference between the probability density functions of the treatment and the control group. A larger share of control-group users did not have any positive-utility content interaction during the experiment, and the treatment group has higher probability mass at almost every non-zero utility level. \begin{figure}% \centering \caption{Distribution of total utility in \emph{Recommended Stories} tray per group.}% \subfloat[\centering Cumulative distribution function per group. Treatment in blue, control in red.]{{\includegraphics[width=15cm]{images/cdfs_total_utils.png} }}% \qquad \subfloat[\centering Difference between the probability density functions of treatment and control groups. Treatment in blue, control in red. ]{{\includegraphics[width=15cm]{images/diff_pdfs_total_utility.png} }}% \label{fig:total_utility_distribution}% \end{figure} \subsection{Average treatment effects}\label{ATE_section} Estimates of the average treatment effects are presented in \Cref{tab:ATE}. We use the difference-in-means, linear regression, and augmented inverse propensity weighting (AIPW) estimators. We find a strong positive effect of personalization on all outcome metrics.
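As an illustration, the three estimators can be sketched on synthetic data with a known randomization propensity of $0.5$ and a single covariate. This is a stylized stand-in, not our actual implementation, which adjusts for the full set of covariates described with \Cref{tab:ATE}.

```python
import numpy as np

def diff_in_means(y, w):
    """Difference-in-means ATE estimate."""
    return y[w == 1].mean() - y[w == 0].mean()

def reg_adjusted(y, w, x):
    """ATE from an OLS regression of y on treatment and a covariate."""
    X = np.column_stack([np.ones_like(x), w, x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1]  # coefficient on the treatment indicator

def aipw_ate(y, w, x, e=0.5):
    """Augmented inverse propensity weighting with linear outcome
    models and the known randomization propensity e."""
    X = np.column_stack([np.ones_like(x), x])
    b1, *_ = np.linalg.lstsq(X[w == 1], y[w == 1], rcond=None)
    b0, *_ = np.linalg.lstsq(X[w == 0], y[w == 0], rcond=None)
    mu1, mu0 = X @ b1, X @ b0
    scores = (mu1 - mu0
              + w * (y - mu1) / e
              - (1 - w) * (y - mu0) / (1 - e))
    return scores.mean(), scores.std(ddof=1) / np.sqrt(len(y))

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)                       # pre-experiment covariate
w = rng.integers(0, 2, size=n)               # randomized assignment
y = 0.5 * x + 0.2 * w + rng.normal(size=n)   # true ATE = 0.2

ate_dm = diff_in_means(y, w)
ate_reg = reg_adjusted(y, w, x)
ate_aipw, se = aipw_ate(y, w, x)
```

The per-user AIPW scores computed in `aipw_ate` are the same objects used in the heterogeneity regressions in the appendix.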
The impact on utilization of the \emph{Recommended Stories} tray has high economic and statistical significance. Total utility increases by 63\% ($\pm$ 28\%), the number of stories completed in the tray by 78\% ($\pm$ 39\%), and total reading time by 87\% ($\pm$ 41\%).\footnote{Confidence intervals in brackets. Standard errors based on the difference-in-means estimator.} We also find an increase in the utilization of the app across all trays; total utility increases by 14\% ($\pm$ 12\%), the number of stories completed by 19\% ($\pm$ 14\%), and the reading time in all trays by 20\% ($\pm$ 14\%). Thus, the increase in consumption of content in \emph{Recommended Stories} did not come entirely at the expense of consumption in other trays; on the contrary, this evidence suggests that users started using the app more.\footnote{In \Cref{ap_out} we provide a robustness check of these estimates by trimming the top 5\% of users with the highest daily number of completed stories instead of applying the cap of 10 stories.} \begin{table}[!htbp] \centering \caption{Estimates of average treatment effects for all outcome variables} \label{tab:ATE} \resizebox{\textwidth}{!}{% \begin{tabular}{>{}l|rrrrrrrr} \toprule variable & ATE & std.err. & p.value & ATE \% & ATE reg adj. & std. err. reg adj. & ATE AIPW adj. & std. err. 
AIPW adj.\\ \midrule {\textcolor{black}{\textbf{Total utility RS}}} & 0.17 & 0.05 & 0.00 & 60 & 0.18 & 0.05 & 0.18 & 0.05\\ {\textcolor{black}{\textbf{Total stories RS}}} & 0.12 & 0.04 & 0.00 & 78 & 0.13 & 0.03 & 0.13 & 0.04\\ {\textcolor{black}{\textbf{Total reading time RS}}} & 0.90 & 0.26 & 0.00 & 87 & 0.96 & 0.25 & 0.98 & 0.25\\ \addlinespace {\textcolor{black}{\textbf{Total utility all trays}}} & 0.52 & 0.27 & 0.05 & 14 & 0.50 & 0.26 & 0.51 & 0.26\\ {\textcolor{black}{\textbf{Total stories all trays}}} & 0.36 & 0.16 & 0.03 & 19 & 0.36 & 0.15 & 0.35 & 0.15\\ {\textcolor{black}{\textbf{Total reading time all trays}}} & 2.50 & 1.10 & 0.02 & 20 & 2.47 & 1.07 & 2.49 & 1.06\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Estimates of the average treatment effect using the difference-in-means estimator (first column), adjusting for covariates with a linear regression (fifth column), and adjusting for covariates using Augmented Inverse Propensity Weighting - AIPW (seventh column); covariates used: users' grade, user type (B2B, B2C, or paid), past utilization, niche type (indicator of whether a user consumes content that is popular amongst other users or more niche content), and past usage of the \emph{Recommended Story} tray. Columns two, six, and eight show standard errors. Column three presents p-values. The first three rows describe outcomes in the \emph{Recommended Story} tray; the bottom three rows describe overall app utilization.}}} \end{table} Additionally, we review differences in total utility across treatment and control in the most popular trays in the app. \Cref{fig:ate_all_paths} shows differences in average total utility between the treatment and control groups in other popular trays in the app. The experiment period is marked in blue; we can see that the difference between the two groups is statistically significant only for the \emph{Recommended Story} tray.
We carry out this comparison for the same users in a pre-experiment period; before the experiment, differences in average utility across treatment and control are insignificant in all of the trays (which is expected since the users were randomly assigned). \begin{figure}[!ht] \centering \caption{Difference in average total utility in treatment and control groups for eight most popular trays.} \includegraphics[scale = 0.63]{images/plot_al_stories.png} \caption*{\footnotesize{\textit{Note: Difference in average total utility between treatment and control groups for the eight most popular trays. Experimental period in blue, pre-experimental in red. The pre-experimental period is 7-19.06.2021; there are approximately twice as many users in the pre-experimental period (this date is chosen on the basis of being the closest two-week-long period without other major experiments and alterations in the app).}}} \label{fig:ate_all_paths} \end{figure} \paragraph{Impact on time spent on the app.} In \Cref{tab:ATE}, we see a strongly significant positive effect on time spent, both in the \emph{Recommended Story} tray and across all trays.\footnote{Note, this outcome metric is a sum of the duration of completed stories. We do not include stories that were started but not completed, since we do not observe the moment in which users stopped engaging with a specific story. The average number of stories started but not completed is roughly the same in treatment and control: 2.33 in treatment and 2.17 in control; the test for difference in means has a p-value of 0.55.} The increase in total time spent on the app is particularly interesting because it means that students prefer to spend time on the app rather than engage in other activities outside of the app. In our context, this result suggests that if the content is interesting to students, they are willing to go beyond the time prescribed by parents or teachers.
Generally, we can consider users responding to improvements in app quality on the intensive and extensive margins. Gains on the intensive margin are due to users better allocating their time; in our case, that is the reallocation of time to more attractive, personalized content in the \emph{Recommended Story} tray. The impact on the extensive margin, in contrast, means that users substitute away from other activities and start using the app more. The effect on the extensive margin highlights that app quality matters to users, and improving it will result in more time spent on the app. \section*{Appendix} \section{Covariate balance check}\label{cobal} Table \ref{tab:baltab} presents a comparison of means of user characteristics across treatment and control. We find that differences between treatment and control are small and statistically insignificant. \begin{table}[!htbp] \centering \caption{Balance of covariates across treatment and control} \label{tab:baltab} \resizebox{0.75\textwidth}{!}{% \begin{tabular}{>{}l|rrrrr} \toprule covariate & mean treatment & sd treatment & mean control & sd control & p value\\ \midrule {\textcolor{black}{\textbf{past utility}}} & 101.23 & 139.38 & 94.67 & 114.35 & 0.17\\ {\textcolor{black}{\textbf{past stories}}} & 57.14 & 89.03 & 52.75 & 77.22 & 0.16\\ {\textcolor{black}{\textbf{max streak}}} & 18.58 & 85.00 & 17.79 & 84.00 & 0.80\\ {\textcolor{black}{\textbf{share b2b}}} & 0.31 & 0.46 & 0.29 & 0.46 & 0.46\\ {\textcolor{black}{\textbf{share b2c}}} & 0.41 & 0.49 & 0.41 & 0.49 & 0.79\\ {\textcolor{black}{\textbf{share paid}}} & 0.25 & 0.43 & 0.26 & 0.44 & 0.66\\ {\textcolor{black}{\textbf{share grade 2}}} & 0.24 & 0.43 & 0.24 & 0.43 & 0.82\\ {\textcolor{black}{\textbf{share grade 3}}} & 0.22 & 0.42 & 0.22 & 0.41 & 0.69\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Means of users' characteristics in treatment and control. The last column shows the p-value from a t-test for difference in means. 
Category paid includes users from a paid fLive program and regular paying users; category b2b includes regular b2b customers and club 1br users, a B2B promotion.}}} \end{table} \section{Robustness check of the average treatment effect estimates}\label{ap_out} In table \ref{tab:ATE_rob} we present estimates of the average treatment effect based on data trimmed at the 95th percentile of daily stories completed, i.e., we remove users in the top 5\% by highest daily number of completed stories across all paths of the app. We find very similar estimates of the ATE for outcomes across all paths in the app. The path-specific estimates are smaller, but still high and statistically significant. The confidence intervals include the point estimates from the baseline specification. \begin{table}[!htbp] \centering \caption{Estimates of average treatment effects for all outcome variables} \label{tab:ATE_rob} \resizebox{0.8\textwidth}{!}{% \begin{tabular}{>{}l|rrrr} \toprule variable & ATE & std.error & p.value & ATE percentage\\ \midrule {\textcolor{black}{\textbf{Total utility RS}}} & 0.14 & 0.04 & <0.001 & 58\\ {\textcolor{black}{\textbf{Total stories completed RS}}} & 0.09 & 0.03 & <0.001 & 68\\ {\textcolor{black}{\textbf{Total reading time RS}}} & 0.67 & 0.21 & <0.001 & 77\\ \addlinespace {\textcolor{black}{\textbf{Total utility all paths}}} & 0.50 & 0.23 & 0.03 & 15\\ {\textcolor{black}{\textbf{Total stories completed all paths}}} & 0.32 & 0.13 & 0.01 & 21\\ {\textcolor{black}{\textbf{Total reading time all paths}}} & 2.07 & 0.88 & 0.02 & 20\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Estimates of the average treatment effect using the difference-in-means estimator. The first three rows describe outcomes in the \emph{Recommended Story} path; the bottom three rows describe overall app utilization. 
The last column shows the ATE estimate as a percentage of the baseline.}}} \end{table} \section{Alternative utility metrics}\label{section:otherutils} The utility metric that we have analyzed so far is the per-user sum of utility from all user-story interactions during the experiment period. This metric assigns the same weight to each user, irrespective of the number of stories consumed by that user. It also captures the fact that a new policy might impact the number of stories consumed by users. However, a firm introducing a new recommendation system might have a different objective, for example, to weight each user-story observation equally, or simply to focus on maximizing the mean utility each user receives. In \Cref{tab:ATE_alt}, we provide treatment effects on such alternative utility metrics. \begin{table}[] \centering \caption{Average treatment effects: alternative utility metrics}\label{tab:ATE_alt} \resizebox{0.7\textwidth}{!}{% \begin{tabular}{>{}l|rrrr} \toprule variable & ATE & std. error & p. value & ATE \% \\ \midrule {\textcolor{black}{\textbf{Mean utility RS}}} & 0.015 & 0.006 & 0.013 & 0.31\\ {\textcolor{black}{\textbf{Utility RS}}} & 0.006 & 0.013 & 0.626 & 0.01\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Estimates of the average treatment effect using the difference-in-means estimator. First row: mean utility per user in the \emph{Recommended Story} path (mean of the mean utilities within the group); only users that launched the app are considered. Second row: mean utility per user-story interaction. }}} \end{table} In the first row of \Cref{tab:ATE_alt} we present the average treatment effect on mean utility per user; it is insignificant. This metric weights each user equally, but does not capture the increase in the number of user-story interactions. Finally, in the second row, we present the treatment's impact on mean utility per user-story interaction; this metric puts more weight on heavy users, as they have more interactions.
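The difference between the two aggregation levels is only where the averaging happens; a small sketch with hypothetical interactions (one heavy user, one light user) makes the weighting explicit:

```python
import pandas as pd

# Hypothetical user-story interactions: user 1 is heavy, user 2 is light.
df = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2],
    "utility": [0.9, 0.8, 0.9, 0.8, 0.1],
})

# Mean utility per user: each user weighted equally.
per_user = df.groupby("user_id")["utility"].mean().mean()  # (0.85 + 0.1) / 2

# Mean utility per interaction: heavy users weighted by their activity.
per_interaction = df["utility"].mean()                     # 3.5 / 5
```

The heavy user pulls `per_interaction` far above `per_user`, which is why the per-interaction metric emphasizes frequent consumers.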
We find that there is a strongly positive and statistically significant treatment effect. This suggests that the personalized policy had a stronger positive effect on heavy users, who consume many stories, than on more infrequent users. \section{Heterogeneous treatment effects: regression analysis}\label{robust} To provide further robustness to the finding that heavy and niche users benefit from personalized recommendations, we present results of regressions of AIPW scores based on total utility on users' past utilization (see \cite{athey2019estimating} for the methodology). See \cref{tab:aipw_reg} for a summary of results. \begin{table}[!htbp] \centering \caption{Results of a regression of user types on AIPW scores.} \label{tab:aipw_reg} \resizebox{\textwidth}{!}{ \begin{tabular}{@{\extracolsep{5pt}}lccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{7}{c}{\textit{Dependent variable:}} \\ \cline{2-8} \\[-1.8ex] & \multicolumn{7}{c}{Total utility (aipw.scores)} \\ \\[-1.8ex] & (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline \\[-1.8ex] past utility & 0.001$^{***}$ (0.0003) & & & & & & 0.001 (0.0004) \\ stories completed & & 0.002$^{***}$ (0.001) & & & & 0.001$^{**}$ (0.001) & \\ heavy user utility & & & 0.366$^{***}$ (0.075) & & & & \\ heavy user completions & & & & 0.352$^{***}$ (0.076) & & & \\ niche type & & & & & 0.342$^{***}$ (0.074) & 0.231$^{***}$ (0.088) & 0.256$^{***}$ (0.091) \\ \hline \\[-1.8ex] Observations & 2,661 & 2,661 & 2,661 & 2,661 & 2,661 & 2,661 & 2,661 \\ R$^{2}$ & 0.006 & 0.007 & 0.009 & 0.008 & 0.008 & 0.010 & 0.009 \\ \hline \hline \\[-1.8ex] \textit{Note:} & \multicolumn{7}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} } \caption*{\footnotesize{\textit{Note: The outcome variable is the AIPW score of total utility per user. OLS estimator. All covariates are defined based on pre-experiment app usage. 
$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01}}} \end{table} Columns one to four of \cref{tab:aipw_reg} show that users with high past utilization have higher treatment effects. Column five shows a higher treatment effect for niche-type users. Finally, columns six and seven control for both heavy utilization and niche type; the niche-type coefficient remains high and statistically significant. \section{Data-driven treatment effect heterogeneity}\label{hte_appendix} We use the estimated causal forest to divide our users into quartiles according to their estimated CATE predictions (see \cite{chernozhukov2018generic} for details of this approach). To avoid using a model that was fitted on observations for which we make predictions, we use honest sample splitting with 10 folds. \Cref{fig:CATE_4} shows the predicted CATEs in the four groups. First of all, the treatment effects are quite similar across the four groups. The fourth quartile appears to have higher treatment effects, but the differences are small. \begin{figure}[H] \centering \caption{Average CATE within each ranking (as defined by predicted CATE). Predictions with OLS in blue and AIPW scores in red.} \includegraphics[height=3.5in]{images/cate_3.png} \label{fig:CATE_4} \end{figure} Finally, we can also compare average characteristics of individuals in the four quartiles. We present such a comparison in \Cref{fig:average_CATEs}. Heavy users (high maximal streak and freq-user indicator) appear more frequently in the highest quartile. We also see more niche users in the fourth group. We look in detail into these groups in the next subsections.
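The grouping exercise can be sketched as follows. The out-of-fold CATE predictions and AIPW scores below are synthetic stand-ins for the causal-forest outputs; only the ranking-and-averaging step is illustrated.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins: out-of-fold CATE predictions (as produced by
# honest sample splitting) and noisy but unbiased per-user AIPW scores.
cate_pred = rng.normal(0.2, 0.1, n)
aipw_scores = cate_pred + rng.normal(0, 0.5, n)

# Rank users into quartiles by predicted CATE, then average the AIPW
# scores within each group; under good calibration the group averages
# should increase across the ranking.
ranking = pd.qcut(cate_pred, 4, labels=[1, 2, 3, 4])
group_ate = pd.Series(aipw_scores).groupby(ranking).mean()
```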
\begin{figure}[H] \centering \caption{Average covariate values within group (based on CATE estimate ranking).} \includegraphics[height=5in]{images/average_CATEs.png} \label{fig:average_CATEs} \end{figure} \section{Model calibration with experimental data}\label{calibration} The main component of the recommendation system is the collaborative filtering model that predicts user utility from user-story interactions. In our analysis, high treatment effects suggest that the model has successfully identified user preferences and selected stories that users liked. In this section, we further evaluate the calibration of the model by correlating the model's predicted user utilities with observed utilities from the experiment. Figure \ref{hist} shows the histogram of the predicted utilities; they vary from high values of around 1 down to values of around 0.2. We do not see values lower than 0.2 because in the experiment we considered only stories at the top of the ranking of predicted utilities. \begin{figure} \centering \caption{Histogram of predicted utility} \includegraphics[scale = 0.6]{images/his_pred.png} \label{hist} \end{figure} To evaluate the model calibration, we regress the observed utility from the experiment on the predicted utility. Regression results are in Table \ref{tab:ev_pred_all}. \begin{table}[!htbp] \centering \caption{Correlation between utility predicted by the collaborative filtering model and observed in the experiment. $^{***}: p < 0.01$} \begin{tabular}{@{\extracolsep{5pt}}lccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{3}{c}{\textit{Dependent variable:} utility} \\ \cline{2-4} & All users & Frequent users & Infrequent users \\ \hline \\[-1.8ex] pred. 
utility & 0.44$^{***}$ (0.01) & 0.46$^{***}$ (0.01) & 0.43$^{***}$ (0.01) \\ \hline \\[-1.8ex] Observations & 9,344 & 2,268 & 7,076 \\ R$^{2}$ & 0.40 & 0.38 & 0.41 \\ \hline \hline \\[-1.8ex] \end{tabular} \label{tab:ev_pred_all} \end{table} \Cref{tab:ev_pred_all} shows that the model predictions are strongly correlated with observed utilities. The model is better calibrated for frequent users, for whom we have longer consumption histories (although the difference is small). We also break down the analysis by experimental group, as shown in \Cref{tab:ev_pred_groups}. \begin{table}[!htp] \centering \caption{Results from linear regressions of actual utility on predicted utility. Column (1) treatment group, column (2) control group.} \begin{tabular}{@{\extracolsep{5pt}}lcc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{2}{c}{\textit{Dependent variable:} utility} \\ \cline{2-3} & treatment & control \\ \hline \\[-1.8ex] pred. utility & 0.33$^{***}$ (0.01) & 0.60$^{***}$ (0.01) \\ \hline \\[-1.8ex] Observations & 4,695 & 1,773 \\ R$^{2}$ & 0.32 & 0.51 \\ \hline \hline \\[-1.8ex] \end{tabular} \label{tab:ev_pred_groups} \end{table} \Cref{tab:ev_pred_groups} shows that the predictions from the model correlate strongly with observed utility in both the treatment and control groups. The model is much better calibrated in the control group; this is not surprising, because the model was trained on similar data. \section{Introduction} Recommendation systems, the algorithms that determine which pieces of content are displayed to each user, have been widely deployed in online services and credited with being an important factor in determining user engagement. Personalized content recommendations have contributed to the success of some of the most valuable companies in the world. 
Market leaders in the entertainment sector (e.g., \emph{Netflix} or \emph{Spotify}) and in online retail (e.g., \emph{Amazon}) are at the forefront of developing algorithms that provide personalized recommendations, and reap high benefits from implementing them.\footnote{See \cite{gomez2015netflix} for a discussion of the purpose and business value of personalized recommendation algorithms at \emph{Netflix}.} In principle, recommendations, particularly personalized ones, have the potential to create significant value in other settings where user preferences over items vary. \par However, the incremental benefits of personalization have also been challenged, and the empirical question of its impact remains open in many settings, particularly in education. In real-world educational applications, the user base may be orders of magnitude smaller than that of popular entertainment applications, so it is unclear whether data-driven personalization would be effective in such settings. In addition, the benefits of personalization depend on the fundamental preferences of the users (e.g., students); if their preferences are homogeneous, then human curation or simple popularity-based algorithms may be sufficient. Therefore, empirical evidence is required to understand the importance of personalization in a given setting. \par In education and training, students might spend more time with educational material (and potentially learn more) if it matches their interests. Yet, perhaps surprisingly, the publicly available evidence on the impact of personalized content recommendations in education is limited. This paper aims to fill some of these gaps by providing evidence from a large-scale randomized controlled trial (RCT) designed to measure the impact of replacing manually curated, editor-based recommendations with personalized recommendations in \emph{Freadom}, an educational app designed to help children in India learn to read in English. 
In particular, we conducted a two-week-long randomized experiment, in which the control group was exposed to stories based on the status quo, a system in which editors select content for all users (the ``editorial-based'' system), while the treatment group was exposed to stories from a personalized recommendation system in one section of the app. \par Our most important finding is that personalization of recommended content leads to a substantial increase in user engagement with the app compared to the editorial-based system: our estimate of the increase in usage of the personalized section is 63\% ($\pm$ 28\%).\footnote{In parentheses we report a 95\% confidence interval.} A key element of the experiment is that the personalized content was shown in one section of the app; thus, it is possible that users might simply shift from consuming editor-based content to personalized content without increasing overall engagement. We therefore also estimate the total increase in app usage across all sections of the app, finding an increase of approximately 14\% ($\pm$ 12\%). \par Increases in the consumption of educational content of the magnitude that we estimate can lead to substantial societal benefits. Notably, as the app's content is curated by pedagogy experts, higher levels of engagement are likely to accelerate learning. It is worth noting that \emph{Freadom} has wide reach at a low cost; therefore, improving its efficiency can potentially benefit a large user base. \par Personalization of content selection in the ed-tech context typically takes the form of either assigning learning materials at the difficulty level that is right for the specific user or adjusting the content's style so that it matches the user's preferences.\footnote{See \cite{escueta2020upgrading} for a review of the literature on the impact of personalizing the difficulty of learning content on learning outcomes.} In this paper, we focus on the latter. 
It is not a priori clear that personalized content increases app usage. Notably, learners might engage with ed-tech products following a specific routine or, in the case of children, the recommendations of parents or teachers. The finding that overall usage of the app increases following the introduction of personalized recommendations suggests that investments in recommendation systems in the ed-tech context can create substantial value. \par To better understand the potential impact of the intervention, consider the context of the \emph{Freadom} app. It is developed by \emph{Stones2Milestones (S2M)} and targeted at children aged 3 to 12 years old. Short illustrated stories are the main content of the app. Each story is a self-contained learning unit, generally consisting of a reading part and a quiz. Stories are curated by \emph{S2M} pedagogy experts; they are grade-appropriate and have clear educational goals. \emph{Freadom} is mostly used on smartphones, where the main page of the app consists of various sections. Each section contains a tray of stories. A tray is a sequence of stories sorted by an algorithm. Trays are labelled with different names, e.g., \emph{Trending Now}, \emph{New Releases}, or \emph{Recommended Story}, and display stories following different algorithms (e.g., \emph{New Releases} features stories recently added to the app). At the time we conducted this research, the algorithms assigning stories to trays were not personalized; stories were selected either by manual curation or by simple rules, such as featuring the most recently added stories. The first step of the project was to develop a personalized recommendation system using data on historical user-story interactions. 
We compared several alternative approaches, selecting an approach based on collaborative filtering \citep{mnih2007probabilistic, rendle2010factorization}, which performed best among the alternatives we considered in terms of estimated policy values (estimated using doubly robust offline policy evaluation \citep{gilotte2018offline, zhan2021policy}). However, offline analysis is tailored to understanding the impact of recommending individual stories to users on their engagement with those particular stories; it does not capture the effects of sustained exposure to a personalized recommendation system. In addition, offline policy evaluation of recommendation systems has known limitations in terms of both bias and variance. This motivated our next step, which was to design a randomized controlled trial (RCT) comparing the status quo system of manually curated recommendations to the personalized algorithm. To evaluate the impact of personalized content recommendations on the utilization of the app, we carried out a randomized experiment. Since collaborative filtering requires substantial user history to perform well, the experiment included users who had interacted with at least sixty stories before the start of the experiment. The main outcome metric is a user's total utilization of the app, defined as the sum of utilities from all user-story interactions during the experiment. Utility is a constructed metric, which assigns a value of one if a user completed a story, 0.5 if a user started the story but did not finish it, and 0.2 if the user clicked on the story to view the description but did not start it. Otherwise, the user is assigned a utility of zero. The experiment lasted for two weeks. We summarize our findings next. We find that users in the treatment group had a 63\% ($\pm$ 28\%) higher total utility from content interactions in the personalized tray compared to users in the control group. 
Treated users also completed 78\% ($\pm$ 39\%) more stories and spent 87\% ($\pm$ 41\%) more time consuming content on the personalized tray. We document significant patterns of heterogeneity in treatment effects. Users who consumed more niche content (i.e., content that is less popular overall) in the pre-experimental period had substantially higher treatment effects than users who preferred popular content. This is an expected result, as the editorial team selects content targeted at typical tastes; therefore, users whose preferences differ from those of the majority are likely to benefit more from personalization. Furthermore, users with long histories of content interactions also gained more from the personalization of content. This is because the performance of the collaborative filtering model improves when more information about past interactions is available. Last, we compare outcomes of users that had used the \emph{Recommended Story} tray in the past and users that had not. We find statistically significant treatment effects in both groups. The positive treatment effect for users who had not interacted with stories in this tray in the past suggests that users explore the app enough to notice content even in trays they rarely use and adjust their consumption decisions. Users who received personalized recommendations in the \emph{Recommended Story} tray increased utilization of the app across all trays. We find a 14\% ($\pm$ 12\%) increase in the total utilization of the app, a 19\% ($\pm$ 14\%) increase in the number of completed stories, and a 20\% ($\pm$ 14\%) growth in the time spent reading stories. Also, users in the treatment group who had not read any stories on the \emph{Recommended Story} tray prior to the experiment exhibited a much larger (statistically significant) propensity to start reading on this tray compared to users in the control group. 
These results suggest that the increased usage of the \emph{Recommended Story} tray is not driven entirely by substitution away from other trays in the app. On the contrary, we find that users substitute away from other non-app activities to use the app more. In summary, better content selection can increase the overall utilization of an ed-tech app, justifying investments in developing recommendation systems. \paragraph{Literature review.} This paper relates to several strands of literature. Personalized recommendation systems have been studied intensively in entertainment \citep{davidson2010youtube, gomez2015netflix, jacobson2016music, holtz2020engagement} and in retail shopping \citep{linden2003amazon, sharma2015estimating, smith2017two, greenstein2018personal, ursu2018power}. For example, in the entertainment context and using a similar approach to ours, \cite{holtz2020engagement} show that personalized recommendations increase consumption of podcasts on Spotify. However, there is little empirical evidence of the usefulness of recommendation engines beyond entertainment platforms and e-commerce. This paper attempts to fill this gap by providing evidence from the ed-tech sector.\footnote{\cite{DBLP:reference/sp/DrachslerVSM15} provide an extensive review of the literature on recommendation systems in ed-tech and point out a shortage of papers documenting the efficiency of recommendation systems using reliable evaluation methods. They conclude the review by calling for more comprehensive user studies in a controlled experimental environment.} Additionally, we show that personalized recommendations can be an effective method of boosting user engagement in settings with moderate amounts of data. \par The existing evidence on the efficacy of recommendation systems in education is generally based on small studies that combine the introduction of personalized recommendations with other changes to the user interface. 
\cite{ISIS} use a recruited group of university students to compare the effect of showing personalized recommendations of course materials with showing no recommendations at all. While this study is an A/B experiment, it bundles two changes in one treatment: adding a user interface element and personalizing recommendations. Furthermore, it is based on a relatively small sample of 250 subjects. \cite{Ruiz-Iniesta2018} develop and test a recommendation system on an ed-tech platform called \textit{Smile and Learn}, and evaluate it in an observational study. Their proposed treatment is a new user interface component with recommendations generated using collaborative filtering. The newly introduced system helps users navigate the app and reach desired content more quickly. They find substantial increases in consumption of recommended items versus non-recommended items. However, the treatment in \cite{Ruiz-Iniesta2018} has two elements: a user interface component that simplifies app navigation and a personalization component. Our work provides results that isolate the impact of personalization on the consumption of learning items.\footnote{The contexts of \cite{Ruiz-Iniesta2018} and of this paper also differ substantially. In our setting, we have thousands of stories to choose from, as compared to around one hundred games. This seemingly technical difference results in problems of data sparsity, which is a serious challenge in creating recommendations for stories that are relatively new. In \Cref{sectionoffline}, we present a methodology for designing and evaluating a recommendation system in such settings.} To the best of our knowledge, our paper is the first large-scale study in the ed-tech context that estimates the effect of personalization on user engagement in isolation from other changes in the app. 
Second, our work contributes to the growing literature assessing the effects of personalized recommendation systems on the diversity of consumed content. To our knowledge, we are the first to do so in an ed-tech context. \cite{anderson2020algorithmic, holtz2020engagement} provide evidence from randomized experiments indicating that personalized recommendations reduce the diversity of content consumed on \emph{Spotify}. In the context of retail, \cite{AMAZONREC} show that, while recommendations reduce within-consumer diversity, their effect on aggregate diversity is ambiguous. \cite{NEWSREC} find that recommendations reduce consumption diversity in the context of news consumption. In this paper, we show that users with niche preferences are recommended more niche content and interact less often with stories liked by the majority of users. This closely relates to the literature documenting ``filter bubbles'' due to the personalization of content on media platforms \citep{haim2018burst, moller2018not}. Last, this paper relates to a rich literature on technology-assisted language learning.\footnote{See \cite{garrett2009computer}, \cite{zhao2003recent}, and \cite{tafazoli2019technology} for reviews of this literature.} Personalization in the language-learning context has been shown to be effective in task assignment \citep{xie2019personalized} and learning resource recommendations \citep{sun2020vocabulary}. We contribute to this literature by bringing causal evidence of the impact of personalization on time spent interacting with language-learning content. The rest of the paper is organized as follows. \Cref{empdata} details the empirical setting. \Cref{sectionoffline} presents the methodology used to develop and test the recommendation model using offline data. \Cref{RCT} describes the design of the randomized experiment and presents the results. Finally, \cref{conclusion} concludes. 
\section{Empirical setting}\label{empdata} \emph{Stones2Milestones (S2M)} was founded in 2009 in India. The company provides technology-enabled English education through a variety of programs serving a diverse set of users. The main product of \emph{S2M} is a smartphone app called \emph{Freadom}, aimed at 3 to 12-year-old children. Throughout 2021, the average daily number of users was approximately 7,500. Users come to the app through two main channels: customer acquisition through schools, where the \emph{S2M} sales team reaches out to schools that then recommend the app to their students (B2B), and independent users who download the app from the app store (B2C). Additionally, there is a paid version of the app, which gives access to some additional non-essential features. The main content of \emph{Freadom} is short illustrated stories. Stories are organized in different trays based on various themes, such as \emph{Trending now} or \emph{Recommended Story}. Figure \ref{screenshot} presents screenshots from the app. \Cref{fig:f1} shows the landing page that a user sees when launching the app. The landing page contains trays of stories and news, but also occasional promotions and announcements. \Cref{fig:f2} presents the \emph{Stories} subpage, which contains only stories. The tray displayed at the top is \emph{Recommended Story}. Each tray is a slate of stories that a user can browse to choose the ones to read. The selection of stories into trays follows various rule-based algorithms. For example, \emph{Trending now} displays stories that are currently consumed by many users. \Cref{fig:f3} presents the top part of the \emph{Recommended Story} tray. Importantly, during the pre-experimental period, none of the trays of the app assigned content to students in a personalized fashion. 
\begin{figure}[!ht] \caption{Screenshots from \emph{Freadom}.}\label{screenshot} \subfloat[Home Feed page: users open the app on this page.]{\includegraphics[scale = 0.13]{images/freadom_home_tray.jpg}\label{fig:f1}} \hfill \subfloat[Stories page: contains all story trays.]{\includegraphics[scale = 0.13]{images/freadom_stories_tray.jpg}\label{fig:f2}} \hfill \subfloat[\emph{Recommended Stories}: one of the most popular trays.]{\includegraphics[scale = 0.13]{images/freadom_rs_path.jpg}\label{fig:f3}} \end{figure} \emph{Freadom} stories are curated by the \emph{S2M} pedagogical team together with publishers specializing in educational content for kids. They are age-appropriate and created with a pedagogical goal in mind. Therefore, \emph{S2M} operates under the premise that maximizing the consumption of content on the app helps learners achieve their educational goals. \emph{Freadom} users can browse stories in the selected tray before deciding on which one to click. Clicking allows the user to open the story and view its description. Many users that view a description decide to go back to browsing; others start the story but do not finish it. Only a small minority of user-story interactions lead to the completion of a story. \Cref{fig:utility_funnel} shows a content interaction funnel representing the frequencies of users' content consumption decisions. We divide users into three main categories, \emph{B2C}, \emph{B2B}, and \emph{paid}, and show the frequencies of different outcomes of interactions with stories. Thus, the unit of interest is the interaction between a user and a story. Users decide whether to view a story (second column), whether to start reading it (third column), and whether to complete it (final column). Users tend to explore many stories, acquiring information about them through viewing or starting, before deciding which stories to complete. 
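For illustration, such funnel shares can be computed directly from an interaction log; the following minimal Python sketch (with hypothetical field names and records, not our actual data schema) computes, for one user type, the share of interactions reaching at least each stage:

```python
# Hypothetical interaction records: (user_type, outcome), where outcome is the
# deepest stage reached by the user-story interaction.
interactions = [
    ("B2C", "viewed"), ("B2C", "completed"), ("B2B", "none"),
    ("B2B", "started"), ("paid", "completed"), ("B2C", "none"),
]

# Funnel stages, ordered from shallowest to deepest.
ORDER = ["none", "viewed", "started", "completed"]

def funnel_shares(records, user_type):
    """Share of a user type's interactions reaching at least each stage."""
    outcomes = [o for t, o in records if t == user_type]
    total = len(outcomes)
    reached = lambda stage: sum(ORDER.index(o) >= ORDER.index(stage) for o in outcomes)
    return {stage: reached(stage) / total for stage in ORDER}

shares = funnel_shares(interactions, "B2C")
```

Each share is cumulative by construction: an interaction that completed a story also counts as viewed and started, mirroring the nested columns of the funnel figure.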
\begin{figure}[H] \centering \caption{User-story interaction utility funnel} \includegraphics[scale = 0.60]{images/alluvial_new.png} \caption*{\footnotesize{\textit{Note: Utility funnel broken down by the type of user (B2B, B2C, paid) and the outcome of user-story interactions. Intense colors mark shares of user-story interactions that resulted in story completion. B2B users in red, B2C in blue, and paid users in green. The first column shows shares of user categories; the second shows the share of users that viewed the story, the third the share that started the story, and the fourth the share that completed it.}}} \label{fig:utility_funnel} \end{figure} \section{Using offline data to develop a recommendation system}\label{sectionoffline} In this section, we describe which type of recommendation system we deployed and why. The objective is both to document the process and decisions behind the development of the recommendation system that we eventually implemented and to serve as a guide for practitioners in settings similar to ours who are interested in building such a system. \subsection{Target metric and datasets} \paragraph{Our goal.} With a story catalog as large as \emph{Freadom}'s, it is impractical for a child to manually choose which stories to consume. Just as in the context of entertainment (movie recommendations) or e-commerce (product recommendations), serving \emph{personalized} recommendations is likely to elicit the most child engagement on the app. Since stories are curated with a focus on pedagogy, this potentially accelerates child learning. Before the experiment, \emph{Freadom} served stories based on editorial recommendations by experts, which were the same across users, with no personalization. Therefore, the goal of our research was to develop a personalized recommendation system and evaluate its efficacy. 
\paragraph{Datasets available.} We have historical log data of children's interactions with stories. Every entry of this dataset records an interaction of a child with a story, as well as to what extent the child consumed the story: specifically, whether the child did not consume it at all, considered reading it by viewing the story description card, started reading it, or completed reading it. We also have information about a child's grade level, as well as a tag recording the collections a story belongs to; a collection is a theme such as \textit{animal} or \textit{sport}.\footnote{This is analogous to the type of data in the \textit{movielens} benchmark dataset \citep{movielens}; however, a key difference is that our dataset does not contain a rich set of child and story characteristics.} \paragraph{Utility.} Given our interaction data, it is not obvious which outcome to maximize as a measure of engagement. There are several apparent options, such as maximizing the story-card view rate, the start rate, or the completion rate. While the ultimate goal of recommending stories to users is that they complete them, viewing and starting a story are prerequisites to completing it, and these are outcomes on a continuum rather than unrelated outcomes. Therefore, we define a metric, \emph{utility}, determined together with \emph{S2M} to reflect their organizational objectives. Utility is derived from user-story interactions as follows: \begin{itemize} \item If a story was not shown, or was shown to a user who did not interact with any story in that specific session, we do not assign any value: $NA$,\footnote{The user-item interactions database contains records only of sessions that resulted in at least one click on a story. 
Thus, sessions in which a user launched the app and skipped all shown stories are not recorded in the data that we have access to.} \item If the story was shown to the user, but the user skipped the story and viewed another story later in the session: $0$, \item If the user viewed the story page, but did not start the story: $0.3$, \item If the user viewed and started, but did not complete the story: $0.5$, \item If the user viewed, started, and completed the story: $1$. \end{itemize} We note that we distinguish between the user choosing not to engage with a story that was shown ($0$) and the user never having an opportunity to interact with a story because it was not shown ($NA$), a critical distinction for understanding user preferences that is not always made in previous studies. The above utility assignment can be thought of as giving us a utility matrix, with a child represented by a row and a story by a column. This is the main building block of the recommendation system. \subsection{Recommendation System} We now present how we used observational data on user-story interactions to design the recommendation system for \emph{S2M}. Our dataset does not contain rich user and story characteristics; therefore, we chose a classic collaborative filtering model \citep{mnih2007probabilistic, rendle2010factorization} as the basis for our personalized recommendation system. We start by describing the collaborative filtering model. \paragraph{Model Description} We consider two models for our recommendation system and evaluate them based on out-of-sample performance in the observational data. In what follows, $\epsilon_{ij}$ is an unobserved error drawn iid for each child/story pair, and $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function. The first is a popularity-based model (also called a two-way fixed effects model, TWFE): 
\begin{equation}\label{eq:twfe} U_{ij} = \sigma(\beta_{0} + \Psi_{i} + \Gamma_{j} + \epsilon_{ij}), \end{equation} where $\Gamma_{j}$ and $\Psi_{i}$ are user and story fixed effects, respectively. Note that the popularity-based model is non-personalized: stories are simply ranked by their mean popularity, and users receive the stories at the top of the ranking. We include this model for two reasons: first, it is a useful benchmark for evaluating personalized models; second, such a model is simpler to implement, so to justify the development and introduction of a more complicated personalized model, it is useful to show that simpler models do not achieve similar performance. Our second and main candidate model is the collaborative filtering approach: \begin{equation}\label{eq:cf} U_{ij} = \sigma(\Lambda_{j} \times \Theta_{i} + \beta_{0} + \Psi_{i} + \Gamma_{j} + \epsilon_{ij}), \end{equation} where $\Lambda_{j}$ is a latent preference vector for each user and $\Theta_{i}$ a latent vector for each story. This approach follows the seminal model proposed in \cite{rendle2010factorization}. The latent vectors $\Lambda_{j}$ and $\Theta_{i}$ have length $k$ and are rows of the matrices $\Lambda$ and $\Theta$, respectively; each row is a $k$-dimensional representation of a user's latent preferences or a story's latent characteristics. This approach allows us to simplify the utility matrix: instead of modeling the preferences of each user for each story, we express user preferences and story features as $k$-dimensional vectors. These dimensions (or axes of variation) can be thought of as characteristics (e.g., a serious or a funny story); each story and each user is placed along these axes. A story with a high value in a particular dimension is expected to be preferred by users who also score highly on that dimension. 
In sum, the collaborative filtering model identifies a low-dimensional representation of both users and stories, so that users with preferences for a particular type of story are located close (in the sense of Euclidean distance) to one another and to their preferred story types. The collaborative filtering model can achieve high performance if the matrices $\Lambda$ and $\Theta$ represent underlying preferences well. The more data we have on users' and stories' past interactions, the higher the chance of arriving at an accurate representation of the utility matrix. Crucially, this depends on how well structured the data is; if there are clear repetitive patterns of user preferences and story types, we are more likely to capture them with this approach. \paragraph{System Implementation Details.} We build our collaborative filtering system using the PyTorch \citep{pytorch} framework in Python. We fit the model with stochastic gradient descent (SGD) using the Adam optimizer. To regularize, we use an L2 penalty on the parameters. We tune the number of latents, $k$ (the dimension of $\Lambda_{j}$ and $\Theta_{i}$), and the L2 penalty parameter using a randomly held-out validation set.\footnote{We split our dataset into a train, test, and validation set at random. Another approach is to split the data by time into a train dataset and a test dataset, so that we test on the period following our training data; in this setting, the train data is randomly split into a train set and a validation set. We also implemented this approach; it leads to similar results.} Once we have the optimal hyperparameters, we refit the model on the entire dataset, which gives us the final model. 
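As a concrete illustration, the following minimal NumPy sketch fits a model of the form in \cref{eq:cf} to synthetic data with full-batch gradient descent and an L2 penalty on the latents (our production system uses PyTorch with the Adam optimizer and tuned hyperparameters; all sizes, learning rates, and variable names below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Synthetic utility matrix; `mask` marks observed user-story pairs (NA = unobserved).
n_users, n_stories, k = 30, 40, 3
true_U = sigmoid(rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_stories)))
mask = rng.random((n_users, n_stories)) < 0.5

# Parameters: latent vectors plus user/story fixed effects and a constant.
Theta = 0.1 * rng.normal(size=(n_users, k))     # per-user latents
Lam = 0.1 * rng.normal(size=(n_stories, k))     # per-story latents
psi = np.zeros(n_users)                         # user fixed effects
gam = np.zeros(n_stories)                       # story fixed effects
beta0 = 0.0
lr, l2 = 1.0, 1e-4                              # step size, L2 penalty

def predict():
    return sigmoid(Theta @ Lam.T + beta0 + psi[:, None] + gam[None, :])

def mse():
    return float(np.mean((predict() - true_U)[mask] ** 2))

mse_before = mse()
n_obs = mask.sum()
for _ in range(500):
    P = predict()
    # Gradient of the masked mean squared loss with respect to the logits.
    g = 2.0 / n_obs * np.where(mask, P - true_U, 0.0) * P * (1.0 - P)
    # Simultaneous parameter updates (RHS uses the pre-update values).
    Theta, Lam = Theta - lr * (g @ Lam + l2 * Theta), Lam - lr * (g.T @ Theta + l2 * Lam)
    psi -= lr * g.sum(axis=1)
    gam -= lr * g.sum(axis=0)
    beta0 -= lr * g.sum()
mse_after = mse()
```

The masking step mirrors the NA-versus-zero distinction above: unobserved pairs contribute no gradient, while observed skips ($0$) actively push predictions down.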
\paragraph{Personalized and baseline model performance on offline data.} To test the accuracy of the prediction models, we compare the performance of the popularity-based model from \Cref{eq:twfe} to the performance of the collaborative filtering model from \Cref{eq:cf}; additionally, for completeness, we include the performance of a model with just a constant term (mean model). We compare the performance of these models in terms of Mean Squared Error (MSE) calculated on randomly held-out historical data. See \Cref{tab:mse-table} for results. We find that collaborative filtering outperforms the other models. \begin{table} \caption{MSE values for collaborative filtering (PYTF), two-way fixed effects (TWFE), and a simple mean model.} \begin{center} \resizebox{0.33\textwidth}{!}{% \begin{tabular}{@{}rrr@{}} \toprule \textbf{PYTF} & \textbf{TWFE} & \textbf{Mean Model} \\ \midrule 0.0962 & 0.1022 & 0.1309 \\ \bottomrule \end{tabular}% } \label{tab:mse-table} \caption*{\footnotesize{\textit{Note: Models are trained and evaluated on the dataset including all users and stories (no filtering based on user-item history length).}}} \end{center} \end{table} \paragraph{Determining the target audience.} The performance of the collaborative filtering model depends on the lengths of the interaction histories of users and stories.\footnote{Our approach is generally not suitable for new users and new stories. The so-called cold-start problem (assigning content recommendations to users who have not yet revealed preferences through content interactions, or to stories whose latent style is still unknown) is well documented in the recommendation systems literature; see, e.g., \cite{lam2008addressing, lika2014facing, bobadilla2012collaborative}.} To determine the right set of users and stories for the deployment of the recommendation model, we compared the MSEs of utility predictions from the selected utility model trained over different amounts of data and tested on a held-out test set. 
The training sets differ by the minimum interaction histories of stories and users. This analysis tells us how much user and story history is necessary for the recommendation model to provide high-quality recommendations. \Cref{tab:my-table} presents MSEs for nine specifications depending on the length of the history of stories (columns) and of users (rows). We evaluate all specifications on the same dataset with thresholds (20, 20). \begin{table} \caption{Collaborative Filtering Model Mean Squared Error for various user and story histories.} \begin{center} \resizebox{0.4\textwidth}{!}{% \begin{tabular}{@{}crrrr@{}}\toprule & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textbf{Stories}} \\ & \textbf{} & \textbf{20} & \textbf{60} & \textbf{100} \\ \midrule \multicolumn{1}{c|}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\textbf{Users}}}} & \multicolumn{1}{r|}{\textbf{20}} & 0.0967 & 0.0931 & 0.0932 \\ \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{\textbf{60}} & 0.0964 & 0.0931 & 0.0931 \\ \multicolumn{1}{c|}{} & \multicolumn{1}{r|}{\textbf{100}} & 0.0959 & 0.0930 & 0.0931 \\ \bottomrule \end{tabular}% } \caption*{\footnotesize{\textit{Note: The rows represent the minimum interactions per user, and the columns represent minimum interactions per story. We use a single trained model on the largest dataset (20, 20), and report MSEs on different test sets.}}} \label{tab:my-table} \end{center} \end{table} Based on the results in \Cref{tab:my-table}, we decided that personalized recommendations would be deployed for users and stories with at least 60 interactions in our historical data. 
Two factors contributed to this decision: first, high-quality predictions as measured by MSE; second, the sample-size requirement for the A/B experiment.\footnote{Approximately 15\% of users in the entire user base and 92\% of all stories in the app have at least 60 interactions.} \paragraph{Choosing the right tray for the new recommendation system.} \emph{Freadom} is built around multiple horizontally scrollable trays. Trays vary in popularity; one important driver of a tray's popularity is its position on the page. The most popular trays are \emph{Popular}, \emph{Trending Now}, \emph{Recommended Story}, and \emph{Today For you}. We chose to deploy the recommendation model in the tray called \emph{Recommended Story}. This tray was popular amongst more experienced users of the app, which allowed us to deploy the new system to many users. \emph{S2M} had also originally intended the tray for personalized recommendations, hence the name \emph{Recommended Story}. Before we deployed our recommendation model, the tray content was chosen by \emph{editors} on a weekly basis. \paragraph{Re-ranking over time.} On our chosen tray, \emph{Recommended Story}, stories are presented in a slate of 15 entries. The slate design task consists of deciding how to rank the stories and how frequently to update the ranking. We wanted to keep the ranking and refreshing module similar to the baseline one so that we could focus on isolating the effects of personalization. Due to computational constraints, new utility predictions were generated once per week. Thus, every week we would rank stories in decreasing order of predicted utility, and the top 15 stories would make the slate. Within the week, we would remove completed stories every day. Completed stories were replaced by the stories that appeared next in the ranking of predicted utility.
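The weekly slate construction just described can be sketched roughly as follows; the data structures and function names are our own illustration, not the deployed code.

```python
# Hedged sketch of the weekly slate logic: rank stories by predicted
# utility, fill a 15-entry slate, and (daily) skip completed stories so
# the next-ranked stories move up.

SLATE_SIZE = 15

def build_slate(predicted_utility, completed, slate_size=SLATE_SIZE):
    """Rank stories by predicted utility (descending) and fill the slate,
    skipping stories the user has already completed."""
    ranking = sorted(predicted_utility, key=predicted_utility.get, reverse=True)
    return [s for s in ranking if s not in completed][:slate_size]

# Hypothetical weekly predictions for 100 stories, utility decreasing by rank.
preds = {f"story_{i}": 1.0 - i / 100 for i in range(100)}
slate = build_slate(preds, completed={"story_0", "story_3"})
print(slate[:3])  # → ['story_1', 'story_2', 'story_4']
```

Rerunning `build_slate` each day with an updated `completed` set reproduces the daily replacement behavior without regenerating the weekly predictions.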
Whenever the user was active in the tray for two days but did not engage with any of the top 3 stories, we would remove those stories from their tray. This decision was motivated by the limitations of our data collection process, which does not allow us to observe story-skipping behavior when the user did not click on any story during the session. This prevents us from accurately determining the stories that users chose to ignore. Last, the ranking did not change if the user was inactive on the tray. The ranking algorithm was run every day. \input{ATE_section} \subsection{Heterogeneous treatment effects}\label{HTE_section} The evidence presented so far relates to the average impact of personalization. In this section, we analyze heterogeneity in treatment effects across past usage intensity, taste for popular vs. niche content, and usage of the \emph{Recommended Story} tray prior to the experiment. In \Cref{hte_appendix}, we carry out a data-driven analysis of treatment heterogeneity and find a moderate amount of it. We expect the personalization of content recommendations to mostly benefit heavy and niche-type users. Frequent users leave a long record of user-story interactions, which allows us to understand their tastes well. Additionally, we expect niche users to benefit greatly because, in the baseline system, stories are targeted at a typical user, whereas in the personalized system, their niche tastes are taken into account. \paragraph{Definitions of users' types.} To determine whether someone is a heavy user, we analyze pre-experimental app usage. For each user, we compute the total utility and the total number of completed stories prior to the start of the experiment.
Additionally, we construct indicator variables: \emph{high utility user} and \emph{high story completion user}, which take a value of one when a user is in the top half of the distribution of past utilization (past number of completed stories) and zero otherwise. Niche-type users are users who consume content that is generally not very popular. We consider a story to be popular if it is among the top 25\% of stories in terms of pre-experiment completions.\footnote{The top 25\% of stories correspond to 67\% of impressions in \emph{Recommended Story} during the experiment.} \Cref{fig:hist_popularity} shows the histogram of shares of popular content consumption per user prior to the experiment. There are some users whose consumption is largely niche. We consider a user to be a niche type if the share of niche content in her pre-experiment consumption is more than 50\% (in red in \Cref{fig:hist_popularity}). Note that all users were receiving the same recommendations prior to the experiment; thus, finding niche stories required searching beyond the top of the recommendation list. \begin{figure}[!ht] \centering \caption{Histogram of the share of popular stories consumed by users.} \includegraphics[height=3.5in]{images/hist_pop.png} \caption*{\footnotesize{\textit{Note: A popular story is a story in the top 25\% of stories ranked by the number of pre-experiment completions. Niche users in red.}}} \label{fig:hist_popularity} \end{figure} \paragraph{Treatment effects per group.} We start by providing estimates of the average treatment effects per group of interest. We consider total utility in the \emph{Recommended Story} tray as the outcome variable of interest and use a difference-in-means estimator. \Cref{tab:HTE_groups} presents the results. \begin{table}[!htbp] \centering \caption{Estimates of average treatment effects per group.} \label{tab:HTE_groups} \resizebox{0.85\textwidth}{!}{ \begin{tabular}{>{}l|lrrr} \toprule category & group & ATE & std.
error & p. value\\ \midrule {\textcolor{black}{\textbf{Type}}} & Niche users & 0.334 & 0.089 & 0.000\\ {\textcolor{black}{\textbf{}}} & Non-niche users & 0.044 & 0.063 & 0.487\\ \addlinespace {\textcolor{black}{\textbf{Past utilization}}} & High utility users & 0.299 & 0.094 & 0.001\\ {\textcolor{black}{\textbf{}}} & Low utility users & 0.092 & 0.056 & 0.103\\ {\textcolor{black}{\textbf{}}} & High story completion users & 0.249 & 0.094 & 0.008\\ {\textcolor{black}{\textbf{}}} & Low story completion users & 0.120 & 0.056 & 0.032\\ \addlinespace {\textcolor{black}{\textbf{Type and past utilization}}} & High utility and niche users & 0.508 & 0.132 & 0.000\\ {\textcolor{black}{\textbf{}}} & High utility and not niche users & -0.023 & 0.122 & 0.848\\ \bottomrule \end{tabular} } \caption*{\footnotesize{\textit{Note: Outcome variable is total utility per user. ATE is estimated using a difference-in-means estimator. All groups are defined based on the pre-experiment app usage.}}} \end{table} We find that the gains from personalization are higher for niche users than for non-niche users, and for heavy users than for light users. The niche dimension shows higher magnitude and statistical significance. In the last two rows of \Cref{tab:HTE_groups}, we focus on the distinction between niche and non-niche users in the heavy utility group and find that niche users in this group have much higher treatment effects. This highlights that niche users form a distinct category, rather than simply being heavy users who completed all popular stories and need to explore less popular ones.\footnote{One might argue that a user becomes niche after having seen all the popular stories. Note that there are 839 users in the heavy-utility niche group and 573 in the heavy-utility non-niche group.
This indicates that niche users are indeed a distinct category of users.} In \Cref{robust} we provide a further robustness check for this result by regressing AIPW scores on past utilization and user type.\footnote{To estimate AIPW scores, we use the \emph{grf} package (see \cite{athey2019generalized}). This methodology allows us to flexibly adjust for individual characteristics and estimate conditional average treatment effects. We consider users' school grade, type (B2B, B2C, paid), max streak (the maximal number of consecutive days in which a user completed at least one story), past utilization (the total number of completed stories prior to the experiment and total utility prior to the experiment), and whether a user is a niche type. To determine the variables based on past consumption, we consider the period of app usage between July 2020 and the start of the experiment.} Last, in \Cref{fig:aipw_utilization} we show how AIPW scores change across users depending on their past utilization. Panel A shows how AIPW scores change with the percentile of the pre-experiment number of story completions, and panel B with users' past utility. Upward trends are visible in both panels; the differences are, however, moderate. \begin{figure}% \caption{AIPW scores across past utilization.}% \centering \subfloat[\centering Past story completions. AIPW scores for users with the past number of completions higher than the percentile.]{{\includegraphics[width=15cm]{images/aipw_completions.png} }}% \qquad \subfloat[\centering Past utility. AIPW scores for users with past utility higher than the percentile.
]{{\includegraphics[width=15cm]{images/aipw_utility.png} }}% \label{fig:aipw_utilization}% \end{figure} \paragraph{Niche-type users see more niche content.} Personalized recommendations benefit niche users because they receive their favorite niche stories right away, rather than having to seek them out far from the top of the list of recommended stories. In \Cref{tab:nicher}, we confirm this intuition by comparing the popularity of stories shown to popular and niche types in the two experimental groups. For each story, we compute the share of its impressions in total impressions in an experimental group and rank stories by it (\emph{Rank of impressions}). Additionally, we compute each story's percentile in the distribution of impressions within the experimental group (the total number of stories per experimental group differs). \begin{table} \begin{center} \caption{Type of stories shown to niche and popular-type users across treatment and control.}\label{tab:nicher} \begin{tabular}{>{}l|lrrrr} \toprule group & variable & mean niche & mean non-niche & std. error & p. value\\ \midrule {\textcolor{black}{\textbf{Treatment}}} & Rank of impressions & 379.184 & 343.442 & 17.313 & 0.040\\ {\textcolor{black}{\textbf{Control}}} & Rank of impressions & 498.730 & 489.391 & 18.357 & 0.611\\ \addlinespace {\textcolor{black}{\textbf{Treatment}}} & Percentile of impressions & 0.390 & 0.447 & 0.028 & 0.040\\ {\textcolor{black}{\textbf{Control}}} & Percentile of impressions & 0.401 & 0.412 & 0.022 & 0.611\\ \bottomrule \end{tabular} \caption*{\footnotesize{\textit{Note: Rank of impressions: stories ranked by the number of impressions during the experiment in the experimental group.
Percentile refers to the percentile of the distribution of the share of impressions per story in the total impressions in the experimental group.}}} \end{center} \end{table} In the control group, popular- and niche-type users see stories of similar popularity, while in the treatment group niche users are shown more niche stories; the difference is statistically significant. \paragraph{New and old users of the \emph{Recommended Story} tray.} Another important layer of heterogeneity is between users who had been consuming stories in the \emph{Recommended Story} tray before the experiment and those who started using this tray because of the personalized recommendations. Out of all experimental subjects, only 14\% interacted with at least one story from the \emph{Recommended Story} tray in the two weeks prior to the experiment, and 47\% had never interacted with a story in this tray. \begin{table} \begin{center} \caption{ATE by past usage of \emph{Recommended Story} tray.}\label{tab:new_old} \begin{tabular}{>{}l|lrrrr} \toprule group & variable & ATE & ATE \% baseline & std.error & p.value\\ \midrule {\textcolor{black}{\textbf{Past users of RS}}} & Total utility & 0.788 & 54.461 & 0.322 & 0.015\\ {\textcolor{black}{\textbf{Past users of RS}}} & Total stories & 0.497 & 56.182 & 0.248 & 0.046\\ {\textcolor{black}{\textbf{Past users of RS}}} & Total time reading & 3.980 & 65.417 & 1.777 & 0.026\\ \addlinespace {\textcolor{black}{\textbf{New RS users}}} & Total utility & 0.132 & 87.423 & 0.039 & 0.001\\ {\textcolor{black}{\textbf{New RS users}}} & Total stories & 0.097 & 143.136 & 0.026 & <0.001\\ {\textcolor{black}{\textbf{New RS users}}} & Total time reading & 0.692 & 148.707 & 0.189 & <0.001\\ \bottomrule \end{tabular} \caption*{\footnotesize{\textit{Note: Average treatment effect estimates using a difference-in-means estimator. Subjects were grouped based on the usage of the \emph{Recommended Story} tray two weeks prior to the experiment.
The first three rows show results for users who viewed at least one story in the tray; the bottom three rows, for users who did not interact with any stories in the \emph{Recommended Story} tray in this period.}}} \end{center} \end{table} \Cref{tab:new_old} presents estimates of conditional average treatment effects. We consider only outcomes specific to the utilization of the \emph{Recommended Story} tray. The top three rows present results for users who interacted with at least one story in the \emph{Recommended Story} tray in the two weeks prior to the experiment. We find high and statistically significant treatment effects for this group. The bottom three rows of \cref{tab:new_old} present the results for users who were not actively using this tray prior to the experiment. We find that the treatment effects for such users are highly statistically significant and economically meaningful: while the point estimates are small, the percentage change compared to the baseline (usage in the control group) is very high. These results suggest that the introduction of personalized recommendations attracted users to the tray who otherwise would not have used it at all. \paragraph{Stories that drive the treatment effect.} Is the increase in total utility driven by a few stories liked by many users or by a better assignment of many stories? To answer this question, we group stories into frequently and rarely shown ones and compare user utilities in these categories in both the treatment and control groups. \Cref{fig:reg_buckets} shows estimates of the conditional expectation of utility from user-story interactions for stories in different buckets of popularity. We use a linear regression in which we adjust for users' grade, type, and past utilization. Buckets are constructed according to the rank of the number of story impressions in the experimental group (the total number of impressions in the treatment group is approximately equal in each bucket).
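A bucketing step of this kind might look roughly like the following; the greedy grouping rule, bucket count, and data are our assumptions, since the construction is not spelled out in the text.

```python
# Hedged sketch: rank stories by impressions and group them into
# contiguous buckets with approximately equal total impressions.

def popularity_buckets(impressions, n_buckets):
    """Split stories (ranked by impression count, descending) into
    contiguous buckets whose total impressions are roughly equal."""
    ranked = sorted(impressions.items(), key=lambda kv: kv[1], reverse=True)
    target = sum(impressions.values()) / n_buckets
    buckets, current, total = [], [], 0
    for story, n in ranked:
        current.append(story)
        total += n
        # Close the bucket once it reaches the target share of impressions.
        if total >= target and len(buckets) < n_buckets - 1:
            buckets.append(current)
            current, total = [], 0
    buckets.append(current)
    return buckets

# Toy impression counts: one very popular story dominates.
imps = {"a": 500, "b": 300, "c": 100, "d": 60, "e": 40}
print(popularity_buckets(imps, 3))  # → [['a'], ['b', 'c'], ['d', 'e']]
```

Note how a single very popular story can fill a bucket on its own, which matches the idea that each bucket carries roughly the same number of impressions rather than the same number of stories.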
Differences across experimental groups in the average utility in a bucket are (apart from personalization) due to the selection of stories into buckets and differences in the users who see stories in these buckets. Adjusting for user features allows us to isolate the effect of the story selection. \begin{figure}[H] \caption{Estimates of the conditional expectation of utility per bucket.} \centering \includegraphics[height=4in]{images/summs_plot.png} \caption*{\footnotesize{\textit{Note: Utility estimates adjusted for the difference in grades, user types, and past usage intensity across buckets.}}} \label{fig:reg_buckets} \end{figure} We find that utilities in the treatment group are higher in all buckets. There is a high and statistically significant difference in the first two buckets. This suggests that our model picked up stories that were liked by many users. However, there is also a substantial increase in utility from the least-shown, niche stories. This means that a component of personalized niche content drives higher utility in the treatment group. In sum, we see two mechanisms in story selection that increase utility in the treatment group: (i) stories that are shown to many users on average lead to higher utility in the treatment group, and (ii) personalization of niche, infrequently shown stories in the treatment group leads on average to higher utility from interactions with these stories. \section{Conclusion}\label{conclusion} In this paper, we provide evidence from a randomized controlled trial of the efficacy of personalized recommendations in promoting user engagement on an ed-tech app. We show that children learning to read in English engage more with content when it is selected based on their preferences. We find an over 60\% increase in the utilization of the personalized content as compared to the baseline system of content selected by editors. We also find a 15\% boost in overall app usage.
We evaluate the effects of the treatment on different user subgroups in the experiment and find interesting patterns of heterogeneity. First, heavy users have substantially higher treatment effects: we have more data about such users, so we know their preferences better and can provide them with higher-quality recommendations. Second, we find that users who ex ante prefer niche stories are the main beneficiaries of the personalized system; we also find that the personalized recommendation system makes it easier for them to discover niche content on the platform. Third, we show that both users who had been using the personalized section of the app prior to the experiment and those who had not benefit from the personalization. We examine whether the increased utilization comes with increased diversity and find that while the recommendation algorithm picks up on stories that are popular, it also increases utility from the least-shown, niche stories. This paper contributes to the recommendation systems literature by bringing evidence from the educational sector and a setting with limited data (as compared to the big-tech environments where such systems are typically deployed). We carefully discuss the recommendation system design process, hoping to enable practitioners to develop and deploy similar recommendation systems in other contexts. The main limitation of this paper is that we focus on students who are heavy app users (they interacted with at least sixty stories) and on stories that have already been shown to many users. This is a limitation of any system based on the collaborative filtering model, as the model's performance improves with the number of past user-content interactions. Furthermore, the approach is not applicable to new users and new stories. Developing and implementing recommendations for new users and new items is a valuable extension of this work. Last, the proposed approach optimizes for user engagement rather than for learning.
The recommendation system assigns stories that the user is most likely to complete, but these might not necessarily be the stories that maximize learning. Optimizing the story selection for learning would be preferable; however, because of difficulties in accurately measuring learning outcomes and slower feedback loops, we focused on engagement.\footnote{This relates to the literature on surrogates \citep{surrogates_eckles}, in which a surrogate metric that closely tracks the target metric is optimized because the target metric itself is infeasible to measure.} Bridging the gap between optimizing for short-term outcomes vs. long-term learning, for example by using surrogates, is a promising next step on this research agenda. \newpage \bibliographystyle{apalike}
{ "timestamp": "2022-08-31T02:05:17", "yymm": "2208", "arxiv_id": "2208.13940", "language": "en", "url": "https://arxiv.org/abs/2208.13940" }
\section{Introduction} As part of the Integrable Optics Test Accelerator (IOTA), a string of octupoles (Fig. \ref{fig:oct}) is installed in a configuration to maintain the Hamiltonian as a constant of motion. During IOTA run 2, unexpected deviations in the closed orbit while the octupoles were energized suggested misalignment of the magnets or construction deviations generating large low-order (quadrupole and dipole) transverse multipole components. The nominal values for the octupoles are given in Table~\ref{tab:octParams} \cite{antipov2016design}. \begin{figure}[h] \centering \includegraphics[width = 0.4 \textwidth]{octupoleTsquareCrop.jpg} \caption{A single octupole from the string.} \label{fig:oct} \end{figure} There are a number of conventions for presenting the multipole components; the following format will be used in this paper, Eqs. (\ref{eq:harmDef}) \& (\ref{eq:compDef}). \begin{equation} B_y + iB_x = \sum_{n=1}^\infty C_n \left(\frac{x+iy}{R_{ref}}\right)^{n-1} \label{eq:harmDef} \end{equation} \begin{equation} C_n = B_n + iA_n \label{eq:compDef} \end{equation} where $B_n$ and $A_n$ are the normal and skew terms respectively, $R_{ref}$ is the reference radius for the measurements, and the multipole index "$n$" follows the European convention, i.e., $n = 1$ corresponds to the dipole term. The longitudinal component of the field was not considered in the characterization. The magnets were removed and characterized using a Hall probe to identify potential outliers and to align a set of nine magnets for installation in a new configuration before IOTA run 4. The figure of merit for selecting the magnets was the magnitude of the low-order multipoles.
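The convention of Eqs. (\ref{eq:harmDef}) and (\ref{eq:compDef}) can be illustrated with a short sketch; the coefficient values are made up for the example.

```python
# Hedged sketch of the multipole convention: B_y + i B_x =
# sum_n C_n ((x + i y) / R_ref)^(n-1), with C_n = B_n + i A_n and the
# European index (n = 1 is the dipole term).

def field(x, y, C, R_ref):
    """Return (B_x, B_y) at (x, y) from complex multipole coefficients C,
    a dict {n: B_n + 1j * A_n} in the European convention."""
    z = complex(x, y) / R_ref
    By_iBx = sum(c * z ** (n - 1) for n, c in C.items())
    return By_iBx.imag, By_iBx.real

# Pure normal octupole (n = 4) of unit strength, evaluated on the x-axis
# at the reference radius: B_y equals B_4 and B_x vanishes.
Bx, By = field(0.008, 0.0, {4: 1.0 + 0.0j}, R_ref=0.008)
print(Bx, By)
```

The complex-variable form makes feed-down under a coordinate shift (used later for centering) a simple binomial expansion of $(z + z_o)^{n-1}$.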
\begin{table}[h] \centering \caption{Nominal Octupole Parameters} \begin{tabular}{@{}lc@{}} \toprule \textbf{Octupole Parameter} & \textbf{Design Value}\\ \midrule Length & \SI{70}{mm} \\ \midrule Aperture & \SI{28}{mm} \\ \midrule Coil Turns per Pole & 88 \\ \midrule Maximum Excitation Current & \SI{2}{A} \\ \midrule Maximum Octupole Gradient & \SI{1.4}{kG/cm^3} \\ \midrule Effective Field Length & \SI{75}{mm} \\ \bottomrule \end{tabular} \label{tab:octParams} \end{table} \section{Test Stand Measurements} \subsection{Methods} The multipole components of the magnets were determined using a Hall probe mounted on a three-axis test stand, following a procedure described in reference \cite{campmany2014determination}. The test stand was composed of three perpendicular rails actuated by linear stepper motors, with a Hall probe mounted along the nominal z-axis. The magnets were mounted on a support stand with alignment features for all degrees of freedom next to the test stand; see Fig. \ref{fig:testCartoon}. Before any measurements were taken, the test stand was calibrated to the support stand using a precision flat and a dial indicator to ensure that the axes of motion were perpendicular to each other. \begin{figure}[h] \centering \includegraphics[width = 0.35 \textwidth]{octTestStand.pdf} \caption{Cartoon of Hall probe and support stand with octupole.} \label{fig:testCartoon} \end{figure} All measurements were taken at an energizing current of \SI{2}{A}, the maximum for these octupoles. The test stand measured the magnetic field at a preprogrammed set of points; in practice, this was an equidistant set of points on a circle in a number of planes along the magnet's axis. The fields from the x and y Hall sensors were combined into azimuthal data based on the relative angle of the points on the circle. A Fourier decomposition was then performed on the magnetic field data to find the multipole components.
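The circular-scan decomposition can be sketched as follows; the exact sampling and normalization used on the test stand are assumptions, and the synthetic field is a pure octupole for illustration.

```python
# Hedged sketch: project the complex field B_y + i B_x, sampled at M
# equidistant angles on the reference circle, onto exp(-i (n-1) theta)
# to recover the multipole coefficients C_n.
import cmath

def multipoles(B_samples, n_max):
    """Discrete Fourier projection of circle samples onto harmonics n = 1..n_max."""
    M = len(B_samples)
    C = {}
    for n in range(1, n_max + 1):
        C[n] = sum(
            B * cmath.exp(-1j * (n - 1) * 2 * cmath.pi * k / M)
            for k, B in enumerate(B_samples)
        ) / M
    return C

# Synthetic pure octupole field (C_4 = 1) sampled at 32 points on the
# reference circle, matching the 32-point scan described in the text:
samples = [cmath.exp(1j * 3 * 2 * cmath.pi * k / 32) for k in range(32)]
C = multipoles(samples, 8)
print(abs(C[4]), abs(C[2]))  # octupole term ≈ 1, quadrupole ≈ 0
```

With 32 samples the projection is exact up to aliasing, which is consistent with the later remark that components above roughly n = 8 lack good sensitivity in practice.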
A coarse scan (smaller radius, fewer points) was performed first, and the relevant offset was calculated using Eq. (\ref{eq:octCenter}), assuming that the sextupole component was entirely due to feed-down. The probe was then centered in the magnet based on this offset before proceeding to a second, higher-fidelity scan. The high-fidelity scan used 32 points at a reference radius of \SI{8}{mm}, the largest radius that did not risk hitting any pole tips. In total, six circular scans were performed in the magnet at three different longitudinal positions (at each end of the pole tips and at the center of the magnet) so that the integrated field could be calculated. The measurements were taken moving forwards and backwards through the magnet and averaged at each position to account for any potential backlash in the test stand. \begin{equation} x_o + i y_o = \left(\frac{1}{n-1}\right)\left(\frac{C'_{n-1}}{C'_n}\right)R_{ref} \label{eq:octCenter} \end{equation} Once the magnets' multipole compositions had all been determined, the best magnets could be selected. As the sextupole components had been deliberately minimized, the quadrupole and dipole components were used for selecting the best subset. Any outliers were excluded, and from the remaining magnets ten were selected (nine for installation and one spare). These magnets were then remeasured on the stand for alignment. The same basic procedure was followed, but the probe was left in the same position for each magnet. The relative magnetic centers could then be calculated by Eq. (\ref{eq:octCenter}), and the magnets were shimmed against matching alignment surfaces on the installation mount. \subsection{Results} Initially, all magnet decompositions demonstrated abnormally large low-order multipole components, especially the dipole term (see Fig. \ref{fig:azDecomp}).
\begin{figure}[h] \centering \includegraphics[width = 0.45 \textwidth]{mag17decomp.pdf} \caption{Initial multipole decomposition.} \label{fig:azDecomp} \end{figure} This did not match empirical measurements: the abnormal dipole term was similar at all longitudinal positions in the magnet, but fields of this magnitude were not observed at the center, where higher-order multipoles are negligible. The source of this error was found to be the use of the calculated azimuthal fields. The Hall probe consists of three individual sensors in the probe tip, which have significant offsets with respect to one another; the initial calculation assumed these measurements were taken at the same point in the probe. To remedy this, the measurements of the disparate Hall sensors were decomposed separately, so each pass on the test stand effectively took X and Y measurements. The longitudinal component was not used. These measurements were not aligned to the magnetic center, as the centering movement of the probe was based on the azimuthal calculation. In the interest of time, the measurements were centered in software using Eq. (\ref{eq:recenter}), which applies the feed-down of all higher-order multipoles to find the components in a new set of coordinates \cite{jain1997basic}. As the circular scan sampled 32 points, the discrete Fourier transform yielded multipole components up to n = 16, but n = 8 is the highest multipole that demonstrates good sensitivity. The centering calculation was compared using components up to n = 16 and up to n = 8, and no significant deviations were observed. \begin{equation} C_n = \sum_{k=n}^{\infty} C'_k \left(\frac{(k-1)!}{(n-1)!(k-n)!}\right)\left(\frac{x_o +i y_o}{R_{ref}}\right)^{k-n} \label{eq:recenter} \end{equation} The new decomposition yielded much cleaner results (Figs. \ref{fig:xDecomp} and \ref{fig:yDecomp}) and was used for the centering measurements.
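Eqs. (\ref{eq:octCenter}) and (\ref{eq:recenter}) can be illustrated together in a short sketch; the displacement, coefficient values, and sign conventions are illustrative assumptions, not measured data.

```python
# Hedged sketch: Eq. (4) translates measured coefficients C'_k to an
# offset frame via feed-down; Eq. (3) recovers the offset of a pure
# octupole from its spurious sextupole term.
from math import comb

def recenter(C_meas, z_o, R_ref):
    """Eq. (4): C_n = sum_{k>=n} C'_k * comb(k-1, n-1) * (z_o/R_ref)^(k-n),
    for coefficients keyed by the European multipole index."""
    n_max = max(C_meas)
    return {
        n: sum(
            C_meas.get(k, 0) * comb(k - 1, n - 1) * (z_o / R_ref) ** (k - n)
            for k in range(n, n_max + 1)
        )
        for n in range(1, n_max + 1)
    }

def octupole_center(C_meas, R_ref, n=4):
    """Eq. (3): probe offset, attributing C'_{n-1} entirely to feed-down of C'_n."""
    return (1 / (n - 1)) * (C_meas[n - 1] / C_meas[n]) * R_ref

# A pure octupole viewed from a displaced frame shows spurious lower-order
# terms; Eq. (3) recovers the displacement and Eq. (4) removes the terms.
R = 8.0                                          # reference radius, mm
seen = recenter({1: 0, 2: 0, 3: 0, 4: 1.0 + 0j}, 0.5 + 0.2j, R)
z_o = octupole_center(seen, R)                   # recovers 0.5 + 0.2j mm
clean = recenter(seen, -z_o, R)                  # spurious sextupole vanishes
print(z_o, abs(clean[3]))
```

The round trip is exact because the feed-down relation is just the binomial expansion of the complex field under a shift of origin, which is why the software recentering could substitute for a physical probe move.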
\begin{figure}[h] \centering \includegraphics[width = 0.45 \textwidth]{mag13xDecomp.pdf} \caption{Multipole components from X sensor.} \label{fig:xDecomp} \end{figure} \begin{figure}[h] \centering \includegraphics[width = 0.45 \textwidth]{mag13yDecomp.pdf} \caption{Multipole components from Y sensor.} \label{fig:yDecomp} \end{figure} \subsection{Alignment} Once a subset of octupoles had been selected, the alignment measurement was performed. During these measurements, the repeatability of the test stand positioning at the center was found to be on the order of \SI{5}{\micro m}, well within the alignment threshold of \SI{400}{\micro m}. To determine the size of the shims, the offsets were computed relative to the magnet whose center was furthest from the alignment feature. Figure \ref{fig:xOctCenters} shows the offsets in the x direction for the selected magnets; here, Magnet 12 is the reference. \begin{figure}[h] \centering \includegraphics[width = 0.48 \textwidth]{octXCenterOffsets.png} \caption{Relative offset of octupole magnets.} \label{fig:xOctCenters} \end{figure} Shims matching these offsets could then be inserted to align the relative centers of the octupoles (see Fig. \ref{fig:octAlign}). \begin{figure} \centering \includegraphics{octAlign.pdf} \caption{Cartoon of relative alignment procedure.} \label{fig:octAlign} \end{figure} \section{Summary and Future Work} The octupoles for the IOTA quasi-integrable lattice element were characterized using a Hall probe mounted on a test stand. An error related to the mismatched positions of the Hall sensors in the probe was identified and remedied using an alternative decomposition of the field. Once the satisfactory magnets had been selected, their relative centers were measured and aligned for installation in IOTA prior to run 4. In the course of run 4, the magnet alignment will be confirmed using beam-based measurements.
{ "timestamp": "2022-08-31T02:03:30", "yymm": "2208", "arxiv_id": "2208.13883", "language": "en", "url": "https://arxiv.org/abs/2208.13883" }