Dataset Viewer
Auto-converted to Parquet

Columns: context (string, 100–3.4k chars) · A (string, 100–3.42k) · B (string, 100–3.99k) · C (string, 100–3.94k) · D (string, 100–3k) · label (string, 4 classes)
$\mathbb{E}[\nu^{(l)}Z_{-l}]=0$, such that $(\gamma^{(l)}_{0})^{T}Z_{-l}$ ...
The auxiliary regression (2.7) is used to construct an orthogonal score function for valid inference in a high-dimensional setting, as described in Section 2.1.
The primary aim of our paper is to provide a method for constructing uniformly valid inference and confidence bands in sparse high-dimensional models in the sieve framework. In doing so, we contribute to the growing literature on high-dimensional inference in additive models, especially that on debiased/double machine ...
Later, we will also allow for an approximation error in this equation. Belloni et al. (2014b) propose not only including in the final regression the covariates selected in the first step of the naive approach but also augmenting this set of variables with Lasso-selected regressors from the auxiliary regression. This pr...
Belloni et al. (2014b) developed an approach for valid inference for one parameter. In high-dimensional additive models, the major technical challenge arises from the need to conduct inference for the potentially high-dimensional vector $\theta_{0}$. In other ...
A
The attentive eye can observe the similarity of our approach with respect to [17]. The fundamental difference (and one of the key features of our method) is that our indices are defined over $T$, meaning that they are able to provide insights about the impact of input variables all across the domain of definiti...
Predicting a quantity for the long time scales which matter for the climate is a hard task, with a great degree of uncertainty involved. Many efforts have been undertaken to model and control this and other uncertainties, such as the development of standardized scenarios of future development, called Shared Socio-econo...
Some fundamental pieces of knowledge are still missing: given a dynamic phenomenon such as the evolution of $CO_{2}$ emissions in time, a policymaker is interested in whether the impact of the input factor varies across time, and how. Moreover, given the presence...
The attentive eye can observe the similarity of our approach with respect to [17]. The fundamental difference (and one of the key features of our method) is that our indices are defined over $T$, meaning that they are able to provide insights about the impact of input variables all across the domain of definiti...
In the presence of an I/O model whose output(s) are not intrinsically deterministic, it is of fundamental importance to compute the mean value of the sensitivity indices introduced in the previous section, and to compare their absolute or relative magnitude to the natural variability of the phenomenon, or to the uncerta...
D
The “if” direction of Theorem 1 readily follows from Theorem 3: when all stationary beliefs have adequate knowledge, a correct action is taken almost surely for any distribution of stationary beliefs, hence $u_{*}(\mu_{0})=u^{*}(\mu_{0})$...
Theorem 3 can be used to quantify how a failure of excludability impacts welfare. Proposition SA.2 in Supplementary Appendix SA.2 provides a formal result in this vein. In particular, that result implies a sense in which an environment with “approximate excludability” ensures that, eventually, agents’ ex-ante expected
belief convergence. Since expanding observations is compatible with the observational network having multiple components, one cannot expect the social belief to converge even in probability.[25] Consider an observational network consisting of two disjoint complete subnetworks: every odd agent observes only all odd pre...
The conclusion of Theorem 3 would be straightforward if we were assured that agents eventually hold stationary beliefs. However, there are networks (with expanding observations) in which with positive probability the beliefs of an infinite number of agents are bounded away from the set of stationary beliefs; see Exampl...
Consider normal information. There are full-support priors $\mu$ such that the posterior probability $\mu_{s}(\omega)$ is uniformly bounded away from $1$ across signals $s$ and states $\omega$ (see Supplem...
C
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$...
We consider two notions of optimality: maximin optimality (corresponding to the case where $\lambda=0$) and global optimality (corresponding to the more general case where $\lambda\geq 0$). Accordingly, we say that $r^{*}$ ...
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$...
Romano (2005b). We show that any globally most powerful protocol is also locally most powerful (and thus admissible if $\lambda=0$) under linearity and normality.
Here, we consider the general case where $\lambda\geq 0$ and show that when $\lambda>0$, the planner’s subjective utility from research implies a notion of power. Globally optimal protocols generally depend on both $\lambda$ and the planner’s prior $\pi$. We restrict ou...
C
Note that the oppositely directed implication of statement (ii) is not necessarily true: the NRM rule defined in Example 2 below satisfies truncation-invariance but violates rank monotonicity.
Note that the oppositely directed implication of statement (iii) is not necessarily true: the object-proposing deferred acceptance (OPDA) rule defined in Example 3 satisfies truncation-proofness but violates truncation-invariance.[12] Chen et al. (2024) proved that the OPDA rule satisfies truncation-proofness.
Note that the oppositely directed implication of statement (i) is not necessarily true: The immediate acceptance (IA) rule[10] See Abdulkadiroğlu and
Sönmez (2003). defined in Example 1 below satisfies truncation-invariance but violates strategy-proofness.[11] From the procedure of the immediate acceptance algorithm, one can easily obtain that the IA rule satisfies truncation-invariance. The IA rule is exactly the so-called Boston mechanism. It is well-known that s...
Note that the oppositely directed implication of statement (ii) is not necessarily true: the NRM rule defined in Example 2 below satisfies truncation-invariance but violates rank monotonicity.
A
Panel surveys routinely collect data on an ordinal scale. For example, many nationally representative surveys ask respondents to rate their health or life satisfaction on an ordinal scale.[3] One example is the British Household Panel Survey in our empirical application. Others include the U.S. Health and Retirement Stu...
We are interested in regression models for ordinal outcomes that allow for lagged dependent variables as well as fixed effects. In the model that we propose, the ordered outcome depends on a fixed effect, a lagged dependent variable, regressors, and a logistic error term. We study identification and estimation of the f...
To do this, we follow the functional differencing approach in Bonhomme (2012) to obtain moment conditions for the finite-dimensional parameters in this model, namely the autoregressive parameters (one for each level of the lagged dependent variable), the threshold parameters in the underlying latent variable formulatio...
For other types of outcome variables (continuous outcomes in linear models, binary and multinomial outcomes), results for regression models with fixed effects and lagged dependent variables are already available. Such results are of great importance for applied practice, as they allow researchers to distinguish unobser...
This paper contributes to the literature on dynamic ordered logit models. We are aware of only one paper that studies a fixed-$T$ version of this model while allowing for fixed effects. The approach in Muris, Raposo, and Vandoros (2023) builds on methods for dynamic binary choice models in Honoré and Kyriazido...
A
First, we introduce the model, discuss how the model specification relates to the existing causal discovery literature, and derive testable implications. Second, we present the conditional independence test that is a central component of our test. Third, we present the implementation of the test.
The main idea of this paper is to study the conditions under which this model is distinguishable from its reversed analog without relying on exogenous information. The reverse model, where $Y$ is causing $X$, again in the presence of the vector of covariates $W$, is defined as
Consider a model where an observable continuous scalar variable $X$ causes an observable scalar variable $Y$ in the presence of the vector of covariates $W$ (which we refer to in the following as the model):
We show how to test for reverse causality between two variables $X$ and $Y$ in the presence of additional covariates $W$.
We extend their work by allowing for additional control variables $W$ and also considering heteroskedasticity of the error term with respect to these covariates. Intuitively, nonlinearity of $h$ ensures that the error terms in the reverse model are not independent of the regressor, which provides power...
B
The per capita GDP of the Yangzi Delta, the most developed region in China, was roughly at par with that of the Netherlands, the most developed region in Europe.
The per capita GDP of the Yangzi Delta, the most developed region in China, was roughly at par with that of the Netherlands, the most developed region in Europe.
Our model offers the interpretation that per capita income stagnated in the pre-modern era because the labor force was allocated to the agricultural sector that engaged in subsistence production and not to productive activities, possibly factory work in the manufacturing sector that generated sustained per capita growth.
This claim is supported by recent estimates of GDP per capita, as plotted in Figure 1. The figure shows that Britain’s GDP per capita was similar to that of China before 1750, but diverged after that.
this study assumes that per capita GDP was roughly constant before the Industrial Revolution (see also Broadberry et al., 2015).
C
There is a large literature in behavioral and experimental economics that points toward the importance of various behavioral traits and heterogeneous characteristics of trust and reciprocity in sharing behavior. A large series of studies including Fehr et al. (1997); Fehr and Gächter (1998, 2000); Camerer (2003); Cox (...
These results also agree with other experimental work that has specifically focused on the role of trust in experimental sharing environments. In particular, Glaeser et al. (2000) finds that survey questions about trust (similar to the questions we used to measure trust) are effective at predicting trustworthy behavior...
Because there are only three trust questions, the first principal component summarizes most of the information from the trust questionnaire. It places positive weight on the question that involves trust and negative weights on two questions that suggest mistrust. Perhaps surprisingly, this measure of trust is associate...
Finally, we use our structural framework to conduct three counterfactual simulations, each examining the effects of a uniform increase in one of the three principal attributes—trust, overall reciprocity, and positive reciprocity. Consistent with our estimates for the model with individual heterogeneity, an increase in ...
This result agrees with some more recent work examining the role of these characteristics in supporting positive market outcomes. For example, Choi and Storr (2022) finds evidence suggesting that providing reputation systems in experimental markets interacts with preferences primarily by giving participants more inform...
D
(ii) The statistical error rate of HOPE in (29) is the same as the upper bound for the iterative projection algorithms for estimation of the fixed-rank TFM-tucker, cf. Corollaries 3.1 and 3.2 in Han et al. (2020), which is shown to be minimax optimal.
Figure 3: Boxplots of the logarithm of the estimation error for HOPE under experiment configuration II.
(ii) The statistical error rate of HOPE in (29) is the same as the upper bound for the iterative projection algorithms for estimation of the fixed-rank TFM-tucker, cf. Corollaries 3.1 and 3.2 in Han et al. (2020), which is shown to be minimax optimal.
It follows that HOPE also achieves the minimax rate-optimal estimation error under fixed $r$.
controls the level of signal cancellation (see Han et al. (2020) for details). When there is no signal cancellation, $\zeta=0$, the rates of the two procedures are the same. Note that iTIPUP only estimates the loading space, while HOPE provides estimates of the unique loading vectors. The error rate of ...
C
In this section, we briefly discuss the strategic aspects of the messaging game induced by an individual elicitation protocol. We formally define two dynamic implementation notions in Section 5.1: implementation in dominant and obviously dominant strategies. Although there is no conceptual innovation in these definiti...
We then turn to the special case of the second-price auction rule as an instructive and practically relevant example—we see how maximally contextually private protocols for the second-price auction rule choose a set of agents to protect, and delay asking questions to the protected agents. In Theorem 3, we use the repre...
In Section 5.2 we consider the incentive properties of the maximally contextually private protocols discussed in Section 4. We first observe that the ascending-join protocol is implementable in obviously dominant strategies. This result is a direct consequence of Li (2017)’s characterization of obvious dominance. Then,...
Under the restriction to individual elicitation, protocols induce a well-defined extensive-form game. In Section 5, we check the incentive properties of the maximally contextually private protocols for the second-price auction rule described in Section 4. The ascending-join and overdescending-join protocols have impl...
In addition to the ascending-join and overdescending-join protocols, the serial dictatorship also has good privacy properties and incentive guarantees. In particular, we show in Appendix B that the serial dictatorship protocol is contextually private (i.e. it produces no privacy violations) and that the messaging strat...
B
Finally, the study discussed in section 3.2 is related to section 4 of Osana (1992). This paper, written in Japanese, discusses not only consumer surplus but also the relationship between this and equivalent and compensating variations. The relationship between Stokes’ theorem and these results is also discussed.
So, why are all equilibrium prices locally stable in a quasi-linear economy? The answer is obtained by the theory of no-trade equilibria. Balasko (1978, Theorem 1) showed that in a pure exchange economy, any no-trade equilibrium price is locally stable. This result was in fact substantially shown in Kihlstrom et al. (1...
One virtue of the two-commodity quasi-linear economy is the ability to calculate the change of consumer’s utility from the aggregated demand curve. That is, such an economy can be described by a partial equilibrium model, and we can calculate the consumer’s surplus instead of the utility function directly. It is known ...
We have shown that in a quasi-linear economy, the equilibrium price is uniquely determined and is locally stable. Compared with similar results, a feature of this result is that there is no assumption imposed on the excess demand function. Moreover, we have exhibited that in this economy, consumers’ surplus can be defi...
This paper also discusses surplus analysis. As in the partial equilibrium theory, the consumer’s surplus can be defined for a quasi-linear economy, and can be calculated using only the aggregated demand function. The amount of surplus coincides with the increase in the sum of utilities in the trade of this market. (The...
C
If $s\notin D$, the statistician is told the element $z\in Z$ such that $s\in[T_{z}]$. The statistician then has to select an element $j\in I$.
Roughly speaking, we consider a zero-sum game between an adversary and a statistician, in which the adversary chooses a deviation and the statistician, after observing the realization $s$, has to guess the deviator if $s\notin D$. A strategy for the statistician in this game is a blame f...
The adversary selects an element $i\in I$ (a player in the original problem)
The blame function $f$ above correctly identifies the Deviator with probability $1$, regardless of the Deviator’s strategy.
Thus, the adversary’s strategy is to select the identity of the deviator $i\in I$ and a strategy for that deviator.
D
$x\succsim y\Leftrightarrow f(\bar{p},u_{f,\bar{p}}(x))\succsim f(\bar{p},u_{f,\bar{p}}(y))\Leftrightarrow u_{f,\bar{p}}(x)\geq u_{f,\bar{p}}(y),$ ...
Steps 8–10 indicate that all of our claims in Proposition 1 are correct. This completes the proof. ■
We now complete the preparation for proving Proposition 1. We separate the proof of Proposition 1 into ten steps.
Finally, our Proposition 1 says that condition (iii) implies condition (ii). This completes the proof. ■
and thus Fact 1 implies that the solution function is locally Lipschitz. This completes the proof. ■
A
The proposed multivariate extensions of the Lorenz curve in both Taguchi (1972a,1972b) and Koshevoy and Mosler (1996) relate population proportions to a vector of resource shares. Our proposal differs substantially from these in that it directly relates a specific subset of the population, namely individuals with multi...
The appealing properties of the Lorenz curve are well captured by the formulation given in Gastwirth (1971). In that formulation, the Lorenz curve is the graph of the Lorenz map, and the latter is the cumulative share of individuals below a given rank in the distribution, i.e., the normalized integral of the quantile f...
Interpretation. Unlike other multivariate proposals, the Lorenz map shares the interpretation of the traditional Lorenz curve as the cumulative share of resources held by the lowest ranked individuals.
Lorenz curve as a CDF. The Lorenz map is a map from $[0,1]^{d}$ to $[0,1]^{d}$. Hence, unlike the traditional scalar Lorenz curve, it ca...
A more successful proposal in that respect is the Lorenz zonoid of Koshevoy and Mosler (1996). Again, take (1.1) in the univariate case as the point of departure. It associates a fraction $p$ of the population to the share of the resource collectively held by the poorest fraction $p$...
B
$\hat{V}_{\text{eq},3}(\hat{\beta})$
Figure 4: We plot the score distributions that are induced by $\beta_{\text{comp}}$, $\beta_{\text{strat}}$, $\beta_{\text{cap}}$ ...
Capacity-Aware $\beta_{\text{cap}}$
Following Bhattacharya and Dupas (2012), the decision maker runs a randomized controlled trial (RCT) to obtain a model for the conditional average treatment effect (CATE) $\tau(x)=\mathbb{E}\left[Y_{i}(1)-Y_{i}(0)\mid X=x\right]$ ...
Competition-Aware (Policy Gradient) $\beta_{\text{comp}}$
B
For some $C<\infty$, $P\{E[Y^{2}_{i,g}(a)\,|\,N_{g},Z_{g}]\leq C\text{ for all }1\leq i\leq N_{g}\}=1$ ...
Assumptions 2.2.(a)–(b) formalize the idea that our data consist of an i.i.d. sample of clusters, where the cluster sizes are themselves random and possibly related to potential outcomes. An important implication of these two assumptions for our purposes is that
Assumptions 2.2.(e)–(f) impose some mild regularity on the (conditional) moments of the distribution of cluster sizes and potential outcomes, in order to permit the application of relevant laws of large numbers and central limit theorems. Note that Assumption 2.2.(e) does not rule out the possibility of observing arbit...
An attractive feature of our framework is that, by virtue of modeling cluster sizes as random, it is straightforward to permit dependence between the cluster size and other features of the cluster, such as the distribution of potential outcomes within the cluster. In this way, our setting departs from other frameworks ...
We model the distribution of the data described above in two parts: a super-population sampling framework for the clusters and an assignment mechanism which assigns the clusters to treatments. The sampling framework itself can be described in two stages. In the first stage, an i.i.d. sample of $G$ clusters is ...
A
In this paper, we propose a stochastic lookahead policy embedded in a data-driven sequential decision process for determining replenishment order quantities in e-grocery retailing. We aim at investigating to what extent this approach allows a retailer to improve the inventory management process when faced with multiple...
The decision policy introduced above allows us to explicitly consider the full uncertainty in the inventory management process by incorporating distributional information for the stochastic variables demand, spoilage, and supply shortage when determining replenishment order quantities. In practice, the underlying distribu...
For the evaluation of the lookahead policy proposed in this paper, we first test the policy in a simulation-based setting, where we can consider the benefit of incorporating full uncertainty information in isolation, i.e. without the additional noise induced by the need to estimate the relevant probability distribution...
Evaluating an experimental data set generated in accordance with data provided by our business partner, we can show that our approach yields a replenishment policy that reduces the corresponding inventory management costs compared to the frequently applied newsvendor model. In addition, we analyse the value of explicit...
In our setting with multiple sources of uncertainty (demand, supply, and spoilage), the use of point forecasts reduced average per-period costs for perishable SKUs compared to the more myopic newsvendor model, which addresses only the stochasticity of demand. However, the approach presented in the previous section resu...
C
Alternatively, the researcher could consider the threshold strategy of first using both datasets, choosing to report this $p$-value if it is below a threshold and, otherwise, choosing the best of the available $p$-values. For $K=2$, this gives three potential $p$-values to cho...
The right-hand side panel in Figure 7 presents the $p$-curves for the minimum case. When $p$-hacking works through taking the minimum $p$-value, as in earlier cases for $p$-values near commonly used sizes, the impact is to move the distributions towards the left, making the $p$...
In order to consider relevant directions of power, we examine two approaches to $p$-hacking in four situations in which we might think opportunities for $p$-hacking in economics and other fields commonly arise. The two approaches are what we refer to as a “threshold” approach where a researcher target...
We construct theoretical results for the implied distribution of $p$-values under each approach to $p$-hacking in a simple model. The point of this exercise is twofold: by seeing how exactly $p$-hacking affects the distribution we can determine the testing method appropriate for detecting th...
In time series regression, sums of random variables such as means or regression coefficients are standardized by an estimate of the spectral density of the relevant series at frequency zero. A number of estimators exist; the most popular in practice is a nonparametric estimator that takes a weighted average of covarian...
D
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
Fossil fuel energy consumption (% of total): Fossil fuel comprises coal, oil, petroleum, and natural gas products. Source: World Bank WDI.
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
Fossil fuel energy consumption (% of total): Fossil fuel comprises coal, oil, petroleum, and natural gas products. Source: World Bank WDI.
A
We also evaluate each model based on a longer-term, 5-year prediction window (1985–1989). In this case, each state will have five prediction errors, one for each post-treatment period. For the longer-term predictions, we calculate mean squared error based on the prediction errors in each pseudo-treated state over each ...
In our first exercise, we exclude California (the true treated state). Instead, we assume that one other state from the control group (the “pseudo-treated” state) has been treated with a cigarette sales tax in 1989. We mask the post-1988 cigarette sales of the pseudo-treated state and apply each of the alternative caus...
Because the out-of-sample prediction error determines the accuracy of the estimated treatment effect, we compare the various estimation methods along this dimension. The model predictions are visualized in Figure 3, and the resulting distribution of prediction errors is summarized in Table 1. Among the methods we consi...
To compare SyNBEATS with MC and SDID, we replicate the analyses in the previous section. The results are presented in Table A.3. SyNBEATS dramatically outperforms MC in each analysis we consider with the data sets corresponding to Proposition 99 and the German Reunification. With respect to SDID, the results are more n...
As shown in Table 1, SyNBEATS outperforms other traditional estimators in their short-term predictions, improving the RMSE by 31% compared to the second-best alternative (SC). To further facilitate a direct comparison of short- to long-term predictions for each estimator, in Figure 4, we contrast predictions in the fir...
B
Beyond the work of Bojinov et al. (2023), we are not aware of prior studies of switchback experiments that consider using
The regular switchback estimator takes data from a regular switchback along with an (optional) burn-in
Our first result is an error decomposition for regular switchback estimators under our geometric mixing assumptions.
is on and the average outcome when treatment is off. Under our model, we show that this standard switchback is severely
induced by their modeling assumptions. We note that in our setting, i.e., under Assumptions 1 and 2,
B
We favour FA-LP by including the true number of six factors, estimated by principal components from the 120 variables, and estimate the
that the FFR rises by one on impact, as opposed to a size of one standard deviation, as is done by Bernanke et al. (2005). We implement this change to
In section 3.1 we compare our proposed method with unpenalized parameter of interest to the standard desparsified lasso in a sparse structural VAR. In section 3.2, we study our proposed method in an empirically calibrated DFM.
As this method matches the true DGP closely, we expect this benchmark to be a highly competitive standard.
A simple alteration of the desparsified lasso that leaves this parameter unpenalized thus brings the coverage rates much closer to the nominal level. Note that the standard desparsified lasso has coverage exceeding our proposed estimator at further horizons; this is because the true impulse response becomes close to ze...
C
The anchoring effect refers to “a systematic influence of initially presented numerical values on subsequent judgments of uncertain quantities,” where the judgement is biased toward the anchor (Teovanović, 2019). The anchoring effect has been replicated across a variety of contexts, as I discuss in Sect. 1.1, includin...
Since determining what wage to offer to employees is a complex judgement, employers may use the minimum wage as a convenient reference point upon which to base their offers. Due to the difficulty of conducting controlled experiments with employers, I seek to answer a related question: does the minimum wage function as ...
I demonstrate that the minimum wage functions as an anchor for what Prolific workers consider a fair wage: for numerical values of the minimum wage ranging from $5 to $15, the perceived fair wage shifts towards the minimum wage, thus establishing its role as an anchor (Fig. 1 and Table 1). I replicate this result for a...
The main hypothesis is that the minimum wage acts as an anchor on what is considered a fair wage for a job. That is, for a job description, the average response for what is considered a fair wage changes depending on whether it is conditioned on a value of the minimum wage $m$, and in particular it shifts towards ...
Given the results established in this paper, how should governments regulate labor markets through minimum wages? One interpretation is that as long as the minimum wage is below the perceived fair wage, the minimum wage results in decreases in the judgements of perception of fairness, meaning that current minimum wages...
A
$\frac{2}{3}-\alpha=2\alpha-1$
The OPE solution can be motivated intuitively by the following reasoning. First, note that if player 1 bets the optimal size of 2 with a winning hand against an equilibrium strategy (which calls a bet of 2 with probability $\frac{1}{3}$), expected payoff is
In the no-limit clairvoyance game [2], player 1 is dealt a winning hand (W) and a losing hand (L) each with probability $\frac{1}{2}$. (While player 2 is not explicitly dealt a “hand,” we can view player 2 as always being dealt a medium-strength hand that wins against ...
So the OPE is the unique equilibrium where player 1 loses the same amount of expected payoff with both types of mistakes (betting 1 with a winning hand and
According to our above analysis, the unique Nash equilibrium strategy for player 1 is to bet 2 with probability 1 with a winning hand, to bet 2 with probability $\frac{2}{3}$ with a losing hand, and to check with probability $\frac{1}{3}$ ...
C
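The indifference conditions behind the clairvoyance-game equilibrium in the preceding excerpt can be verified with exact arithmetic. A minimal sketch, assuming the usual normalization of the game (pot of 1, bet size of 2, payoffs measured relative to checking/folding):

```python
from fractions import Fraction as F

pot, bet = F(1), F(2)
p_bluff = F(2, 3)   # player 1 bets a losing hand with probability 2/3
p_call  = F(1, 3)   # player 2 calls a bet of 2 with probability 1/3

# Facing a bet, W and L are equally likely ex ante, so
# P(W | bet) = 1 / (1 + p_bluff) and P(L | bet) = p_bluff / (1 + p_bluff).
p_w_given_bet = 1 / (1 + p_bluff)          # 3/5
p_l_given_bet = p_bluff / (1 + p_bluff)    # 2/5

# Calling wins pot + bet against a bluff, loses the bet against a winner.
ev_call = p_l_given_bet * (pot + bet) - p_w_given_bet * bet
print(ev_call)   # 0: player 2 is indifferent between calling and folding

# Player 1 with a losing hand: bluffing wins the pot when player 2 folds,
# loses the bet when called; checking yields 0.
ev_bluff = (1 - p_call) * pot - p_call * bet
print(ev_bluff)  # 0: player 1 is indifferent between bluffing and checking
```

Both expected values are exactly zero, which is the defining property of the mixed equilibrium: each side's randomization makes the other side indifferent.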
The adversarial structure of public advocacy provides an additional benefit: the senders, having conflicting goals, cannot coordinate to influence the receiver’s decision. Resilience to collusion is desirable in organizations where informed agents can discuss their intentions before being consulted by the receiver. The...
The proof follows from the observation that there is no profitable coalitional deviation involving two opposed-biased senders. Likewise, the receiver cannot gain from a coalitional deviation because the equilibrium is already efficient. Therefore, the equilibrium in Proposition 4 is strong (Aumann, 1959) and c...
Equilibrium.— The equilibrium concept is the perfect Bayesian equilibrium (PBE). To test the protocols’ robustness against collusion, I use the two related concepts of strong Nash equilibrium (Aumann, 1959) and coalition-proof Nash equilibrium (Bernheim et al., 1987). An equilibrium is strong i...
Proposition 4 provides an equilibrium characterization, which allows us to understand the mechanism supporting truthful reporting on the equilibrium path. The key to efficiency in public advocacy lies in how the receiver allocates the burden of proof between the two senders. (Since the receiver fully learns the s...
payoffs; it is coalition-proof when resilient against those coalitional deviations that are self-enforcing. (A coalition is self-enforcing if there is no proper sub-coalition that, taking as fixed the action of its complement, can agree to deviate from the deviation in a way that makes all of its members better off.) The ...
A
The generalized $FAS$ is thus $[0.5,1]$ and does not contain $\beta$.
Because $Z_{2}$ is here an endogenous explanatory variable, and because
that are themselves endogenous explanatory variables with $\gamma_{\ell}\alpha_{\ell}\neq 0$.
where $Z_{2}$ violates the exclusion assumption and $Z_{1}$ and
of $Z_{1}$ and $Z_{2}$, only identifies $\beta$ when $Z_{2}$ is
A
Note: The figure shows histograms of (real) market capitalization (in billions of U.S. dollars) of all the firms in Panel (a), for the firms whose real market capitalization is between the 5th ($11.6 million) and 95th ($6.2 billion) percentiles in Panel (b), and for the firms whose real market capitalization is below t...
A recent study closely related to ours is Singh et al. (2022), which uses an event study methodology to evaluate the stock market’s reaction to clinical trial announcements in the pharmaceutical industry. They allow for rich heterogeneity across drugs and correlate abnormal returns with clinical trial results, providin...
Given the pronounced right skewness in the firm size distribution, we estimate the value of drugs after removing outlier firms to ensure that extreme cases do not drive our results. This decision is motivated by our discussion of the market capitalization distribution in Section 3.3, which suggests that large and small...
Heterogeneity between small and large firms is crucial for our approach. First, large firms may have more expertise and resources for conducting clinical development, potentially increasing their chances of success and leading to heterogeneity in transition probabilities. Second, for the same reasons, large firms may c...
Our estimates suggest several important areas for future research. First, our approach excludes drugs developed by large pharmaceutical firms with a market valuation above the 95th percentile of the firm size distribution. To the extent that these firms develop different types of drugs, our approach fails to capture th...
C
The cost of a verification protocol is $\log_{2}|\mathcal{C}|$.
However, the goal of these protocols is different than traditional nondeterministic protocols in computer science: the protocols of Section 5 only aim to describe a matching to the applicants, not to verify that the matching is correct.
The (concurrent) representation complexity of a mechanism $f$ is the minimum of the costs of all concurrent representation protocols for $f$.
Thus, the verification complexity of $\mathsf{TTC}$ and of any stable matching mechanism is $\Omega(|\mathcal{A}|)$.
The verification complexity of a mechanism $f$ is the minimum of the costs of all verification protocols for $f$.
D
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments
Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform
Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
With $p=0.7$ and $q$ uniform over $[0.5, 0.7]$, we have verified
A
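The parameter configuration in the preceding excerpt ($N=15$, $K=3$, $\pi=0.5$, $p=0.7$, $q\sim U[0.5,0.7]$) is easy to exercise in a small Monte Carlo. A minimal sketch under stylized assumptions (sincere majority voting on a binary state, experts with precision $p$, non-experts drawing their precision from $F(q)$); the voting rule and the accuracy it reports are illustrative assumptions, not the experimental design or a result from the paper:

```python
import random

def majority_accuracy(n_trials=20_000, N=15, K=3, pi=0.5, p=0.7,
                      q_lo=0.5, q_hi=0.7, seed=1):
    """Share of trials in which sincere majority voting matches the binary state."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        state = rng.random() < pi          # binary state with prior pi
        votes_for_true = 0
        for i in range(N):
            # First K voters are experts with precision p; the rest draw
            # their precision from Uniform[q_lo, q_hi] (hypothetical setup).
            prec = p if i < K else rng.uniform(q_lo, q_hi)
            signal = state if rng.random() < prec else not state
            votes_for_true += signal
        correct += (votes_for_true > N / 2) == state
    return correct / n_trials

acc = majority_accuracy()
```

Even with individual precisions barely above 0.5, aggregation over 15 voters pushes the majority's accuracy well above any single voter's, which is the Condorcet-style effect these parameter choices are designed to probe.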
In scenario (ii), illustrated by Figure 4, related variety is just slightly higher, and the model still accommodates for re-dispersion. However, the re-dispersion process is not smooth – the economy suddenly jumps to symmetric dispersion from a fairly asymmetric equilibrium spatial distribution.
In scenario (i), shown in Figure 3, related variety is such that within-region interaction is relatively less important ($b=0.33$). For a low freeness of trade, symmetric dispersion is stable because firms wish to avoid the burden of very costly transportation when supplying to farmers from full a...
The case of high related variety, $b\in(\frac{1}{2},1)$, is much less diversified and can be accounted for by resorting to a subset of the pictures from Figure 1. The story as economic integration increases is as follows. For a very low trade fr...
In scenario (vi) we illustrate the qualitative change in the spatial structure of the economy as $\phi$ increases for $b>1/2$, but with a higher $\lambda$, since, with the parameter values of the previous scenario, agglomeration would be ubiquitously stable (and hence unin...
Scenario (iii) also just slightly increases related variety compared to the previous scenario (see Figure 5), and the story of spatial outcomes as economic integration increases is very similar, except that, in this case, full agglomeration is stable for a small range of intermediate values of $\phi$, as...
D
In economics, Kleiner, Moldovanu and Strack (2021) characterize the extreme points of monotone functions on $[0,1]$ that majorize (or are majorized by) some given monotone function, which is equivalent to the set of probability measures that dominate (or are dominated by) a given probability measure in th...
Yamashita (2023). Several recent papers in economics also exploit properties of extreme points to derive economic implications. See, for instance, Bergemann et al. (2015) and Lipnowski and Mathevet (2018). Candogan and Strack (2023) and Nikzad (2023)
In economics, Kleiner, Moldovanu and Strack (2021) characterize the extreme points of monotone functions on $[0,1]$ that majorize (or are majorized by) some given monotone function, which is equivalent to the set of probability measures that dominate (or are dominated by) a given probability measure in th...
We first apply Theorem 2 and Theorem 3 to gerrymandering. Existing economic theory on gerrymandering has primarily focused on optimal redistricting or fair redistricting mechanisms (e.g., Owen and Grofman 1988; Friedman and Holden 2008; Gul and Pesendorfer 2010; Pegden et al. 2017; Ely 2019; Friedman and Holden 2020; K...
When $|\mathrm{supp}(F)|>2$, this “concavification” method requires finding the concave closure of a multi-variate function, which is known to be computationally challenging, especially when $|\mathrm{supp}(F)|=\infty$. ...
A
Comparison of decarbonization strategies. Figure 3 shows a comparison of four exemplary decarbonization strategies: The ‘Remove largest emitters first’ strategy, which aims to reach the highest emission savings with a minimum number of firms to be removed from production, the
(C) In the ‘Remove least-risky firms first (employment)’ strategy, firms are removed according to their ascending risk of triggering job loss, i.e., EW-ESRI; firms that are considered least systemically relevant for the production network are removed first.
‘Remove least-employees firms first’ strategy that aims at minimum job loss on the individual firm level,
The ‘Remove least-employees firms first’ strategy, which aims at minimizing job loss at each individual firm and is shown in Fig. 3B, manages to keep expected job and output loss at low levels for the initially removed firms. But since this strategy focuses on job loss at the individual firm level, it fails to anticipate a high...
(B) In the ‘Remove least-employees firms first’ strategy, firms are closed according to their ascending numbers of employees.
B
9.0.0) (Gurobi Optimization, 2020). The source code and data generated for the illustrative three-node and the Nordics case studies are openly available at GitHub repositories (Belyak, 2022a, b).
To be able to conduct a thorough analysis of the optimal investment strategies and generation levels in this section, we first consider a simplified structure for the case study, hereinafter referred to as the illustrative instance. Then in Section 5, we consider a case study for the Nordic region. The structure of the...
Due to the large-scale nature of the case study instance and the consequent computational challenges we have faced, we limited the range of the values for each of the input parameters to a discrete set with two values in the sensitivity analysis. The first one represents a possible “low” value, and the second one repre...
Let us consider the case of a €1B GEB for each GenCo and compare the differences in optimal investment and generation portfolios when increasing TEB from €25M to €50M in a perfectly competitive market. The detailed analysis for this particular example is presented in Figure 5. The arrow in the figure indicates the dire...
In this paper, we study the impact of the TSO infrastructure expansion decisions in combination with carbon taxes and renewable-driven investment incentives on the optimal generation mix. To examine the impact of renewables-driven policies we propose a novel bi-level modelling assessment to plan optimal transmission in...
A
Our main result (Theorem 1) characterizes all strategy-proof rules on the aforementioned domain. In particular, we find that all strategy-proof rules comply with the following two-step procedure. In the first step, each agent with single-peaked preferences is asked about her best alternative in the range of the rule (h...
Jackson (1994) for only single-peaked and the result in Manjunath (2014) for only single-dipped preferences. We also find out that the characterized family can be easily implemented in two steps and with few information. Finally, we establish that all strategy-proof rules are also group strategy-proof and show that Par...
It can be seen that PE imposes a strong restriction on the range of f𝑓fitalic_f: only a range equal to the set of feasible alternatives or a range equal to the extreme alternatives of that set is compatible with SP and PE. In particular, any SP rule whose range is equal to the set of feasible alternatives is PE, while...
We also show that all strategy-proof rules are group strategy-proof (Theorem 1). Finally, the range of any strategy-proof and Pareto efficient rule is either equal to the set of alternatives or coincides with the “extreme points” of the set of alternatives (Proposition 5).
Our main result (Theorem 1) characterizes all strategy-proof rules on the aforementioned domain. In particular, we find that all strategy-proof rules comply with the following two-step procedure. In the first step, each agent with single-peaked preferences is asked about her best alternative in the range of the rule (h...
C
−0.0 (−0.0/−0.0)
0.01 (0.0/0.04)
0.04 (0.01/0.08)
0.01 (0.0/0.04)
0.367 (0.04/0.54)
D
There exists an equilibrium of the best-value pricing managed campaign with efficient steering that:
price that balances the profit off-platform with the relaxation of the showrooming constraint. In particular, the second term in
offering their product only at a higher price, each advertiser can weaken the showrooming constraint and extract more surplus on the platform. Consequently, the off-platform prices increase with the number of on-platform shoppers.
integrated model that considers how auction mechanisms and data availability jointly determine match formation and surplus extraction both on and off large digital platforms. The auction mechanisms employed by the platform have substantial implications for
The argument proceeds by considering the problem of a vertically integrated platform that jointly maximizes the profit of the firms and the platform. The vertically integrated platform can jointly coordinate on-platform and off-platform pricing but still faces the showrooming constraint due to
D
While using net exports already produces a good fit of the flow matrix with the data, we observe that just by optimizing the weights we can improve the average $R^{2}$ from 94% to 97% (first vs. second column).
In the following, we will investigate how far further adjustments to the trade data can improve these results.
In the following, we will present an approach that allows us to use data on trade flows to approximate P flows in a global model.
In particular, we have to investigate if the weighting scheme in the calculation of the trade matrix (see eq. 4) can be improved. In an ideal setting we would try to calculate the exact P content of each trade relationship that each country has with each other country for each goods category. This, however, is not feas...
The statistics in Table 1 show that these approximate corrections do in fact improve the fit of the model significantly. Interestingly, the fit improves not only with the mining data but also with the use data, which indicates that the correction also has an indirect effect on the column sums of the flow matrix.
A
It is well known, for matching markets, that there is no stable rule for which truth-telling is a dominant strategy for all agents (see Dubins and Freedman, 1981; Roth, 1982, 1985; Sotomayor, 1996, 2012; Martínez et al., 2004; Manasero and
Oviedo, 2022, among others). That is, given the true preferences and a stable rule, at least one agent might benefit from misrepresenting her preferences regardless of what the rest of the agents state. Thus, stable matchings cannot be reached through dominant “truth-telling equilibria". The stability of equilibrium so...
A stable rule is a function that associates each stated strategy profile with a stable matching under this stated profile. To evaluate such matchings, workers and firms use their true preferences and their true choice functions, respectively. A market and a stable rule induce a matching game. In this game, the set of str...
In centralized markets, a board needs to collect the preferences and choice functions of all agents in order to produce a stable matching. Normally, agents are expected to behave strategically by not revealing their true preferences or their true choice functions in order to benefit themselves. When this is the case, t...
The main motivation of this paper is to provide a framework to study the Nash equilibrium solutions of the game induced by stable rules. In a many-to-one matching market with substitutable choice functions, we show that any stable matching rule implements, in Nash equilibrium, the individually rational matchings. Moreo...
A
A second strand of the literature aims at assessing what happens to individual life trajectories after a default. This literature essentially focused on the impact of a harsh default, i.e. either Chapter 7 or Chapter 13 declarations or a foreclosure. Our work sheds some light on the short and medium term consequences o...
To this second strand of the literature belong, for example, Collinson et al. [2023], who investigate the impact of eviction on low income households in terms of homelessness, health status, labor market outcomes, and long term residential instability. Similarly, Currie and Tekin [2015] show that foreclosure causes an ...
Albanesi and Nosal [2018] investigate the impact of the 2005 bankruptcy reform, which made it more difficult for individuals to declare either Chapter 13 or Chapter 7. They find that the reform hindered an important channel of financial relief. Diamond et al. [2020] analyze the negative impact of foreclosures on forecl...
Our interest, in this paper, is on tracing individuals’ lives after a default, while we do not aim to distinguish between different default motives (see for example Ganong and Noel [2020] and the literature cited therein). More generally, our work is related to two strands of the literature, one which focuses on the in...
A second strand of the literature aims at assessing what happens to individual life trajectories after a default. This literature essentially focused on the impact of a harsh default, i.e. either Chapter 7 or Chapter 13 declarations or a foreclosure. Our work sheds some light on the short and medium term consequences o...
A
$\sum_{j}\bigl(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{\ldots})\bigr)$ ...
Block Approval: Voters vote for any number of candidates. (We use the same sincere strategy as for single-winner Approval Voting.)
Approval: Vote for all candidates with $u_{j}\geq EV$.
Minimax: Vote sincerely. (While a viability-aware strategy was included for Minimax in Wolk et al. (2023), ...
to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates. (While it is often recommended that equal rankings be allowed under ...
C
Each of these constraints will forbid exactly one outcome that is not in $\mathcal{T}$. As a result, it holds that $\mathcal{S}_{\bm{x}}(\xi_{\mathcal{T}})=\mathcal{T}$ ...
Corollary 1 follows from the observation that there are no imposed constraints on the set of possible outcomes in collective choice under dichotomous preferences. Note that Proposition 2 is necessary for this result in order to show that the considered class of ILPs $\Xi$ is sufficiently rich, i.e., that its set...
In this section, we study which axiomatic properties are satisfied by the distribution rules described in Section 5. Interestingly, the following result implies that all axiomatic results that have been obtained for collective choice under dichotomous preferences (Bogomolnaia and Moulin, 2004; Bogomolnaia et al., 2005)...
When studying a specific class of problems that can be modeled by an ILP in $\Xi$, such as kidney exchange or knapsack, it may be the case that there exist sets of outcomes that do not correspond to the set of optimal solutions of any instance of that specific problem class. To illustrate this, one can observe, ...
All axiomatic results that have been obtained for collective choice under dichotomous preferences (fair mixing) also hold for distribution rules over optimal solutions of integer linear programs in $\Xi$.
D
Since it is based on actual trades, realized volatility (RV) is the ultimate measure of market volatility, although the latter is more often associated with implied volatility, most commonly measured by the VIX index cboevix ; cboevixhistoric – the so-called market “fear index” – that tries to predict the RV of the S&...
We fit the CCDF of the full RV distribution – for the entire time span discussed in Sec. 2 – using mGB (7) and GB2 (11). The fits are shown on the log-log scale in Figs. 4 – 13, together with the linear fit (LF) of the tails with $RV>40$. LF excludes the end points, as prescribed in pisarenk...
The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, with the increase of $n$ we observe the development of “potential” DK with statistically significant deviations upward from the straight line. This trend termi...
While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution pisarenko2012robust ; janczura2012black , here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distribution...
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, it has been habitual that large gains and losses occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is wel...
D
§4.4.1 validates our approach: for a range of artificial value distributions, we first simulate bids, and then apply the noted iterative search procedure to back out, or elicit, the underlying value distribution. We can thus compare the originally postulated distribution with the elicited one.
Next, we demonstrate how our approach can be used as a step in an inference procedure aimed at determining the distribution of bidders’ values, which is not directly observable, from the observed distribution of bids. In §4.4.1, to validate our approach, we start from randomly generated values, then simulate bids, and ...
Utilizing our simulation approach, we also showed how to infer bidders’ valuations in the presence of more realistic auction rules. We demonstrated this using aggregate bid data from an e-commerce website in both low- and high-density auctions.
§4.4.2 employs aggregate bid data from an actual production environment (a major e-commerce website) and infers bidder value distributions in both low and high traffic shopper query scenarios.
The analysis is based on aggregated bid data for two specific shopper queries in an e-commerce setting, one characterized by low traffic and the other by high traffic. The data aggregation process converts all bids into a bid per impression, so we set all click-through rates to 1. Thus, we apply our analysis to a symme...
C
While payoff externalities induced by a sharing contract conditioning on the winner’s identity can fix the inefficiencies induced by strategic experimentation, the structure of the sharing contract is notable. In particular, winner-take-all contracts and equal sharing are both inefficient; the efficient contract must g...
A logical way the agents might restore cooperative efficiency is to agree ex ante to a contract that specifies how to split the rewards from experimentation in the event of a breakthrough. Thus, in this section, I consider the problem of a regulator (or contest designer) who observes the outcome of the ba...
While the formal analysis was constrained to a specific model, this theoretical work offers important insights for thinking about research. First, the condition for efficiency when there are breakthrough payoff externalities is that breakthroughs must have a neutral impact on the losers. As much of the contest literatu...
The corollary shows that an ex ante fair split of the rewards cannot result in an efficient outcome, so an efficient contract must still condition on some part of the outcome of the experimentation game. Recall that the previous subsection showed that conditioning the contract on the observation of winner/loser was suf...
Having shown that the identity of the winner is sufficient for restoring efficiency (even without observing effort), it might seem that this information is also necessary for implementing an efficient outcome. It is not; contracting on the effort profile at the time of breakthrough is also sufficient to restore efficie...
D
Existing nonparametric estimators of $g_{0}$ typically rely on
no function $g$ is such that $E[g(Z)|W]$ is a linear function of the variables in $X$.
the mapping $g\in\mathcal{G}\mapsto E[g(Z)|W=\cdot\,]$ is injective.
$E[Y-g(Z)|W]=0$ a.s. $\;\Leftrightarrow\;E[(Y-g(Z))\exp(\mathbf{i}W^{\top}t)]=0\quad\forall t\in\mathbb{R}^{p}$.
$E[g(Z)|W]$ in a generalized method of moments
D
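The equivalence in the preceding excerpt between the conditional moment restriction $E[Y-g(Z)|W]=0$ and the continuum of unconditional moments $E[(Y-g(Z))\exp(\mathbf{i}W^{\top}t)]=0$ suggests a simple estimation recipe: restrict $g$ to a finite sieve and use a finite grid of $t$ values. A minimal sketch under illustrative assumptions (a linear sieve $g(z)=\beta_{0}+\beta_{1}z$, a scalar instrument $W$, a hand-picked $t$ grid, and simulated data); this is not the estimator studied in the excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated design: W is an exogenous instrument, Z is endogenous
# because it shares the structural error u with Y.
w = rng.normal(size=n)
u = rng.normal(size=n)
z = w + 0.5 * u + 0.3 * rng.normal(size=n)
y = 1.0 + 2.0 * z + u                            # true g0(z) = 1 + 2z

# Linear sieve for g and characteristic-function instruments exp(i*W*t).
B = np.column_stack([np.ones(n), z])
t_grid = np.linspace(-2.0, 2.0, 9)
inst = np.exp(1j * np.outer(w, t_grid))          # n x T complex instruments

# Sample moments mean[(y - B @ beta) * exp(-i*W*t)] are linear in beta,
# so setting them to zero is a least-squares problem over the stacked
# real and imaginary parts.
A = inst.conj().T @ B / n                        # T x 2, complex
b = inst.conj().T @ y / n                        # T,   complex
A_r = np.vstack([A.real, A.imag])
b_r = np.concatenate([b.real, b.imag])
beta_hat, *_ = np.linalg.lstsq(A_r, b_r, rcond=None)
```

Because $u$ is independent of $W$, the characteristic-function moments are valid even though $Z$ is endogenous, so the fitted coefficients should land near $(1, 2)$, whereas OLS of $y$ on $z$ would be biased upward.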
We find that while participants’ behaviour is in line with the theoretical predictions, there is still a large part of behaviour that the model cannot account for. Using the Strategy Frequency Estimation Method (Dal Bó and Fréchette, 2011; Fudenberg et al., 2012), we allow for the presence of various behavioural types ...
2-4). Additionally, we investigate whether subjects align with the predictions of the G&M model (G&M type). We find that around 25% of the subjects behave according to the G&M model, the vast majority behaves in a conditional co-operating or altruistic way, and a non-significant proportion free rides. From a mechanism ...
The first thing to notice is the high value of $\beta$ for all treatments, ranging between 0.819 and 0.893. This parameter is always significant and different from 0.5 ($\beta$ values close to 0.5 indicate random behaviour, while values close to unity indicate almost deterministic behaviour). There is...
25% of the subjects behave according to the theoretical predictions of Gallice and Monzón (2019). Allowing for the presence of alternative behavioural types among the remaining subjects, we find that the majority are classified as conditional co-operators, some are altruists, and very few behave in a free-riding way.
The majority of the subjects behave in an altruistic or conditional co-operating way, around 25% of the subjects as G&M type, and free-riding is very rare.
A
We assign a High quality rating to those in the lowest CCEI quartile among the 60 potential experts and a Low quality rating to those in the top CCEI quartile.
Figure 5: Relative frequency that each expert is chosen in each information condition. The ratings code in the bottom row indicates the expert’s realized earnings, the quality of their decision making (Low or High), and their risk tolerance (Low, Medium, High).
From the choices in the simple and complex blocks of each of those 60 potential experts, we construct a quality rating using the Afriat (1973) Critical Cost Efficiency Index, as detailed in Online Appendix B.
We assign a High quality rating to those in the lowest CCEI quartile among the 60 potential experts and a Low quality rating to those in the top CCEI quartile.
We also assign a risk rating to each potential expert based on their average coefficient of relative risk aversion implied by each non-dominated choice in the simple and complex blocks. A potential expert is rated as High, Medium or Low risk tolerance according to whether that average is in the lowest, middle or upper ...
D
A proof sketch is in order. We abuse notation to write $\psi^{+}=E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]$ ...
$\psi^{-}(\underline{w},\bar{w})$
$E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]=E_{\mathbb{P}^{\textup{Obs}}}[\phi^{\ldots}]$ ...
A proof sketch is in order. We abuse notation to write $\psi^{+}=E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]$ ...
$\psi^{+}(R)$
D
Neme, 2001). In Theorem 3, we show that the single-plateaued domain is maximal for our properties as well. Therefore, even though replacing strategy-proofness with NOM greatly expands the family of admissible rules, the maximal domain of preferences involved remains basically unaltered.
Morrill (2020). They try to single out those manipulations that are easily identifiable by the agents.
Morrill (2020) notion of obvious manipulations to the allocation of a non-disposable commodity among agents with single-peaked preferences. In the context of voting, Aziz and
Next, we analyze the maximality of the domain of preferences (including the domain of single-peaked preferences) for which a rule satisfying own-peak-onliness, efficiency, the equal division guarantee, and NOM exists. For the properties of efficiency, strategy-proofness, and symmetry, the single-plateaued domain is max...
Consider the problem of allocating a single non-disposable commodity among a group of agents with single-peaked preferences: up to some critical level, called the peak, an increase in an agent’s consumption raises his welfare; beyond that level, the opposite holds. In this context, an allotment rule is a systematic pro...
B
If $g$ is $S$-unimodal, then every stable periodic orbit attracts at least one of $a$, $b$, or $c$ (i.e. the endpoints of $I$ or the critical point of $g$).
Proposition 5.5 means that all "visible" orbits (in numerical experiments) are orbits containing $a$, $b$, or $c$ only (in the long run). We consider that only these visible orbits are meaningful in economics (or in real life) since it is widely believed that every economic modelling is some ...
In this paper, we interpret our numerical calculations based on Propositions 5.5 and 5.6. In particular, we look at the orbit starting from the critical point $s$, that is $\{s,f(s),f^{2}(s),\cdots\}$...
If $g$ is $S$-unimodal, then every stable periodic orbit attracts at least one of $a$, $b$, or $c$ (i.e. the endpoints of $I$ or the critical point of $g$).
If one of the conditions in Proposition 5.8 is satisfied (within some numerical limitation), we conclude that we can predict the future by Proposition 5.1. We must admit that our argument in this section is not rigorous (we hope to make it rigorous in the future), but we believe that we have provided enough (numerical/...
A
Using data from Kranz (2023), we compile a list of package usage from the replication packages of publications in top economics journals.
Here, require is able to match about 98 percent of all packages used in publications, and about 99 percent when weighted by publication intensity.
Using these metrics, package usage is even more skewed, with the top 100 packages accounting for 93 percent of all publications (green line) and 98 percent of all usage in code (red line).
Next, to ensure these requirements are satisfied, we add the corresponding line at the start of each do-file:
We then compute the fraction of publications that rely on each community-contributed package (dashed line) as well as the “publication intensity” of each package—the total number of times a package is used across all publications (dotted line).
D
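The two package-usage metrics described in this row (the fraction of publications relying on a package, and the total number of uses across all code) can be sketched in a few lines. This is a minimal illustration with invented package names and counts, not the actual pipeline behind the figures:

```python
from collections import Counter

# Hypothetical data: the community-contributed Stata packages invoked by the
# do-files of each replication package (names and counts are invented).
publications = [
    ["reghdfe", "estout", "reghdfe"],  # reghdfe called twice in this package's code
    ["reghdfe", "ivreg2"],
    ["estout"],
    ["reghdfe"],
]

pkgs = {p for pubs in publications for p in pubs}

# Fraction of publications that rely on each package (the "dashed line").
pub_share = {p: sum(p in pubs for pubs in publications) / len(publications)
             for p in pkgs}

# "Publication intensity": total number of times a package is used across
# all publications (the "dotted line").
intensity = Counter(p for pubs in publications for p in pubs)
```

With these toy inputs, reghdfe is used by three of four publications but appears four times in code, so the two metrics rank packages differently.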
Assumption 4.2 can be equivalently formulated with linear inequalities, i.e., there is some $A\in\mathbb{R}^{p\times\sum_{j=1}^{N}n_{j}}$...
Assumption 4.2 is satisfied for many games found in the literature (see, e.g., [16, GNEP (21)] and [2, Proposition 3]). Included are also special cases like mixed strategies (without further constraints) as then $\mathbb{X}=[0,1]^{N}$...
Constraints of this type are also frequently considered in the literature: they are called ‘orthogonal constraint sets’ in [21] and ‘classical games’ in [2].
Usually, when considering convex games, the focus is on providing the existence of a unique equilibrium point and developing methods for finding this particular equilibrium, see e.g. [21], where additional strong convexity conditions are assumed to guarantee uniqueness of the Nash equilibrium. In this paper, we will co...
Non-cooperative shared constraint games are a special type of generalized games as not only the cost function, but also the constraint set of player $i$ in optimization problem (1) can depend on the strategy $x_{-i}^{*}$...
A
In predictive learning, decisions are often made automatically on the basis of predictions, while in causal applications the estimates generally need to be interpreted by a human being. Following on from this, in general predictive systems are used for individual-level decisions (e.g. targeting product recommendations) ...
A method for removing selection effects from data in causal inference using predictive machine learning models as nuisance models. Essentially it involves using two nuisance models, one to predict treatment and one to predict the outcome, then taking the residuals from these models and feeding them into an estimator of s...
Causal machine learning is a broad term for several different families of methods which all draw inspiration from machine learning literature in computer science. The most widely-used method here and our focus for this paper is the causal forest (Wager & Athey, 2018; Athey et al., 2019) which uses a random forest made ...
The transparency needs for these two kinds of models vary. One can imagine research questions where it is helpful to understand the nuisance models as well as the final model, but for the most part this is not necessary. We still need some amount of transparency over nuisance functions, mostly to diagnose problems i...
The most rudimentary difference in the structure of models is that causal machine learning methods generally involve the fitting of several models with different purposes where predictive applications typically involve fitting one, or several with the same purpose in an ensemble method (Chernozhukov et al., 2018; Athey...
D
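The residual-on-residual idea described in this row (fit one nuisance model for treatment and one for the outcome, then estimate the effect from the residuals) can be sketched with plain least squares standing in for the machine-learning nuisance models. The data-generating process, coefficient values, and the true effect tau = 2.0 are invented for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                               # observed confounders
t = x @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)   # treatment depends on x
tau = 2.0                                                 # true effect (assumed)
y = tau * t + x @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)

def residualize(v, X):
    # OLS residuals of v on X; a stand-in for an arbitrary ML nuisance model
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

t_res = residualize(t, x)   # nuisance model 1: predict treatment
y_res = residualize(y, x)   # nuisance model 2: predict outcome
tau_hat = (t_res @ y_res) / (t_res @ t_res)  # residual-on-residual slope
```

In practice the linear `residualize` step would be replaced by flexible learners with cross-fitting, but the partialling-out logic is the same.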
From Figure 4 and Table 1, it is evident that our approach consistently demonstrates the lowest bias across all metrics compared to other approaches. The data splitting method also manages to achieve relatively low biases but exhibits significantly higher variance. Furthermore, it’s worth noting that the true variance ...
We further provide information on the bias and standard errors of treatment effect estimators obtained using various methods in Table 1. In each metric, the first column represents the bias in comparison to the true global treatment effect (GTE). The second column displays the standard deviation calculated from the res...
where $N_{T}$ represents the number of users in the treatment group and $\text{Finish}_{i}$ and $\text{StayDuration}_{i}$...
6:         When a user is assigned to the treatment group, the platform recommends an item based on the treatment algorithm and model, and vice versa.
In Table 2, we have calculated the experimentation costs. For treatment users, we computed the average treatment values based on the treatment linear fusion formula, i.e.,
D
Recall that the game runs until (absolute) time $t=99$. We say that the attack is successful in a given run of the game in which some player attacks if (1) player $\alpha$ initiates an attack by (absolute) time $\hat{t}_{\alpha}\triangleq 49$...
The welfare-maximizing equilibria in these three examples (and especially in the latter two) might seem at first glance to be driven by qualitatively different effects. It is therefore unclear how one might generally characterize welfare-maximizing equilibria in a manner that captures all three examples, let alone capt...
Before we commence our analysis of equilibrium behavior in coordinated-attack games, we demonstrate the diverse structure of equilibria in different coordinated-attack games via several examples. We start with a coordinated-attack game that has an equilibrium of a particularly simple form.
We emphasize that equilibrium behavior in coordinated-attack games cannot in general be characterized by common knowledge as traditionally defined. Indeed, recalling the coordinated-attack game from Example 1, we note that by Theorem 1 (see also the discussion that follows the proof of that theorem), common knowledge a...
In this Section, we demonstrate the usefulness of our notion of common knowledge for characterizing equilibrium behavior in a family of dynamic Bayesian coordination games that we call coordinated-attack games. In a coordinated-attack game, each player may decide whether and when to initiate an attack, and players are ...
B
The estimation step needs to account for two sources of error: first, the usual estimation error in obtaining a consistent estimator that corresponds to the IVX instrumentation of the nearly integrated regressors; second, the sampling error in generating the forecast for the VaR from the fi...
Moreover, we expect the stochastic equicontinuity property to still hold regardless of the plug-in estimation approach and the presence of time series nonstationarity.
the presence of both generated covariates and persistent regressors, for the joint estimation of the risk measure pair $(\mathsf{VaR},\mathsf{CoVaR})$, remains an open problem. Several studies in the literature develop estimation and inference met...
We focus on the IVX estimators in each estimation stage, since it is well-known to be robust to the abstract degree of persistence (e.g., see Lee, (2016)). As a result, the doubly corrected IVX estimator (which is the IVX estimator obtained from the second step estimation procedure), is the main parameter of interest i...
The estimation step needs to account for two sources of error: first, the usual estimation error in obtaining a consistent estimator that corresponds to the IVX instrumentation of the nearly integrated regressors; second, the sampling error in generating the forecast for the VaR from the fi...
A
Assumption (CCTSB) requires that (CCTS) holds only until the period before each unit is treated, while (CCTSA) requires (CCTS) to hold after treatment begins. Many previous works have distinguished between the parallel trends before treatment versus after treatment, noting that (CCTSB) can be directly tested under assu...
We can contrast (CCTS) with an unconditional or marginal version, which Wooldridge (2021) calls Assumption (CTS).
Finally, we discuss estimating the average treatment effects marginalized over $\boldsymbol{X}_{i}$. As we showed in Theorem A.5(a) and (b), because the treatment effects are interacted with the covariates centered with respect to their cohort mea...
Because (CTS) does not account for covariates, (CCTS) is often thought of as more plausible than (CTS). We generally agree, though in Appendix A.2 we add some nuance by pointing out that (CTS) can hold when (CCTS) does not hold (Theorem A.2). In such a setting it is not possible to estimate conditional average treatmen...
However, (CTS) holding is enough to estimate marginal average treatment effects consistently using many estimators, so it is reasonable to wonder whether regression (1.5) might be able to estimate treatment effects consistently under (CTS) and some additional assumptions. The following assumption will turn out to allow...
A
To recap, the $\ell^{2}$ local parameter space is "too small" for the QMLE, but is the suitable local parameter space for the fixed effects approach. This implies that, as explained earlier, QMLE is a better procedure than the fixed effects method.
By analyzing the local likelihood ratios, we are able to assess the merits of different estimators that are otherwise hard to discern based on the usual asymptotics alone (e.g., limiting distributions).
We show that the quasi-maximum likelihood method applied to the system attains the efficiency bound. These results are obtained under an increasing number of incidental parameters. Contrasts with the fixed effects estimators are made. In particular,
To have the limit process of the local likelihood ratios reside in a Hilbert space and to directly apply the convolution theorem
The preceding corollaries imply that, under normality, we are able to establish the asymptotic efficiency bound in the presence of increasing number of
A
Since fat tails are scale free, they are naturally analyzed on a log-log scale, where the complementary cumulative distribution function (CCDF) ought to be a straight line with negative slope. LF of CCDF, however, must be supplemented by a statistical test to determine the likelihood that the points in the tail conform to ...
This paper is organized as follows. In Section 2 we present the analytical form of mGB and GB2 distributions and discuss the limiting behaviors of both. In Section 3 we fit HP and HPI with mGB and GB2, as well as fit the tails directly using LF. For each test we conduct a U-test, for which a null hypothesis is formula...
Towards this end, we performed a U-test (Pisarenko, 2012) but we did not limit it to LF alone. We also aimed at describing the entire distribution, not just tails, and thus performed a U-test on mGB and GB2, which we employed for this purpose. GB2 was used for fitting since it is the most flexible distribution with ...
Towards this end we studied a combined multi-year distribution of HPI for years 2000-2022, which contained 201040 data points. The main result, as seen in Figs. 11 and 12 below is that the tails of the combined HPI is more aligned with the finite upper limit of HPI and, accordingly, with mGB distribution. Of course, su...
Our interest in house prices and house price indices was motivated by their being proxies for income distributions. Income distributions may be possible to describe by models of economic exchange, some of which can be reduced to stochastic differential equations with well-defined steady-state distributions. One class of...
B
Two challenges arise when using SCM with higher frequency data, such as when the outcome is measured every month versus every year. First, because there are more pre-treatment outcomes to balance, achieving excellent pre-treatment fit is typically more challenging. Second, even when excellent pre-treatment fit is possi...
In this paper, we propose a framework for temporal aggregation for SCM. Adapting recent results from Sun, Ben-Michael and Feller (2023), we first derive finite-sample bounds on the bias for SCM under a linear factor model when using temporally disaggregated versus aggregated outcome series.
There are many directions for future work that incorporate recent innovations in panel data methods, including first de-noising (e.g., Amjad, Shah and Shen, 2018) or seasonally adjusting the disaggregated outcome series. We could also explore choosing an optimal level of temporal aggregation for a single SCM objective....
Theorem 1 in the appendix formally states high-probability bounds on the bias terms, which we obtain using results from Sun, Ben-Michael and Feller (2023).
Two challenges arise when using SCM with higher frequency data, such as when the outcome is measured every month versus every year. First, because there are more pre-treatment outcomes to balance, achieving excellent pre-treatment fit is typically more challenging. Second, even when excellent pre-treatment fit is possi...
A
$\mathcal{T}=\{t:\mathcal{X}\to\mathbb{R}:t\ \text{affine},\ t(x)\leq\gamma(x)\text{ for all }x\in\mathfrak{X}\}.$
The structure of the paper is as follows. Section 2 formalizes our model and provides a method for obtaining the asymptotic distribution, or upper bounds thereof, of minimax test statistics. Section 3 shows that, under general conditions, critical values can be obtained for those distributions using the bootstrap. The ...
It would likely be possible to extend the results of this paper to noncompact settings using bounded entropy conditions and finite sample concentration inequalities coming from empirical process theory (e.g. Chernozhukov et al., (2023), Assumption 3.3 and references therein). However, imposing compactness greatly strea...
Lemma 3.5 below shows that this formulation of our inference problem is consistent with the main assumptions of the paper.
Lemma 3.3 suggests that a tractable way of estimating e.g. (3.2) is to solve the triple optimization problem:
C
Partition $\mathbf{X}=(\mathbf{X}_{1},\mathbf{X}_{2})$ where
($\mathbf{X}_{1h}$, standardized) and the rest of the covariates
suppose $y_{t}$ is the first element of the high-dimensional vector
from zero and $\mathbf{X}_{2}$ is the matrix of covariates with the
$\mathbf{X}_{1}$ is the matrix of covariates with the corresponding vector of
D
All code was written in Python using PyTorch and the pref_voting library (pypi.org/project/pref-voting/), version 0.4.42, for all functions dealing with voting. Training and evaluation were parallelized across nine local Apple computers with Apple silicon, the most powerful equipped with an M2 Ultra with 24-core CPU, 76...
We begin by discussing our results under the uniform utility model and then turn to the 2D spatial model in Section 4.9.
To generate utility profiles for our experiments described below, we first used a standard uniform utility model (see, e.g., (Merrill, 1988, p. 16)): for each voter independently, the utility of each candidate for that voter is drawn independently from the uniform distribution on the $[0,1]$ interval. We ...
Figure 2. Results using the 2D spatial model. Top: the average profitability of submitted rankings by the best performing MLP with any hidden layer configuration for a given voting method and information type, averaging over 3–6 candidates and 5, 6, 10, 11, 20, and 21 voters. Bottom: the ratio of the average profitabil...
Comparing Figure 1 for the uniform utility model and Figure 2 for the 2D spatial model, the most striking differences are (1) that all voting methods become less profitably manipulable (roughly by one half) under the spatial model and (2) even the best MLPs could not learn to profitably manipulate against Minima...
A
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer switched to a new expert in the current round. Un...
* $p<0.05$, ** $p<0.01$, *** $p<0.001$
* $p<0.05$, ** $p<0.01$, *** $p<0.001$
* $p<0.05$, ** $p<0.01$, *** $p<0.001$
* $p<0.05$, ** $p<0.01$, *** $p<0.001$
A
Fig. 13 shows that URLLC support provides lower profits than eMBB support except for very low c𝑐citalic_c. The sum of profits only decreases slowly as c𝑐citalic_c increases.
Finally, when aggregating the SPs’ profits and the consumer surplus into the social welfare quantity, Fig. 8 shows that the variation of the aggregated profit dominates.
Fig. 6 shows that the aggregated surplus of the URLLC-supported users is greater than the surplus of the eMBB-supported users, which is sensible since it is a measure that aggregates the quality of the service, the price and the number of subscribers. The same figure shows that the total consumer surplus does not exhib...
Fig. 12 shows that the aggregated surplus of the URLLC-supported users is greater than the surplus of the eMBB-supported users. The same figure shows that the total consumer surplus decreases as c𝑐citalic_c increases.
Finally, when aggregating the SPs’ profits and the consumer surplus into the social welfare quantity, Fig. 14 shows that the variation of the aggregated consumer surplus dominates.
D
There is a large body of work deriving maximal inequalities and its derivatives such as the functional CLT for dependent data under mixing conditions; cf. [16, 3, 2, 29, 17, 6], and [10] and [22] for reviews. Closest to the present paper is the important work in [17] wherein the authors establish a functional invarianc...
Unfortunately, the approach utilized in [17] and related papers cannot be applied to establish maximal inequalities when the aforementioned restrictions on the mixing coefficients do not hold. One of the cornerstones of this approach relies on insights that can be traced back to Dudley in the 1960s ([19]) for Gaussian ...
The main result in the paper shows that the $L^{1}$ norm of $\sup_{f,f_{0}\in\mathcal{F}}|G_{n}[f-f_{0}]|$...
The family of norms proposed in the proof is linked to the dependence structure, which is captured by suitably chosen mixing coefficients. Papers such as [3, 17] use $\beta$-mixing while some other papers use stronger concepts such as $\phi$-mixing (see [20]). In this paper, however, we use the ...
These results leave open the question of what type of maximal inequality one can obtain in contexts where the mixing coefficients do not satisfy these conditions. Many processes do not satisfy them, either because the data exhibit long-range dependence or long memory and this feature is modeled using slowly decaying dep...
D
In contrast, our LTU framework (which generalizes von Neumann’s TU framework) involves a continuum of contracts.
We show that matching problems with LTU are equivalent to two-player games which are a nonzero-sum generalization of von Neumann’s hide-and-seek game.
In this section we show how the LTU matching problem can be reframed as a generalization of the two-person game known as hide-and-seek.
Interestingly, the method of Scarf (1967) also involves two-person games – although not hide-and-seek.
Overall, the combination of these two results created an equivalence between matching problems with TU and zero-sum hide-and-seek games.
C
Our comparison of the concealed vs. revealed contracts has a direct isomorphism with third degree monopoly price discrimination. Specifically, our concealed setting corresponds to monopoly pricing without price discrimination, with the seller acting as principal and buyer acting as agent. Our revealed setting correspon...
Thus, with minor adjustments, we can apply results from the price discrimination literature that characterize the effects of third degree monopoly price discrimination on total welfare. Mirroring Varian [1985]’s seminal work, Lemma 4 shows that total welfare increases only if the quantity of tasks completed also increa...
The question of whether price discrimination increases total welfare has been well studied [Varian, 1985].
Our comparison of the concealed vs. revealed contracts has a direct isomorphism with third degree monopoly price discrimination. Specifically, our concealed setting corresponds to monopoly pricing without price discrimination, with the seller acting as principal and buyer acting as agent. Our revealed setting correspon...
To give additional sufficient conditions for concealment and revelation beyond the anchored setting with one zero-cost type, we apply an analysis technique similar to that of Aguirre et al. [2010], who analyzed the effects of third degree monopoly price discrimination on total welfare.
A
Stratified permutation is an instance of permutations with restricted positions (see, e.g., Rosenbaum, 1984; Diaconis
as the LHS of the inequality above is the sum of the areas of $n_{s}$
$Z=h_{\pi}\in\mathbb{R}^{n}$, where $\pi\sim\mathcal{U}(\mathbb{S}_{n})$
Bickel, 2021); thus, the set of such permutations forms a subset of all permutations of observation indices. Chen
et al. (2011, Chapter 6) provide normal approximation results for a class of restricted permutations — in contrast with the set of entire permutations required in Hoeffding (1951)’s CLT — but do not consider stratified permutations. The present paper fills this gap in the literature by developing normal approximation f...
C
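A stratified permutation of the kind discussed in this row, i.e. a permutation of observation indices restricted to move indices only within their own stratum, can be generated as follows. The helper name and the toy strata are invented for illustration:

```python
import numpy as np

def stratified_permutation(strata, rng):
    """Permute observation indices within each stratum only (hypothetical helper)."""
    idx = np.arange(len(strata))
    for s in np.unique(strata):
        mask = strata == s
        idx[mask] = rng.permutation(idx[mask])
    return idx

rng = np.random.default_rng(1)
strata = np.array([0, 0, 0, 1, 1, 1])
perm = stratified_permutation(strata, rng)
# every index stays inside its own stratum, so this is a restricted permutation
assert set(perm[:3]) == {0, 1, 2} and set(perm[3:]) == {3, 4, 5}
```

Because each stratum is shuffled independently, the set of such permutations is a strict subset of all $n!$ permutations, which is exactly the restricted-position setting the row refers to.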
A collection of ranked experiments identifies spectral subdivision $\tilde{C}$ if for any prior $\mu\in\operatorname{int}\Delta$ there exists a decision problem that satisfies the collection that induces $\tilde{C}$...
There exists a collection of ranked experiments and utility differences that identifies the value of information for the agent.
For any spectral subdivision $\tilde{C}$, there exists a collection of ranked experiments that identifies it.
By construction, for a decision problem whose value function induces subdivision $C$, at prior $\mu\in\operatorname{int}\Delta$, the DM is indifferent between any experiment $\pi$ possessing $\tilde{\pi}\in\tilde{C}_{i}$...
A collection of ranked experiments identifies spectral subdivision $\tilde{C}$ if for any prior $\mu\in\operatorname{int}\Delta$ there exists a decision problem that satisfies the collection that induces $\tilde{C}$...
B
If $\nu$ has a single preference in its support, we are done. Suppose otherwise that $n\geq 2$, so that $\succ_{1}$ and $\succ_{2}$ are in the support of $\nu$. Si...
It follows from Proposition 4.3 that the supports of SCRUM representations satisfy edge decomposability. Specifically, we know that every subset of a SCRUM support is itself the support of some SCRUM representation. It then follows that the lowest ranked preference in the support is the unique element of some $L(x,A)$...
Unlike previously, it is not immediately obvious that the supports of SCRUM representations are edge decomposable. In order to see that they are, note the following.
et al. (2017) and Turansick (2022). We show that every set of preferences which can be the support of some distribution over preferences satisfying the conditions of either Apesteguia
et al. (2017). SCRUM puts further structure on $X_{n}$ in that SCRUM assumes that $X_{n}$ is endowed with some exogenous linear order $\rhd$. We say that a random c...
A