diff --git "a/raw_rss_feeds/https___arxiv_org_rss_econ.xml" "b/raw_rss_feeds/https___arxiv_org_rss_econ.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_econ.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_econ.xml"
@@ -7,572 +7,392 @@
http://www.rssboard.org/rss-specification
en-us
- Tue, 09 Dec 2025 05:00:10 +0000
+ Wed, 10 Dec 2025 05:00:23 +0000
+ rss-help@arxiv.org
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ Saturday
+ Sunday
- Comparative risk attitude and the aggregation of single-crossing
- https://arxiv.org/abs/2512.06005
- arXiv:2512.06005v1 Announce Type: new
-Abstract: In choice under risk, there is a standard notion of 'less risk-averse than', due to Yaari (1969). In the theory of comparative statics, the single-crossing property is satisfied by all weighted averages of a family of single-crossing functions if and only if the family satisfies a property called signed-ratio monotonicity (Quah & Strulovici, 2012). We establish a close link between 'less risk-averse than' and signed-ratio monotonicity.
- oai:arXiv.org:2512.06005v1
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gregorio Curello, Ludvig Sinander, Mark Whitmeyer
-
-
- Wealth or Stealth? The Camouflage Effect in Insider Trading
- https://arxiv.org/abs/2512.06309
- arXiv:2512.06309v1 Announce Type: new
-Abstract: We consider a Kyle-type model where insider trading takes place among a potentially large population of liquidity traders and is subject to legal penalties. Insiders exploit the liquidity provided by the trading masses to "camouflage" their actions and balance expected wealth with the necessary stealth to avoid detection. Under a diverse spectrum of prosecution schemes, we establish the existence of equilibria for arbitrary population sizes and a unique limiting equilibrium. A convergence analysis determines the scale of insider trading by a stealth index $\gamma$, revealing that the equilibrium can be closely approximated by a simple limit due to diminished price informativeness. Empirical aspects are derived from two calibration experiments using non-overlapping data sets spanning from 1980 to 2018, which underline the indispensable role of a large population in insider trading models with legal risk, along with important implications for the incidence of stealth trading and the deterrent effect of legal enforcement.
- oai:arXiv.org:2512.06309v1
- econ.GN
- q-fin.EC
- q-fin.TR
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jin Ma, Weixuan Xia, Jianfeng Zhang
-
-
- Innovation, Spillovers and Economic Geography
- https://arxiv.org/abs/2512.06402
- arXiv:2512.06402v1 Announce Type: new
-Abstract: We develop a Schumpeterian quality-ladder spatial model in which innovation arrivals depend on regional knowledge spillovers. A parsimonious reduced-form diffusion mechanism induces the convergence of regions' average distance to the global frontier quality. As a result, regional differences in knowledge levels stem residually from asymmetries in the spatial distribution of researchers and firms. We analytically characterize the processes of innovation and knowledge diffusion. We then explore how the weight of intra-relative to inter-regional knowledge spillovers interacts with freer trade to shape the spatial distribution of economic activities. If intra-regional spillovers are relatively stronger, a higher economic integration leads to progressive agglomeration. If inter-regional spillovers dominate, researchers and firms may re-disperse after an initial phase of agglomeration as integration increases. This happens because firms and researchers have incentives to relocate to the smaller region, where they can leverage the concentrated knowledge base of the larger region while avoiding congestion in innovation. The smoothness of the dispersion process depends on the particular weight of intra-regional spillovers. If inter-regional spillovers become stronger as trade becomes freer, then the latter induces a monotone dispersion process. When integration is high enough, stable long-run equilibria always maximize the growth rate of the global frontier quality and the average distance to the frontier, irrespective of whether spillovers are mainly local or global.
- oai:arXiv.org:2512.06402v1
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- José M. Gaspar, Minoru Osawa
-
-
- AI as "Co-founder": GenAI for Entrepreneurship
- https://arxiv.org/abs/2512.06506
- arXiv:2512.06506v1 Announce Type: new
-Abstract: This paper studies whether, how, and for whom generative artificial intelligence (GenAI) facilitates firm creation. Our identification strategy exploits the November 2022 release of ChatGPT as a global shock that lowered start-up costs and leverages variations across geo-coded grids with differential pre-existing AI-specific human capital. Using high-resolution and universal data on Chinese firm registrations by the end of 2024, we find that grids with stronger AI-specific human capital experienced a sharp surge in new firm formation–driven entirely by small firms, contributing to 6.0% of overall national firm entry. Large-firm entry declines, consistent with a shift toward leaner ventures. New firms are smaller in capital, shareholder number, and founding team size, especially among small firms. The effects are strongest among firms with potential AI applications, weaker financing needs, and among first-time entrepreneurs. Overall, our results highlight that GenAI serves as a pro-competitive force by disproportionately boosting small-firm entry.
- oai:arXiv.org:2512.06506v1
+ An Examination of Bitcoin's Structural Shortcomings as Money: A Synthesis of Economic and Technical Critiques
+ https://arxiv.org/abs/2512.07840
+ arXiv:2512.07840v1 Announce Type: new
+Abstract: Since its inception, Bitcoin has been positioned as a revolutionary alternative to national currencies, attracting immense public and academic interest. This paper presents a critical evaluation of this claim, suggesting that Bitcoin faces significant structural barriers to qualifying as money. It synthesizes critiques from two distinct schools of economic thought - Post-Keynesianism and the Austrian School - and validates their conclusions with rigorous technical analysis. From a Post-Keynesian perspective, it is argued that Bitcoin does not function as money because it is not a debt-based IOU and fails to exhibit the essential properties required for a stable monetary asset (Vianna, 2021). Concurrently, from an Austrian viewpoint, it is shown to be inconsistent with a strict interpretation of Mises's Regression Theorem, as it lacks prior non-monetary value and has not achieved the status of the most saleable commodity (Peniaz and Kavaliou, 2024). These theoretical arguments are then supported by an empirical analysis of Bitcoin's extreme volatility, hard-coded scalability limits, fragile market structure, and insecure long-term economic design. The paper concludes that Bitcoin is more accurately characterized as a novel speculative asset whose primary legacy may be the technological innovation it has spurred, rather than its viability as a monetary standard.
+ oai:arXiv.org:2512.07840v1
+ econ.GN
- cs.AI
- q-fin.EC
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Junhui Jeff Cai, Xian Gu, Liugang Sheng, Mengjia Xia, Linda Zhao, Wu Zhu
+ Hamoon Soleimani
- Regulating a Monopolist without Subsidy
- https://arxiv.org/abs/2512.06525
- arXiv:2512.06525v1 Announce Type: new
-Abstract: We study monopoly regulation under asymmetric information about costs when subsidies are infeasible. A monopolist with privately known marginal cost serves a single product market and sets a price. The regulator maximizes a weighted welfare function using unit taxes as sole policy instrument. We identify a sufficient and necessary condition for when laissez-faire is optimal. When intervention is desired, we provide simple sufficient conditions under which the optimal policy is a progressive price cap: prices below a benchmark face no tax, while higher prices are taxed at increasing and potentially prohibitive rates. This policy combines delegation at low prices with taxation at high prices, balancing access, affordability, and profitability. Our results clarify when taxes act as complements to subsidies and when they serve only as imperfect substitutes, illuminating how feasible policy instruments shape optimal regulatory design.
- oai:arXiv.org:2512.06525v1
+ Coordinate-free utility theory
+ https://arxiv.org/abs/2512.07991
+ arXiv:2512.07991v1 Announce Type: new
+Abstract: Standard decision theory seeks conditions under which a preference relation can be compressed into a single real-valued function. However, when preferences are incomplete or intransitive, a single function fails to capture the agent's evaluative structure. Recent literature on multi-utility representations suggests that such preferences are better represented by families of functions. This paper provides a canonical and intrinsic geometric characterization of this family. We construct the \textit{ledger group} $U(P)$, a partially ordered group that faithfully encodes the native structure of the agent's preferences in terms of trade-offs. We show that the set of all admissible utility functions is precisely the \textit{dual cone} $U^*$ of this structure. This perspective shifts the focus of utility theory from the existence of a specific map to the geometry of the measurement space itself. We demonstrate the power of this framework by explicitly reconstructing the standard multi-attribute utility representation as the intersection of the abstract dual cone with a subspace of continuous functionals, and showing the impossibility of this for a set of lexicographic preferences.
+ oai:arXiv.org:2512.07991v1
+ econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiaming Wei, Dihan Zou
-
-
- Tournament-Based Performance Evaluation and Systematic Misallocation: Why Forced Ranking Systems Produce Random Outcomes
- https://arxiv.org/abs/2512.06583
- arXiv:2512.06583v1 Announce Type: new
-Abstract: Tournament-based compensation schemes with forced distributions represent a widely adopted class of relative performance evaluation mechanisms in technology and corporate environments. These systems mandate within-team ranking and fixed distributional requirements (e.g., bottom 15% terminated, top 15% promoted), ostensibly to resolve principal-agent problems through mandatory differentiation. We demonstrate through agent-based simulation that this mechanism produces systematic classification errors independent of implementation quality. With 994 engineers across 142 teams of 7, random team assignment yields 32% error in termination and promotion decisions, misclassifying employees purely through composition variance. Under realistic conditions reflecting differential managerial capability, error rates reach 53%, with false positives and false negatives each exceeding correct classifications. Cross-team calibration (often proposed as remedy) transforms evaluation into influence contests where persuasive managers secure promotions independent of merit. Multi-period dynamics produce adverse selection as employees observe random outcomes, driving risk-averse behavior and high-performer exit. The efficient solution (delegating judgment to managers with hierarchical accountability) cannot be formalized within the legal and coordination constraints that necessitated forced ranking. We conclude that this evaluation mechanism persists not through incentive alignment but through satisfying demands for demonstrable process despite producing outcomes indistinguishable from random allocation. This demonstrates how formalization intended to reduce agency costs structurally increases allocation error.
- oai:arXiv.org:2512.06583v1
- econ.GN
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Jeremy McEntire
+ Safal Raman Aryal
- Making Event Study Plots Honest: A Functional Data Approach to Causal Inference
- https://arxiv.org/abs/2512.06804
- arXiv:2512.06804v1 Announce Type: new
-Abstract: Event study plots are the centerpiece of Difference-in-Differences (DiD) analysis, but current plotting methods cannot provide honest causal inference when the parallel trends and/or no-anticipation assumptions fail. We introduce a novel functional data approach to DiD that directly enables honest causal inference via event study plots. Our DiD estimator converges to a Gaussian process in the Banach space of continuous functions, enabling fast and powerful simultaneous confidence bands. This theoretical contribution allows us to turn an event study plot into a rigorous honest causal inference tool through equivalence and relevance testing: Honest reference bands can be validated using equivalence testing in the pre-anticipation period, and honest causal effects can be tested using relevance testing in the post-treatment period. We demonstrate the performance of the method in simulations and two case studies.
- oai:arXiv.org:2512.06804v1
+ Branching Fixed Effects: A Proposal for Communicating Uncertainty
+ https://arxiv.org/abs/2512.08101
+ arXiv:2512.08101v1 Announce Type: new
+Abstract: Economists often rely on estimates of linear fixed effects models developed by other teams of researchers. Assessing the uncertainty in these estimates can be challenging. I propose a form of sample splitting for network data that breaks two-way fixed effects estimates into statistically independent branches, each of which provides an unbiased estimate of the parameters of interest. These branches facilitate uncertainty quantification, moment estimation, and shrinkage. Algorithms are developed for efficiently extracting branches from large datasets. I illustrate these techniques using a benchmark dataset from Veneto, Italy that has been widely used to study firm wage effects.
+ oai:arXiv.org:2512.08101v1
+ econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chencheng Fang, Dominik Liebl
-
-
- Effectiveness of Carbon Pricing and Compensation Instruments: An Umbrella Review of the Empirical Evidence
- https://arxiv.org/abs/2512.06887
- arXiv:2512.06887v1 Announce Type: new
-Abstract: The growing urgency of the climate crisis has driven the implementation of diverse policy instruments to mitigate greenhouse gas (GHG) emissions. Among them, carbon pricing mechanisms such as carbon taxes and emissions trading systems (ETS), together with voluntary carbon markets (VCM) and compensation programs such as REDD+, are central components of global decarbonization strategies. However, academic and political debate persists regarding their true effectiveness, equity, and integrity. This paper presents an umbrella review of the empirical evidence, synthesizing key findings from systematic reviews and meta-analyses to provide a consolidated picture of the state of knowledge. A rigorous methodology based on PRISMA guidelines is used for study selection, and the methodological quality of included reviews is assessed with AMSTAR-2, while the risk of bias in frequently cited primary studies is examined through ROBINS-I. Results indicate that carbon taxes and ETS have demonstrated moderate effectiveness in reducing emissions, with statistically significant but heterogeneous elasticities across geographies and sectors. Nonetheless, persistent design problems -- such as insufficient price levels and allowance overallocation -- limit their impact. By contrast, compensation markets, especially VCM and REDD+ projects, face systemic critiques regarding integrity, primarily related to additionality, permanence, leakage, and double counting, leading to generalized overestimation of their real climate impact. We conclude that while no instrument is a panacea, compliance-based carbon pricing mechanisms are necessary, though insufficient, tools that require stricter design and higher prices. Voluntary offset mechanisms, in their current state, do not represent a reliable climate solution and may undermine the integrity of climate targets unless they undergo fundamental reform.
- oai:arXiv.org:2512.06887v1
- econ.GN
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.AP
+ stat.CO
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Ricardo Alonzo Fernández Salguero
+ Patrick Kline
- Estimating Duration Dependence in Job Search: the Within-Estimation Duration Bias
- https://arxiv.org/abs/2512.06928
- arXiv:2512.06928v1 Announce Type: new
-Abstract: Many recent studies use individual longitudinal data to analyze job search behaviors. Such data allow the use of fixed-effects models, which supposedly address the issue of dynamic selection and make it possible to identify the structural effect of time. However, using fixed effects can induce a sizable within-estimation bias if job search outcomes take specific values at the time job seekers exit unemployment. This pattern creates an undesirable mechanical correlation between the error term and the time regressor. This paper derives the conditions under which the fixed-effects estimator provides valid estimates of structural duration-dependence relationships. Using Monte Carlo simulations, we show that the magnitude of the bias can be extremely large. Our results are not limited to the job search context but naturally extend to any framework in which longitudinal data are used to measure the structural effect of time.
- oai:arXiv.org:2512.06928v1
+ Robust Counterfactuals in Centralized Schools Choice Systems: Addressing Gender Inequality in STEM Education
+ https://arxiv.org/abs/2512.08115
+ arXiv:2512.08115v1 Announce Type: new
+Abstract: Counterfactual analysis is central to education market design and provides a foundation for credible policy recommendations. We develop a novel methodology for counterfactual analysis in Gale-Shapley deferred-acceptance (DA) assignment mechanisms under a weaker set of assumptions than those typically imposed in existing empirical works. Instead of fully specifying utility functions or students' beliefs about admission probabilities, we rely on interpretable restrictions on behavior that yield an incomplete but flexible model of preferences. This framework addresses the challenge of partial identification by delivering sharp bounds on counterfactual stable matching outcomes, which we compute efficiently using a combination of algorithmic techniques and integer programming. We illustrate the methodology by evaluating policies aimed at increasing female enrollment in STEM fields in Chile.
+ oai:arXiv.org:2512.08115v1
+ econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
+ econ.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by/4.0/
- Jeremy Zuchuat
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lixiong Li, Ismaël Mourifié
- Testing the Significance of the Difference-in-Differences Coefficient via Doubly Randomised Inference
- https://arxiv.org/abs/2512.06946
- arXiv:2512.06946v1 Announce Type: new
-Abstract: This article develops a significance test for the Difference-in-Differences (DiD) estimator based on doubly randomised inference, in which both the treatment and time indicators are permuted to generate an empirical null distribution of the DiD coefficient. Unlike classical $t$-tests or single-margin permutation procedures, the proposed method exploits a substantially enlarged randomization space. We formally characterise this expansion and show that dual randomization increases the number of admissible relabelings by a factor of $\binom{n}{n_T}$, yielding an exponentially richer permutation universe. This combinatorial gain implies a denser and more stable approximation of the null distribution, a result further justified through an information-theoretic (entropy) interpretation. The validity and finite-sample behaviour of the test are examined using multiple empirical datasets commonly analysed in applied economics, including the Indonesian school construction program (INPRES), brand search data, minimum wage reforms, and municipality-level refugee inflows in Greece. Across all settings, doubly randomised inference performs comparably to standard approaches while offering superior small-sample stability and sharper critical regions due to the enlarged permutation space. The proposed procedure therefore provides a robust, nonparametric alternative for assessing the statistical significance of DiD estimates, particularly in designs with limited group sizes or irregular assignment structures.
- oai:arXiv.org:2512.06946v1
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
+ Competition for being visited first and ordered search deterrence
+ https://arxiv.org/abs/2512.08136
+ arXiv:2512.08136v1 Announce Type: new
+Abstract: When customers must visit a seller to learn the valuation of its product, sellers potentially benefit from charging a lower price on the first visit and a higher price when a buyer returns. Armstrong and Zhou (2016) show that such price discrimination can arise in equilibrium when buyers learn a seller's pricing policy only upon visiting. We depart from this assumption by supposing that sellers commit to observable pricing policies that guide consumer search and buyers can choose whom to visit first. We show that no seller engages in price discrimination in equilibrium.
+ oai:arXiv.org:2512.08136v1
+ econ.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by/4.0/
- Stanisław Marek Sergiusz Halkiewicz, Andrzej Kałuża
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wojciech Olszewski, Yutong Zhang
- Wage Dispersion, On-the-Job Search, and Stochastic Match Productivity: A Mean Field Game Approach
- https://arxiv.org/abs/2512.07024
- arXiv:2512.07024v1 Announce Type: new
-Abstract: Wage dispersion and job-to-job mobility are central features of modern labour markets, yet canonical equilibrium search models with exogenous job-offer ladders struggle to jointly account for these facts and the magnitude of frictional wage inequality. We develop a continuous-time equilibrium search model in which match surplus follows a diffusion process, workers choose on-the-job search and separation, firms post state-contingent wages, and the cross-sectional distribution of match states endogenously determines both outside options and the job ladder. On the theoretical side, we formulate the problem as a stationary mean field game with a one-dimensional surplus state, characterize stationary mean field equilibria, and show that equilibrium separation is governed by a free-boundary rule: matches continue if and only if surplus stays above an endogenous threshold. Under standard regularity and Lasry-Lions monotonicity conditions we prove existence and uniqueness of stationary equilibrium and obtain comparative statics for the separation boundary, wage schedules, and wage dispersion. On the quantitative side, we solve the coupled HJB and Kolmogorov system using monotone finite-difference methods and interpret the discretization as a finite-state mean field game. The model is calibrated to micro evidence on stochastic match productivity, job durations, tenure-dependent separation hazards, wage growth, and job-to-job mobility. The stationary equilibrium delivers a structural decomposition of wage dispersion into stochastic selection along job spells, equilibrium on-the-job search and the induced job ladder, and equilibrium wage policies with feedback through outside options. We use this framework to quantify how firing costs, search subsidies, and changes in match-productivity volatility jointly shape mobility, the job ladder, and the cross-sectional distribution of wages.
- oai:arXiv.org:2512.07024v1
+ Robust procurement design
+ https://arxiv.org/abs/2512.08177
+ arXiv:2512.08177v1 Announce Type: new
+Abstract: We study procurement design when the buyer is uncertain about both the value of the good and the seller's cost. The buyer has a conjectured model but does not fully trust it. She first identifies mechanisms that maximize her worst-case payoff over a set of plausible models, and then selects one from this set that maximizes her expected payoff under the conjectured model. Robustness leads the buyer to increase procurement from the least efficient sellers and reduce it from those with intermediate costs. We also study monopoly regulation and identify conditions under which quantity regulation outperforms price regulation.
+ oai:arXiv.org:2512.08177v1
+ econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- I. Sebastian Buhai
-
-
- Limitations of Randomization Tests in Finite Samples
- https://arxiv.org/abs/2512.07099
- arXiv:2512.07099v1 Announce Type: new
-Abstract: Randomization tests yield exact finite-sample Type 1 error control when the null satisfies the randomization hypothesis. However, achieving these guarantees in practice often requires stronger conditions than the null hypothesis of primary interest. For instance, sign-change tests for mean zero require symmetry and fail to control finite-sample error for non-symmetric mean-zero distributions. We investigate whether such limitations stem from specific test choices or reflect a fundamental inability to construct valid randomization tests for certain hypotheses. We develop a framework providing a simple necessary and sufficient condition for when null hypotheses admit randomization tests. Applying this framework to one-sample tests, we provide characterizations of which nulls satisfy this condition for both finite and continuous supports. In doing so, we prove that certain null hypotheses -- including mean zero -- do not admit randomization tests. We further show that nulls that admit randomization tests based on linear group actions correspond only to subsets of symmetric or normal distributions. Overall, our findings affirm that practitioners are not inadvertently incurring additional Type 1 error when using existing tests and further motivate focusing on the asymptotic validity of randomization tests.
- oai:arXiv.org:2512.07099v1
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Deniz Dutz, Xinyi Zhang
+ Debasis Mishra, Sanket Patil, Alessandro Pavan
- Variational Regularized Bilevel Estimation for Exponential Random Graph Models
- https://arxiv.org/abs/2512.07176
- arXiv:2512.07176v1 Announce Type: new
-Abstract: I propose an estimation algorithm for Exponential Random Graph Models (ERGM), a popular statistical network model for estimating the structural parameters of strategic network formation in economics and finance. Existing methods often produce unreliable estimates of parameters for the triangle, a key network structure that captures the tendency of two individuals with friends in common to connect. Such unreliable estimates may lead to untrustworthy policy recommendations for networks with triangles. Through a variational mean-field approach, my algorithm addresses the two well-known difficulties when estimating the ERGM, the intractability of its normalizing constant and model degeneracy. In addition, I introduce $\ell_2$ regularization that ensures a unique solution to the mean-field approximation problem under suitable conditions. I provide a non-asymptotic optimization convergence rate analysis for my proposed algorithm under mild regularity conditions. Through Monte Carlo simulations, I demonstrate that my method achieves a perfect sign recovery rate for triangle parameters for small and mid-sized networks under perturbed initialization, compared to a 50% rate for existing algorithms. I provide the sensitivity analysis of estimates of ERGM parameters to hyperparameter choices, offering practical insights for implementation.
- oai:arXiv.org:2512.07176v1
+ Automatic Debiased Machine Learning of Structural Parameters with General Conditional Moments
+ https://arxiv.org/abs/2512.08423
+ arXiv:2512.08423v1 Announce Type: new
+Abstract: This paper proposes a method to automatically construct or estimate Neyman-orthogonal moments in general models defined by a finite number of conditional moment restrictions (CMRs), with possibly different conditioning variables and endogenous regressors. CMRs are allowed to depend on non-parametric components, which might be flexibly modeled using Machine Learning tools, and non-linearly on finite-dimensional parameters. The key step in this construction is the estimation of Orthogonal Instrumental Variables (OR-IVs) -- "residualized" functions of the conditioning variables, which are then combined to obtain a debiased moment. We argue that computing OR-IVs necessarily requires solving potentially complicated functional equations, which depend on unknown terms. However, by imposing an approximate sparsity condition, our method finds the solutions to those equations using a Lasso-type program and can then be implemented straightforwardly. Based on this, we introduce a GMM estimator of finite-dimensional parameters (structural parameters) in a two-step framework. We derive theoretical guarantees for our construction of OR-IVs and show $\sqrt{n}$-consistency and asymptotic normality for the estimator of the structural parameters. Our Monte Carlo experiments and an empirical application on estimating firm-level production functions highlight the importance of relying on inference methods like the one proposed.
+ oai:arXiv.org:2512.08423v1
+ econ.EM
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yoon Choi
-
-
- Analysing the factors affecting electric vehicle adoption using the extended theory of planned behaviour framework
- https://arxiv.org/abs/2512.07188
- arXiv:2512.07188v1 Announce Type: new
-Abstract: This study uses the Theory of Planned Behaviour (TPB) framework and expands it by including Optimism, Innovativeness and Range Anxiety constructs. In this study, conducted in Lucknow, the capital of India's most populous province (Uttar Pradesh), a multi stage random sampling design was employed to select 432 respondents from different city areas. The survey instruments were adapted from similar studies and suitably modified to suit the context. Using exploratory factor analysis, 18 measurement items converged into six factors, namely attitude, subjective norms, perceived behavioural control, optimism, innovativeness and range anxiety. We confirmed the reliability and validity of the constructs using Cronbach's alpha, composite reliability, average variance extracted and discriminant validity analysis. We explored the relationship between them using structural equation modelling. All factors but Optimism were found to be significantly associated with adoption intention. We further employed mediation analysis to examine the mediation pathways. The TPB components mediated the effect of innovativeness but not range anxiety. The study's insights can help policymakers and marketers design targeted interventions that address consumer concerns, reshape consumer perceptions, and foster greater EV adoption. The interventions can target increasing the mediating variables or decreasing range anxiety to facilitate a smoother transition to sustainable transportation.
- oai:arXiv.org:2512.07188v1
- econ.GN
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Pranshu Raghuvanshi (India Institute of Science, Bangalore, India), Anjula Gurtoo (India Institute of Science, Bangalore, India)
+ Facundo Argañaraz
- Rice Price Dynamics during the 1945--1947 Famine in Post-War Taiwan: A Quantitative Reassessment
- https://arxiv.org/abs/2512.07492
- arXiv:2512.07492v1 Announce Type: new
-Abstract: We compiled the first high-frequency rice price panel for Taiwan from August 1945 to March 1947, during the transition from Japanese rule to Chinese rule. Using regression models, we found that the pattern of rice price changes could be divided into four stages, each with distinct characteristics. For each stage, we relate rice prices to the policies formulated by the Taiwanese government at the time, demonstrating the correlation between prices and policy. The research results highlight the dominant role of policy systems in post-war food crises.
- oai:arXiv.org:2512.07492v1
+ When Medical AI Explanations Help and When They Harm
+ https://arxiv.org/abs/2512.08424
+ arXiv:2512.08424v1 Announce Type: new
+Abstract: We document a fundamental paradox in AI transparency: explanations improve decisions when algorithms are correct but systematically worsen them when algorithms err. In an experiment with 257 medical students making 3,855 diagnostic decisions, we find explanations increase accuracy by 6.3 percentage points when AI is correct (73% of cases) but decrease it by 4.9 points when incorrect (27% of cases). This asymmetry arises because modern AI systems generate equally persuasive explanations regardless of recommendation quality; physicians cannot distinguish helpful from misleading guidance. We show physicians treat explained AI as 15.2 percentage points more accurate than reality, with over-reliance persisting even for erroneous recommendations. Competent physicians with appropriate uncertainty suffer most from the AI transparency paradox (-12.4pp when AI errs), while overconfident novices benefit most (+9.9pp net). Welfare analysis reveals that selective transparency generates \$2.59 billion in annual healthcare value, 43% more than the \$1.82 billion from mandated universal transparency.
+ oai:arXiv.org:2512.08424v1
+ econ.GN
+ q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Huaide Chen, Hailiang Yang
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Manshu Khanna, Ziyi Wang, Lijia Wei, Lian Xue
- Sustainable Exploitation Equilibria for Dynamic Games
- https://arxiv.org/abs/2512.07629
- arXiv:2512.07629v1 Announce Type: new
-Abstract: We introduce the Sustainable Exploitation Equilibrium (SEE), a refinement of Markov Perfect Equilibrium (MPE) for dynamic games with an exploiter-exploitee structure. SEE imposes two additional discipline conditions: (i) viability, requiring state trajectories to remain inside a sustainability set; and (ii) renegotiation-proofness with exploiter-optimal selection, to retain only those viable equilibria that are immune to Pareto-improving renegotiations, with ties resolved in favor of the exploiter. In our base formulation the exploitee cannot exit the relationship (no outside option), but retains a strategic effort margin that affects dynamics and payoffs. We establish existence under appropriate conditions and illustrate SEE in a hegemon-client model of foreign politics, where tribute demands trade off against the client's governance effort.
- oai:arXiv.org:2512.07629v1
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Minimax and Bayes Optimal Adaptive Experimental Design for Treatment Choice
+ https://arxiv.org/abs/2512.08513
+ arXiv:2512.08513v1 Announce Type: new
+Abstract: We consider an adaptive experiment for treatment choice and design a minimax and Bayes optimal adaptive experiment with respect to regret. Given binary treatments, the experimenter's goal is to choose the treatment with the highest expected outcome through an adaptive experiment, in order to maximize welfare. We consider adaptive experiments that consist of two phases, the treatment allocation phase and the treatment choice phase. The experiment starts with the treatment allocation phase, where the experimenter allocates treatments to experimental subjects to gather observations. During this phase, the experimenter can adaptively update the allocation probabilities using the observations obtained in the experiment. After the allocation phase, the experimenter proceeds to the treatment choice phase, where one of the treatments is selected as the best. For this adaptive experimental procedure, we propose an adaptive experiment that splits the treatment allocation phase into two stages, where we first estimate the standard deviations and then allocate each treatment proportionally to its standard deviation. We show that this experiment, often referred to as Neyman allocation, is minimax and Bayes optimal in the sense that its regret upper bounds exactly match the lower bounds that we derive. To show this optimality, we derive minimax and Bayes lower bounds for the regret using change-of-measure arguments. Then, we evaluate the corresponding upper bounds using the central limit theorem and large deviation bounds.
+ oai:arXiv.org:2512.08513v1
+ econ.EM
+ cs.LG
+ math.ST
+ stat.ME
+ stat.ML
+ stat.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nicholas H. Kirk
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Masahiro Kato
- Bounds on inequality with incomplete data
- https://arxiv.org/abs/2512.07709
- arXiv:2512.07709v1 Announce Type: new
-Abstract: We develop a unified, nonparametric framework for sharp partial identification and inference on inequality indices when income or wealth are only coarsely observed -- for example via grouped tables or individual interval reports -- possibly together with linear restrictions such as known means or subgroup totals. First, for a broad class of Schur-convex inequality measures, we characterize extremal allocations and show that sharp bounds are attained by distributions with simple, finite support, reducing the underlying infinite-dimensional problem to finite-dimensional optimization. Second, for indices that admit linear-fractional representations after suitable ordering of the data (including the Gini coefficient, quantile ratios, and the Hoover index), we recast the bound problems as linear or quadratic programs, yielding fast computation of numerically sharp bounds. Third, we establish $\sqrt{n}$ inference for bound endpoints using a uniform directional delta method and a bootstrap procedure for standard errors. In ELSA wealth data with mixed point and interval observations, we obtain sharp Gini bounds of 0.714--0.792 for liquid savings and 0.686--0.767 for a broad savings measure; historical U.S. income tables deliver time-series bounds for the Gini, quantile ratios, and Hoover index under grouped information.
- oai:arXiv.org:2512.07709v1
+ Difference-in-Differences with Interval Data
+ https://arxiv.org/abs/2512.08759
+ arXiv:2512.08759v1 Announce Type: new
+Abstract: Difference-in-differences (DID) is one of the most popular tools used to evaluate causal effects of policy interventions. This paper extends the DID methodology to accommodate interval outcomes, which are often encountered in empirical studies using survey or administrative data. We point out that a naive application or extension of the conventional parallel trends assumption may yield uninformative or counterintuitive results, and present a suitable identification strategy, called parallel shifts, which exhibits desirable properties. Practical attractiveness of the proposed method is illustrated by revisiting an influential minimum wage study by Card and Krueger (1994).
+ oai:arXiv.org:2512.08759v1
+ econ.EM
- stat.CO
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- James Banks, Thomas Glinnan, Tatiana Komarova
+ Daisuke Kurisu, Yuta Okamoto, Taisuke Otsu
- Log-concave functions and transformations thereof
- https://arxiv.org/abs/2512.07768
- arXiv:2512.07768v1 Announce Type: new
-Abstract: I summarize Bagnoli and Bergstrom (2005)'s review on log-concave functions, make several corrections, and augment the discussion with further results that can be useful in obtaining monotone hazard rate. I also provide an application of monopoly pricing.
- oai:arXiv.org:2512.07768v1
+ Platform Competition with User-Generated Content
+ https://arxiv.org/abs/2512.08876
+ arXiv:2512.08876v1 Announce Type: new
+Abstract: This paper develops a theoretical model of platform competition where user-generated content (UGC) quality arises endogenously from the composition of the user base. Users differ in their relative preferences for content quality and network size, and platforms compete by choosing advertising intensity, which affects user utility through perceived quality. We characterize equilibrium platform choice, identifying conditions under which equilibria are stable. The model captures how platforms' strategic decisions shape user allocation and market outcomes, including coexistence and dominance scenarios. We consider two types of equilibria in advertising levels: Nash equilibria and Stackelberg equilibria, and discuss the industry and policy implications of our results.
+ oai:arXiv.org:2512.08876v1
+ econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Dihan Zou
-
-
- Optimal Auction Design under Costly Learning
- https://arxiv.org/abs/2512.07798
- arXiv:2512.07798v1 Announce Type: new
-Abstract: We study optimal auction design in an independent private values environment where bidders can endogenously -- but at a cost -- improve information about their own valuations. The optimal mechanism is two-stage: at stage-1 bidders register an information acquisition plan and pay a transfer; at stage-2 they bid, and allocation and payments are determined. We show that the revenue-optimal stage-2 rule is the Vickrey--Clarke--Groves (VCG) mechanism, while stage-1 transfers implement the optimal screening of types and absorb information rents consistent with incentive compatibility and participation. By committing to VCG ex post, the pre-auction information game becomes a potential game, so equilibrium information choices maximize expected welfare; the stage-1 fee schedule then transfers an optimal amount of payoff without conditioning on unverifiable cost scales. The design is robust to asymmetric primitives and accommodates a wide range of information technologies, providing a simple implementation that unifies efficiency and optimal revenue in environments with endogenous information acquisition.
- oai:arXiv.org:2512.07798v1
- econ.TH
- cs.GT
- cs.IT
- math.IT
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kemal Ozbek
+ Bohan Zhang
- Sell Data to AI Algorithms Without Revealing It: Secure Data Valuation and Sharing via Homomorphic Encryption
- https://arxiv.org/abs/2512.06033
- arXiv:2512.06033v1 Announce Type: cross
-Abstract: The rapid expansion of Artificial Intelligence is hindered by a fundamental friction in data markets: the value-privacy dilemma, where buyers cannot verify a dataset's utility without inspection, yet inspection may expose the data (Arrow's Information Paradox). We resolve this challenge by introducing the Trustworthy Influence Protocol (TIP), a privacy-preserving framework that enables prospective buyers to quantify the utility of external data without ever decrypting the raw assets. By integrating Homomorphic Encryption with gradient-based influence functions, our approach allows for the precise, blinded scoring of data points against a buyer's specific AI model. To ensure scalability for Large Language Models (LLMs), we employ low-rank gradient projections that reduce computational overhead while maintaining near-perfect fidelity to plaintext baselines, as demonstrated across BERT and GPT-2 architectures. Empirical simulations in healthcare and generative AI domains validate the framework's economic potential: we show that encrypted valuation signals achieve a high correlation with realized clinical utility and reveal a heavy-tailed distribution of data value in pre-training corpora where a minority of texts drive capability while the majority degrades it. These findings challenge prevailing flat-rate compensation models and offer a scalable technical foundation for a meritocratic, secure data economy.
- oai:arXiv.org:2512.06033v1
- cs.CR
+ Pattern Recognition of Ozone-Depleting Substance Exports in Global Trade Data
+ https://arxiv.org/abs/2512.07864
+ arXiv:2512.07864v1 Announce Type: cross
+Abstract: New methods are needed to monitor environmental treaties, like the Montreal Protocol, by reviewing large, complex customs datasets. This paper introduces a framework using unsupervised machine learning to systematically detect suspicious trade patterns and highlight activities for review. Our methodology, applied to 100,000 trade records, combines several ML techniques. Unsupervised Clustering (K-Means) discovers natural trade archetypes based on shipment value and weight. Anomaly Detection (Isolation Forest and IQR) identifies rare "mega-trades" and shipments with commercially unusual price-per-kilogram values. This is supplemented by Heuristic Flagging to find tactics like vague shipment descriptions. These layers are combined into a priority score, which successfully identified 1,351 price outliers and 1,288 high-priority shipments for customs review. A key finding is that high-priority commodities exhibit a distinctly higher value-to-weight ratio than general goods. This was validated using Explainable AI (SHAP), which confirmed vague descriptions and high value as the most significant risk predictors. The model's sensitivity was validated by its detection of a massive spike in "mega-trades" in early 2021, correlating directly with the real-world regulatory impact of the US AIM Act. This work presents a repeatable unsupervised learning pipeline to turn raw trade data into prioritized, usable intelligence for regulatory groups.
+ oai:arXiv.org:2512.07864v1
+ cs.LG
+ econ.EM
+ econ.GN
+ q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Michael Yang, Ruijiang Gao, Zhiqiang (Eric) Zheng
+ http://creativecommons.org/licenses/by/4.0/
+ Muhammad Sukri Bin Ramli
- PoliFi Tokens and the Trump Effect
- https://arxiv.org/abs/2512.06036
- arXiv:2512.06036v1 Announce Type: cross
-Abstract: Cryptoassets launched by political figures, e.g., political finance (PoliFi) tokens, have recently attracted attention. Chief among them are the eponymous tokens backed by the 47th president and first lady of the United States, TRUMP and MELANIA. We empirically analyze both, and study their impact on the broad decentralized finance (DeFi) ecosystem. Via a comparative longitudinal study, we uncover a "Trump Effect": the behavior of these tokens correlates positively with presidential approval ratings, whereas the same tight coupling does not extend to other cryptoassets and administrations. We additionally quantify the ecosystemic impact, finding that the fervor surrounding the two assets was accompanied by capital flows towards associated platforms like the Solana blockchain, which also enjoyed record volumes and fee expenditure.
- oai:arXiv.org:2512.06036v1
- physics.soc-ph
- econ.GN
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ LLM-Generated Counterfactual Stress Scenarios for Portfolio Risk Simulation via Hybrid Prompt-RAG Pipeline
+ https://arxiv.org/abs/2512.07867
+ arXiv:2512.07867v1 Announce Type: cross
+Abstract: We develop a transparent and fully auditable LLM-based pipeline for macro-financial stress testing, combining structured prompting with optional retrieval of country fundamentals and news. The system generates machine-readable macroeconomic scenarios for the G7, which cover GDP growth, inflation, and policy rates, and are translated into portfolio losses through a factor-based mapping that enables Value-at-Risk and Expected Shortfall assessment relative to classical econometric baselines. Across models, countries, and retrieval settings, the LLMs produce coherent and country-specific stress narratives, yielding stable tail-risk amplification with limited sensitivity to retrieval choices. Comprehensive plausibility checks, scenario diagnostics, and ANOVA-based variance decomposition show that risk variation is driven primarily by portfolio composition and prompt design rather than by the retrieval mechanism. The pipeline incorporates snapshotting, deterministic modes, and hash-verified artifacts to ensure reproducibility and auditability. Overall, the results demonstrate that LLM-generated macro scenarios, when paired with transparent structure and rigorous validation, can provide a scalable and interpretable complement to traditional stress-testing frameworks.
+ oai:arXiv.org:2512.07867v1
+ q-fin.RM
+ cs.AI
+ econ.EM
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://creativecommons.org/licenses/by/4.0/
- Ignacy Nieweglowski, Aviv Yaish, Fahad Saleh, Fan Zhang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Masoud Soleimani
- Market Reactions and Information Spillovers in Bank Mergers: A Multi-Method Analysis of the Japanese Banking Sector
- https://arxiv.org/abs/2512.06550
- arXiv:2512.06550v1 Announce Type: cross
-Abstract: Major bank mergers and acquisitions (M&A) transform the financial market structure, but their valuation and spillover effects remain open to question. This study examines the market reaction to two M&A events: the 2005 creation of Mitsubishi UFJ Financial Group following the Financial Big Bang in Japan, and the 2018 merger involving Resona Holdings after the global financial crisis. The multi-method analysis in this research combines several distinct methods to explore these M&A events. An event study using the market model, the capital asset pricing model (CAPM), and the Fama-French three-factor model is implemented to estimate cumulative abnormal returns (CAR) for valuation purposes. Vector autoregression (VAR) models are used to test for Granger causality and map dynamic effects using impulse response functions (IRFs) to investigate spillovers. Propensity score matching (PSM) helps provide a causal estimate of the average treatment effect on the treated (ATT). The analysis detected a significant positive market reaction to the mergers. The findings also suggest the presence of prolonged positive spillovers to other banks, which may indicate a synergistic effect among Japanese banks. Combining these methods provides a unique perspective on M&A events in the Japanese banking sector, offering valuable insights for investors, managers, and regulators concerned with market efficiency and systemic stability
- oai:arXiv.org:2512.06550v1
- q-fin.CP
- econ.EM
- q-fin.PM
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Does it take two to tango: Interaction between Credit Default Swaps and National Stock Indices
+ https://arxiv.org/abs/2512.07887
+ arXiv:2512.07887v1 Announce Type: cross
+Abstract: This paper investigates both the short- and long-run interaction between the BIST-100 index and CDS prices over January 2008 to May 2015 using the ARDL technique. The paper documents several findings. First, ARDL analysis shows that a 1 TL increase in CDS prices shrinks the BIST-100 index by 22.5 TL in the short run and 85.5 TL in the long run. Second, a 1000 TL increase in the BIST index causes 25 TL and 44 TL reductions in Turkey's CDS prices in the short and long run, respectively. Third, in the long run, a one-percentage-point increase in the interest rate shrinks the BIST index by 359 TL, and a one-percentage-point increase in the inflation rate raises CDS prices by 13.34 TL; in the short run, these impacts are limited to 231 TL and 5.73 TL, respectively. Fourth, a one-kurush increase in the TL/USD exchange rate leads to 24.5 TL (short-run) and 78 TL (long-run) reductions in BIST, while augmenting CDS prices by 2.5 TL (short-run) and 3 TL (long-run). Fifth, each negative political event decreases BIST by 237 TL in the short run and 538 TL in the long run, while increasing CDS prices by 33 TL in the short run and 89 TL in the long run. These findings reflect the highly dollar-indebted capital structure of Turkish firms and the strong sensitivity of financial markets to political uncertainty. Finally, the paper provides evidence that BIST and CDS prices, together with the control variables, do not drift too far apart, and converge to a long-run equilibrium at a moderate monthly speed.
+ oai:arXiv.org:2512.07887v1
+ q-fin.ST
+ econ.GN
+ q-fin.EC
+ q-fin.GN
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Haibo Wang, Takeshi Tsuyuguchi
+ Journal of Economics and Financial Analysis, 2018, 2(1), pp.129-149
+ Yhlas Sovbetov, Hami Saka
- Learning Paths to Multi-Sector Equilibrium: Belief Dynamics Under Uncertain Returns to Scale
- https://arxiv.org/abs/2512.07013
- arXiv:2512.07013v1 Announce Type: cross
-Abstract: This paper explores the dynamics of learning in a multi-sector general equilibrium model where firms operate under incomplete information about their production returns to scale. Firms iteratively update their beliefs using maximum a-posteriori estimation, derived from observed production outcomes, to refine their knowledge of their returns to scale. The implications of these learning dynamics for market equilibrium and the conditions under which firms can effectively learn their true returns to scale are the key objects of this study. Our results shed light on how idiosyncratic shocks influence the learning process and demonstrate that input decisions encode all pertinent information for belief updates. Additionally, we show that a long-memory (path-dependent) learning which keeps track of all past estimations ends up having a worse performance than a short-memory (path-independent) approach.
- oai:arXiv.org:2512.07013v1
+ The Theory of Strategic Evolution: Games with Endogenous Players and Strategic Replicators
+ https://arxiv.org/abs/2512.07901
+ arXiv:2512.07901v1 Announce Type: cross
+Abstract: This paper develops the Theory of Strategic Evolution, a general model for systems in which the population of players, strategies, and institutional rules evolve together. The theory extends replicator dynamics to settings with endogenous players, multi-level selection, innovation, constitutional change, and meta-governance. The central mathematical object is a Poiesis stack: a hierarchy of strategic layers linked by cross-level gain matrices. Under small-gain conditions, the system admits a global Lyapunov function and satisfies selection, tracking, and stochastic stability results at every finite depth. We prove that the class is closed under block extension, innovation events, heterogeneous utilities, continuous strategy spaces, and constitutional evolution. The closure theorem shows that no new dynamics arise at higher levels and that unrestricted self modification cannot preserve Lyapunov structure. The theory unifies results from evolutionary game theory, institutional design, innovation dynamics, and constitutional political economy, providing a general mathematical model of long-run strategic adaptation.
+ oai:arXiv.org:2512.07901v1
+ cs.GT
+ cs.AI
+ econ.TH
- math.OC
- math.PR
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Stefano Nasini, Rabia Nessah, Bertrand Wigniolle
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Kevin Vallier
- Finite-Sample Failures and Condition-Number Diagnostics in Double Machine Learning
- https://arxiv.org/abs/2512.07083
- arXiv:2512.07083v1 Announce Type: cross
-Abstract: Standard Double Machine Learning (DML; Chernozhukov et al., 2018) confidence intervals can exhibit substantial finite-sample coverage distortions when the underlying score equations are ill-conditioned, even if nuisance functions are estimated with state-of-the-art methods. Focusing on the partially linear regression (PLR) model, we show that a simple, easily computed condition number for the orthogonal score, denoted kappa_DML := 1 / |J_theta|, largely determines when DML inference is reliable. Our first result derives a nonasymptotic, Berry-Esseen-type bound showing that the coverage error of the usual DML t-statistic is of order n^{-1/2} + sqrt(n) * r_n, where r_n is the standard DML remainder term summarizing nuisance estimation error. Our second result provides a refined linearization in which both estimation error and confidence interval length scale as kappa_DML / sqrt(n) + kappa_DML * r_n, so that ill-conditioning directly inflates both variance and bias. These expansions yield three conditioning regimes - well-conditioned, moderately ill-conditioned, and severely ill-conditioned - and imply that informative, shrinking confidence sets require kappa_DML = o_p(sqrt(n)) and kappa_DML * r_n -> 0. We conduct Monte Carlo experiments across overlap levels, nuisance learners (OLS, Lasso, random forests), and both low- and high-dimensional (p > n) designs. Across these designs, kappa_DML is highly predictive of finite-sample performance: well-conditioned designs with kappa_DML < 1 deliver near-nominal coverage with short intervals, whereas severely ill-conditioned designs can exhibit large bias and coverage around 40% for nominal 95% intervals, despite flexible nuisance fitting. We propose reporting kappa_DML alongside DML estimates as a routine diagnostic of score conditioning, in direct analogy to condition-number checks and weak-instrument diagnostics in IV settings.
- oai:arXiv.org:2512.07083v1
- stat.ME
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
+ Cabin Layout, Seat Density, and Passenger Segmentation in Air Transport: Implications for Prices, Ancillary Revenues, and Efficiency
+ https://arxiv.org/abs/2512.08066
+ arXiv:2512.08066v1 Announce Type: cross
+Abstract: This study investigates how the layout and density of seats in aircraft cabins influence the pricing of airline tickets on domestic flights. The analysis is based on microdata from boarding passes linked to face-to-face interviews with passengers, allowing us to relate the price paid to the location on the aircraft seat map, as well as market characteristics and flight operations. Econometric models were estimated using the Post-Double-Selection LASSO (PDS-LASSO) procedure, which selects numerous controls for unobservable factors linked to commercial and operational aspects, thus enabling better identification of the effect of variables such as advance purchase, reason for travel, fuel price, market structure, and load factor, among others. The results suggest that a higher density of seat rows is associated with lower prices, reflecting economies of scale with the increase in aircraft size and gains in operational efficiency. An unexpected result was also obtained: in situations where there was no seat selection fee, passengers with more expensive tickets were often allocated middle seats due to purchasing at short notice, when the side alternatives were no longer available. This behavior helps explain the economic logic behind one of the main ancillary revenues of airlines. In addition to quantitative analysis, the study incorporates an exploratory approach to innovative cabin concepts and their possible effects on density and comfort on board.
+ oai:arXiv.org:2512.08066v1
+ eess.SY
+ cs.SY
+ econ.GN
+ q-fin.EC
+ stat.AP
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gabriel Saco
-
-
- The Suicide Region: Option Games and the Race to Artificial General Intelligence
- https://arxiv.org/abs/2512.07526
- arXiv:2512.07526v1 Announce Type: cross
-Abstract: Standard real options theory predicts delay in exercising the option to invest or deploy when extreme asset volatility or technological uncertainty are present. However, in the current race to develop artificial general intelligence (AGI), sovereign actors are exhibiting behaviors contrary to theoretical predictions: the US and China are accelerating AI investment despite acknowledging the potential for catastrophic failure from AGI misalignment. We resolve this puzzle by formalizing the AGI race as a continuous-time preemption game with endogenous existential risk. In our model, the cost of failure is no longer bounded only by the sunk cost of investment (I), but rather a systemic ruin parameter (D) that is correlated with development velocity and shared globally. As the disutility of catastrophe is embedded in both players' payoffs, the risk term mathematically cancels out of the equilibrium indifference condition. This creates a "suicide region" in the investment space where competitive pressures force rational agents to deploy AGI systems early, despite a negative risk-adjusted net present value. Furthermore, we show that "warning shots" (sub-existential disasters) will fail to deter AGI acceleration, as the winner-takes-all nature of the race remains intact. The race can only be halted if the cost of ruin is internalized, making safety research a prerequisite for economic viability. We derive the critical private liability threshold required to restore the option value of waiting and propose mechanism design interventions that can better ensure safe AGI research and socially responsible deployment.
- oai:arXiv.org:2512.07526v1
- q-fin.RM
+ http://creativecommons.org/licenses/by/4.0/
+ 10.5281/zenodo.17860616
+ Communications in Airline Economics Research, 202117818, 2025
+ Alessandro V. M. Oliveira, Moises D. Vassallo
+
+
+ Measuring Computer Science Enthusiasm: A Questionnaire-Based Analysis of Age and Gender Effects on Students' Interest
+ https://arxiv.org/abs/2512.08472
+ arXiv:2512.08472v1 Announce Type: cross
+Abstract: This study offers new insights into students' interest in computer science (CS) education by disentangling the distinct effects of age and gender across a diverse adolescent sample. Grounded in the person-object theory of interest (POI), we conceptualize enthusiasm as a short-term, activating expression of interest that combines positive affect, perceived relevance, and intention to re-engage. Experiencing such enthusiasm can temporarily shift CS attitudes and strengthen future engagement intentions, making it a valuable lens for evaluating brief outreach activities. To capture these dynamics, we developed a theoretically grounded questionnaire for pre-post assessment of the enthusiasm potential of CS interventions. Using data from more than 400 students participating in online CS courses, we examined age- and gender-related patterns in enthusiasm. The findings challenge the prevailing belief that early exposure is the primary pathway to sustained interest in CS. Instead, we identify a marked decline in enthusiasm during early adolescence, particularly among girls, alongside substantial variability in interest trajectories across age groups. Crucially, our analyses reveal that age is a more decisive factor than gender in shaping interest development and uncover key developmental breakpoints. Despite starting with lower baseline attitudes, older students showed the largest positive changes following the intervention, suggesting that well-designed short activities can effectively re-activate interest even at later ages. Overall, the study highlights the need for a dynamic, age-sensitive framework for CS education in which instructional strategies are aligned with developmental trajectories.
+ oai:arXiv.org:2512.08472v1
+ cs.SE
+ cs.CY
+ econ.GN
+ q-fin.EC
- q-fin.GN
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- David Tan
+ Kai Marquardt, Robert Hanak, Anne Koziolek, Lucia Happe
- The Adoption and Usage of AI Agents: Early Evidence from Perplexity
- https://arxiv.org/abs/2512.07828
- arXiv:2512.07828v1 Announce Type: cross
-Abstract: This paper presents the first large-scale field study of the adoption, usage intensity, and use cases of general-purpose AI agents operating in open-world web environments. Our analysis centers on Comet, an AI-powered browser developed by Perplexity, and its integrated agent, Comet Assistant. Drawing on hundreds of millions of anonymized user interactions, we address three fundamental questions: Who is using AI agents? How intensively are they using them? And what are they using them for? Our findings reveal substantial heterogeneity in adoption and usage across user segments. Earlier adopters, users in countries with higher GDP per capita and educational attainment, and individuals working in digital or knowledge-intensive sectors -- such as digital technology, academia, finance, marketing, and entrepreneurship -- are more likely to adopt or actively use the agent. To systematically characterize the substance of agent usage, we introduce a hierarchical agentic taxonomy that organizes use cases across three levels: topic, subtopic, and task. The two largest topics, Productivity & Workflow and Learning & Research, account for 57% of all agentic queries, while the two largest subtopics, Courses and Shopping for Goods, make up 22%. The top 10 out of 90 tasks represent 55% of queries. Personal use constitutes 55% of queries, while professional and educational contexts comprise 30% and 16%, respectively. In the short term, use cases exhibit strong stickiness, but over time users tend to shift toward more cognitively oriented topics. The diffusion of increasingly capable AI agents carries important implications for researchers, businesses, policymakers, and educators, inviting new lines of inquiry into this rapidly emerging class of AI capabilities.
- oai:arXiv.org:2512.07828v1
- cs.LG
- econ.GN
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Variance strikes back: sub-game--perfect Nash equilibria in time-inconsistent $N$-player games, and their mean-field sequel
+ https://arxiv.org/abs/2512.08745
+ arXiv:2512.08745v1 Announce Type: cross
+Abstract: We investigate a time-inconsistent, non-Markovian finite-player game in continuous time, where each player's objective functional depends non-linearly on the expected value of the state process. As a result, the classical Bellman optimality principle no longer applies. To address this, we adopt a two-layer game-theoretic framework and seek sub-game--perfect Nash equilibria both at the intra-personal level, which accounts for time inconsistency, and at the inter-personal level, which captures strategic interactions among players. We first characterise sub-game--perfect Nash equilibria and the corresponding value processes of all players through a system of coupled backward stochastic differential equations. We then analyse the mean-field counterpart and its sub-game--perfect mean-field equilibria, described by a system of McKean-Vlasov backward stochastic differential equations. Building on this representation, we finally prove the convergence of sub-game--perfect Nash equilibria and their corresponding value processes in the $N$-player game to their mean-field counterparts.
+ oai:arXiv.org:2512.08745v1
+ math.PR
+ econ.TH
+ math.OC
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jeremy Yang, Noah Yonack, Kate Zyskowski, Denis Yarats, Johnny Ho, Jerry Ma
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dylan Possama\"i, Chiara Rossato
- Optimal Investment, Consumption, and Insurance with Durable Goods under Stochastic Depreciation Risk
- https://arxiv.org/abs/1903.00631
- arXiv:1903.00631v2 Announce Type: replace
-Abstract: We study an infinite-horizon optimal investment, consumption and insurance problem for an economic agent who consumes a perishable and a durable good. The agent trades in a risk-free asset, a risky asset, and a durable good whose price follows a correlated diffusion, while the stock of the durable good depreciates deterministically and is subject to insurable Poisson loss shocks. The agent can partially hedge these shocks via an insurance contract with loading and chooses optimal perishable consumption, portfolio holdings, and insurance coverage to maximise expected discounted CRRA utility. Exploiting the homogeneity of the problem, we reduce the Hamilton--Jacobi--Bellman equation to a static one-dimensional optimisation over constant portfolio shares and derive a semi-explicit optimal strategy. We then prove a verification theorem for the associated jump-diffusion wealth process with insurance, establishing the existence and optimality of this constant-fraction strategy under explicit transversality conditions for both risk-aversion regimes $0<\gamma<1$ and $\gamma>1$. Numerical experiments illustrate the impact of stochastic depreciation risk and insurance loading on the optimal allocation to financial assets, durable goods, and insurance coverage.
- oai:arXiv.org:1903.00631v2
- econ.GN
- q-fin.CP
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Identifying Treatment and Spillover Effects Using Exposure Contrasts
+ https://arxiv.org/abs/2403.08183
+ arXiv:2403.08183v4 Announce Type: replace
+Abstract: To report spillover effects, a common practice is to regress outcomes on statistics summarizing neighbors' treatments. This paper studies nonparametric analogs of these estimands, which we refer to as exposure contrasts. We demonstrate that a contrast may have the opposite sign of the unit-level effects of interest even under unconfoundedness. We then provide interpretable conditions on interference and the assignment mechanism under which exposure contrasts can be represented as convex averages of the unit-level effects and therefore avoid sign reversals. These conditions encompass cluster-randomized trials, network experiments, and observational settings with peer effects in selection into treatment.
+ oai:arXiv.org:2403.08183v4
+ econ.EM
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aleksandar Arandjelovi\'c, Ryle S. Perera, Pavel V. Shevchenko, Tak Kuen Siu, Jin Sun
+ Michael P. Leung
- A Unified Framework for Estimation of High-dimensional Conditional Factor Models
- https://arxiv.org/abs/2209.00391
- arXiv:2209.00391v2 Announce Type: replace
-Abstract: This paper presents a general framework for estimating high-dimensional conditional latent factor models via constrained nuclear norm regularization. We establish large sample properties of the estimators and provide efficient algorithms for their computation. To improve practical applicability, we propose a cross-validation procedure for selecting the regularization parameter. Our framework unifies the estimation of various conditional factor models, enabling the derivation of new asymptotic results while addressing limitations of existing methods, which are often model-specific or restrictive. Empirical analyses of the cross section of individual US stock returns suggest that imposing homogeneity improves the model's out-of-sample predictability, with our new method outperforming existing alternatives.
- oai:arXiv.org:2209.00391v2
+ A quantile-based nonadditive fixed effects model
+ https://arxiv.org/abs/2405.03826
+ arXiv:2405.03826v2 Announce Type: replace
+Abstract: I propose a quantile-based nonadditive fixed effects panel model to study heterogeneous causal effects. Similar to the standard fixed effects (FE) model, my model allows arbitrary dependence between regressors and unobserved heterogeneity, but it generalizes the additive separability of standard FE to allow the unobserved heterogeneity to enter nonseparably. Similar to structural quantile models, my model's random coefficient vector depends on an unobserved, scalar ''rank'' variable, in which outcomes (excluding an additive noise term) are monotonic at a particular value of the regressor vector, which is much weaker than the conventional monotonicity assumption that must hold at all possible values. This rank is assumed to be stable over time, which is often more economically plausible than the assumption, common in panel quantile studies, that individual rank is iid over time. It uncovers the heterogeneous causal effects as functions of the rank variable. I provide identification and estimation results, establishing uniform consistency and uniform asymptotic normality of the heterogeneous causal effect function estimator. Simulations show reasonable finite-sample performance and show my model complements fixed effects quantile regression. Finally, I illustrate the proposed methods by examining the causal effect of a country's oil wealth on its military defense spending.
+ oai:arXiv.org:2405.03826v2
+ econ.EM
- stat.AP
- stat.ME
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qihui Chen
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xin Liu
- Generative AI and Copyright: A Dynamic Perspective
- https://arxiv.org/abs/2402.17801
- arXiv:2402.17801v2 Announce Type: replace
-Abstract: The rapid advancement of generative AI is poised to disrupt the creative industry. Amidst the immense excitement for this new technology, its future development and applications in the creative industry hinge crucially upon two copyright issues: 1) the compensation to creators whose content has been used to train generative AI models (the fair use standard); and 2) the eligibility of AI-generated content for copyright protection (AI-copyrightability). While both issues have ignited heated debates among academics and practitioners, most analysis has focused on their challenges posed to existing copyright doctrines. In this paper, we aim to better understand the economic implications of these two regulatory issues and their interactions. By constructing a dynamic model with endogenous content creation and AI model development, we unravel the impacts of the fair use standard and AI-copyrightability on AI development, AI company profit, creators income, and consumer welfare, and how these impacts are influenced by various economic and operational factors. For example, while generous fair use (use data for AI training without compensating the creator) benefits all parties when abundant training data exists, it can hurt creators and consumers when such data is scarce. Similarly, stronger AI-copyrightability (AI content enjoys more copyright protection) could hinder AI development and reduce social welfare. Our analysis also highlights the complex interplay between these two copyright issues. For instance, when existing training data is scarce, generous fair use may be preferred only when AI-copyrightability is weak. Our findings underscore the need for policymakers to embrace a dynamic, context-specific approach in making regulatory decisions and provide insights for business leaders navigating the complexities of the global regulatory environment.
- oai:arXiv.org:2402.17801v2
- econ.TH
- cs.AI
- Tue, 09 Dec 2025 00:00:00 -0500
+ Endogenous Heteroskedasticity in Linear Models
+ https://arxiv.org/abs/2412.02767
+ arXiv:2412.02767v4 Announce Type: replace
+Abstract: Linear regressions with endogeneity are widely used to estimate causal effects. This paper studies a framework that involves two common practical issues: endogeneity of the regressors and heteroskedasticity that depends on endogenous regressors, i.e., endogenous heteroskedasticity. To address the inconsistency of the two-stage least squares estimator in this scenario, and recover the causal parameters of interest, we develop a framework for practical estimation and inference based on the control function approach allowing for discrete and continuous regressors. In particular, we suggest a simple two-step estimation procedure. We establish the limiting properties of the estimator, namely, consistency and asymptotic normality. In addition, we develop practical valid inference methods by proposing an estimator for the asymptotic variance-covariance matrix, and formally establishing its consistency. Monte Carlo simulations provide evidence on the finite-sample performance of the proposed methods and evaluate different implementation strategies. We revisit an empirical application on job training to illustrate the methods.
+ oai:arXiv.org:2412.02767v4
+ econ.EM
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- S. Alex Yang, Angela Huyue Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Javier Alejo, Antonio F. Galvao, Julian Martinez-Iriarte, Gabriel Montes-Rojas
- Characterization of Priority-Neutral Matching Lattices
- https://arxiv.org/abs/2404.02142
- arXiv:2404.02142v2 Announce Type: replace
-Abstract: We study the structure of the set of priority-neutral matchings. These matchings, introduced by Reny (AER, 2022), generalize stable matchings by allowing for priority violations in a principled way that enables Pareto-improvements to stable matchings. Known results show that the set of priority-neutral matchings is a lattice, suggesting that these matchings may enjoy the same tractable theoretical structure as stable matchings.
- In this paper, we characterize priority-neutral matching lattices, and show that their structure is considerably more intricate than that of stable matching lattices. To begin, we show priority-neutral lattices are not distributive, an important property that characterizes stable lattices and is satisfied by many other lattice structures considered in matching theory and algorithm design. Then, in our main result, we show that priority-neutral lattices are in fact characterized by a more-involved property which we term being a "movement lattice," which allows for significant departures from the order theoretic properties of distributive (and hence stable) lattices. While our results show that priority-neutrality is more intricate than stability, they also establish tractable properties. Indeed, as a corollary of our main result, we obtain the first known polynomial-time algorithm for checking whether a given matching is priority-neutral.
- oai:arXiv.org:2404.02142v2
+ Analysis of the Order Flow Auction under Proposer-Builder Separation on Blockchain
+ https://arxiv.org/abs/2502.12026
+ arXiv:2502.12026v2 Announce Type: replace
+Abstract: We study the impact of the order flow auction (OFA) in the context of the proposer-builder separation (PBS) mechanism in blockchains through a game-theoretic perspective. The OFA is designed to improve user welfare by redistributing maximal extractable value (MEV) to the users, in which two sequential auctions take place: the order flow auction and the block-building auction. We formulate the OFA as a multiplayer game, establish the existence of a Nash equilibrium, and in the two-player case derive a closed-form solution (and prove its uniqueness) via a quartic equation. Our result shows that the builder with a competitive advantage pays a lower cost, leading to a higher revenue, and adding to centralization in the builder space. In contrast, the proposer's shares evolve as a martingale process, which implies decentralization in the proposer/validator space. Our analyses rely on various tools from stochastic processes, convex optimization, and polynomial equations. We also conduct numerical studies to corroborate our findings, and to bring out other features of the OFA under the PBS mechanism.
+ oai:arXiv.org:2502.12026v2
+ econ.TH
- cs.GT
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Clayton Thomas
+ Ruofei Ma, Wenpin Tang, David Yao
- Algorithmic Collusion under Observed Demand Shocks
- https://arxiv.org/abs/2502.15084
- arXiv:2502.15084v4 Announce Type: replace
-Abstract: This paper examines how the observability of demand shocks influences pricing patterns and market outcomes when firms delegate pricing decisions to Q-learning algorithms. Simulations show that demand observability induces Q-learning agents to adapt prices to demand fluctuations, giving rise to distinctive demand-contingent pricing patterns across the discount factor $\delta$, consistent with Rotemberg and Saloner (1986). When $\delta$ is high, they learn procyclical pricing, charging higher prices in higher demand states. In contrast, at low $\delta$, they lower prices during booms and raise them during downturns, exhibiting countercyclical pricing. Q-learning agents also autonomously sustain supracompetitive profits, indicating that demand observability does not hinder algorithmic collusion. I further explore how the information available to algorithms shapes their learned pricing behavior. Overall, the results suggest that, through pure trial and error, Q-learning algorithms internalize both the stronger deviation incentives during booms and the trade-off between short-term gains and long-term continuation values governed by the discount factor, thereby reproducing the cyclicality of pricing patterns predicted by collusion theory.
- oai:arXiv.org:2502.15084v4
+ Using firm-level supply chain networks to measure the speed of the energy transition
+ https://arxiv.org/abs/2503.01572
+ arXiv:2503.01572v2 Announce Type: replace
+Abstract: While many national and international climate policies clearly outline decarbonization targets and the timelines for achieving them, there is a notable lack of effort to objectively monitor progress. A significant share of the transition from fossil fuels to low-carbon energy will be borne by industry and the economy, requiring both the decarbonization of the electricity sector and the electrification of industrial processes. But how quickly are firms adopting low-carbon electricity? Using a unique dataset on Hungary's national supply chain network, we analyze the energy portfolios of 25,000 firms, covering more than 75% of gas, 70% of electricity, and 50% of oil consumption between 2020 and 2024. This enables us to objectively measure the trends of decarbonization efforts at the firm level. Although almost half of firms have increased their share of low-carbon electricity, more than half have reduced it. Extrapolating the observed trends, we find a transition of only 20% of total energy consumption to low-carbon electricity by 2050. The current speed of transition in the economy is not sufficient to reach climate neutrality by 2050. However, if firms adopted the same level of effort as the decarbonization frontrunners in their industry, a low-carbon share of up to 70% could be reached, putting climate targets within reach. We examine several firm characteristics that differentiate transitioning from non-transitioning firms. Our results are consistent with a 'lock-in' effect, whereby firms with a high share of fossil fuel costs relative to revenue are less likely to transition.
+ oai:arXiv.org:2503.01572v2
+ econ.GN
+ q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zexin Ye
+ Johannes Stangl, Andr\'as Borsos, Stefan Thurner
- Quasi-Bayesian Local Projections: Simultaneous Inference and Extension to the Instrumental Variable Method
- https://arxiv.org/abs/2503.20249
- arXiv:2503.20249v3 Announce Type: replace
-Abstract: Local projections (LPs) are widely used for impulse response analysis, but Bayesian methods face challenges due to the absence of a likelihood function. Existing approaches rely on pseudo-likelihoods, which often result in poorly calibrated posteriors. We propose a quasi-Bayesian method based on the Laplace-type estimator, where a quasi-likelihood is constructed using a generalized method of moments criterion. This approach avoids strict distributional assumptions, ensures well-calibrated inferences, and supports simultaneous credible bands. Additionally, it can be naturally extended to the instrumental variable method. We validate our approach through Monte Carlo simulations.
- oai:arXiv.org:2503.20249v3
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Masahiro Tanaka
-
-
- Attention vs Choice in Welfare Take-Up: What Works for WIC?
- https://arxiv.org/abs/2506.03457
- arXiv:2506.03457v5 Announce Type: replace
-Abstract: Incomplete take-up of welfare benefits remains a major policy puzzle. This paper decomposes the causes of incomplete welfare take-up into two mechanisms: inattention, where households do not consider program participation, and active choice, where households consider participation but find it not worthwhile. To capture these two mechanisms, we model households' take-up decision as a two-stage process: attention followed by choice. Applied to NLSY97 data on the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), our model reveals substantial household-level heterogeneity in both attention and choice probabilities. Furthermore, counterfactual simulations predict that choice-nudging policies outperform attention-boosting policies. We test this prediction using data from the WIC2Five pilot program that sent choice-nudging and attention-boosting text messages to different households. Consistent with the counterfactual prediction, choice-nudging messages increased retention much more effectively than attention-boosting messages.
- oai:arXiv.org:2506.03457v5
+ The Economic Impact of Low- and High-Frequency Temperature Changes
+ https://arxiv.org/abs/2505.08950
+ arXiv:2505.08950v2 Announce Type: replace
+Abstract: Temperature variations at different frequencies may have distinct impacts on economic outcomes. We first explore ways to estimate the low- and high-frequency components in a U.S. panel of 48 states. All methods suggest slowly evolving low-frequency components of temperature at the state level, and that they share a common factor which covaries with the low-frequency component of economic activity. While we fail to find a statistically significant impact of low-frequency temperature changes on U.S. growth, an international panel of 50 countries suggests that a 1{\deg}C increase in the low-frequency component will reduce economic growth by about one percent in the long run. The linear effect of the high-frequency component is not well determined in all panels, but there is evidence of a non-linear effect in the international panel. The findings are corroborated by time series estimation using data at the unit and national levels. Our empirical work pays attention to distortions that may arise from using one-way clustered errors for inference, and to the possible inadequacy of the additive fixed effect specification in controlling for common time effects.
+ oai:arXiv.org:2505.08950v2
+ econ.GN
+ q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Lei Bill Wang, Sooa Ahn
+ Nikolay Gospodinov, Ignacio Lopez Gaffney, Serena Ng
- The AI Productivity Index (APEX)
- https://arxiv.org/abs/2509.25721
- arXiv:2509.25721v5 Announce Type: replace
-Abstract: We present an extended version of the AI Productivity Index (APEX-v1-extended), a benchmark for assessing whether frontier models are capable of performing economically valuable tasks in four jobs: investment banking associate, management consultant, big law associate, and primary care physician (MD). This technical report details the extensions to APEX-v1, including an increase in the held-out evaluation set from n = 50 to n = 100 cases per job (n = 400 total) and updates to the grading methodology. We present a new leaderboard, where GPT5 (Thinking = High) remains the top performing model with a score of 67.0%. APEX-v1-extended shows that frontier models still have substantial limitations when performing typical professional tasks. To support further research, we are open sourcing n = 25 non-benchmark example cases per role (n = 100 total) along with our evaluation harness.
- oai:arXiv.org:2509.25721v5
- econ.GN
- cs.AI
- cs.CL
- cs.HC
- q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
+ A Generalized Control Function Approach to Production Function Estimation
+ https://arxiv.org/abs/2511.21578
+ arXiv:2511.21578v3 Announce Type: replace
+Abstract: We develop a generalized control function approach to production function estimation. Our approach accommodates settings in which productivity evolves jointly with other unobservable factors such as latent demand shocks and the invertibility assumption underpinning the traditional proxy variable approach fails. We provide conditions under which the output elasticity of the variable input -- and hence the markup -- is nonparametrically point-identified. A Neyman orthogonal moment condition ensures oracle efficiency of our GMM estimator. A Monte Carlo exercise shows a large bias for the traditional approach that decreases rapidly and nearly vanishes for our generalized control function approach.
+ oai:arXiv.org:2511.21578v3
+ econ.EM
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/licenses/by/4.0/
- Bertie Vidgen, Abby Fennelly, Evan Pinnix, Julien Benchek, Daniyal Khan, Zach Richards, Austin Bridges, Calix Huang, Ben Hunsberger, Isaac Robinson, Akul Datta, Chirag Mahapatra, Dominic Barton, Cass R. Sunstein, Eric Topol, Brendan Foody, Osvald Nitski
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ulrich Doraszelski, Lixiong Li
- Gas supply shocks, uncertainty and price setting: evidence from Italian firms
- https://arxiv.org/abs/2510.03792
- arXiv:2510.03792v4 Announce Type: replace
-Abstract: This paper examines how natural gas supply shocks affect Italian firms' pricing decisions and inflation expectations using quarterly survey data from the Bank of Italy's Survey on Inflation and Growth Expectations (SIGE). We identify natural gas supply shocks through an external IV-VAR approach exploiting likely unexpected news about interruption to gas supplies to Europe. Our findings show that although gas supply shocks do not have huge effects on gas quantity and only modest effect on gas inventories, they are quickly transmitted to spot electricity prices with persistent effects. We then estimate a proxy internalizing BVAR incorporating firm-level variables from SIGE, documenting that gas supply shocks raise firms' current and expected prices as well as inflation uncertainty. Finally, we uncover substantial nonlinearities using state-dependent local projections: under high inflation uncertainty, firms successfully pass cost increases on to consumers, sustaining elevated prices; under low uncertainty, recessionary effects dominate, leading firms to cut prices below baseline.
- oai:arXiv.org:2510.03792v4
+ Assessing the potential impact of environmental land management schemes on emergent infectious disease risks
+ https://arxiv.org/abs/2311.07735
+ arXiv:2311.07735v2 Announce Type: replace-cross
+Abstract: Financial incentives encourage the plantation of new woodland to increase habitat, biodiversity, and carbon sequestration, as a contribution to meeting climate change and biodiversity conservation targets. Whilst these are largely positive effects, it is worth considering that this expansion of woodland can lead to increased presence of wildlife species in proximity to agricultural holdings that may pose an enhanced risk of disease transmission between wildlife and livestock. The role of wildlife as a reservoir for infectious disease is particularly important in the transmission dynamics of bovine tuberculosis, the case studied here.
+ In this paper we develop an economic model for predicting changes in land use resulting from subsidies for woodland planting. We use this to assess the consequent impact on wild deer populations in the newly created woodland areas, and thus the emergent infectious disease risk arising from the proximity of new and existing wild deer populations and existing cattle holdings.
+ We consider an area in the South-West of Scotland, having existing woodland, deer populations, and extensive and diverse cattle farm holdings. In this area we find that, with a varying level of subsidy and plausible new woodland creation scenarios, the contact risk between areas of wild deer and cattle increases by between 26% and 35% over the risk present with a zero subsidy.
+ This provides a foundation for extending to larger regions and for examining potential risk mitigation strategies, for example the targeting of subsidy in low disease risk areas, or provisioning for buffer zones between woodland and agricultural holdings.
+ oai:arXiv.org:2311.07735v2
+ q-bio.PE
+ econ.GN
+ q-fin.EC
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Giuseppe Pagano Giorgianni
-
-
- Efficiency in Games with Incomplete Information
- https://arxiv.org/abs/2510.12508
- arXiv:2510.12508v2 Announce Type: replace
-Abstract: We study games with incomplete information and characterize when a feasible outcome is Pareto efficient. Outcomes with excessive randomization are inefficient: generically, the total number of action profiles across states must be strictly less than the sum of the number of players and the number of states. We consider three applications. A cheap talk outcome is efficient only if pure; with state-independent sender payoffs, it is efficient if and only if the sender's most preferred action is induced with certainty. In natural settings, Bayesian persuasion outcomes are inefficient across many priors. Finally, ranking-based allocation mechanisms are inefficient under mild conditions.
- oai:arXiv.org:2510.12508v2
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Itai Arieli, Yakov Babichenko, Atulya Jain, Rann Smorodinsky
-
-
- Dynamic Mediation and Moral Hazard: From Private To Public Communication
- https://arxiv.org/abs/2511.02436
- arXiv:2511.02436v4 Announce Type: replace
-Abstract: I characterize optimal mediation dynamics with fixed discounting in a moral hazard model where a long-lived worker interacts with short-lived clients. I show that optimal mediation yields a nonstationary correlated information structure that transitions from private to public communication over time. In early periods, it occasionally creates information asymmetry about future play between the worker and the clients by randomizing over two continuations, with the realization privately revealed to the worker. In one, the worker shirks with impunity. In the other, the worker exerts effort subject to minimal punishment for underperformance. Eventually, optimal mediation prescribes only public communication that induces carrot-and-stick incentives.
- oai:arXiv.org:2511.02436v4
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Allen Vong
-
-
- Assessing Financial Statement Risks among $\mathrm{MCDM}$ Techniques
- https://arxiv.org/abs/2512.04035
- arXiv:2512.04035v2 Announce Type: replace
-Abstract: In this paper, to determine the financial risks faced by an industrial company, assessing the relative importance of these risks and identifying the years most exposed to financial risk using modern multi-criteria decision-making techniques. Applied to AL-Ahliah Vegetable Oil Company, the research utilizes the Analytical Hierarchy Process and Simple Additive Weighting to analyze financial ratios from 2008 to 2017.
- oai:arXiv.org:2512.04035v2
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Marwa Abdullah, Revzon Oksana Anatolyevna, Duaa Abdullah
-
-
- Contest Design with Threshold Objectives
- https://arxiv.org/abs/2109.03179
- arXiv:2109.03179v3 Announce Type: replace-cross
-Abstract: We study contests where the designer's objective is an extension of the widely studied objective of maximizing the total output: The designer gets zero marginal utility from a player's output if the output of the player is very low or very high. We consider two variants of this setting, which correspond to two objective functions: binary threshold, where the designer's utility is a non-decreasing function of the number of players with output above a certain threshold; and linear threshold, where a player's contribution to the designer's utility is linear in her output if the output is between a lower and an upper threshold, and becomes constant below the lower and above the upper threshold. For both of these objectives, we study rank-order allocation contests and general contests. We characterize the contests that maximize the designer's objective and indicate techniques to efficiently compute them.
- oai:arXiv.org:2109.03179v3
- cs.GT
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Edith Elkind, Abheek Ghosh, Paul W. Goldberg
+ http://creativecommons.org/licenses/by/4.0/
+ Christopher J. Banks, Katherine Simpson, Nicholas Hanley, Rowland R. Kao
- Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
- https://arxiv.org/abs/2506.04478
- arXiv:2506.04478v2 Announce Type: replace-cross
-Abstract: The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and structured algorithms in combinatorial domains remains underexplored. We study matching markets, a core framework behind applications like resource allocation and ride-sharing, which require reconciling individual ranked preferences to ensure stable outcomes. We evaluate several state-of-the-art models on a hierarchy of preference-based reasoning tasks -- ranging from stable-matching generation to instability detection, instability resolution, and fine-grained preference queries -- to systematically expose their logical and algorithmic limitations in handling ranked inputs. Surprisingly, even top-performing models with advanced reasoning struggle to resolve instability in large markets, often failing to identify blocking pairs or execute algorithms iteratively. We further show that parameter-efficient fine-tuning (LoRA) significantly improves performance in small markets, but fails to bring about a similar improvement on large instances, suggesting the need for more sophisticated strategies to improve LLMs' reasoning with larger-context inputs.
- oai:arXiv.org:2506.04478v2
+ Left Leaning Models: How AI Evaluates Economic Policy?
+ https://arxiv.org/abs/2507.15771
+ arXiv:2507.15771v2 Announce Type: replace-cross
+Abstract: Would artificial intelligence (AI) cut interest rates or adopt conservative monetary policy? Would it deregulate or opt for a more controlled economy? As AI use by economic policymakers, academics, and market participants grows exponentially, it is becoming critical to understand AI preferences over economic policy. However, these preferences are not yet systematically evaluated and remain a black box. This paper conducts a conjoint experiment on leading large language models (LLMs) from OpenAI, Anthropic, and Google, asking them to evaluate economic policy under multi-factor constraints. The results are remarkably consistent across models: most LLMs exhibit a strong preference for high growth, low unemployment, and low inequality over traditional macroeconomic concerns such as low inflation and low public debt. Scenario-specific experiments show that LLMs are sensitive to context but still display strong preferences for low unemployment and low inequality even in monetary-policy settings. Numerical sensitivity tests reveal intuitive responses to quantitative changes but also uncover non-linear patterns such as loss aversion.
+ oai:arXiv.org:2507.15771v2
+ cs.CY
cs.AI
- cs.GT
- econ.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hadi Hosseini, Samarth Khanna, Ronak Singh
-
-
- Can language models boost the power of randomized experiments without statistical bias?
- https://arxiv.org/abs/2510.05545
- arXiv:2510.05545v2 Announce Type: replace-cross
-Abstract: Randomized experiments or randomized controlled trials (RCTs) are gold standards for causal inference, yet cost and sample-size constraints limit power. We introduce CALM (Causal Analysis leveraging Language Models), a statistical framework that integrates insights about RCTs generated by large language models (LLMs) with established causal estimators to increase precision while preserving statistical validity. In particular, CALM treats LLM-generated outputs as auxiliary prognostic information and corrects their potential bias via a heterogeneous calibration step that residualizes and optimally reweights predictions. We prove that CALM remains consistent even when LLM predictions are biased and achieves efficiency gains over augmented inverse probability weighting estimators for various causal effects. In addition, CALM develops a few-shot variant that aggregates predictions across randomly sampled demonstration sets. The resulting U-statistic-like predictor restores i.i.d. structure and also mitigates prompt-selection variability. Empirically, in simulations calibrated to a mobile-app depression RCT, CALM delivers lower variance than other benchmark methods, is effective in zero- and few-shot settings, and remains stable across prompt designs. By principled use of LLMs to harness unstructured data and external knowledge learned during pretraining, CALM provides a practical path to more precise causal analyses in RCTs.
- oai:arXiv.org:2510.05545v2
- stat.ME
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
+ econ.GN
+ q-fin.EC
+ Wed, 10 Dec 2025 00:00:00 -0500
replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinrui Ruan, Xinwei Ma, Yingfei Wang, Waverly Wei, Jingshen Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Maxim Chupilkin
- Computing Evolutionarily Stable Strategies in Multiplayer Games
- https://arxiv.org/abs/2511.20859
- arXiv:2511.20859v3 Announce Type: replace-cross
-Abstract: We present an algorithm for computing all evolutionarily stable strategies in nondegenerate normal-form games with three or more players.
- oai:arXiv.org:2511.20859v3
+ Collusion-proof Auction Design using Side Information
+ https://arxiv.org/abs/2511.12456
+ arXiv:2511.12456v2 Announce Type: replace-cross
+Abstract: We study the problem of auction design in the presence of bidder collusion. Specifically, we consider a multi-unit auction of identical items with single-minded bidders, where a subset of bidders may collude by coordinating bids and transferring payments and items among themselves. The classical Vickrey-Clarke-Groves (VCG) mechanism is highly vulnerable to collusion, and fully collusion-proof mechanisms are limited to posted-price formats, which fail to guarantee even approximate efficiency. This paper aims to bridge this gap by designing auctions that achieve good welfare and revenue guarantees even when some bidders collude. We first characterize the strategic behavior of colluding bidders under VCG and prove that such bidders optimally shade their bids: they never overbid or take additional items, but instead reduce the auction price. This characterization enables a Bulow-Klemperer type result: adding colluding bidders can only improve welfare and revenue relative to running VCG on the non-colluding group alone. We next consider a setting where a black-box collusion-detection algorithm is available to label bidders as colluding or non-colluding, and we propose a VCG-Posted Price (V-PoP) mechanism that combines VCG applied to non-colluding bidders with a posted-price mechanism for colluding bidders. We show that V-PoP is ex-post dominant-strategy incentive compatible (DSIC) and derive probabilistic guarantees on expected welfare and revenue under both known and unknown valuation distributions. Numerical experiments across several distributions demonstrate that V-PoP consistently outperforms VCG restricted to non-colluding bidders and approaches the performance of the ideal VCG mechanism assuming universal truthfulness. Our results provide a principled framework for incorporating collusion detection into mechanism design, offering a step toward collusion-resistant auctions.
+ oai:arXiv.org:2511.12456v2
cs.GT
- cs.AI
- cs.MA
econ.TH
- q-bio.PE
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sam Ganzfried
+ http://creativecommons.org/licenses/by/4.0/
+ Sukanya Kudva, Anil Aswani