2026-01-21T00:00:00-05:00
AI Skills Improve Job Prospects: Causal Evidence from a Hiring Experiment
arXiv:2601.13286v1 Announce Type: new Abstract: The growing adoption of artificial intelligence (AI) technologies has heightened interest in the labour market value of AI-related skills, yet causal evidence on their role in hiring decisions remains scarce. This study examines whether AI skills serve as a positive hiring signal and whether they can offset conventional disadvantages such as older age or lower formal education. We conduct an experimental survey with 1,700 recruiters from the United Kingdom and the United States. Using a paired conjoint design, recruiters evaluated hypothetical candidates represented by synthetically designed resumes. Across three occupations - graphic designer, office assistant, and software engineer - AI skills significantly increase interview invitation probabilities by approximately 8 to 15 percentage points. AI skills also partially or fully offset disadvantages related to age and lower education, with effects strongest for office assistants, where formal AI certification plays an additional compensatory role. Effects are weaker for graphic designers, consistent with more skeptical recruiter attitudes toward AI in creative work. Finally, recruiters' own background and AI usage significantly moderate these effects. Overall, the findings demonstrate that AI skills function as a powerful hiring signal and can mitigate traditional labour market disadvantages, with implications for workers' skill acquisition strategies and firms' recruitment practices.
https://arxiv.org/abs/2601.13286
Academic Papers
2026-01-21T00:00:00-05:00
Human-AI Collaboration in Radiology: The Case of Pulmonary Embolism
arXiv:2601.13379v1 Announce Type: new Abstract: We study how radiologists use AI to diagnose pulmonary embolism (PE), tracking over 100,000 scans interpreted by nearly 400 radiologists during the staggered rollout of a real-world FDA-approved diagnostic platform in a hospital system. When AI flags PE, radiologists agree 84% of the time; when AI predicts no PE, they agree 97%. Disagreement evolves substantially: radiologists initially reject AI-positive PEs in 30% of cases, dropping to 12% by year two. Despite a 16% increase in scan volume, diagnostic speed remains stable while per-radiologist monthly volumes nearly double, with no change in patient mortality -- suggesting AI improves workflow without compromising outcomes. We document significant heterogeneity in AI collaboration: some radiologists reject AI-flagged PEs half the time while others accept nearly always; female radiologists are 6 percentage points less likely to override AI than male radiologists. Moderate AI engagement is associated with the highest agreement, whereas both low and high engagement show more disagreement. Follow-up imaging reveals that when radiologists override AI to diagnose PE, 54% of subsequent scans show both agreeing on no PE within 30 days.
https://arxiv.org/abs/2601.13379
2026-01-21T00:00:00-05:00
Accelerator and Brake: Dynamic Persuasion with Dead Ends
arXiv:2601.13686v1 Announce Type: new Abstract: We study optimal dynamic persuasion in a bandit experimentation model where a principal, unlike in standard settings, has a single-peaked preference over the agent's stopping time. This non-monotonic preference arises because maximizing the agent's effort is not always in the principal's best interest, as it may lead to a dead end. The principal privately observes the agent's payoff upon success and uses the information as the instrument of incentives. We show that the optimal dynamic information policy involves at most two one-shot disclosures: an accelerator before the principal's optimal stopping time, persuading the agent to be optimistic, and a brake after the principal's optimal stopping time, persuading the agent to be pessimistic. A key insight of our analysis is that the optimal disclosure pattern -- whether gradual or one-shot -- depends on how the principal resolves a trade-off between the mean of stopping times and its riskiness. We identify the Arrow-Pratt coefficient of absolute risk aversion as a sufficient statistic for determining the optimal disclosure structure.
https://arxiv.org/abs/2601.13686
2026-01-21T00:00:00-05:00
Liabilities for the social cost of carbon
arXiv:2601.13834v1 Announce Type: new Abstract: We estimate the national social cost of carbon using a recent meta-analysis of the total impact of climate change and a standard integrated assessment model. The average social cost of carbon closely follows per capita income, while the national social cost of carbon follows the size of the population. The national social cost of carbon measures self-harm. Net liability is defined as the harm done by a country's emissions to other countries minus the harm done to that country by other countries' emissions. Net liability is positive in middle-income, carbon-intensive countries. Poor and rich countries would be compensated because their current emissions are relatively low, and poor countries additionally because they are more vulnerable.
https://arxiv.org/abs/2601.13834
2026-01-21T00:00:00-05:00
How Disruptive is Financial Technology?
arXiv:2601.14071v1 Announce Type: new Abstract: We study whether Fintech disrupts the banking sector by intensifying competition for scarce deposit funds and raising deposit rates. Using difference-in-differences estimation around the exogenous removal of marketplace platform investing restrictions by US states, we show that the cost of deposits increases by approximately 11.5% within small financial institutions. However, these price changes are effective in preventing a drain of liquidity. Size and geographical diversification through branch networks can mitigate the effects of Fintech competition by sourcing deposits from less competitive markets. The findings highlight the unintended consequences of the growing Fintech sector on banks and offer regulators and managers policy insights into the ongoing development and impact of technology on the banking sector.
https://arxiv.org/abs/2601.14071
2026-01-21T00:00:00-05:00
Hot Days, Unsafe Schools? The Impact of Heat on School Shootings
arXiv:2601.14094v1 Announce Type: new Abstract: Using data on school shooting incidents in U.S. K-12 schools from 1981 to 2022, we estimate the causal effects of high temperatures on school shootings and assess the implications of climate change. We find that days with maximum temperatures exceeding 90°F lead to an 80% increase in school shootings relative to days below 70°F. Consistent with theories linking heat exposure to aggression, high temperatures increase homicidal and threat-related shootings but have no effect on accidental or suicidal shootings. Heat-induced shootings occur disproportionately during periods of greater student mobility and reduced supervision, including before and after school hours and lunch periods. Higher temperatures increase shootings involving both student and non-student perpetrators. We project that climate change will increase homicidal and threat-related school shootings in the U.S. by 8% under SSP2-4.5 (moderate emissions) and by 14% under SSP5-8.5 (high emissions) by 2091-2100, corresponding to approximately 23 and 39 additional shootings per decade, respectively. The present discounted value of the resulting social costs is $343 million and $592 million (2025 dollars), respectively.
https://arxiv.org/abs/2601.14094
2026-01-21T00:00:00-05:00
Foreign influencer operations: How TikTok shapes American perceptions of China
arXiv:2601.14118v1 Announce Type: new Abstract: How do authoritarian regimes strengthen global support for nondemocratic political systems? Roughly half of the users of the social media platform TikTok report getting news from social media influencers. Against this backdrop, authoritarian regimes have increasingly outsourced content creation to these influencers. To gain understanding of the extent of this phenomenon and the persuasive capabilities of these influencers, we collect comprehensive data on pro-China influencers on TikTok. We show that pro-China influencers have more engagement than state media. We then create a realistic clone of the TikTok app, and conduct a randomized experiment in which over 8,500 Americans are recruited to use this app and view a random sample of actual TikTok content. We show that pro-China foreign influencers are strikingly effective at increasing favorability toward China, while traditional Chinese state media causes backlash. The findings highlight the importance of influencers in shaping global public opinion.
https://arxiv.org/abs/2601.14118
2026-01-21T00:00:00-05:00
Trade relationships during and after a crisis
arXiv:2601.14150v1 Announce Type: new Abstract: I study how firms adjust to temporary disruptions in international trade relationships organized through relational contracts. I exploit an extreme, plausibly exogenous weather shock during the 2010-11 La Niña season that restricted Colombian flower exporters' access to cargo terminals. Using transaction-level data from the Colombian-U.S. flower trade, I show that importers with less-exposed supplier portfolios are less likely to terminate disrupted relationships, instead tolerating shipment delays. In contrast, firms facing greater exposure experience higher partner turnover and are more likely to exit the market, with exit accounting for a substantial share of relationship separations. These findings demonstrate that idiosyncratic shocks to buyer-seller relationships can propagate into persistent changes in firms' trading portfolios.
https://arxiv.org/abs/2601.14150
2026-01-21T00:00:00-05:00
Settling the Score: Portioning with Cardinal Preferences
arXiv:2307.15586v5 Announce Type: cross Abstract: We study a portioning setting in which a public resource such as time or money is to be divided among a given set of candidates, and each agent proposes a division of the resource. We consider two families of aggregation rules for this setting -- those based on coordinate-wise aggregation and those that optimize some notion of welfare -- as well as the recently proposed independent markets rule. We provide a detailed analysis of these rules from an axiomatic perspective, both for classic axioms, such as strategyproofness and Pareto optimality, and for novel axioms, some of which aim to capture proportionality in this setting. Our results indicate that a simple rule that computes the average of the proposals satisfies many of our axioms and fares better than all other considered rules in terms of fairness properties. We complement these results by presenting two characterizations of the average rule.
https://arxiv.org/abs/2307.15586
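The average rule the abstract singles out is simple enough to state directly in code. The sketch below is an illustration with hypothetical proposals (the numbers and the three-candidate setup are invented for the example, not taken from the paper):

```python
# Three agents each propose a division of a unit budget over three candidates.
proposals = [
    [0.5, 0.3, 0.2],
    [0.0, 1.0, 0.0],
    [0.4, 0.4, 0.2],
]

n_agents = len(proposals)
n_candidates = len(proposals[0])

# The average rule returns the coordinate-wise mean of the proposals.
average = [
    sum(p[j] for p in proposals) / n_agents
    for j in range(n_candidates)
]
# Since each proposal sums to the full budget, so does the average division.
```

Because averaging is linear, the outcome automatically remains a valid division of the resource, which is one reason the rule behaves well axiomatically.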
2026-01-21T00:00:00-05:00
Latent Variable Phillips Curve
arXiv:2601.11601v1 Announce Type: cross Abstract: This paper re-examines the empirical Phillips curve (PC) model and its usefulness in the context of medium-term inflation forecasting. A latent variable Phillips curve hypothesis is formulated and tested using 3,968 randomly generated factor combinations. Evidence from US core PCE inflation between Q1 1983 and Q1 2025 suggests that latent variable PC models reliably outperform traditional PC models six to eight quarters ahead and stand a greater chance of outperforming a univariate benchmark. Incorporating an MA(1) residual process improves the accuracy of empirical PC models across the board, although the gains relative to univariate models remain small. The findings presented in this paper have two important implications: First, they corroborate a new conceptual view on the Phillips curve theory; second, they offer a novel path towards improving the competitiveness of Phillips curve forecasts in future empirical work.
https://arxiv.org/abs/2601.11601
2026-01-21T00:00:00-05:00
Conservation priorities to prevent the next pandemic
arXiv:2601.13349v1 Announce Type: cross Abstract: Diseases originating from wildlife pose a significant threat to global health, causing human and economic losses each year. The transmission of disease from animals to humans occurs at the interface between humans, livestock, and wildlife reservoirs, influenced by abiotic factors and ecological mechanisms. Although evidence suggests that intact ecosystems can reduce transmission, disease prevention has largely been neglected in conservation efforts and remains underfunded compared to mitigation. A major constraint is the lack of reliable, spatially explicit information to guide efforts effectively. Given the increasing rate of new disease emergence, accelerated by climate change and biodiversity loss, identifying priority areas for mitigating the risk of disease transmission is more crucial than ever. We present new high-resolution (1 km) maps of priority areas for targeted ecological countermeasures aimed at reducing the likelihood of zoonotic spillover, along with a methodology adaptable to local contexts. Our study compiles data on well-documented risk factors, protection status, forest restoration potential, and opportunity cost of the land to map areas with high potential for cost-effective interventions. We identify low-cost priority areas across 50 countries, including 277,000 km2 where environmental restoration could mitigate the risk of zoonotic spillover and 198,000 km2 where preventing deforestation could do the same, 95% of which are not currently under protection. The resulting layers, covering tropical regions globally, are freely available alongside an interactive no-code platform that allows users to adjust parameters and identify priority areas at multiple scales. Ecological countermeasures can be a cost-effective strategy for reducing the emergence of new pathogens; however, our study highlights the extent to which current conservation efforts fall short of this goal.
https://arxiv.org/abs/2601.13349
2026-01-21T00:00:00-05:00
On the Anchoring Effect of Monetary Policy on the Labor Share of Income and the Rationality of Its Setting Mechanism
arXiv:2601.13675v1 Announce Type: cross Abstract: Modern macroeconomic monetary theory suggests that the labor share of income has effectively become a core macroeconomic parameter anchored by top policymakers through Open Market Operations (OMO). However, the setting of this parameter remains a subject of intense economic debate. This paper provides a detailed summary of these controversies, analyzes the scope of influence exerted by market agents other than the top policymakers on the labor share, and explores the rationality of its setting mechanism.
https://arxiv.org/abs/2601.13675
2026-01-21T00:00:00-05:00
A simple model of interbank trading with tiered remuneration
arXiv:2006.10946v2 Announce Type: replace Abstract: Many countries have adopted negative interest rate policies with tiering remuneration, which allows for exemption from negative rates. This practice has led to higher interbank trading volumes, with market rates ranging between zero and the negative remuneration rates. This study proposes a basic model of an interbank market with tiering remuneration that can be tested with actual market data because of its simplicity and can indicate the level of the market rate created by the different exemption levels. By generalizing the model, we found that a tiering system is also suitable for maintaining a higher trading activity, regardless of the level of the remuneration rate.
https://arxiv.org/abs/2006.10946
2026-01-21T00:00:00-05:00
Market-Based Asset Price Probability
arXiv:2205.07256v5 Announce Type: replace Abstract: The random values and volumes of consecutive trades made at the exchange with shares of a security determine its mean, variance, and higher statistical moments. The volume weighted average price (VWAP) is the simplest example of such a dependence. We derive the dependence of the market-based variance and 3rd statistical moment of prices on the means, variances, covariances, and 3rd moments of the values and volumes of market trades. The usual frequency-based assessments of statistical moments of prices are the limiting case of market-based statistical moments if we assume that all volumes of consecutive trades with a security are constant during the averaging interval. To forecast the market-based variance of price, one should predict the first two statistical moments and the correlation of values and volumes of consecutive trades at the same horizon. We explain how that limits the number of predicted statistical moments of prices to the first two and the accuracy of forecasts of the price probability to the Gaussian distribution. This limitation also reduces the reliability of Value-at-Risk under Gaussian approximation. Accounting for the randomness of trade volumes and the use of VWAP results in zero price-volume correlations. To study the empirical statistical dependence between prices and volumes, one should calculate correlations of prices and squares of trade volumes, or correlations of squares of prices and volumes. To improve the accuracy and reliability of large macroeconomic and market models like those developed by BlackRock's Aladdin, JP Morgan, and the U.S. Fed, developers should explicitly account for the impact of random trade volumes and use market-based statistical moments of asset prices.
https://arxiv.org/abs/2205.07256
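The VWAP and the volume-weighting idea behind the abstract can be illustrated with a minimal sketch. The trade data below is hypothetical, and the volume-weighted second moment shown is only the simplest illustration of a "market-based" moment; the paper's full expressions are derived from the joint moments of trade values and volumes, which this sketch does not reproduce:

```python
import numpy as np

# Hypothetical consecutive trades: price p_i and volume v_i (illustrative data).
prices = np.array([100.0, 101.0, 99.5, 100.5])
volumes = np.array([200.0, 50.0, 300.0, 150.0])

# Volume-weighted average price: the simplest market-based first moment.
vwap = np.sum(prices * volumes) / np.sum(volumes)

# A volume-weighted second moment of price around the VWAP.
mb_var = np.sum(volumes * (prices - vwap) ** 2) / np.sum(volumes)

# With constant volumes, the volume-weighted mean collapses to the ordinary
# frequency-based (equal-weight) mean, the limiting case the abstract notes.
const_vol = np.full_like(prices, 100.0)
freq_mean = np.sum(prices * const_vol) / np.sum(const_vol)
```

With the constant-volume weights, `freq_mean` equals `prices.mean()` exactly, which is the sense in which frequency-based moments are a special case of market-based ones.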
2026-01-21T00:00:00-05:00
Policy Learning under Endogeneity Using Instrumental Variables
arXiv:2206.09883v4 Announce Type: replace Abstract: I propose a framework for learning individualized policy rules in observational data settings characterized by endogenous treatment selection and the availability of an instrumental variable. I introduce encouragement rules that manipulate the instrument. By incorporating the marginal treatment effect (MTE) as a policy invariant parameter, I establish the identification of the social welfare criterion for the optimal encouragement rule. Focusing on binary encouragement rules, I propose to estimate the optimal encouragement rule via the Empirical Welfare Maximization (EWM) method and derive the welfare loss convergence rate. I apply my method to advise on the optimal tuition subsidy assignment in Indonesia.
https://arxiv.org/abs/2206.09883
2026-01-21T00:00:00-05:00
Identification in Multiple Treatment Models under Discrete Variation
arXiv:2307.06174v2 Announce Type: replace Abstract: We develop a marginal treatment effect based method to learn about causal effects in multiple treatment models with discrete instruments. We allow selection into treatment to be governed by a general class of threshold crossing models that permit multidimensional unobserved heterogeneity. An inherent complication is that the primitives characterizing the selection model are not generally point-identified. Allowing these primitives to be point-identified up to a finite-dimensional parameter, we show how a two-step computational program can be used to obtain sharp bounds for a number of treatment effect parameters when the marginal treatment response functions are allowed to satisfy only nonparametric shape restrictions or are additionally parameterized. We demonstrate the benefits of our method by revisiting Kline and Walters' (2016) empirical analysis of the Head Start program. Our approach relaxes their point-identifying assumptions on the selection model and marginal treatment response functions, allowing us to assess the robustness of their conclusions.
https://arxiv.org/abs/2307.06174
2026-01-21T00:00:00-05:00
Interpreting Event-Studies from Recent Difference-in-Differences Methods
arXiv:2401.12309v2 Announce Type: replace Abstract: This note discusses the interpretation of event-study plots produced by recent difference-in-differences methods. I show that even when specialized to the case of non-staggered treatment timing, the default plots produced by software for several of the most popular recent methods do not match those of traditional two-way fixed effects (TWFE) event-studies. The plots produced by the new methods may show a kink or jump at the time of treatment even when the TWFE event-study shows a straight line. This difference stems from the fact that the new methods construct the pre-treatment coefficients asymmetrically from the post-treatment coefficients. As a result, visual heuristics for evaluating violations of parallel trends using TWFE event-study plots should not be immediately applied to those from these methods. I conclude with practical recommendations for constructing and interpreting event-study plots when using these methods.
https://arxiv.org/abs/2401.12309
2026-01-21T00:00:00-05:00
Database for the meta-analysis of the social cost of carbon (v2026.1)
arXiv:2402.09125v4 Announce Type: replace Abstract: A new version of the database for the meta-analysis of estimates of the social cost of carbon is presented. New records were added, along with new fields on gender and stochasticity.
https://arxiv.org/abs/2402.09125
2026-01-21T00:00:00-05:00
To be or not to be: Roughness or long memory in volatility?
arXiv:2403.12653v2 Announce Type: replace Abstract: We develop a framework for composite likelihood estimation of parametric continuous-time stationary Gaussian processes. We derive the asymptotic theory of the associated maximum composite likelihood estimator. We implement our approach on a pair of models that have been proposed to describe the random log-spot variance of financial asset returns. A simulation study shows that it delivers good performance in these settings and improves upon a method-of-moments estimation. In an empirical investigation, we inspect the dynamic of an intraday measure of the spot log-realized variance computed with high-frequency data from the cryptocurrency market. The evidence supports a mechanism, where the short- and long-term correlation structure of stochastic volatility are decoupled in order to capture its properties at different time scales. This is further backed by an analysis of the associated spot log-trading volume.
https://arxiv.org/abs/2403.12653
2026-01-21T00:00:00-05:00
Potential weights and implicit causal designs in linear regression
arXiv:2407.21119v4 Announce Type: replace Abstract: When we interpret linear regression as estimating causal effects justified by quasi-experimental treatment variation, what do we mean? This paper formalizes a minimal criterion for quasi-experimental interpretation and characterizes its necessary implications. A minimal requirement is that the regression always estimates some contrast of potential outcomes under the true treatment assignment process. This requirement implies linear restrictions on the true distribution of treatment. If the regression were to be interpreted quasi-experimentally, these restrictions imply candidates for the true distribution of treatment, which we call implicit designs. Regression estimators are numerically equivalent to augmented inverse propensity weighting (AIPW) estimators using an implicit design. Implicit designs serve as a framework that unifies and extends existing theoretical results on causal interpretation of regression across starkly distinct settings (including multiple treatment, panel, and instrumental variables). They lead to new theoretical insights for widely used but less understood specifications.
https://arxiv.org/abs/2407.21119
2026-01-21T00:00:00-05:00
The Turing Valley: How AI Capabilities Shape Labor Income
arXiv:2408.16443v2 Announce Type: replace Abstract: Current AI systems are better than humans in some knowledge dimensions but weaker in others. Guided by the long-standing vision of machine intelligence inspired by the Turing Test, AI developers increasingly seek to eliminate this "jagged" nature by pursuing Artificial General Intelligence (AGI) that surpasses human knowledge across domains. This pursuit has sparked an important debate, with leading economists arguing that AGI risks eroding the value of human capital. We contribute to this debate by showing how AI capabilities in different dimensions shape labor income in a multidimensional knowledge economy. AI improvements in dimensions where it is stronger than humans always increase labor income, but the effects of AI progress in dimensions where it is weaker than humans depend on the nature of human-AI communication. When communication allows the integration of partial solutions, improvements in AI's weak dimensions reduce the marginal product of labor, and labor income is maximized by a deliberately jagged form of AI. In contrast, when communication is limited to sharing full solutions, improvements in AI's weak dimensions can raise the marginal product of labor, and labor income can be maximized when AI achieves high performance across all dimensions. These results point to the importance of empirically assessing the additivity properties of human-AI communication for understanding the labor-market consequences of progress toward AGI.
https://arxiv.org/abs/2408.16443
2026-01-21T00:00:00-05:00
Uncertain and Asymmetric Forecasts
arXiv:2411.05938v2 Announce Type: replace Abstract: This paper develops distribution-based measures that extract policy-relevant information from subjective probability distributions beyond point forecasts. We introduce two complementary indicators that operationalize the second and third moments of beliefs. First, a Normalized Uncertainty measure applies a variance-stabilizing transformation that removes mechanical level effects around policy-relevant anchors. Empirically, uncertainty behaves as a state variable: it amplifies perceived de-anchoring following monetary-policy shocks and weakens and delays pass-through to credit conditions, particularly across loan maturities. Second, an Asymmetry Coherence indicator combines the median and skewness of subjective distributions to identify coherent directional tail risks. Directional asymmetry is largely orthogonal to uncertainty and is primarily reflected in monetary-policy responses rather than real activity. Overall, the results show that properly measured uncertainty governs state-dependent transmission, while distributional asymmetries convey distinct information about macroeconomic risks.
https://arxiv.org/abs/2411.05938
2026-01-21T00:00:00-05:00
Sectorial Exclusion Criteria in the Marxist Analysis of the Average Rate of Profit: The United States Case (1960-2020)
arXiv:2501.06270v2 Announce Type: replace Abstract: The long-term estimation of the Marxist average rate of profit does not adhere to a theoretically grounded standard regarding which economic activities should or should not be included for such purposes. This is relevant because methodological non-uniformity can be a significant source of overestimation or underestimation, yielding a less accurate reflection of capital accumulation dynamics. This research aims to provide a standard Marxist decision criterion for the inclusion and exclusion of economic activities in the calculation of the Marxist average profit rate for United States economic sectors from 1960 to 2020, based on the Marxist definition of productive labor, its location in the circuit of capital, and its relationship with the production of surplus value. Using wavelet-transformed Daubechies filters with increased symmetry, empirical mode decomposition, a Hodrick-Prescott filter embedded in an unobserved components model, and a wide variety of unit root tests, the internal theoretical consistency of the presented criteria is evaluated. The objective consistency of the theory is also evaluated by a dynamic factor autoregressive model, Principal Component Analysis via Singular Value Decomposition, and regularized Horseshoe regression. The results are consistent both theoretically and econometrically with the logic of Classical Marxist political economy.
https://arxiv.org/abs/2501.06270
2026-01-21T00:00:00-05:00
Kotlarski's lemma for dyadic models
arXiv:2502.02734v2 Announce Type: replace Abstract: We show how to identify the distributions of the latent components in the two-way dyadic model for bipartite networks $y_{i,\ell}= \alpha_i+\eta_{\ell}+\varepsilon_{i,\ell}$. This is achieved by a repeated application of the extension of the classical lemma of Kotlarski (1967) in Evdokimov and White (2012). We provide two separate sets of assumptions under which all the latent distributions are identified. Both rely on some of the latent components being identically distributed.
https://arxiv.org/abs/2502.02734
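The two-way dyadic structure itself is easy to simulate, which helps fix ideas even though the paper's identification argument works through characteristic functions rather than simulation. The distributions, standard deviations, and network sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n_i, n_l = 500, 400  # sizes of the two sides of the bipartite network

# Latent components of y_{i,l} = alpha_i + eta_l + eps_{i,l}.
alpha = rng.normal(0.0, 1.0, size=(n_i, 1))   # row effects alpha_i
eta = rng.normal(0.0, 0.5, size=(1, n_l))     # column effects eta_l
eps = rng.normal(0.0, 0.3, size=(n_i, n_l))   # idiosyncratic noise

# Observed dyadic outcomes, formed by broadcasting the two-way effects.
y = alpha + eta + eps

# Under independence of the components, the variance of y is approximately
# the sum of the component variances (here 1.0 + 0.25 + 0.09 = 1.34).
total_var = y.var()
```

Only `y` is observed in the model; the point of the identification results is that the full distributions of `alpha`, `eta`, and `eps` can nonetheless be recovered under the stated assumptions.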
2026-01-21T00:00:00-05:00
Trade and pollution: Evidence from India
arXiv:2502.09289v3 Announce Type: replace Abstract: What happens to pollution when developing countries open their borders to trade? Theoretical predictions are ambiguous, and empirical evidence remains limited. We study the effects of the 1991 Indian trade liberalization reform on water pollution. The reform abruptly and unexpectedly lowered import tariffs, increasing exposure to trade. Larger tariff reductions are associated with relative increases in water pollution. The estimated effects imply a 0.11 standard deviation increase in water pollution for the median district exposed to the tariff reform.
https://arxiv.org/abs/2502.09289
2026-01-21T00:00:00-05:00
Policy Learning with Confidence
arXiv:2502.10653v3 Announce Type: replace Abstract: This paper introduces a rule for policy selection in the presence of estimation uncertainty, explicitly accounting for estimation risk. The rule belongs to the class of risk-aware rules on the efficient decision frontier, characterized as policies offering maximal estimated welfare for a given level of estimation risk. Among this class, the proposed rule is chosen to provide a reporting guarantee, ensuring that the welfare delivered exceeds a threshold with a pre-specified confidence level. We apply this approach to the allocation of a limited budget among social programs using estimates of their marginal value of public funds and associated standard errors.
https://arxiv.org/abs/2502.10653
Academic Papers
svg
ddba326b581b2a83e2b06d9d4bf679f2858ad1765821abea08b3240d12bc44e5
2026-01-21T00:00:00-05:00
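As a concrete illustration of a risk-aware selection rule in the spirit of the abstract above, the sketch below picks, among candidate programs, the one with the largest lower confidence bound on estimated welfare. This is a minimal hypothetical member of the risk-aware family; the program names, numbers, and the specific rule are illustrative assumptions, not the paper's exact proposal.

```python
# Hypothetical sketch: among candidate policies with estimated welfare
# w_hat and standard error se, select the policy whose lower (1 - alpha)
# confidence bound on welfare is largest.
from statistics import NormalDist

def select_policy(estimates, alpha=0.05):
    """estimates: dict name -> (w_hat, se). Returns (name, lower_bound)."""
    z = NormalDist().inv_cdf(1 - alpha)
    bounds = {k: w - z * se for k, (w, se) in estimates.items()}
    best = max(bounds, key=bounds.get)
    return best, bounds[best]

# illustrative marginal-value-of-public-funds style estimates
programs = {
    "job_training": (2.0, 1.2),   # high point estimate, noisy
    "cash_transfer": (1.5, 0.2),  # lower point estimate, precise
}
name, lb = select_policy(programs)
```

On these toy numbers the precisely estimated program wins: its guaranteed welfare (lower bound) exceeds that of the noisier program despite the smaller point estimate, which is exactly the trade-off a reporting guarantee formalizes.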
Do Determinants of EV Purchase Intent vary across the Spectrum? Evidence from Bayesian Analysis of US Survey Data
arXiv:2504.09854v3 Announce Type: replace Abstract: While electric vehicle (EV) adoption has been widely studied, most research focuses on the average effects of predictors on purchase intent, overlooking variation across the distribution of EV purchase intent. This paper makes a threefold contribution by analyzing four unique explanatory variables, leveraging large-scale US survey data from 2021 to 2023, and employing Bayesian ordinal probit and Bayesian ordinal quantile modeling to evaluate the effects of these variables-while controlling for other commonly used covariates-on EV purchase intent, both on average and across its full distribution. By modeling purchase intent as an ordered outcome-from "not at all likely" to "very likely"-we reveal how covariate effects differ across levels of interest. This is the first application of ordinal quantile modeling in the EV adoption literature, uncovering heterogeneity in how potential buyers respond to key factors. For instance, confidence in development of charging infrastructure and belief in environmental benefits are linked not only to higher interest among likely adopters but also to reduced resistance among more skeptical respondents. Notably, we identify a gap between the prevalence and influence of key predictors: although few respondents report strong infrastructure confidence or frequent EV information exposure, both factors are strongly associated with increased intent across the spectrum. These findings suggest clear opportunities for targeted communication and outreach, alongside infrastructure investment, to support widespread EV adoption.
https://arxiv.org/abs/2504.09854
Academic Papers
svg
8c3e92c788f20b8bd3d1c65da92e74281b61afb4cf15e17d45adfb2967fa0fc2
2026-01-21T00:00:00-05:00
Probabilistic Forecasting of Climate Policy Uncertainty: The Role of Macro-financial Variables and Google Search Data
arXiv:2507.12276v3 Announce Type: replace Abstract: Accurately forecasting Climate Policy Uncertainty (CPU) is essential for designing climate strategies that balance economic growth with environmental objectives. Elevated CPU levels can delay regulatory implementation, hinder investment in green technologies, and amplify public resistance to policy reforms, particularly during periods of economic stress. Despite the growing literature documenting the economic relevance of CPU, forecasting its evolution and understanding the role of macro-financial drivers in shaping its fluctuations have not been explored. This study addresses this gap by presenting the first effort to forecast CPU and identify its key drivers. We employ various statistical tools to identify macro-financial exogenous drivers, alongside Google search data to capture early public attention to climate policy. Local projection impulse response analysis quantifies the dynamic effects of these variables, revealing that household financial vulnerability, housing market activity, business confidence, credit conditions, and financial market sentiment exert the most substantial impacts. These predictors are incorporated into a Bayesian Structural Time Series (BSTS) framework to produce probabilistic forecasts for both US and Global CPU indices. Extensive experiments and statistical validation demonstrate that BSTS with time-invariant regression coefficients achieves superior forecasting performance. We demonstrate that this performance stems from its variable selection mechanism, which identifies exogenous predictors that are empirically significant and theoretically grounded, as confirmed by the feature importance analysis. From a policy perspective, the findings underscore the importance of adaptive climate policies that remain effective across shifting economic conditions while supporting long-term environmental and growth objectives.
https://arxiv.org/abs/2507.12276
Academic Papers
svg
1c4445bb8b9c2a473613e0a6caf42792fbe76287bc560b052ed1b2889e092266
2026-01-21T00:00:00-05:00
From Many Models, One: Macroeconomic Forecasting with Reservoir Ensembles
arXiv:2512.13642v2 Announce Type: replace Abstract: Model combination is a powerful approach for achieving superior performance compared to selecting a single model. We study both theoretically and empirically the effectiveness of ensembles of Multi-Frequency Echo State Networks (MFESNs), which have been shown to achieve state-of-the-art macroeconomic time series forecasting results (Ballarin et al., 2024a). The Hedge and Follow-the-Leader schemes are discussed, and their online learning guarantees are extended to settings with dependent data. In empirical applications, the proposed Ensemble Echo State Networks demonstrate significantly improved predictive performance relative to individual MFESN models.
https://arxiv.org/abs/2512.13642
Academic Papers
svg
ce20a52dfd707c9a383048ce169c65c836ca2a970a3595e84750d22fe106ab26
2026-01-21T00:00:00-05:00
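The Hedge scheme mentioned in the abstract above has a simple generic form: each forecaster's weight decays exponentially in its cumulative loss, and the ensemble forecast is the weighted average. A minimal sketch under squared-error loss, with hypothetical data rather than the paper's MFESN models:

```python
# Illustrative Hedge (exponential weights) combination of forecasters.
import math

def hedge_combine(forecasts, outcomes, eta=0.5):
    """forecasts: per-period lists of expert predictions; outcomes: truths."""
    n = len(forecasts[0])
    weights = [1.0 / n] * n
    combined = []
    for preds, y in zip(forecasts, outcomes):
        # weighted-average ensemble forecast for this period
        combined.append(sum(w * p for w, p in zip(weights, preds)))
        # exponential-weights update on each expert's squared loss
        losses = [(p - y) ** 2 for p in preds]
        weights = [w * math.exp(-eta * loss) for w, loss in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return combined, weights

# expert 0 tracks the truth; expert 1 is persistently biased
forecasts = [[1.0, 3.0], [1.1, 3.1], [0.9, 2.9], [1.0, 3.0]]
outcomes = [1.0, 1.0, 1.0, 1.0]
preds, w = hedge_combine(forecasts, outcomes)
```

After a few periods the weight mass shifts toward the accurate expert, which is the mechanism behind the online-learning guarantees the abstract extends to dependent data.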
The Connection Between Monetary Policy and Housing Prices: Public Perception and Expert Communication
arXiv:2601.08957v2 Announce Type: replace Abstract: We study how the general public perceives the link between monetary policy and housing markets. Using a large-scale, cross-country survey experiment in Austria, Germany, Italy, Sweden, and the United Kingdom, we examine households' understanding of monetary policy, their beliefs about its impact on house prices, and how these beliefs respond to expert information. We find that while most respondents grasp the basic mechanisms of conventional monetary policy and recognize the connection between interest rates and house prices, literacy regarding unconventional monetary policy is very low. Beliefs about the monetary policy-housing nexus are malleable and respond to information, particularly when it is provided by academic economists rather than central bankers. Monetary policy literacy is strongly related to education, gender, age, and experience in housing and mortgage markets. Our results highlight the central role of housing in how households interpret monetary policy and point to the importance of credible and inclusive communication strategies for effective policy transmission.
https://arxiv.org/abs/2601.08957
Academic Papers
svg
bfbf84be5e1f96a5ea6636f03414222f4327d4f2314dc9af66ed11119b653c92
2026-01-21T00:00:00-05:00
Assessing Utility of Differential Privacy for RCTs
arXiv:2309.14581v2 Announce Type: replace-cross Abstract: Randomized controlled trials (RCTs) have become powerful tools for assessing the impact of interventions and policies in many contexts. They are considered the gold standard for causal inference in the biomedical fields and many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of their inference. These studies typically include the response data that has been collected, de-identified, and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of privacy-preserving synthetic data generation methodologies on published RCT analyses by leveraging available replication packages (research compendia) in economics and policy analysis. We implement three privacy-preserving algorithms that build on one of the basic differentially private (DP) algorithms, the perturbed histogram, to support the quality of statistical inference. We highlight challenges with the straightforward use of this algorithm and of the stability-based histogram in our setting and describe the adjustments needed. We provide simulation studies and demonstrate that we can replicate the analysis in a published economics article on privacy-protected data under various parameterizations. We find that relatively straightforward (at a high level) privacy-preserving methods influenced by DP techniques allow for inference-valid protection of published data. The results are applicable to researchers wishing to share RCT data, especially in the context of low- and middle-income countries, with strong privacy protection.
https://arxiv.org/abs/2309.14581
Academic Papers
svg
183ab9301e876bca62683e2c7fd08c89b60f2eedb970f633b77026d6a229e0d7
2026-01-21T00:00:00-05:00
In Defense of the Pre-Test: Valid Inference when Testing Violations of Parallel Trends for Difference-in-Differences
arXiv:2510.26470v2 Announce Type: replace-cross Abstract: The difference-in-differences (DID) research design is a key identification strategy which allows researchers to estimate causal effects under the parallel trends assumption. While the parallel trends assumption is counterfactual and cannot be tested directly, researchers often examine pre-treatment periods to check whether the time trends are parallel before treatment is administered. Recently, researchers have been cautioned against using preliminary tests which aim to detect violations of parallel trends in the pre-treatment period. In this paper, we argue that preliminary testing can -- and should -- play an important role within the DID research design. We propose a new and more substantively appropriate conditional extrapolation assumption, which requires an analyst to conduct a preliminary test to determine whether the severity of pre-treatment parallel trend violations falls below an acceptable level before extrapolation to the post-treatment period is justified. This stands in contrast to prior work which can be interpreted as either setting the acceptable level to be exactly zero (in which case preliminary tests lack power) or assuming that extrapolation is always justified (in which case preliminary tests are not required). Under mild assumptions on how close the actual violation is to the acceptable level, we provide a consistent preliminary test as well as confidence intervals which are valid when conditioned on the result of the test. The conditional coverage of these intervals overcomes a common critique made against the use of preliminary testing within the DID research design. To illustrate the performance of the proposed methods, we use synthetic data as well as data on recentralization of public services in Vietnam and right-to-carry laws in Virginia.
https://arxiv.org/abs/2510.26470
Academic Papers
svg
1cea693283adb9dd2f355108a410b195effc5f63b559674323dbd4d366c6e7be
2026-01-21T00:00:00-05:00
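The conditional extrapolation idea above can be illustrated with an equivalence-style pre-test (two one-sided tests): extrapolation is deemed acceptable only if the confidence interval for the pre-trend violation lies inside an acceptable band. The threshold M, the data, and the specific test statistic below are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative two-one-sided-tests pre-test for a pre-trend violation.
from statistics import NormalDist

def pretrend_ok(delta_hat, se, M, alpha=0.05):
    """Pass if the (1 - 2*alpha) CI for the violation lies inside (-M, M),
    i.e. both one-sided tests reject a violation as large as M."""
    z = NormalDist().inv_cdf(1 - alpha)
    return (delta_hat + z * se < M) and (delta_hat - z * se > -M)

# small, precisely estimated violation: extrapolation deemed acceptable
ok = pretrend_ok(delta_hat=0.02, se=0.01, M=0.1)
# violation larger than the acceptable level: the pre-test fails
bad = pretrend_ok(delta_hat=0.12, se=0.01, M=0.1)
```

Setting M = 0 in this sketch reproduces the powerless case the abstract criticizes: no noisy estimate can ever pass.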
Identifying Conditions Favouring Multiplicative Heterogeneity Models in Network Meta-Analysis
arXiv:2601.11735v1 Announce Type: new Abstract: Explicit modelling of between-study heterogeneity is essential in network meta-analysis (NMA) to ensure valid inference and avoid overstating precision. While the additive random-effects (RE) model is the conventional approach, the multiplicative-effect (ME) model remains underexplored. The ME model inflates within-study variances by a common factor estimated via weighted least squares, yielding identical point estimates to a fixed-effect model while inflating confidence intervals. We empirically compared RE and ME models across NMAs of two-arm studies with significant heterogeneity from the nmadb database, assessing model fit using the Akaike Information Criterion. The ME model often provided a comparable or better fit than the RE model. Case studies further revealed that RE models are sensitive to extreme and imprecise observations, whereas ME models assign less weight to such observations and hence exhibit greater robustness to publication bias. Our results suggest that the ME model warrants consideration alongside the conventional RE model in NMA practice.
https://arxiv.org/abs/2601.11735
Academic Papers
svg
db5988b9e971e8424914407fb92e685852fd54f5bdf083fba8b4debe80fe692f
2026-01-21T00:00:00-05:00
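The variance-inflation idea behind the ME model can be illustrated for a single pairwise meta-analysis (the NMA case generalizes this via weighted least squares on the full network design). The study effects and variances below are hypothetical, and clamping the inflation factor at 1 is a common convention, not necessarily the paper's choice.

```python
# Minimal sketch of a multiplicative-heterogeneity meta-analysis:
# fixed-effect point estimate, with the standard error inflated by the
# square root of an over-dispersion factor phi estimated from Cochran's Q.
import math

def multiplicative_model(y, v):
    w = [1.0 / vi for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q; over-dispersion factor phi = Q / (k - 1)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    phi = q / (len(y) - 1)
    se_fixed = math.sqrt(1.0 / sum(w))
    # point estimate equals the fixed-effect one; only the SE is inflated
    se_mult = se_fixed * math.sqrt(max(1.0, phi))
    return mu, se_fixed, se_mult, phi

y = [0.10, 0.35, -0.05, 0.40]   # hypothetical study effect estimates
v = [0.01, 0.02, 0.01, 0.02]    # hypothetical within-study variances
mu, se_f, se_m, phi = multiplicative_model(y, v)
```

Because the weights are untouched, an extreme imprecise study cannot pull the point estimate the way it can under the additive RE model, which is the robustness property the abstract highlights.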
Gradient-based Active Learning with Gaussian Processes for Global Sensitivity Analysis
arXiv:2601.11790v1 Announce Type: new Abstract: Global sensitivity analysis of complex numerical simulators is often limited by the small number of model evaluations that can be afforded. In such settings, surrogate models built from a limited set of simulations can substantially reduce the computational burden, provided that the design of computer experiments is enriched efficiently. In this context, we propose an active learning approach that, for a fixed evaluation budget, targets the most informative regions of the input space to improve sensitivity analysis accuracy. More specifically, our method builds on recent advances in active learning for sensitivity analysis (Sobol' indices and derivative-based global sensitivity measures, DGSM) that exploit derivatives obtained from a Gaussian process (GP) surrogate. By leveraging the joint posterior distribution of the GP gradient, we develop acquisition functions that better account for correlations between partial derivatives and their impact on the response surface, leading to a more comprehensive and robust methodology than existing DGSM-oriented criteria. The proposed approach is first compared to state-of-the-art methods on standard benchmark functions, and is then applied to a real environmental model of pesticide transfers.
https://arxiv.org/abs/2601.11790
Academic Papers
svg
c7faa2dc61319a2bb86e32829ab648249639a69b5ad2dc85c140a054d1d2b3c4
2026-01-21T00:00:00-05:00
Adversarial Drift-Aware Predictive Transfer: Toward Durable Clinical AI
arXiv:2601.11860v1 Announce Type: new Abstract: Clinical AI systems frequently suffer performance decay post-deployment due to temporal data shifts, such as evolving populations, diagnostic coding updates (e.g., ICD-9 to ICD-10), and systemic shocks like the COVID-19 pandemic. Addressing this "aging" effect via frequent retraining is often impractical due to computational costs and privacy constraints. To overcome these hurdles, we introduce Adversarial Drift-Aware Predictive Transfer (ADAPT), a novel framework designed to confer durability against temporal drift with minimal retraining. ADAPT innovatively constructs an uncertainty set of plausible future models by combining historical source models and limited current data. By optimizing worst-case performance over this set, it balances current accuracy with robustness against degradation due to future drifts. Crucially, ADAPT requires only summary-level model estimators from historical periods, preserving data privacy and ensuring operational simplicity. Validated on longitudinal suicide risk prediction using electronic health records from Mass General Brigham (2005--2021) and Duke University Health Systems, ADAPT demonstrated superior stability across coding transitions and pandemic-induced shifts. By minimizing annual performance decay without labeling or retraining future data, ADAPT offers a scalable pathway for sustaining reliable AI in high-stakes healthcare environments.
https://arxiv.org/abs/2601.11860
Academic Papers
svg
16451dd384cd1eb2a4c263ec6e886044e8e5d71059ffb0a8a10ce8ba6b9e4d56
2026-01-21T00:00:00-05:00
A Deep Learning-Copula Framework for Climate-Related Home Insurance Risk
arXiv:2601.11949v1 Announce Type: new Abstract: Extreme weather events are becoming more common, with severe storms, floods, and prolonged precipitation affecting communities worldwide. These shifts in climate patterns pose a direct threat to the insurance industry, which faces growing exposure to weather-related damages. As claims linked to extreme weather rise, insurance companies need reliable tools to assess future risks. This is not only essential for setting premiums and maintaining solvency but also for supporting broader disaster preparedness and resilience efforts. In this study, we propose a two-step method to examine the impact of precipitation on home insurance claims. Our approach combines the predictive power of deep neural networks with the flexibility of copula-based multivariate analysis, enabling a more detailed understanding of how precipitation patterns relate to claim dynamics. We demonstrate this methodology through a case study of the Canadian Prairies, using data from 2002 to 2011.
https://arxiv.org/abs/2601.11949
Academic Papers
svg
8b50264521b5a37650ec59b6c49651a71c5b3af42a9f034d1a688493cba3b8b1
2026-01-21T00:00:00-05:00
A Kernel Approach for Semi-implicit Variational Inference
arXiv:2601.12023v1 Announce Type: new Abstract: Semi-implicit variational inference (SIVI) enhances the expressiveness of variational families through hierarchical semi-implicit distributions, but the intractability of their densities makes standard ELBO-based optimization biased. Recent score-matching approaches to SIVI (SIVI-SM) address this issue via a minimax formulation, at the expense of an additional lower-level optimization problem. In this paper, we propose kernel semi-implicit variational inference (KSIVI), a principled and tractable alternative that eliminates the lower-level optimization by leveraging kernel methods. We show that when optimizing over a reproducing kernel Hilbert space, the lower-level problem admits an explicit solution, reducing the objective to the kernel Stein discrepancy (KSD). Exploiting the hierarchical structure of semi-implicit distributions, the resulting KSD objective can be efficiently optimized using stochastic gradient methods. We establish optimization guarantees via variance bounds on Monte Carlo gradient estimators and derive statistical generalization bounds of order $\tilde{\mathcal{O}}(1/\sqrt{n})$. We further introduce a multi-layer hierarchical extension that improves expressiveness while preserving tractability. Empirical results on synthetic and real-world Bayesian inference tasks demonstrate the effectiveness of KSIVI.
https://arxiv.org/abs/2601.12023
Academic Papers
svg
30db2f4c5ee690f097e9758871598f48d8d0077ea0ff81731c4c983fd242b6a5
2026-01-21T00:00:00-05:00
Estimations of Extreme CoVaR and CoES under Asymptotic Independence
arXiv:2601.12031v1 Announce Type: new Abstract: The two popular systemic risk measures CoVaR (Conditional Value-at-Risk) and CoES (Conditional Expected Shortfall) have recently been receiving growing attention in applications in economics and finance. In this paper, we study the estimation of extreme CoVaR and CoES when the two random variables are asymptotically independent but positively associated. We propose two types of extrapolative approaches: the first relies on the intermediate VaR and extrapolates it to extreme CoVaR/CoES via an adjustment factor; the second directly extrapolates the estimated intermediate CoVaR/CoES to the extreme tails. All estimators, including both intermediate and extreme ones, are shown to be asymptotically normal. Finally, we explore the empirical performance of our methods by conducting a series of Monte Carlo simulations and a real data analysis of the S&P 500 Index and 12 of its constituent stocks.
https://arxiv.org/abs/2601.12031
Academic Papers
svg
b1553a5f4d0412f2b16fa30c2353e446ee8826acc50a7d564203b4acde50533c
2026-01-21T00:00:00-05:00
Lost in Aggregation: The Causal Interpretation of the IV Estimand
arXiv:2601.12120v1 Announce Type: new Abstract: Instrumental variable based estimation of a causal effect has emerged as a standard approach to mitigate confounding bias in the social sciences and epidemiology, where conducting randomized experiments can be too costly or impossible. However, justifying the validity of the instrument often poses a significant challenge. In this work, we highlight a problem generally neglected in arguments for instrumental variable validity: the presence of an "aggregate treatment variable", where the treatment (e.g., education, GDP, caloric intake) is composed of finer-grained components that each may have a different effect on the outcome. We show that the causal effect of an aggregate treatment is generally ambiguous, as it depends on how interventions on the aggregate are instantiated at the component level, formalized through the aggregate-constrained component intervention distribution. We then characterize conditions on the interventional distribution and the aggregate setting under which standard instrumental variable estimators identify the aggregate effect. The contrived nature of these conditions implies major limitations on the interpretation of instrumental variable estimates based on aggregate treatments and highlights the need for a broader justificatory base for the exclusion restriction in such settings.
https://arxiv.org/abs/2601.12120
Academic Papers
svg
b429dbf103bbbfe36c39c85bd6ed2e22b69dab8d0320249be8ad89123eed81c5
2026-01-21T00:00:00-05:00
Using Directed Acyclic Graphs to Illustrate Common Biases in Diagnostic Test Accuracy Studies
arXiv:2601.12167v1 Announce Type: new Abstract: Background: Diagnostic test accuracy (DTA) studies, like etiological studies, are susceptible to various biases including reference standard error bias, partial verification bias, spectrum effect, confounding, and bias from misassumption of conditional independence. While directed acyclic graphs (DAGs) are widely used in etiological research to identify and illustrate bias structures, they have not been systematically applied to DTA studies. Methods: We developed DAGs to illustrate the causal structures underlying common biases in DTA studies. For each bias, we present the corresponding DAG structure and demonstrate the parallel with equivalent biases in etiological studies. We use real-world examples to illustrate each bias mechanism. Results: We demonstrate that five major biases in DTA studies can be represented using DAGs with clear structural parallels to etiological studies: reference standard error bias corresponds to exposure misclassification, misassumption of conditional independence creates spurious correlations similar to unmeasured confounding, spectrum effect parallels effect modification, confounding operates through backdoor paths in both settings, and partial verification bias mirrors selection bias. These DAG representations reveal the causal mechanisms underlying each bias and suggest appropriate correction strategies. Conclusions: DAGs provide a valuable framework for understanding bias structures in DTA studies and should complement existing quality assessment tools like STARD and QUADAS-2. We recommend incorporating DAGs during study design to prospectively identify potential biases and during reporting to enhance transparency. DAG construction requires interdisciplinary collaboration and sensitivity analyses under alternative causal structures.
https://arxiv.org/abs/2601.12167
Academic Papers
svg
15a679f154f9da7101402caa71226cde413918ce9c02eba19ce994bc4a32ebf8
2026-01-21T00:00:00-05:00
A warping function-based control chart for detecting distributional changes in damage-sensitive features for structural condition assessment
arXiv:2601.12221v1 Announce Type: new Abstract: Data-driven damage detection methods achieve damage identification by analyzing changes in damage-sensitive features (DSFs) derived from structural health monitoring (SHM) data. The core reason for their effectiveness lies in the fact that damage or structural state transition can be manifested as changes in the distribution of DSF data. This enables us to reframe the problem of damage detection as one of identifying these distributional changes. Hence, developing automated tools for detecting such changes is pivotal for automated structural health diagnosis. Control charts are extensively utilized in SHM for DSF change detection, owing to their excellent online detection and early warning capabilities. However, conventional methods are primarily designed to detect mean or variance shifts, making it challenging to identify complex shape changes in distributions. This limitation results in insufficient damage detection sensitivity. Moreover, they typically exhibit poor robustness against data contamination. This paper proposes a novel control chart to address these limitations. It employs the probability density functions (PDFs) of subgrouped DSF data as monitoring objects, with shape deformations characterized by warping functions. Furthermore, a nonparametric control chart is specifically constructed for warping function monitoring in the functional data analysis framework. Key advantages of the new method include the ability to detect both shifts and complex shape deformations in distributions, excellent online detection performance, and robustness against data contamination. Extensive simulation studies demonstrate its superiority over competing approaches. Finally, the method is applied to detecting distributional changes in DSF data for cable condition assessment in a long-span cable-stayed bridge, demonstrating its practical utility in engineering.
https://arxiv.org/abs/2601.12221
Academic Papers
svg
a9e5ae0b06d4784e20c0937ac29851a49409c48a92ab635b820696558ba9c79a
2026-01-21T00:00:00-05:00
A Machine Learning--Based Surrogate EKMA Framework for Diagnosing Urban Ozone Formation Regimes: Evidence from Los Angeles
arXiv:2601.12321v1 Announce Type: new Abstract: Surface ozone pollution remains a persistent challenge in many metropolitan regions worldwide, as the nonlinear dependence of ozone formation on nitrogen oxides and volatile organic compounds (VOCs) complicates the design of effective emission control strategies. While chemical transport models provide mechanistic insights, they rely on detailed emission inventories and are computationally expensive. This study develops a machine learning--based surrogate framework inspired by the Empirical Kinetic Modeling Approach (EKMA). Using hourly air quality observations from Los Angeles during 2024--2025, a random forest model is trained to predict surface ozone concentrations based on precursor measurements and spatiotemporal features, including site location and cyclic time encodings. The model achieves strong predictive performance, with permutation importance highlighting the dominant roles of diurnal temporal features and nitrogen dioxide, along with additional contributions from carbon monoxide. Building on the trained surrogate, EKMA-style sensitivity experiments are conducted by perturbing precursor concentrations while holding other covariates fixed. The results indicate that ozone formation in Los Angeles during the study period is predominantly VOC-limited. Overall, the proposed framework offers an efficient and interpretable approach for ozone regime diagnosis in data-rich urban environments.
https://arxiv.org/abs/2601.12321
Academic Papers
svg
c14ba6a21623d5b66f7e8fb6efd55b6bd80872fd4e8f16529b1de3e229b63e4f
2026-01-21T00:00:00-05:00
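The EKMA-style sensitivity experiment described above (perturb one precursor while holding all other covariates fixed) can be sketched generically. The surrogate below is a stand-in analytic toy, not the paper's trained random forest, and the regime rule is a deliberately simplified one-at-a-time comparison.

```python
# Sketch of an EKMA-style perturbation experiment. `predict` stands in
# for any trained surrogate; here it is a toy ozone response in which
# production is limited by the scarcer precursor.
def predict(no2, voc):
    return no2 * voc / (no2 + voc)

def regime(no2, voc, eps=0.1):
    # one-at-a-time relative perturbations, other inputs held fixed
    base = predict(no2, voc)
    d_voc = predict(no2, voc * (1 + eps)) - base
    d_no2 = predict(no2 * (1 + eps), voc) - base
    return "VOC-limited" if d_voc > d_no2 else "NOx-limited"
```

With this toy response, NO2-rich conditions such as `regime(3.0, 1.0)` come out VOC-limited and VOC-rich conditions such as `regime(1.0, 5.0)` come out NOx-limited, matching the qualitative EKMA logic; with a real surrogate the same perturbation loop would be run over observed covariate profiles.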
Single-index Semiparametric Transformation Cure Models with Interval-censored Data
arXiv:2601.12370v1 Announce Type: new Abstract: Interval-censored data commonly arise in medical studies when the event time of interest is only known to lie within an interval. In the presence of a cure subgroup, conventional mixture cure models typically assume a logistic model for the uncure probability and a proportional hazards model for the susceptible subjects. However, in practice, the assumptions of a parametric form for the uncure probability and a proportional hazards model for the susceptible may not always be satisfied. In this paper, we propose a class of flexible single-index semiparametric transformation cure models for interval-censored data, where a single-index model and a semiparametric transformation model are utilized for the uncure probability and the conditional survival probability, respectively, encompassing both the proportional hazards cure and proportional odds cure models as special cases. We approximate the single-index function and the cumulative baseline hazard function via the kernel technique and splines, respectively, and develop a computationally feasible expectation-maximisation (EM) algorithm, facilitated by a four-layer gamma-frailty Poisson data augmentation. Simulation studies demonstrate the satisfactory performance of our proposed method, compared to the spline-based approach and the classical logistic-based mixture cure models. The application of the proposed methodology is illustrated using the Alzheimer's dataset.
https://arxiv.org/abs/2601.12370
Academic Papers
svg
666c3a9496302ec6a6c86144d94a71bf05af34d5b074fe918720bca73fc9eea0
2026-01-21T00:00:00-05:00
Robust semi-parametric mixtures of linear experts using the contaminated Gaussian distribution
arXiv:2601.12425v1 Announce Type: new Abstract: Semi- and non-parametric mixtures of regressions are a very useful and flexible class of mixture-of-regressions models in which some or all of the parameters are non-parametric functions of the covariates. These models are, however, based on the Gaussian assumption for the component error distributions. Thus, their estimation is sensitive to outliers and heavy-tailed error distributions. In this paper, we propose semi- and non-parametric contaminated Gaussian mixtures of regressions to robustly estimate the parametric and/or non-parametric terms of the models in the presence of mild outliers. The virtue of using a contaminated Gaussian error distribution is that we can simultaneously perform model-based clustering of observations and model-based outlier detection. We propose two algorithms, an expectation-maximization (EM)-type algorithm and an expectation-conditional-maximization (ECM)-type algorithm, to perform maximum likelihood and local-likelihood kernel estimation of the parametric and non-parametric terms of the proposed models, respectively. The robustness of the proposed models is examined using an extensive simulation study. The practical utility of the proposed models is demonstrated using real data.
https://arxiv.org/abs/2601.12425
Academic Papers
svg
d800b9d775a2e0c9661bfc6f960d20211b4fe4eae938406bc7aa5f46688ac516
2026-01-21T00:00:00-05:00
Assessing Interactive Causes of an Occurred Outcome Due to Two Binary Exposures
arXiv:2601.12478v1 Announce Type: new Abstract: In contrast to evaluating treatment effects, causal attribution analysis focuses on identifying the key factors responsible for an observed outcome. For two binary exposure variables and a binary outcome variable, researchers need to assess not only the likelihood that an observed outcome was caused by a particular exposure, but also the likelihood that it resulted from the interaction between the two exposures. For example, in the case of a male worker who smoked, was exposed to asbestos, and developed lung cancer, researchers aim to explore whether the cancer resulted from smoking, asbestos exposure, or their interaction. Even in randomized controlled trials, widely regarded as the gold standard for causal inference, identifying and evaluating retrospective causal interactions between two exposures remains challenging. In this paper, we define posterior probabilities to characterize the interactive causes of an observed outcome. We establish the identifiability of posterior probabilities by using a secondary outcome variable that may appear after the primary outcome. We apply the proposed method to the classic case of smoking and asbestos exposure. Our results indicate that for lung cancer patients who smoked and were exposed to asbestos, the disease is primarily attributable to the synergistic effect between smoking and asbestos exposure.
https://arxiv.org/abs/2601.12478
Academic Papers
svg
74f45605cd4a63ee51d406bf83f4a395590d81fe6ddf45aaa8c712296ae55030
2026-01-21T00:00:00-05:00
Bayesian Inference for Partially Observed McKean-Vlasov SDEs with Full Distribution Dependence
arXiv:2601.12515v1 Announce Type: new Abstract: McKean-Vlasov stochastic differential equations (MVSDEs) describe systems whose dynamics depend on both individual states and the population distribution, and they arise widely in neuroscience, finance, and epidemiology. In many applications the system is only partially observed, making inference very challenging when both drift and diffusion coefficients depend on the evolving empirical law. This paper develops a Bayesian framework for latent state inference and parameter estimation in such partially observed MVSDEs. We combine time-discretization with particle-based approximations to construct tractable likelihood estimators, and we design two particle Markov chain Monte Carlo (PMCMC) algorithms: a single-level PMCMC method and a multilevel PMCMC (MLPMCMC) method that couples particle systems across discretization levels. The multilevel construction yields correlated likelihood estimates and achieves mean square error $O(\varepsilon^2)$ at computational cost $O(\varepsilon^{-6})$, improving on the $O(\varepsilon^{-7})$ complexity of single-level schemes. We address the fully law-dependent diffusion setting, which is the most general formulation of MVSDEs, and provide theoretical guarantees under standard regularity assumptions. Numerical experiments confirm the efficiency and accuracy of the proposed methodology.
https://arxiv.org/abs/2601.12515
Academic Papers
svg
46aaaa5c36673e6d77e0cb0ffb7013ac971f8ef1cf1b864318c594ebed4f68ee
2026-01-21T00:00:00-05:00
Stop using limiting stimuli as a measure of sensitivities of energetic materials
arXiv:2601.12552v1 Announce Type: new Abstract: Accurately estimating the sensitivity of explosive materials is a potentially life-saving task which requires standardised protocols across nations. One of the most widely applied procedures worldwide is the so-called '1-In-6' test from the United Nations (UN) Manual of Tests and Criteria, which estimates a 'limiting stimulus' for a material. In this paper we demonstrate that, despite their popularity, limiting stimuli are not a well-defined notion of sensitivity and do not provide reliable information about a material's susceptibility to ignition. In particular, they do not permit construction of confidence intervals to quantify estimation uncertainty. We show that continued reliance on limiting stimuli through the 1-In-6 test has caused needless confusion in energetic materials research, both in theoretical studies and practical safety applications. To remedy this problem, we consider three well-founded alternative approaches to sensitivity testing to replace limiting stimulus estimation. We compare their performance in an extensive simulation study and apply the best-performing approach to real data, estimating the friction sensitivity of pentaerythritol tetranitrate (PETN).
https://arxiv.org/abs/2601.12552
Academic Papers
svg
fa631033905be6d664b115c976a2567b847f248f23a52d0ff272a87aaaefbb34
2026-01-21T00:00:00-05:00
A Theory of Diversity for Random Matrices with Applications to In-Context Learning of Schr\"odinger Equations
arXiv:2601.12587v1 Announce Type: new Abstract: We address the following question: given a collection $\{\mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)}\}$ of independent $d \times d$ random matrices drawn from a common distribution $\mathbb{P}$, what is the probability that the centralizer of $\{\mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)}\}$ is trivial? We provide lower bounds on this probability in terms of the sample size $N$ and the dimension $d$ for several families of random matrices which arise from the discretization of linear Schr\"odinger operators with random potentials. When combined with recent work on machine learning theory, our results provide guarantees on the generalization ability of transformer-based neural networks for in-context learning of Schr\"odinger equations.
https://arxiv.org/abs/2601.12587
Academic Papers
svg
c867b29f11e3faf245fc6233a090d30f5b7ab250a8d1afc27be4392bfbe1cc00
2026-01-21T00:00:00-05:00
Quasi-Bayesian Variable Selection: Model Selection without a Model
arXiv:2601.12767v1 Announce Type: new Abstract: Bayesian inference offers a powerful framework for variable selection by incorporating sparsity through prior beliefs and quantifying uncertainty about parameters, leading to consistent procedures with good finite-sample performance. However, accurately quantifying uncertainty requires a correctly specified model, and there is increasing awareness of the problems that model misspecification causes for variable selection. Current solutions to this problem either require a more complex model, detracting from the interpretability of the original variable selection task, or gain robustness by moving outside of rigorous Bayesian uncertainty quantification. This paper establishes the model quasi-posterior as a principled tool for variable selection. We prove that the model quasi-posterior shares many of the desirable properties of full Bayesian variable selection, but no longer necessitates a full likelihood specification. Instead, the quasi-posterior only requires the specification of mean and variance functions, and as a result, is robust to other aspects of the data. Laplace approximations are used to approximate the quasi-marginal likelihood when it is not available in closed form to provide computational tractability. We demonstrate through extensive simulation studies that the quasi-posterior improves variable selection accuracy across a range of data-generating scenarios, including linear models with heavy-tailed errors and overdispersed count data. We further illustrate the practical relevance of the proposed approach through applications to real datasets from social science and genomics.
https://arxiv.org/abs/2601.12767
Academic Papers
svg
ef7e25b83e31a22ea3b4e9353792534215c58bc623704e12659ecc14f1aa1a64
2026-01-21T00:00:00-05:00
The impact of abnormal temperatures on crop yields in Italy: a functional quantile regression approach
arXiv:2601.12864v1 Announce Type: new Abstract: In this study, we apply functional regression analysis to identify the specific within-season periods during which temperature and precipitation anomalies most affect crop yields. Using provincial data for Italy from 1952 to 2023, we analyze two major cereals, maize and soft wheat, and quantify how abnormal weather conditions influence yields across the growing cycle. Unlike traditional statistical yield models, which assume additive temperature effects over the season, our approach is capable of capturing the timing and functional shape of weather impacts. In particular, the results show that above-average temperatures reduce maize yields primarily between June and August, while exerting a mild positive effect in April and October. For soft wheat, unusually high temperatures negatively affect yields from late March to early April. Precipitation also exerts season-dependent effects, improving wheat yields early in the season but reducing them later on. These findings highlight the importance of accounting for intra-seasonal weather patterns to provide insights for climate change adaptation strategies, including the timely adjustment of key crop management inputs.
https://arxiv.org/abs/2601.12864
Academic Papers
svg
93aa85d7bcd16fdebeb5fc375a4a65436f78345162a97d9a4ea894beb61699d4
2026-01-21T00:00:00-05:00
Guidance for Addressing Individual Time Effects in Cohort Stepped Wedge Cluster Randomized Trials: A Simulation Study
arXiv:2601.12930v1 Announce Type: new Abstract: Background: Stepped wedge cluster randomized trials (SW-CRTs) involve sequential measurements within clusters over time. Initially, all clusters start in the control condition before crossing over to the intervention on a staggered schedule. In cohort designs, secular trends, cluster-level changes, and individual-level changes (e.g., aging) must be considered. Methods: We performed a Monte Carlo simulation to analyze the influence of different time effects on the estimation of the intervention effect in cohort SW-CRTs. We compared four linear mixed models with different adjustment strategies, all including random intercepts for clustering and repeated measurements. We recorded the estimated fixed intervention effects and their corresponding model-based standard errors, derived from models both without and with cluster-robust variance estimators (CRVEs). Results: Models incorporating fixed categorical time effects, a fixed intervention effect, and two random intercepts provided unbiased estimates of the intervention effect in both closed and open cohort SW-CRTs. Fixed categorical time effects captured temporal cohort changes, while random individual effects accounted for baseline differences. However, these differences can cause large, non-normally distributed random individual effects. CRVEs provide reliable standard errors for the intervention effect, controlling the Type I error rate. Conclusions: Our simulation study is the first to assess individual-level changes over time in cohort SW-CRTs. Linear mixed models incorporating fixed categorical time effects and random cluster and individual effects yield unbiased intervention effect estimates. However, cluster-robust variance estimation is necessary when time-varying independent variables exhibit nonlinear effects. We recommend always using CRVEs.
https://arxiv.org/abs/2601.12930
Academic Papers
svg
e44cff2e674299fd9ff04284ed009586afa205282da842a07f2e7665f2aba0b5
2026-01-21T00:00:00-05:00
Propensity Score Propagation: A General Framework for Design-Based Inference with Unknown Propensity Scores
arXiv:2601.13150v1 Announce Type: new Abstract: Design-based inference, also known as randomization-based or finite-population inference, provides a principled framework for causal and descriptive analyses that attribute randomness solely to the design mechanism (e.g., treatment assignment, sampling, or missingness) without imposing distributional or modeling assumptions on the outcome data of study units. Despite its conceptual appeal and long history, this framework becomes challenging to apply when the underlying design probabilities (i.e., propensity scores) are unknown, as is common in observational studies, real-world surveys, and missing-data settings. Existing plug-in or matching-based approaches either ignore the uncertainty stemming from estimated propensity scores or rely on the post-matching uniform-propensity condition (an assumption typically violated when there are multiple or continuous covariates), leading to systematic under-coverage. Finite-population M-estimation partially mitigates these issues but remains limited to parametric propensity score models. In this work, we introduce propensity score propagation, a general framework for valid design-based inference with unknown propensity scores. The framework introduces a regeneration-and-union procedure that automatically propagates uncertainty in propensity score estimation into downstream design-based inference. It accommodates both parametric and nonparametric propensity score models, integrates seamlessly with standard tools in design-based inference with known propensity scores, and is universally applicable to various important design-based inference problems, such as observational studies, real-world surveys, and missing-data analyses, among many others. Simulation studies demonstrate that the proposed framework restores nominal coverage levels in settings where conventional methods suffer from severe under-coverage.
https://arxiv.org/abs/2601.13150
Academic Papers
svg
2631c898df73df9dd80095893ce894424b56de1c52b6f578fd1ed65ad1bacc45
2026-01-21T00:00:00-05:00
Empirical Risk Minimization with $f$-Divergence Regularization
arXiv:2601.13191v1 Announce Type: new Abstract: In this paper, the solution to the empirical risk minimization problem with $f$-divergence regularization (ERM-$f$DR) is presented and conditions under which the solution also serves as the solution to the minimization of the expected empirical risk subject to an $f$-divergence constraint are established. The proposed approach extends applicability to a broader class of $f$-divergences than previously reported and yields theoretical results that recover previously known results. Additionally, the difference between the expected empirical risk of the ERM-$f$DR solution and that of its reference measure is characterized, providing insights into previously studied cases of $f$-divergences. A central contribution is the introduction of the normalization function, a mathematical object that is critical in both the dual formulation and practical computation of the ERM-$f$DR solution. This work presents an implicit characterization of the normalization function as a nonlinear ordinary differential equation (ODE), establishes its key properties, and subsequently leverages them to construct a numerical algorithm for approximating the normalization factor under mild assumptions. Further analysis demonstrates structural equivalences between ERM-$f$DR problems with different $f$-divergences via transformations of the empirical risk. Finally, the proposed algorithm is used to compute the training and test risks of ERM-$f$DR solutions under different $f$-divergence regularizers. This numerical example highlights the practical implications of choosing different functions $f$ in ERM-$f$DR problems.
https://arxiv.org/abs/2601.13191
Academic Papers
svg
1a27bcb06e0e315c587ad9886a2ae8ed7e5624bc1508400ec7bb91de1af6c00f
2026-01-21T00:00:00-05:00
Improving Geopolitical Forecasts with Bayesian Networks
arXiv:2601.13362v1 Announce Type: new Abstract: This study explores how Bayesian networks (BNs) can improve forecast accuracy compared to logistic regression and recalibration and aggregation methods, using data from the Good Judgment Project. Regularized logistic regression models and a baseline recalibrated aggregate were compared to two types of BNs: structure-learned BNs with arcs between predictors, and naive BNs. Four predictor variables were examined: absolute difference from the aggregate, forecast value, days prior to question close, and mean standardized Brier score. Results indicated the recalibrated aggregate achieved the highest accuracy (AUC = 0.985), followed by both types of BNs, then the logistic regression models. The BNs were likely harmed by information lost during discretization, while the logistic regression models were likely harmed by violation of the linearity assumption. Future research should explore hybrid approaches combining BNs with logistic regression, examine additional predictor variables, and account for hierarchical data dependencies.
https://arxiv.org/abs/2601.13362
Academic Papers
svg
878b5a01eb334390229d000de65ef0d26ece6d8baee43c2a5263bbd1912af2fe
2026-01-21T00:00:00-05:00
A Two-Stage Bayesian Framework for Multi-Fidelity Online Updating of Spatial Fragility Fields
arXiv:2601.13396v1 Announce Type: new Abstract: This paper addresses a long-standing gap in natural hazard modeling by unifying physics-based fragility functions with real-time post-disaster observations. It introduces a Bayesian framework that continuously refines regional vulnerability estimates as new data emerges. The framework reformulates physics-informed fragility estimates into a Probit-Normal (PN) representation that captures aleatory variability and epistemic uncertainty in an analytically tractable form. Stage 1 performs local Bayesian updating by moment-matching PN marginals to Beta surrogates that preserve their probability shapes, enabling conjugate Beta-Bernoulli updates with soft, multi-fidelity observations. Fidelity weights encode source reliability, and the resulting Beta posteriors are re-projected into PN form, producing heteroscedastic fragility estimates whose variances reflect data quality and coverage. Stage 2 assimilates these heteroscedastic observations within a probit-warped Gaussian Process (GP), which propagates information from high-fidelity sites to low-fidelity and unobserved regions through a composite kernel that links space, archetypes, and correlated damage states. The framework is applied to the 2011 Joplin tornado, where wind-field priors and computer-vision damage assessments are fused under varying assumptions about tornado width, sampling strategy, and observation completeness. Results show that the method corrects biased priors, propagates information spatially, and produces uncertainty-aware exceedance probabilities that support real-time situational awareness.
https://arxiv.org/abs/2601.13396
Academic Papers
svg
9caee830295ff74936ca1241a805fbfc8389d93035319a46a9ca4742ff1f948b
2026-01-21T00:00:00-05:00
Associating High-Dimensional Longitudinal Datasets through an Efficient Cross-Covariance Decomposition
arXiv:2601.13405v1 Announce Type: new Abstract: Understanding associations between paired high-dimensional longitudinal datasets is a fundamental yet challenging problem that arises across scientific domains, including longitudinal multi-omic studies. The difficulty stems from the complex, time-varying cross-covariance structure coupled with high dimensionality, which complicates both model formulation and statistical estimation. To address these challenges, we propose a new framework, termed Functional-Aggregated Cross-covariance Decomposition (FACD), tailored for canonical cross-covariance analysis between paired high-dimensional longitudinal datasets through a statistically efficient and theoretically grounded procedure. Unlike existing methods that are often limited to low-dimensional data or rely on explicit parametric modeling of temporal dynamics, FACD adaptively learns temporal structure by aggregating signals across features and naturally accommodates variable selection to identify the most relevant features associated across datasets. We establish statistical guarantees for FACD and demonstrate its advantages over existing approaches through extensive simulation studies. Finally, we apply FACD to a longitudinal multi-omic human study, revealing blood molecules with time-varying associations across omic layers during acute exercise.
https://arxiv.org/abs/2601.13405
Academic Papers
svg
20ce18ee5dc6981d09cc902cb08ef0fce681c522238836ca4b2808e7b0ac4d80
2026-01-21T00:00:00-05:00
Pathway-based Bayesian factor models for gene expression data
arXiv:2601.13419v1 Announce Type: new Abstract: Interpreting gene expression data requires methods that can uncover coordinated patterns corresponding to biological pathways. Traditional approaches such as principal component analysis and factor models reduce dimensionality, but latent components may have unclear biological meaning. Current approaches to incorporate pathway annotations impose restrictive assumptions, require extensive hyperparameter tuning, and do not provide principled uncertainty quantification, hindering the robustness and reproducibility of results. Here, we develop Bayesian Analysis with gene-Sets Informed Latent space (BASIL), a scalable Bayesian factor modeling framework that incorporates gene pathway annotations into latent variable analysis for RNA-sequencing data. BASIL places structured priors on factor loadings, shrinking them toward combinations of annotated gene sets, enhancing biological interpretability and stability, while simultaneously learning new unstructured components. BASIL provides accurate covariance estimates and uncertainty quantification, without resorting to computationally expensive Markov chain Monte Carlo sampling. An automatic empirical Bayes procedure eliminates the need for manual hyperparameter tuning, promoting reproducibility and usability in practice. In simulations and large-scale human transcriptomic datasets, BASIL consistently outperforms state-of-the-art approaches, accurately reconstructing gene-gene covariance, selecting the correct latent dimension, and identifying biologically coherent modules.
https://arxiv.org/abs/2601.13419
Academic Papers
svg
f56b3d43ea6d37fcceef4f583bbc504de747659bca56c00fd62a9db711e068fb
2026-01-21T00:00:00-05:00
Identifying Causes of Test Unfairness: Manipulability and Separability
arXiv:2601.13449v1 Announce Type: new Abstract: Differential item functioning (DIF) is a widely used statistical notion for identifying items that may disadvantage specific groups of test-takers. These groups are often defined by non-manipulable characteristics, e.g., gender, race/ethnicity, or English-language learner (ELL) status. While DIF can be framed as a causal fairness problem by treating group membership as the treatment variable, this invokes the long-standing controversy over the interpretation of causal effects for non-manipulable treatments. To better identify and interpret causal sources of DIF, this study leverages an interventionist approach using treatment decomposition proposed by Robins and Richardson (2010). Under this framework, we can decompose a non-manipulable treatment into intervening variables. For example, ELL status can be decomposed into English vocabulary unfamiliarity and classroom learning barriers, each of which influences the outcome through different causal pathways. We formally define separable DIF effects associated with these decomposed components, depending on the absence or presence of item impact, and provide causal identification strategies for each effect. We then apply the framework to biased test items in the SAT and Regents exams. We also provide formal detection methods using causal machine learning methods, namely causal forests and Bayesian additive regression trees, and demonstrate their performance through a simulation study. Finally, we discuss the implications of adopting interventionist approaches in educational testing practices.
https://arxiv.org/abs/2601.13449
Academic Papers
svg
3b6c741cc75aa36f64eab37d68309a33657d5f4258a02ad2a1674564374929bc
2026-01-21T00:00:00-05:00
Categorical distance correlation under general encodings and its application to high-dimensional feature screening
arXiv:2601.13454v1 Announce Type: new Abstract: In this paper, we extend distance correlation to categorical data with general encodings, such as one-hot encoding for nominal variables and semicircle encoding for ordinal variables. Unlike existing methods, our approach leverages the spacing information between categories, which enhances the performance of distance correlation. Two estimates, the maximum likelihood estimate and a bias-corrected estimate, are given, together with their limiting distributions under the null and alternative hypotheses. Furthermore, we establish the sure screening property for high-dimensional categorical data under mild conditions. We conduct a simulation study to compare the performance of different encodings, and illustrate their practical utility using the 2018 General Social Survey data.
https://arxiv.org/abs/2601.13454
Academic Papers
svg
519d8963e5acdcf9a6699d8ad2a1f9d76542e0f284ad524a3a094bfc0b79af60
2026-01-21T00:00:00-05:00
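The extension described in the abstract above rests on computing distance correlation between an encoded categorical variable and a response. As a hedged illustration (not the authors' code; the semicircle encoding and the proposed bias-corrected estimator are omitted), the standard empirical distance correlation with one-hot encoding can be sketched in a few lines of numpy:

```python
import numpy as np

def dcenter(D):
    """Double-center a pairwise distance matrix."""
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dcor(X, Y):
    """Empirical distance correlation between the rows of X and Y."""
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    A, B = dcenter(dx), dcenter(dy)
    dcov2 = (A * B).mean()                      # squared distance covariance (V-statistic)
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(2)
n = 300
cat = rng.integers(0, 3, n)                     # nominal variable with 3 levels
onehot = np.eye(3)[cat]                         # one-hot encoding of the categories
y_dep = cat + 0.1 * rng.normal(size=n)          # response that depends on the category
y_ind = rng.normal(size=n)                      # response independent of it

d_dep = dcor(onehot, y_dep[:, None])
d_ind = dcor(onehot, y_ind[:, None])
print(d_dep)   # large: strong dependence detected
print(d_ind)   # near 0: little dependence
```

With the spacing-aware encodings the paper proposes, only the construction of the encoded matrix would change; the distance-correlation machinery above stays the same.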
Two-stage least squares with clustered data
arXiv:2601.13507v1 Announce Type: new Abstract: Clustered data -- where units of observation are nested within higher-level groups, such as repeated measurements on users, or panel data of firms, industries, or geographic regions -- are ubiquitous in business research. When the objective is to estimate the causal effect of a potentially endogenous treatment, a common approach -- which we call the canonical two-stage least squares (2sls) -- is to fit a 2sls regression of the outcome on treatment status with instrumental variables (IVs) for point estimation, and apply cluster-robust standard errors to account for clustering in inference. When both the treatment and IVs vary within clusters, a natural alternative -- which we call the two-stage least squares with fixed effects (2sfe) -- is to include cluster indicators in the 2sls specification, thereby incorporating cluster information in point estimation as well. This paper clarifies the trade-off between these two approaches within the local average treatment effect (LATE) framework, and makes three contributions. First, we establish the validity of both approaches for Wald-type inference of the LATE when clusters are homogeneous, and characterize their relative efficiency. We show that, when the true outcome model includes cluster-specific effects, 2sfe is more efficient than the canonical 2sls only when the variation in cluster-specific effects dominates that in unit-level errors. Second, we show that with heterogeneous clusters, 2sfe recovers a weighted average of cluster-specific LATEs, whereas the canonical 2sls generally does not. Third, to guide empirical choice between the two procedures, we develop a joint asymptotic theory for the two estimators under homogeneous clusters, and propose a Wald-type test for detecting cluster heterogeneity.
https://arxiv.org/abs/2601.13507
Academic Papers
svg
a913e5974297731393e57adaa96e814da363d52b001435fb47265c3a57bcaba7
2026-01-21T00:00:00-05:00
Post-selection inference for penalized M-estimators via score thinning
arXiv:2601.13514v1 Announce Type: new Abstract: We consider inference for M-estimators after model selection using a sparsity-inducing penalty. While existing methods for this task require bespoke inference procedures, we propose a simpler approach, which relies on two insights: (i) adding and subtracting carefully-constructed noise to a Gaussian random variable with unknown mean and known variance leads to two \emph{independent} Gaussian random variables; and (ii) both the selection event resulting from penalized M-estimation, and the event that a standard (non-selective) confidence interval for an M-estimator covers its target, can be characterized in terms of an approximately normal ``score variable". We combine these insights to show that -- when the noise is chosen carefully -- there is asymptotic independence between the model selected using a noisy penalized M-estimator, and the event that a standard (non-selective) confidence interval on noisy data covers the selected parameter. Therefore, selecting a model via penalized M-estimation (e.g. \verb=glmnet= in \verb=R=) on noisy data, and then conducting \emph{standard} inference on the selected model (e.g. \verb=glm= in \verb=R=) using noisy data, yields valid inference: \emph{no bespoke methods are required}. Our results require independence of the observations, but only weak distributional requirements. We apply the proposed approach to conduct inference on the association between sex and smoking in a social network.
https://arxiv.org/abs/2601.13514
Academic Papers
svg
e6cc0d1db381cf776404edaa3f470efc82b480c76b2751a39534c1b47efe16c8
2026-01-21T00:00:00-05:00
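Insight (i) in the abstract above admits a short self-contained demonstration: for $Y \sim N(\mu, \sigma^2)$ with known $\sigma^2$, drawing independent $Z \sim N(0, \sigma^2)$ and forming $Y + Z$ and $Y - Z$ gives $\mathrm{Cov}(Y+Z, Y-Z) = \mathrm{Var}(Y) - \mathrm{Var}(Z) = 0$, hence independence by joint normality. A minimal numpy sketch (illustrative only; the paper's full procedure couples this construction with penalized M-estimation and a score-variable argument):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 3.0, 2.0, 200_000

y = rng.normal(mu, sigma, n)      # observed Gaussian with unknown mean mu, known variance
z = rng.normal(0.0, sigma, n)     # carefully-constructed noise: same known variance

y1 = y + z                        # first copy: used for model selection
y2 = y - z                        # second copy: used for standard inference

# Cov(y + z, y - z) = Var(y) - Var(z) = 0, and joint normality implies
# independence; empirically the correlation is ~0 and both copies estimate mu.
print(np.corrcoef(y1, y2)[0, 1])  # close to 0
print(y1.mean(), y2.mean())       # both close to mu = 3
```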
What is Overlap Weighting, How Has it Evolved, and When to Use It for Causal Inference?
arXiv:2601.13535v1 Announce Type: new Abstract: The growing availability of large health databases has expanded the use of observational studies for comparative effectiveness research. Unlike randomized trials, observational studies must adjust for systematic differences in patient characteristics between treatment groups. Propensity score methods, including matching, weighting, stratification, and regression adjustment, address this issue by creating groups that are comparable with respect to measured covariates. Among these approaches, overlap weighting (OW) has emerged as a principled and efficient method that emphasizes individuals at empirical equipoise, those who could plausibly receive either treatment. By assigning weights proportional to the probability of receiving the opposite treatment, OW targets the Average Treatment Effect in the Overlap population (ATO), achieves exact mean covariate balance under logistic propensity score models, and minimizes asymptotic variance. Over the last decade, the OW method has been recognized as a valuable confounding adjustment tool across the statistical, epidemiologic, and clinical research communities, and is increasingly applied in clinical and health studies. Given the growing interest in using observational data to emulate randomized trials and the capacity of OW to prioritize populations at clinical equipoise while achieving covariate balance (fundamental attributes of randomized studies), this article provides a concise overview of recent methodological developments in OW and practical guidance on when it represents a suitable choice for causal inference.
https://arxiv.org/abs/2601.13535
Academic Papers
svg
c63b8c314a8d5b9ce37fbcc5dde4f5c2464b13762336180be99fc213a62bf265
2026-01-21T00:00:00-05:00
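Concretely, overlap weighting assigns each unit the probability of receiving the opposite treatment: $w_i = 1 - e(x_i)$ for treated units and $w_i = e(x_i)$ for controls, where $e(x)$ is the propensity score. The sketch below is illustrative, not from the article; it uses the true simulated propensity score for simplicity, whereas in practice $e(x)$ would be estimated (typically by logistic regression, under which OW achieves exact mean covariate balance):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)                       # a confounder
e = 1.0 / (1.0 + np.exp(-0.5 * x))           # true propensity score e(x)
a = rng.binomial(1, e)                       # treatment assignment
y = 1.0 * a + x + rng.normal(size=n)         # outcome with homogeneous effect = 1

# Overlap weights: probability of the *opposite* treatment.
w = np.where(a == 1, 1.0 - e, e)

# Weighted difference in means targets the ATO (here equal to 1,
# since the treatment effect is homogeneous).
ato = (np.sum(w * a * y) / np.sum(w * a)
       - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))
print(ato)   # approximately 1
```

Note how the weights tilt both groups toward the same overlap density proportional to $e(x)\,(1 - e(x))$, the population at empirical equipoise described in the abstract.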
Are Large Language Models able to Predict Highly Cited Papers? Evidence from Statistical Publications
arXiv:2601.13627v1 Announce Type: new Abstract: Predicting highly-cited papers is a long-standing challenge due to the complex interactions of research content, scholarly communities, and temporal dynamics. Recent advances in large language models (LLMs) raise the question of whether early-stage textual information can provide useful signals of long-term scientific impact. Focusing on statistical publications, we propose a flexible, text-centered framework that leverages LLMs and structured prompt design to predict highly cited papers. Specifically, we utilize information available at the time of publication, including titles, abstracts, keywords, and limited bibliographic metadata. Using a large corpus of statistical papers, we evaluate predictive performance across multiple publication periods and alternative definitions of highly cited papers. The proposed approach achieves stable and competitive performance relative to existing methods and demonstrates strong generalization over time. Textual analysis further reveals that papers predicted as highly cited concentrate on recurring topics such as causal inference and deep learning. To facilitate practical use of the proposed approach, we further develop a WeChat mini program, \textit{Stat Highly Cited Papers}, which provides an accessible interface for early-stage citation impact assessment. Overall, our results provide empirical evidence that LLMs can capture meaningful early signals of long-term citation impact, while also highlighting their limitations as tools for research impact assessment.
https://arxiv.org/abs/2601.13627
Academic Papers
svg
7bec013783d0f4d90e4505c4ccb96cda209c4efe13e9da2a129f05446e0aa40e
2026-01-21T00:00:00-05:00
Correction of Pooling Matrix Mis-specifications in Compressed Sensing Based Group Testing
arXiv:2601.13641v1 Announce Type: new Abstract: Compressed sensing, which involves the reconstruction of sparse signals from an under-determined linear system, has recently been used to solve problems in group testing. In a public health context, group testing aims to determine the health status values of p subjects from n << p pooled tests. In this paper, we present an algorithm to correct pooling matrix mis-specification errors (MMEs) directly from the pooled results and the available (inaccurate) pooling matrix. Our approach then reconstructs the signal vector from the corrected pooling matrix, in order to determine the health status of the subjects. We further provide theoretical guarantees for the correction of the MMEs and the reconstruction error from the corrected pooling matrix. We also provide several supporting numerical results.
https://arxiv.org/abs/2601.13641
Academic Papers
svg
b5bc07a14ebe053942ed539f7b2885494e2c6e61c49a4c6f0ede7aa319a84ea6
2026-01-21T00:00:00-05:00
Sample Complexity of Average-Reward Q-Learning: From Single-agent to Federated Reinforcement Learning
arXiv:2601.13642v1 Announce Type: new Abstract: Average-reward reinforcement learning offers a principled framework for long-term decision-making by maximizing the mean reward per time step. Although Q-learning is a widely used model-free algorithm with established sample complexity in discounted and finite-horizon Markov decision processes (MDPs), its theoretical guarantees for average-reward settings remain limited. This work studies a simple but effective Q-learning algorithm for average-reward MDPs with finite state and action spaces under the weakly communicating assumption, covering both single-agent and federated scenarios. For the single-agent case, we show that Q-learning with carefully chosen parameters achieves sample complexity $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{\varepsilon^3}\right)$, where $\|h^{\star}\|_{\mathsf{sp}}$ is the span norm of the bias function, improving previous results by at least a factor of $\frac{\|h^{\star}\|_{\mathsf{sp}}^2}{\varepsilon^2}$. In the federated setting with $M$ agents, we prove that collaboration reduces the per-agent sample complexity to $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{M\varepsilon^3}\right)$, with only $\widetilde{O}\left(\frac{\|h^{\star}\|_{\mathsf{sp}}}{\varepsilon}\right)$ communication rounds required. These results establish the first federated Q-learning algorithm for average-reward MDPs, with provable efficiency in both sample and communication complexity.
https://arxiv.org/abs/2601.13642
Academic Papers
svg
474ab32c8b89703b4633ffc360bbd505ccf97222b3ca4fb02143e7049ff7e77d
2026-01-21T00:00:00-05:00
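As a toy illustration of the average-reward objective studied above (this is a differential Q-learning-style update, not the algorithm or parameter choices analyzed in the paper; the MDP and hyperparameters are ad hoc), a tabular agent can estimate the optimal gain on a two-state MDP:

```python
import numpy as np

# Deterministic 2-state MDP (illustrative):
# state 0: action 0 -> reward 1, stay in 0; action 1 -> reward 0, go to state 1
# state 1: either action -> reward 3, go to state 0
# Optimal behavior cycles 0 -> 1 -> 0 with gain (0 + 3) / 2 = 1.5.
R = np.array([[1.0, 0.0], [3.0, 3.0]])
T = np.array([[0, 1], [0, 0]])

rng = np.random.default_rng(3)
Q = np.zeros((2, 2))
rho = 0.0                         # running estimate of the optimal average reward
alpha, eta, eps = 0.05, 0.5, 0.1  # step size, gain step ratio, exploration rate
s = 0
for _ in range(100_000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = T[s, a], R[s, a]
    delta = r - rho + Q[s2].max() - Q[s, a]   # average-reward TD error
    Q[s, a] += alpha * delta
    rho += eta * alpha * delta                # update the gain estimate
    s = s2

print(rho)                  # approaches the optimal gain 1.5
print(int(np.argmax(Q[0]))) # greedy action in state 0: move toward the reward of 3
```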
Building a Standardised Statistical Reporting Toolbox in an Academic Oncology Clinical Trials Unit: The grstat R Package
arXiv:2601.13755v1 Announce Type: new Abstract: Academic Clinical Trial Units frequently face fragmented statistical workflows, leading to duplicated effort, limited collaboration, and inconsistent analytical practices. To address these challenges within an oncology Clinical Trial Unit, we developed grstat, an R package providing a standardised set of tools for routine statistical analyses. Beyond the software itself, the development of grstat is embedded in a structured organisational framework combining formal request tracking, peer-reviewed development, automated testing, and staged validation of new functionalities. The package is intentionally opinionated, reflecting shared practices agreed upon within the unit, and evolves through iterative use in real-world projects. Its development as an open-source project on GitHub supports transparent workflows, collective code ownership, and traceable decision-making. While primarily designed for internal use, this work illustrates a transferable approach to organising, validating, and maintaining a shared analytical toolbox in an academic setting. By coupling technical implementation with governance and validation principles, grstat supports efficiency, reproducibility, and long-term maintainability of biostatistical workflows, and may serve as a source of inspiration for other Clinical Trial Units facing similar organisational challenges.
https://arxiv.org/abs/2601.13755
Academic Papers
svg
23e918e2772167e8c07e5c6cae99db29ad14666ae525f19d90cb8b4594b74b91
2026-01-21T00:00:00-05:00
ChauBoxplot and AdaptiveBoxplot: two R packages for boxplot-based outlier detection
arXiv:2601.13759v1 Announce Type: new Abstract: Tukey's boxplot is widely used for outlier detection; however, its classic fixed-fence rule tends to flag an excessive number of outliers as the sample size grows. To address this limitation, we introduce two new R packages, ChauBoxplot and AdaptiveBoxplot, which implement more robust methods for outlier detection. We also provide practical guidance, drawn from simulation results, to help practitioners choose suitable boxplot methods and balance interpretability with statistical reliability.
https://arxiv.org/abs/2601.13759
Academic Papers
svg
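The two packages are implemented in R; for orientation, the classic fixed-fence rule they improve upon can be sketched in Python. On normal data the fences flag a roughly constant fraction of points (about 0.7%), so the flagged count grows linearly with the sample size, which is exactly the behavior the abstract criticizes:

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Classic Tukey rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

rng = np.random.default_rng(1)
small = tukey_outliers(rng.normal(size=200)).sum()     # a handful of flags
large = tukey_outliers(rng.normal(size=20_000)).sum()  # roughly 100x more flags
```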
5a3ec8fe95659f6d773d837a1b05a9e281e459d67b7f63e35193384e4e64fc66
2026-01-21T00:00:00-05:00
An Adaptive Phase II Trial Design for Dose Selection and Addition in Microfilarial Infections
arXiv:2601.13784v1 Announce Type: new Abstract: We propose a frequentist adaptive phase 2 trial design to evaluate the safety and efficacy of three treatment regimens (doses) compared to placebo for four types of helminth (worm) infections. This trial will be carried out in four sub-Saharan African countries from spring 2025. Since the safety of the highest dose is not yet established, the study begins with the two lower doses and placebo. Based on safety and early efficacy results from an interim analysis, a decision will be made to either continue with the two lower doses or drop one or both and introduce the highest dose instead. This design borrows information across baskets for safety assessment, while efficacy is assessed separately for each basket. The proposed adaptive design addresses several key challenges: (1) The trial must begin with only the two lower doses because reassuring safety data from these doses are required before escalating to a higher dose. (2) Due to the expected speed of recruitment, adaptation decisions must rely on an earlier, surrogate endpoint. (3) The primary outcome is a count variable that follows a mixture distribution with an atom at 0. To control the familywise error rate in the strong sense when comparing multiple doses to the control in the adaptive design, we extend the partial conditional error approach to accommodate the inclusion of new hypotheses after the interim analysis. In a comprehensive simulation study we evaluate various design options and analysis strategies, assessing the robustness of the design under different design assumptions and parameter values. We identify scenarios where the adaptive design improves the trial's ability to identify an optimal dose. Adaptive dose selection enables resource allocation to the most promising treatment arms, increasing the likelihood of selecting the optimal dose while reducing the required overall sample size and trial duration.
https://arxiv.org/abs/2601.13784
Academic Papers
svg
23bef067c53951aaadd4dbe564ad0337b596184551a27ac0580582cd8804c254
2026-01-21T00:00:00-05:00
Unified Unbiased Variance Estimation for MMD: Robust Finite-Sample Performance with Imbalanced Data and Exact Acceleration under Null and Alternative Hypotheses
arXiv:2601.13874v1 Announce Type: new Abstract: The maximum mean discrepancy (MMD) is a kernel-based nonparametric statistic for two-sample testing, whose inferential accuracy depends critically on variance characterization. Existing work provides various finite-sample estimators of the MMD variance, often differing under the null and alternative hypotheses and across balanced or imbalanced sampling schemes. In this paper, we study the variance of the MMD statistic through its U-statistic representation and Hoeffding decomposition, and establish a unified finite-sample characterization covering different hypotheses and sample configurations. Building on this analysis, we propose an exact acceleration method for the univariate case under the Laplacian kernel, which reduces the overall computational complexity from $\mathcal O(n^2)$ to $\mathcal O(n \log n)$.
https://arxiv.org/abs/2601.13874
Academic Papers
svg
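The O(n^2) baseline referred to is the standard U-statistic estimator of squared MMD; a minimal univariate sketch with the Laplacian kernel is below (the bandwidth choice is an assumption). The paper's O(n log n) acceleration is a separate contribution and is not reproduced here:

```python
import numpy as np

def mmd2_unbiased(x, y, bw=1.0):
    """Unbiased U-statistic estimate of MMD^2 with the Laplacian kernel
    k(a, b) = exp(-|a - b| / bw); naive O(n^2) implementation."""
    def gram(a, b):
        return np.exp(-np.abs(a[:, None] - b[None, :]) / bw)
    n, m = len(x), len(y)
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    return ((kxx.sum() - np.trace(kxx)) / (n * (n - 1))
            + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
            - 2.0 * kxy.mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(1.0, 1.0, 500)  # mean-shifted alternative
z = rng.normal(0.0, 1.0, 500)  # same distribution as x
```

Under the alternative (`x` vs `y`) the estimate is clearly positive, while under the null (`x` vs `z`) it fluctuates around zero; characterizing that variance in both regimes is the paper's subject.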
0064a644f863cab8d984cee9ffa3cfb1ca8422df11d62bec35635a4ffd874c99
2026-01-21T00:00:00-05:00
Modeling Zero-Inflated Longitudinal Circular Data Using Bayesian Methods: Application to Ophthalmology
arXiv:2601.13998v1 Announce Type: new Abstract: This paper introduces the modeling of circular data with excess zeros under a longitudinal framework, where the response is a circular variable and the covariates can be both linear and circular in nature. In the literature, various circular-circular and circular-linear regression models have been studied and applied to different real-world problems. However, there are no models for addressing zero-inflated circular observations in the context of longitudinal studies. Motivated by a real case study, a mixed-effects two-stage model based on the projected normal distribution is proposed to handle such issues. The interpretation of the model parameters is discussed and identifiability conditions are derived. A Bayesian methodology based on the Gibbs sampling technique is developed for estimating the associated model parameters. Simulation results show that the proposed method outperforms its competitors in various situations. A real dataset on post-operative astigmatism is analyzed to demonstrate the practical implementation of the proposed methodology. The use of the proposed method facilitates effective decision-making for treatment choices and in the follow-up phases.
https://arxiv.org/abs/2601.13998
Academic Papers
svg
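The projected normal distribution underlying the model maps a bivariate normal vector to its angle. A minimal sampler is sketched below; the identity covariance is a simplifying assumption, and the paper's model additionally handles zero inflation, mixed effects, and longitudinal structure:

```python
import numpy as np

def sample_projected_normal(mu, n, rng):
    """Draw angles theta = atan2(Z2, Z1) with (Z1, Z2) ~ N(mu, I):
    a simplified projected normal PN_2(mu, I) sampler."""
    z = mu + rng.normal(size=(n, 2))
    return np.arctan2(z[:, 1], z[:, 0])  # angles in (-pi, pi]

rng = np.random.default_rng(0)
theta = sample_projected_normal(np.array([3.0, 0.0]), 100_000, rng)
# Circular mean direction, concentrated near the angle of mu (here 0)
mean_direction = np.angle(np.exp(1j * theta).mean())
```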
956e237c7f20ec6702f4c991cd1b6d858c791c59d1c0d574750ed8bd295429dd
2026-01-21T00:00:00-05:00
Intermittent time series forecasting: local vs global models
arXiv:2601.14031v1 Announce Type: new Abstract: Intermittent time series, characterised by the presence of a significant amount of zeros, constitute a large percentage of inventory items in supply chains. Probabilistic forecasts are needed to plan inventory levels; the predictive distribution should cover non-negative values, have a mass at zero, and have a long upper tail. Intermittent time series are commonly forecast using local models, which are trained individually on each time series. In recent years, global models, which are trained on a large collection of time series, have become popular for time series forecasting. Global models are often based on neural networks. However, they have not yet been exhaustively tested on intermittent time series. We carry out the first study comparing state-of-the-art local (iETS, TweedieGP) and global models (D-Linear, DeepAR, Transformers) on intermittent time series. For neural network models we consider three different distribution heads suitable for intermittent time series: negative binomial, hurdle-shifted negative binomial, and Tweedie. We use the last two distribution heads with neural networks for the first time. We perform experiments on five large datasets comprising more than 40,000 real-world time series. Among the neural networks, D-Linear provides the best accuracy; it also consistently outperforms the local models. Moreover, it has low computational requirements. Transformer-based architectures are instead much more computationally demanding and less accurate. Among the distribution heads, the Tweedie provides the best estimates of the highest quantiles, while the negative binomial offers the best overall performance.
https://arxiv.org/abs/2601.14031
Academic Papers
svg
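Of the three distribution heads, the hurdle-shifted negative binomial is the least standard. One plausible parameterization is sketched below (zero with probability `p_zero`, otherwise `1 + NB(r, p)`); the exact parameterization used in the paper may differ:

```python
import numpy as np

def sample_hurdle_shifted_nb(p_zero, r, p, size, rng):
    """Hurdle-shifted NB: 0 with probability p_zero, else 1 + NegBin(r, p),
    so the positive part has support {1, 2, ...}."""
    is_zero = rng.random(size) < p_zero
    positives = 1 + rng.negative_binomial(r, p, size)
    return np.where(is_zero, 0, positives)

rng = np.random.default_rng(0)
y = sample_hurdle_shifted_nb(0.6, 2.0, 0.5, 100_000, rng)
zero_share = (y == 0).mean()  # ~0.6 by construction
```

The shift keeps the zero mass and the positive counts fully decoupled, which is what makes the hurdle form attractive for intermittent demand.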
62a50d63f88639b37711a6971f6cccbb05caee546dfc5f8e257b7dc2d28cb637
2026-01-21T00:00:00-05:00
Tail-Aware Density Forecasting of Locally Explosive Time Series: A Neural Network Approach
arXiv:2601.14049v1 Announce Type: new Abstract: This paper proposes a Mixture Density Network for forecasting time series that exhibit locally explosive behavior. By incorporating skewed t-distributions as mixture components, our approach offers enhanced flexibility in capturing the skewed, heavy-tailed, and potentially multimodal nature of predictive densities associated with bubble dynamics modeled by mixed causal-noncausal ARMA processes. In addition, we implement an adaptive weighting scheme that emphasizes tail observations during training and hence leads to accurate density estimation in the extreme regions most relevant for financial applications. Equally important, once trained, the MDN produces near-instantaneous density forecasts. Through extensive Monte Carlo simulations and an empirical application on the natural gas price, we show that the proposed MDN-based framework delivers superior forecasting performance relative to existing approaches.
https://arxiv.org/abs/2601.14049
Academic Papers
svg
5b884e7ad0caa3a96439967d4752949b76191a79b0d53349d21e46b1382ac342
2026-01-21T00:00:00-05:00
Factor Analysis of Multivariate Stochastic Volatility Model
arXiv:2601.14199v1 Announce Type: new Abstract: Modeling the time-varying covariance structures of high-dimensional variables is critical across diverse scientific and industrial applications; however, existing approaches exhibit notable limitations in either modeling flexibility or inferential efficiency. For instance, change-point modeling fails to account for the continuous time-varying nature of covariance structures, while GARCH and stochastic volatility models suffer from over-parameterization and the risk of overfitting. To address these challenges, we propose a Bayesian factor modeling framework designed to enable simultaneous inference of both the covariance structure of a high-dimensional time series and its time-varying dynamics. The associated Expectation-Maximization (EM) algorithm not only features an exact, closed-form update for the M-step but also is easily generalizable to more complex settings, such as spatiotemporal multivariate factor analysis. We validate our method through simulation studies and real-data experiments using climate and financial datasets.
https://arxiv.org/abs/2601.14199
Academic Papers
svg
47401838181d4aae8336076fd5ab1e1bddfe3f69913703e5e26a1ab2ca04e0bf
2026-01-21T00:00:00-05:00
Verifying Physics-Informed Neural Network Fidelity using Classical Fisher Information from Differentiable Dynamical System
arXiv:2601.11638v1 Announce Type: cross Abstract: Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for solving differential equations and modeling physical systems by embedding physical laws into the learning process. However, rigorously quantifying how well a PINN captures the complete dynamical behavior of the system, beyond simple trajectory prediction, remains a challenge. This paper proposes a novel experimental framework to address this by employing Fisher information for differentiable dynamical systems, denoted $g_F^C$. This Fisher information, distinct from its statistical counterpart, measures inherent uncertainties in deterministic systems, such as sensitivity to initial conditions, and is related to the phase space curvature and the net stretching action of the state space evolution. We hypothesize that if a PINN accurately learns the underlying dynamics of a physical system, then the Fisher information landscape derived from the PINN's learned equations of motion will closely match that of the original analytical model. This match would signify that the PINN has achieved comprehensive fidelity capturing not only the state evolution but also crucial geometric and stability properties. We outline an experimental methodology using the dynamical model of a car to compute and compare $g_F^C$ for both the analytical model and a trained PINN. The comparison, based on the Jacobians of the respective system dynamics, provides a quantitative measure of the PINN's fidelity in representing the system's intricate dynamical characteristics.
https://arxiv.org/abs/2601.11638
Academic Papers
svg
04c8c659f4df3dabda329d7b7521498e7d91c5af2dc79a4bbbb2723bd78529c0
2026-01-21T00:00:00-05:00
Task-tailored Pre-processing: Fair Downstream Supervised Learning
arXiv:2601.11897v1 Announce Type: cross Abstract: Fairness-aware machine learning has recently attracted various communities to mitigate discrimination against certain societal groups in data-driven tasks. For fair supervised learning, particularly in pre-processing, there have been two main categories: data fairness and task-tailored fairness. The former directly finds an intermediate distribution among the groups, independent of the type of the downstream model, so a learned downstream classification/regression model returns similar predictive scores to individuals inputting the same covariates irrespective of their sensitive attributes. The latter explicitly takes the supervised learning task into account when constructing the pre-processing map. In this work, we study algorithmic fairness for supervised learning and argue that the data fairness approaches impose overly strong regularization from the perspective of the HGR correlation. This motivates us to devise a novel pre-processing approach tailored to supervised learning. We account for the trade-off between fairness and utility in obtaining the pre-processing map. Then we study the behavior of arbitrary downstream supervised models learned on the transformed data to find sufficient conditions that guarantee their fairness improvement and utility preservation. To our knowledge, no prior work in the branch of task-tailored methods has theoretically investigated downstream guarantees when using pre-processed data. We further evaluate our framework through comparison studies based on tabular and image data sets, showing the superiority of our framework, which preserves consistent trade-offs among multiple downstream models compared to recent competing models. Particularly for computer vision data, we see that our method alters only the necessary semantic features related to the central machine learning task to achieve fairness.
https://arxiv.org/abs/2601.11897
Academic Papers
svg
76eb72898f563890335793f140e65d17f591759eb98f9b06624117d94a533598
2026-01-21T00:00:00-05:00
Privacy-Preserving Cohort Analytics for Personalized Health Platforms: A Differentially Private Framework with Stochastic Risk Modeling
arXiv:2601.12105v1 Announce Type: cross Abstract: Personalized health analytics increasingly rely on population benchmarks to provide contextual insights such as ''How do I compare to others like me?'' However, cohort-based aggregation of health data introduces nontrivial privacy risks, particularly in interactive and longitudinal digital platforms. Existing privacy frameworks such as $k$-anonymity and differential privacy provide essential but largely static guarantees that do not fully capture the cumulative, distributional, and tail-dominated nature of re-identification risk in deployed systems. In this work, we present a privacy-preserving cohort analytics framework that combines deterministic cohort constraints, differential privacy mechanisms, and synthetic baseline generation to enable personalized population comparisons while maintaining strong privacy protections. We further introduce a stochastic risk modeling approach that treats re-identification risk as a random variable evolving over time, enabling distributional evaluation through Monte Carlo simulation. Adapting quantitative risk measures from financial mathematics, we define Privacy Loss at Risk (P-VaR) to characterize worst-case privacy outcomes under realistic cohort dynamics and adversary assumptions. We validate our framework through system-level analysis and simulation experiments, demonstrating how privacy-utility tradeoffs can be operationalized for digital health platforms. Our results suggest that stochastic risk modeling complements formal privacy guarantees by providing interpretable, decision-relevant metrics for platform designers, regulators, and clinical informatics stakeholders.
https://arxiv.org/abs/2601.12105
Academic Papers
svg
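The P-VaR construction mirrors financial Value-at-Risk: simulate the distribution of re-identification risk and read off an upper quantile. A schematic Monte Carlo version follows; the lognormal risk model is a placeholder, not the paper's adversary model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder model of cumulative per-user re-identification risk
risk = rng.lognormal(mean=-6.0, sigma=1.0, size=100_000)

p_var_95 = np.quantile(risk, 0.95)  # Privacy Loss at Risk at the 95% level
expected_risk = risk.mean()         # average-case risk understates the tail
```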
949522e06fb65a66cb5b28bb121d34b02d09bc0e1913d17e49d83936562e775b
2026-01-21T00:00:00-05:00
Distributional Fitting and Tail Analysis of Lead-Time Compositions: Nights vs. Revenue on Airbnb
arXiv:2601.12175v1 Announce Type: cross Abstract: We analyze daily lead-time distributions for two Airbnb demand metrics, Nights Booked (volume) and Gross Booking Value (revenue), treating each day's allocation across 0-365 days as a compositional vector. The data span 2,557 days from January 2019 through December 2025 in a large North American region. Three findings emerge. First, GBV concentrates more heavily in mid-range horizons: beyond 90 days, GBV tail mass typically exceeds Nights by 20-50%, with ratios reaching 75% at the 180-day threshold during peak seasons. Second, Gamma and Weibull distributions fit comparably well under interval-censored cross-entropy. Gamma wins on 61% of days for Nights and 52% for GBV, with Weibull close behind at 38% and 45%. Lognormal rarely wins (<3%). Nonparametric GAMs achieve 18-80x lower CRPS but sacrifice interpretability. Third, generalized Pareto fits suggest bounded tails for both metrics at thresholds below 150 days, though this may partly reflect right-truncation at 365 days; above 150 days, estimates destabilize. Bai-Perron tests with HAC standard errors identify five structural breaks in the Wasserstein distance series, with early breaks coinciding with COVID-19 disruptions. The results show that volume and revenue lead-time shapes diverge systematically, that simple two-parameter distributions capture daily pmfs adequately, and that tail inference requires care near truncation boundaries.
https://arxiv.org/abs/2601.12175
Academic Papers
svg
47340e0e067d3a10f005c28078d6a94c7f166c4dbb8235bb0218754f9065b6d3
2026-01-21T00:00:00-05:00
Extracting useful information about reversible evolutionary processes from irreversible evolutionary accumulation models
arXiv:2601.13010v1 Announce Type: cross Abstract: Evolutionary accumulation models (EvAMs) are an emerging class of machine learning methods designed to infer the evolutionary pathways by which features are acquired. Applications include cancer evolution (accumulation of mutations), anti-microbial resistance (accumulation of drug resistances), genome evolution (organelle gene transfers), and more diverse themes in biology and beyond. Following these themes, many EvAMs assume that features are gained irreversibly -- no loss of features can occur. Reversible approaches do exist but are often computationally (much) more demanding and statistically less stable. Our goal here is to explore whether useful information about evolutionary dynamics which are in reality reversible can be obtained from modelling approaches with an assumption of irreversibility. We identify, and use simulation studies to quantify, errors involved in neglecting reversible dynamics, and show the situations in which approximate results from tractable models can be informative and reliable. In particular, EvAM inferences about the relative orderings of acquisitions, and the core dynamic structure of evolutionary pathways, are robust to reversibility in many cases, while estimations of uncertainty and feature interactions are more error-prone.
https://arxiv.org/abs/2601.13010
Academic Papers
svg
c2c5b0be024f0b28fb38efb0cf16b503c88254da62d2299c2277f8ca477ce65c
2026-01-21T00:00:00-05:00
Optimal Calibration of the endpoint-corrected Hilbert Transform
arXiv:2601.13962v1 Announce Type: cross Abstract: Accurate, low-latency estimates of the instantaneous phase of oscillations are essential for closed-loop sensing and actuation, including (but not limited to) phase-locked neurostimulation and other real-time applications. The endpoint-corrected Hilbert transform (ecHT) reduces boundary artefacts of the Hilbert transform by applying a causal narrow-band filter to the analytic spectrum. This improves the phase estimate at the most recent sample. Despite its widespread empirical use, the systematic endpoint distortions of ecHT have lacked a principled, closed-form analysis. In this study, we derive the ecHT endpoint operator analytically and demonstrate that its output can be decomposed into a desired positive-frequency term (a deterministic complex gain that induces a calibratable amplitude/phase bias) and a residual leakage term setting an irreducible variance floor. This yields (i) an explicit characterisation and bounds for endpoint phase/amplitude error, (ii) a mean-squared-error-optimal scalar calibration (c-ecHT), and (iii) practical design rules relating window length, bandwidth/order, and centre-frequency mismatch to residual bias via an endpoint group delay. The resulting calibrated ecHT achieves near-zero mean phase error and remains computationally compatible with real-time pipelines. Code and analyses are provided at https://github.com/eosmers/cecHT.
https://arxiv.org/abs/2601.13962
Academic Papers
svg
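The boundary artefact that ecHT corrects is easy to reproduce: the FFT-based analytic signal assumes periodic extension, so the phase error concentrates at the window's endpoints. A sketch using a plain, uncorrected Hilbert transform on a window containing a non-integer number of cycles:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (plain Hilbert transform, no ecHT correction)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

fs, f0 = 1000.0, 10.3          # 10.3 cycles in 1 s: non-periodic window
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)
phase_err = np.angle(np.exp(1j * (np.angle(analytic_signal(x)) - 2 * np.pi * f0 * t)))

mid_err = abs(phase_err[len(t) // 2])  # small away from the edges
end_err = abs(phase_err[-1])           # endpoint error dominates
```

The ecHT (and the calibrated c-ecHT proposed here) targets exactly the endpoint sample, where this error is largest.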
72f6f4a88b10ff26568e69359a113ada579ada5641727a6234712d71487a4d54
2026-01-21T00:00:00-05:00
Demystifying the trend of the healthcare index: Is historical price a key driver?
arXiv:2601.14062v1 Announce Type: cross Abstract: Healthcare sector indices consolidate the economic health of pharmaceutical, biotechnology, and healthcare service firms. The short-term movements in these indices are closely intertwined with capital allocation decisions affecting research and development investment, drug availability, and long-term health outcomes. This research investigates whether historical open-high-low-close (OHLC) index data contain sufficient information for predicting the directional movement of the opening index on the subsequent trading day. The problem is formulated as a supervised classification task involving a one-step-ahead rolling window. A diverse feature set is constructed, comprising original prices, volatility-based technical indicators, and a novel class of nowcasting features derived from mutual OHLC ratios. The framework is evaluated on data from healthcare indices in the U.S. and Indian markets over a five-year period spanning multiple economic phases, including the COVID-19 pandemic. The results demonstrate robust predictive performance, with accuracy exceeding 0.8 and Matthews correlation coefficients above 0.6. Notably, the proposed nowcasting features have emerged as a key determinant of the market movement. We have employed the Shapley-based explainability paradigm to further elucidate the contribution of the features: outcomes reveal the dominant role of the nowcasting features, followed by a more moderate contribution of original prices. This research offers a societal utility: the proposed features and model for short-term forecasting of healthcare indices can reduce information asymmetry and support a more stable and equitable health economy.
https://arxiv.org/abs/2601.14062
Academic Papers
svg
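The abstract describes the nowcasting features only as "mutual OHLC ratios". Purely as an illustration, ratios of this kind might look as follows; these specific ratios are hypothetical and are not the paper's feature set:

```python
import numpy as np

def ohlc_ratio_features(o, h, l, c, eps=1e-9):
    """Hypothetical mutual OHLC ratios for one trading day."""
    return np.array([
        h / l,                    # intraday range ratio
        c / o,                    # close-to-open ratio
        (c - l) / (h - l + eps),  # where the close sits in the day's range
        (o - l) / (h - l + eps),  # where the open sits in the day's range
    ])

feats = ohlc_ratio_features(100.0, 104.0, 98.0, 102.0)
```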
aa7123676ea7a5f52ec3932add65afc0fbb5bf5f2c6e82300fdbf442ac8add4c
2026-01-21T00:00:00-05:00
Penalizing Localized Dirichlet Energies in Low Rank Tensor Products
arXiv:2601.14173v1 Announce Type: cross Abstract: We study low-rank tensor-product B-spline (TPBS) models for regression tasks and investigate Dirichlet energy as a measure of smoothness. We show that TPBS models admit a closed-form expression for the Dirichlet energy, and reveal scenarios where perfect interpolation is possible with exponentially small Dirichlet energy. This renders global Dirichlet energy-based regularization ineffective. To address this limitation, we propose a novel regularization strategy based on local Dirichlet energies defined on small hypercubes centered at the training points. Leveraging pretrained TPBS models, we also introduce two estimators for inference from incomplete samples. Comparative experiments with neural networks demonstrate that TPBS models outperform neural networks in the overfitting regime for most datasets, and maintain competitive performance otherwise. Overall, TPBS models exhibit greater robustness to overfitting and consistently benefit from regularization, while neural networks are more sensitive to overfitting and less effective in leveraging regularization.
https://arxiv.org/abs/2601.14173
Academic Papers
svg
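For reference, the Dirichlet energy penalized here is the standard one: for $f$ on a domain $\Omega \subset \mathbb{R}^d$,

```latex
\mathcal{E}(f) \;=\; \int_{\Omega} \lVert \nabla f(x) \rVert^2 \, dx .
```

The existence of a closed form for tensor products can be seen in the rank-1 case $f(x) = \prod_j g_j(x_j)$ on a box, where each term of the energy factorizes into one-dimensional integrals (a sketch of the mechanism, not the paper's full low-rank formula):

```latex
\int_{\Omega} \Big(\tfrac{\partial f}{\partial x_k}\Big)^{2} dx
  \;=\; \Big(\int g_k'(x_k)^2 \, dx_k\Big) \prod_{j \ne k} \int g_j(x_j)^2 \, dx_j .
```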
ddbf8a3ecf6c112c0638fbcf27684f6b7988374cb4d8e56cae8d58ae846daed9
2026-01-21T00:00:00-05:00
Q-learning with Adjoint Matching
arXiv:2601.14234v1 Announce Type: cross Abstract: We propose Q-learning with Adjoint Matching (QAM), a novel TD-based reinforcement learning (RL) algorithm that tackles a long-standing challenge in continuous-action RL: efficient optimization of an expressive diffusion or flow-matching policy with respect to a parameterized Q-function. Effective optimization requires exploiting the first-order information of the critic, but it is challenging to do so for flow or diffusion policies because direct gradient-based optimization via backpropagation through their multi-step denoising process is numerically unstable. Existing methods work around this either by only using the value and discarding the gradient information, or by relying on approximations that sacrifice policy expressivity or bias the learned policy. QAM sidesteps both of these challenges by leveraging adjoint matching, a recently proposed technique in generative modeling, which transforms the critic's action gradient to form a step-wise objective function that is free from unstable backpropagation, while providing an unbiased, expressive policy at the optimum. Combined with temporal-difference backup for critic learning, QAM consistently outperforms prior approaches on hard, sparse reward tasks in both offline and offline-to-online RL.
https://arxiv.org/abs/2601.14234
Academic Papers
svg
99fa9abe7d47ffe9377aea62be0471565ec84a53551577823fb1c465530121b9
2026-01-21T00:00:00-05:00
Non-parametric Bayesian inference via loss functions under model misspecification
arXiv:2103.04086v5 Announce Type: replace Abstract: In the usual Bayesian setting, a full probabilistic model is required to link the data and parameters, and the form of this model and the inference and prediction mechanisms are specified via de Finetti's representation. In general, such a formulation is not robust to model misspecification of its component parts. An alternative approach is to draw inference based on loss functions, where the quantity of interest is defined as a minimizer of some expected loss, and to construct posterior distributions based on the loss-based formulation; this strategy underpins the construction of the Gibbs posterior. We develop a Bayesian non-parametric approach; specifically, we generalize the Bayesian bootstrap, and specify a Dirichlet process model for the distribution of the observables. We implement this using direct prior-to-posterior calculations, but also using predictive sampling. We also study the assessment of posterior validity for non-standard Bayesian calculations. We show that the developed non-standard Bayesian updating procedures yield valid posterior distributions in terms of consistency and asymptotic normality under model misspecification. Simulation studies show that the proposed methods can recover the true value of the parameter under misspecification.
https://arxiv.org/abs/2103.04086
Academic Papers
svg
3e887c3171303916906e69067f39a01bc39cb1718eaa6548755a12cc775c07ff
2026-01-21T00:00:00-05:00
Bayesian Evidence Synthesis for the common effect model
arXiv:2103.13236v2 Announce Type: replace Abstract: Bayes Factors, the Bayesian tool for hypothesis testing, are receiving increasing attention in the literature. Compared to their frequentist rivals ($p$-values or test statistics), Bayes Factors have the conceptual advantage of providing evidence both for and against a null hypothesis, and they can be calibrated so that they do not depend so heavily on the sample size. Research on the synthesis of Bayes Factors arising from individual studies has received increasing attention, mostly for the fixed effects model for meta-analysis. In this work, we review and propose methods for combining Bayes Factors from multiple studies, depending on the level of information available, focusing on the common effect model. In the process, we provide insights with respect to the interplay between frequentist and Bayesian evidence. We assess the performance of the methods discussed via a simulation study and apply the methods in an example from the field of positive psychology.
https://arxiv.org/abs/2103.13236
Academic Papers
svg
28bf13a5b65572458e617a3a1edd3290b4a426e3942aff6c05c8f99693a40f30
2026-01-21T00:00:00-05:00
Classification of high-dimensional data with spiked covariance matrix structure
arXiv:2110.01950v3 Announce Type: replace Abstract: We study the classification problem for high-dimensional data with $n$ observations on $p$ features where the $p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the vector $\zeta$, given by the difference between the {\em whitened} mean vectors, is sparse. We analyze an adaptive classifier (adaptive with respect to the sparsity $s$) that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space, i.e., the classifier whitens the data, then screens the features by keeping only those corresponding to the $s$ largest coordinates of $\zeta$ and finally applies Fisher linear discriminant on the selected features. Leveraging recent results on entrywise matrix perturbation bounds for covariance matrices, we show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow 0$. Notably, our theory also guarantees Bayes optimality for the corresponding quadratic discriminant analysis (QDA). Experimental results on real and synthetic data further indicate that the proposed approach is competitive with state-of-the-art methods while operating on a substantially lower-dimensional representation.
https://arxiv.org/abs/2110.01950
Academic Papers
svg
2fb462cca1e8e65ed91eecbe8ce1133a53f8688dd256e990e1e7f54ceda95979
2026-01-21T00:00:00-05:00
Transformed Linear Prediction for Extremes
arXiv:2111.03754v5 Announce Type: replace Abstract: We address the problem of prediction for extreme observations by proposing an extremal linear prediction method. We construct an inner product space of nonnegative random variables derived from transformed-linear combinations of independent regularly varying random variables. Under a reasonable modeling assumption, the matrix of inner products corresponds to the tail pairwise dependence matrix, which can be easily estimated. We derive the optimal transformed-linear predictor via the projection theorem, which yields a predictor with the same form as the best linear unbiased predictor in non-extreme settings. We quantify uncertainty for prediction errors by constructing prediction intervals based on the geometry of regular variation. We demonstrate the effectiveness of our method through a simulation study and its application to predicting high pollution levels and extreme precipitation.
https://arxiv.org/abs/2111.03754
Academic Papers
svg
83590f11f58e4065cb73aaecdfe10a3e189f724ee0d5e4555fdc65136e09405f
2026-01-21T00:00:00-05:00
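The "transformed-linear combination" idea in the abstract above can be sketched concretely. A softplus-type transform is the kind typically used in this line of work to map the real line onto the positive half-line; the helper names are mine and this is only an illustration of the algebra, not the paper's predictor.

```python
import numpy as np

def t(x):
    # Softplus transform mapping the real line onto (0, inf); for large
    # arguments t(x) ~ x, which preserves heavy upper-tail behavior.
    return np.log1p(np.exp(x))

def t_inv(y):
    # Inverse transform mapping (0, inf) back to the real line.
    return np.log(np.expm1(y))

def transformed_linear_combination(a, x):
    """Sketch of a transformed-linear combination of nonnegative variables:
    map to the real line, combine linearly, map back to the positive half-line."""
    return t(np.dot(np.asarray(a, dtype=float), t_inv(np.asarray(x, dtype=float))))
```

Because t is approximately the identity far in the upper tail, combinations built this way stay nonnegative while behaving like ordinary linear combinations for extreme values.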
dynamite: An R Package for Dynamic Multivariate Panel Models
arXiv:2302.01607v4 Announce Type: replace Abstract: dynamite is an R package for Bayesian inference of intensive panel (time series) data comprising multiple measurements on multiple individuals over time. The package supports joint modeling of multiple response variables, time-varying and time-invariant effects, a wide range of discrete and continuous distributions, group-specific random effects, latent factors, and customization of prior distributions of the model parameters. Models in the package are defined via a user-friendly formula interface, and estimation of the posterior distribution of the model parameters takes advantage of state-of-the-art Markov chain Monte Carlo methods. The package enables efficient computation of both individual-level and aggregated predictions and offers a comprehensive suite of tools for visualization and model diagnostics.
https://arxiv.org/abs/2302.01607
Academic Papers
svg
3799302d8faa5f6ca9fa258464b27faab6f5dca5cea828cb0f752c1ff4710f96
2026-01-21T00:00:00-05:00
BESS: A Bayesian Estimator of Sample Size
arXiv:2404.07923v4 Announce Type: replace Abstract: We consider a Bayesian framework for estimating the sample size of a clinical trial. The new approach, called BESS, is built upon three pillars: Sample size of the trial, Evidence from the observed data, and Confidence of the final decision in the posterior inference. It uses a simple logic of "given the evidence from data, a specific sample size can achieve a degree of confidence in trial success." The key distinction between BESS and standard sample size estimation (SSE) is that SSE, typically based on Frequentist inference, specifies the true parameter values in its calculation to achieve properties under repeated sampling, while BESS assumes a possible outcome from the observed data to achieve high posterior probabilities of trial success. As a result, the calibration of the sample size is directly based on the probability of making a correct decision rather than type I or type II error rates. We demonstrate that BESS leads to a more interpretable statement for investigators, and can easily accommodate prior information as well as sample size re-estimation. We explore its performance in comparison to the standard SSE and demonstrate its usage through a case study of an oncology optimization trial. An R tool is available at https://ccte.uchicago.edu/BESS.
https://arxiv.org/abs/2404.07923
Academic Papers
svg
ddddd039a8bb4920a0b180754925c3bdcaf85531b6e6ede317b34399fdb18740
2026-01-21T00:00:00-05:00
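The BESS logic of "given an assumed outcome, find the sample size that achieves a degree of confidence" can be illustrated with a toy Beta-Binomial computation. The function below is a hypothetical sketch of mine (Monte Carlo posterior, uniform prior), not the interface of the BESS R tool.

```python
import numpy as np

def bess_sample_size(p_assumed, p0, confidence, n_max=400, seed=0):
    """Toy sketch of the BESS idea for a single-arm binary endpoint: assuming
    the trial would observe response rate p_assumed, return the smallest n at
    which the posterior probability that the true rate exceeds p0 reaches the
    desired confidence. Uses a Beta(1, 1) prior and Monte Carlo draws."""
    rng = np.random.default_rng(seed)
    for n in range(5, n_max + 1):
        x = round(p_assumed * n)                        # assumed outcome, not a true parameter
        draws = rng.beta(1 + x, 1 + n - x, size=20000)  # conjugate posterior for the rate
        if (draws > p0).mean() >= confidence:
            return n
    return None
```

Note the contrast with Frequentist SSE: the input is an assumed observed outcome, and the output is calibrated by a posterior probability of a correct decision rather than by error rates.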
Asymmetry Analysis of Bilateral Shapes
arXiv:2407.17225v2 Announce Type: replace Abstract: Many biological objects possess bilateral symmetry about a midline or midplane, up to a "noise" term. This paper uses landmark-based methods to measure departures from bilateral symmetry, especially for the two-group problem where one group is more asymmetric than the other. In this paper, we formulate our work in the framework of size-and-shape analysis including registration via rigid body motion. Our starting point is a vector of elementary asymmetry features defined at the individual landmark coordinates for each object. We introduce two approaches for testing. In the first, the elementary features are combined into a scalar composite asymmetry measure for each object. Then standard univariate tests can be used to compare the two groups. In the second approach, a univariate test statistic is constructed for each elementary feature. The maximum of these statistics leads to an overall test statistic to compare the two groups, and we then provide a technique to extract the important features from the landmark data. Our methodology is illustrated on a pre-registered smile dataset collected to assess the success of cleft lip surgery on human subjects. The asymmetry in a group of cleft lip subjects is compared to a group of normal subjects, and statistically significant differences have been found by univariate tests in the first approach. Further, our feature extraction method leads to an anatomically plausible set of landmarks for medical applications.
https://arxiv.org/abs/2407.17225
Academic Papers
svg
a5c63fe8e15bf973341743080363e0a9fb51dca464d3097e59ed695cb4632b8b
2026-01-21T00:00:00-05:00
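The elementary-features-to-composite-measure step in the abstract above can be sketched for 2-D landmarks. This toy version only removes location and reflects across a fixed vertical midline; the paper's method additionally registers configurations via rigid-body motion in size-and-shape space, and the function name is mine.

```python
import numpy as np

def asymmetry_score(landmarks, pairs):
    """Toy composite asymmetry measure: center a 2-D landmark configuration,
    reflect it across the vertical midline, relabel left/right pairs, and
    average the per-landmark distances to the reflected configuration.
    Returns (composite score, vector of elementary per-landmark features)."""
    L = np.asarray(landmarks, dtype=float)
    L = L - L.mean(axis=0)                    # remove location
    reflected = L * np.array([-1.0, 1.0])     # mirror across the y-axis
    relabeled = reflected.copy()
    for i, j in pairs:                        # swap left/right landmark labels
        relabeled[[i, j]] = reflected[[j, i]]
    per_landmark = np.linalg.norm(L - relabeled, axis=1)  # elementary features
    return per_landmark.mean(), per_landmark
```

A perfectly bilaterally symmetric configuration scores zero, and the elementary feature vector shows which landmarks drive any departure, mirroring the paper's two testing approaches.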
Robust Inference for Non-Linear Regression Models with Applications in Enzyme Kinetics
arXiv:2409.15995v2 Announce Type: replace Abstract: Despite linear regression being the most popular statistical modelling technique, in real life we often need to deal with situations where the true relationship between the response and the covariates is nonlinear in parameters. In such cases one needs to adopt appropriate non-linear regression (NLR) analysis, which has wide applications in biochemical and medical studies, among many others. In this paper we propose new, improved robust estimation and testing methodologies for general NLR models based on the minimum density power divergence approach and apply our proposal to analyze the widely popular Michaelis-Menten (MM) model in enzyme kinetics. We establish the asymptotic properties of our proposed estimator and tests, along with their theoretical robustness characteristics through influence function analysis. For the particular MM model, we have further empirically justified the robustness and the efficiency of our proposed estimator and the testing procedure through extensive simulation studies and several interesting real data examples of enzyme-catalyzed (biochemical) reactions.
https://arxiv.org/abs/2409.15995
Academic Papers
svg
b73d421dca28b1b2df5ab9cb62a1f995d207e4af28af6b7f13c6b74178e70350
2026-01-21T00:00:00-05:00
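The minimum density power divergence (MDPD) idea applied to the Michaelis-Menten model can be sketched as follows, assuming Gaussian errors with known scale and a fixed tuning parameter alpha; a crude grid search stands in for proper numerical optimization, and the function name and settings are mine, not the paper's implementation.

```python
import numpy as np

def mm_mdpd_fit(S, v, alpha=0.5, sigma=0.1, grid=60):
    """Sketch of a minimum density power divergence fit of the Michaelis-Menten
    curve v = Vmax * S / (Km + S) under N(0, sigma^2) errors. The empirical DPD
    objective is  int f^(1+alpha) - (1 + 1/alpha) * mean(f(resid)^alpha),
    which downweights gross outliers because their density is near zero."""
    const = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    Vs = np.linspace(0.1, 3.0, grid)
    Ks = np.linspace(0.1, 3.0, grid)
    best, best_obj = None, np.inf
    for Vmax in Vs:
        for Km in Ks:
            resid = v - Vmax * S / (Km + S)
            dens = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            obj = const - (1 + 1 / alpha) * np.mean(dens ** alpha)
            if obj < best_obj:
                best, best_obj = (Vmax, Km), obj
    return best
```

Unlike least squares, the objective barely changes when a few observations are wildly off the curve, which is the robustness property the abstract describes.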
Experimentation on Endogenous Graphs
arXiv:2410.09267v2 Announce Type: replace Abstract: We study experimentation under endogenous network interference. Interference patterns are mediated by an endogenous graph, where edges can be formed or eliminated as a result of treatment. We show that conventional estimators are biased in these circumstances, and present a class of unbiased, consistent and asymptotically normal estimators of total treatment effects in the presence of such interference. We show via simulation that our estimator outperforms existing estimators in the literature. Our results apply both to bipartite experimentation, in which the units of analysis and measurement differ, and the standard network experimentation case, in which they are the same.
https://arxiv.org/abs/2410.09267
Academic Papers
svg
bc9ae116ac0b6a444c463c9749a1d8a04e2aa2b5034834112674028409b793c3
2026-01-21T00:00:00-05:00
Dynamic networks clustering via mirror distance
arXiv:2412.19012v2 Announce Type: replace Abstract: The classification of different patterns of network evolution, for example in brain connectomes or social networks, is a key problem in network inference and modern data science. Building on the notion of a network's Euclidean mirror, which captures its evolution as a curve in Euclidean space, we develop the Dynamic Network Clustering through Mirror Distance (DNCMD), an algorithm for clustering dynamic networks based on a distance measure between their associated mirrors. We provide theoretical guarantees for DNCMD to achieve exact recovery of distinct evolutionary patterns for latent position random networks both when underlying vertex features change deterministically and when they follow a stochastic process. We validate our theoretical results through numerical simulations and demonstrate the application of DNCMD to understand edge functions in Drosophila larval connectome data, as well as to analyze temporal patterns in dynamic trade networks.
https://arxiv.org/abs/2412.19012
Academic Papers
svg
59dbe4f43d016d4b0a0f2ee9cdfebb377efdf4c10e88d03ec821786bd64dfab6
2026-01-21T00:00:00-05:00
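The mirror-then-compare idea of DNCMD can be illustrated with a deliberately crude stand-in: summarize each time point's network by a low-dimensional spectral quantity, treat the resulting sequence as a curve, and compare dynamic networks by distances between curves. The real mirror construction uses classical multidimensional scaling of distances between latent position estimates; the helpers below are mine.

```python
import numpy as np

def mirror_curve(adj_seq, d=1):
    """Toy stand-in for a network 'mirror': summarize each time point's
    adjacency matrix by its top-d eigenvalue magnitudes, giving a curve in R^d
    that tracks how the network evolves."""
    curve = []
    for A in adj_seq:
        w, _ = np.linalg.eigh(np.asarray(A, dtype=float))
        curve.append(np.sort(np.abs(w))[-d:])
    return np.array(curve)

def mirror_distance(seq_a, seq_b):
    """Distance between two dynamic networks: Frobenius distance between
    their mirror curves (sequences must share the same length)."""
    return np.linalg.norm(mirror_curve(seq_a) - mirror_curve(seq_b))
```

Feeding such pairwise distances to any standard clustering algorithm groups networks by their evolutionary pattern, which is the structure DNCMD exploits with theoretical guarantees.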
A Spatio-Temporal Dirichlet Process Mixture Model on Linear Networks for Crime Data
arXiv:2501.08673v2 Announce Type: replace Abstract: Analyzing crime events is crucial to understanding crime dynamics and is helpful for constructing prevention policies. Point processes specified on linear networks can provide a more accurate description of crime incidents by considering the geometry of the city. We propose a spatio-temporal Dirichlet process mixture model on a linear network to analyze crime events in Valencia, Spain. We propose a Bayesian hierarchical model with a Dirichlet process prior to automatically detect space-time clusters of the events and adopt a convolution kernel estimator to account for the network structure in the city. From the fitted model, we provide crime hotspot visualizations that can inform social interventions to prevent crime incidents. Furthermore, we study the relationships between the detected cluster centers and the city's amenities, which provides an intuitive explanation of criminal contagion.
https://arxiv.org/abs/2501.08673
Academic Papers
svg
27eeaafc98e15c2b5b4ed2178ab3761ebae111bb80fec9def2292b695830527f
2026-01-21T00:00:00-05:00
A survey on Clustered Federated Learning: Taxonomy, Analysis and Applications
arXiv:2501.17512v3 Announce Type: replace Abstract: As Federated Learning (FL) expands, the challenge of non-independent and identically distributed (non-IID) data becomes critical. Clustered Federated Learning (CFL) addresses this by training multiple specialized models, each representing a group of clients with similar data distributions. However, the term "CFL" has increasingly been applied to operational strategies unrelated to data heterogeneity, creating significant ambiguity. This survey provides a systematic review of the CFL literature and introduces a principled taxonomy that classifies algorithms into Server-side, Client-side, and Metadata-based approaches. Our analysis reveals a distinct dichotomy: while theoretical research prioritizes privacy-preserving Server/Client-side methods, real-world applications in IoT, Mobility, and Energy overwhelmingly favor Metadata-based efficiency. Furthermore, we explicitly distinguish "Core CFL" (grouping clients for non-IID data) from "Clustered X FL" (operational variants for system heterogeneity). Finally, we outline lessons learned and future directions to bridge the gap between theoretical privacy and practical efficiency.
https://arxiv.org/abs/2501.17512
Academic Papers
svg
cd14b947c71e1d81541fb68b18fcdbc5c733e1525d37e41256ab217f592a97f8
2026-01-21T00:00:00-05:00
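To make the survey's Server-side category concrete: in that family the server groups clients by comparing their model updates. A minimal greedy sketch (my own toy, not any surveyed algorithm) clusters clients whose update direction is similar:

```python
import numpy as np

def cluster_clients(updates, threshold=0.5):
    """Minimal server-side CFL sketch: greedily assign each client's model
    update to the first cluster whose running-mean update has cosine
    similarity above the threshold; otherwise open a new cluster."""
    clusters = []  # each: {"members": [client ids], "mean": mean update vector}
    for i, u in enumerate(np.asarray(x, dtype=float) for x in updates):
        placed = False
        for c in clusters:
            m = c["mean"]
            cos = float(u @ m) / (np.linalg.norm(u) * np.linalg.norm(m) + 1e-12)
            if cos >= threshold:
                c["members"].append(i)
                c["mean"] = m + (u - m) / len(c["members"])  # update running mean
                placed = True
                break
        if not placed:
            clusters.append({"members": [i], "mean": u})
    return [c["members"] for c in clusters]
```

Each resulting group then trains its own specialized model, which is the core CFL response to non-IID data; Client-side and Metadata-based methods reach similar groupings by different routes.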
Network-Level Measures of Mobility from Aggregated Origin-Destination Data
arXiv:2502.04162v2 Announce Type: replace Abstract: We introduce a framework for defining and interpreting collective mobility measures from spatially and temporally aggregated origin--destination (OD) data. Rather than characterizing individual behavior, these measures describe properties of the mobility system itself: how network organization, spatial structure, and routing constraints shape and channel population movement. In this view, aggregate mobility flows reveal aspects of connectivity, functional organization, and large-scale daily activity patterns encoded in the underlying transport and spatial network. To support interpretation and provide a controlled reference for the proposed time-elapsed calculations, we first employ an independent, network-driven synthetic data generator in which trajectories arise from prescribed system structure rather than observed data. This controlled setting provides a concrete reference for understanding how the proposed measures reflect network organization and flow constraints. We then apply the measures to fully anonymized data from the NetMob 2024 Data Challenge, examining their behavior under realistic limitations of spatial and temporal aggregation. While such data constraints restrict dynamical resolution, the resulting metrics still exhibit interpretable large-scale structure and temporal variation at the city scale.
https://arxiv.org/abs/2502.04162
Academic Papers
svg
26c70fa089d7bf1a6dc6f014106517b50eb473dc5105a568115a97940ccfbca3
2026-01-21T00:00:00-05:00
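One concrete example of a network-level measure computable from aggregated OD data is a normalized flow entropy; this is a hypothetical measure of my own for illustration, not one of the paper's proposed metrics:

```python
import numpy as np

def od_flow_entropy(od):
    """Normalized entropy of an origin-destination flow matrix: 0 when all
    trips share a single OD pair, 1 when flows are spread uniformly over all
    pairs. A system-level summary requiring no individual trajectories."""
    f = np.asarray(od, dtype=float).ravel()
    p = f / f.sum()
    p = p[p > 0]                                   # 0 * log(0) treated as 0
    return float(-(p * np.log(p)).sum() / np.log(f.size))
```

Measures of this kind describe the mobility system rather than individuals, so they remain interpretable under the spatial and temporal aggregation the paper works with.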
Variable transformations in consistent loss functions
arXiv:2502.16542v3 Announce Type: replace Abstract: The empirical use of variable transformations within (strictly) consistent loss functions is widespread, yet a theoretical understanding is lacking. To address this gap, we develop a theoretical framework that establishes formal characterizations of (strict) consistency for such transformed loss functions. Our analysis focuses on two interrelated cases: (a) transformations applied solely to the realization variable and (b) bijective transformations applied jointly to both the realization and prediction variables. These cases extend the well-established framework of transformations applied exclusively to the prediction variable, as formalized by Osband's revelation principle. We further develop analogous characterizations for (strict) identification functions. The resulting theoretical framework is broadly applicable to statistical and machine learning methodologies. For instance, we apply the framework to Bregman and expectile loss functions to interpret empirical findings from models trained with transformed loss functions and systematically construct new identifiable and elicitable functionals, which we term respectively $g$-transformed expectation and $g$-transformed expectile. Applications of the framework to simulated and real-world data illustrate its practical utility in diverse settings. By unifying theoretical insights with practical applications, this work advances principled methodologies for designing loss functions in complex predictive tasks.
https://arxiv.org/abs/2502.16542
Academic Papers
svg
73d7a1a5cb2fa83260e224d4199c05cecc404f1d69e00f3c69579ad97233f81e
2026-01-21T00:00:00-05:00
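Case (a) above, a transformation applied solely to the realization variable, can be checked empirically: squared loss evaluated against a log-transformed realization, L(x, y) = (x - log y)^2, elicits E[log Y]. The small helper below (names mine) locates the risk minimizer on a grid to make the elicited functional visible:

```python
import numpy as np

def empirical_minimizer(loss, y, grid):
    """Return the grid point minimizing the empirical average loss over
    sample y -- a quick way to see which functional a loss elicits."""
    risks = [np.mean(loss(x, y)) for x in grid]
    return grid[int(np.argmin(risks))]

# Squared loss with a log-transformed realization variable (case (a)):
log_sq = lambda x, y: (x - np.log(y)) ** 2
```

The empirical minimizer lands on the sample mean of log Y, so back-transforming with exp recovers a "g-transformed expectation" in the paper's terminology (here the geometric mean).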
Fairness-aware kidney exchange and kidney paired donation
arXiv:2503.06431v2 Announce Type: replace Abstract: The kidney paired donation (KPD) program provides an innovative solution to overcome incompatibility challenges in kidney transplants by matching incompatible donor-patient pairs and facilitating kidney exchanges. To address unequal access to transplant opportunities, there are two widely used fairness criteria: group fairness and individual fairness. However, these criteria do not consider protected patient features, which refer to characteristics legally or ethically recognized as needing protection from discrimination, such as race and gender. Motivated by the calibration principle in machine learning, we introduce a new fairness criterion: the matching outcome should be conditionally independent of the protected feature, given the sensitization level. We integrate this fairness criterion as a constraint within the KPD optimization framework and propose a computationally efficient solution using linearization strategies and column-generation methods. Theoretically, we analyze the associated price of fairness using random graph models. Empirically, we compare our fairness criterion with group fairness and individual fairness through both simulations and a real-data example.
https://arxiv.org/abs/2503.06431
Academic Papers
svg
7444d3139381c49a62debe1cf904b397f87eb4b5f9729f36e9758ee7b1c1c720
2026-01-21T00:00:00-05:00
Applications of higher order Markov models and Pressure Index to strategize controlled run chases in Twenty20 cricket
arXiv:2505.01849v2 Announce Type: replace Abstract: In limited overs cricket, the team batting first posts a target score for the team batting second to achieve in order to win the match. The team batting second is constrained by decreasing resources in terms of number of balls left and number of wickets in hand in the process of reaching the target as the second innings progresses. The Pressure Index, a measure created by researchers in the past, serves as a tool for quantifying the level of pressure that a team batting second encounters in limited overs cricket. Through a ball-by-ball analysis of the second innings, it reveals how effectively the team batting second in a limited-overs game proceeds towards their target. This research employs higher order Markov chains to examine the strategies employed by successful teams during run chases in Twenty20 matches. By studying the trends in successful run chases spanning over 16 years and utilizing a significant dataset of 6537 Twenty20 matches, specific strategies are identified. Consequently, an efficient approach to successful run chases in Twenty20 cricket is formulated: keeping the Pressure Index within [0.5, 3.5], or pushing it below 0.5, as early as possible. The innovative methodology adopted in this research offers valuable insights for cricket teams looking to enhance their performance in run chases.
https://arxiv.org/abs/2505.01849
Academic Papers
svg
1585030cf7f60a1c0d5ea218aa3cb175b4d00fd311bb8caddec0220163556f89
2026-01-21T00:00:00-05:00
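The higher-order Markov machinery behind the analysis above is simple to sketch: transition probabilities are estimated empirically, conditioned on the previous `order` states (here, discretized Pressure Index states would play the role of the state space; the function name is mine).

```python
from collections import Counter, defaultdict

def fit_markov(seq, order=2):
    """Sketch of higher-order Markov estimation: empirical transition
    probabilities conditioned on the previous `order` states. Returns a dict
    mapping each observed history tuple to {next_state: probability}."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - order):
        counts[tuple(seq[i:i + order])][seq[i + order]] += 1
    return {hist: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for hist, nxt in counts.items()}
```

Fitted to ball-by-ball state sequences from successful chases, such a model reveals which state trajectories, and hence which strategies, reliably lead toward the target.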
Flow-based Generative Modeling of Potential Outcomes and Counterfactuals
arXiv:2505.16051v3 Announce Type: replace Abstract: Predicting potential and counterfactual outcomes from observational data is central to individualized decision-making, particularly in clinical settings where treatment choices must be tailored to each patient rather than guided solely by population averages. We propose PO-Flow, a continuous normalizing flow (CNF) framework for causal inference that jointly models potential outcome distributions and factual-conditioned counterfactual outcomes. Trained via flow matching, PO-Flow provides a unified approach to individualized potential outcome prediction, conditional average treatment effect estimation, and counterfactual prediction. By encoding an observed factual outcome into a shared latent representation and decoding it under an alternative treatment, PO-Flow relates factual and counterfactual realizations at the individual level, rather than generating counterfactuals independently from marginal conditional distributions. In addition, PO-Flow supports likelihood-based evaluation of potential outcomes, enabling uncertainty-aware assessment of predictions. A supporting recovery guarantee is established under certain assumptions, and empirical results on benchmark datasets demonstrate strong performance across a range of causal inference tasks within the potential outcomes framework.
https://arxiv.org/abs/2505.16051
Academic Papers
svg
6fc60f6061143a9e05034ed5b51c89e0dc78473509f60116b090847894509c0b
2026-01-21T00:00:00-05:00
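The flow-matching training objective underlying PO-Flow can be caricatured in one dimension: along linear interpolation paths x_t = (1 - t) x0 + t x1, the regression target for the velocity field is x1 - x0. In the toy below the "network" is a single constant velocity fitted by gradient descent (my own illustration of the loss, nothing like the paper's CNF architecture):

```python
import numpy as np

def flow_matching_constant_velocity(x0, x1, steps=300, lr=0.1):
    """Minimal 1-D flow-matching sketch. For linear paths between paired
    samples the conditional flow-matching target is x1 - x0; we fit a single
    constant velocity v by gradient descent on the mean squared error."""
    v = 0.0
    target = x1 - x0
    for _ in range(steps):
        grad = 2.0 * np.mean(v - target)   # d/dv of E[(v - (x1 - x0))^2]
        v -= lr * grad
    return v
```

In PO-Flow the constant v is replaced by a learned time- and state-dependent field, and integrating that field transports samples between outcome distributions, which is what enables counterfactual generation.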
ALPCAHUS: Subspace Clustering for Heteroscedastic Data
arXiv:2505.18918v3 Announce Type: replace Abstract: Principal component analysis (PCA) is a key tool in the field of data dimensionality reduction. Various methods have been proposed to extend PCA to the union of subspaces (UoS) setting for clustering data that comes from multiple subspaces, like K-Subspaces (KSS). However, some applications involve heterogeneous data that vary in quality due to noise characteristics associated with each data sample. Heteroscedastic methods aim to deal with such mixed data quality. This paper develops a heteroscedastic-based subspace clustering method, named ALPCAHUS, that can estimate the sample-wise noise variances and use this information to improve the estimate of the subspace bases associated with the low-rank structure of the data. This clustering algorithm builds on K-Subspaces (KSS) principles by extending the recently proposed heteroscedastic PCA method, named LR-ALPCAH, for clusters with heteroscedastic noise in the UoS setting. Simulations and real-data experiments show the effectiveness of accounting for data heteroscedasticity compared to existing clustering algorithms. Code available at https://github.com/javiersc1/ALPCAHUS.
https://arxiv.org/abs/2505.18918
Academic Papers
svg