https://arxiv.org/abs/2504.19138v1
โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ— โ— โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ— โ— โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ— โ— โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— 
โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ— โ—โ—โ— โ— โ— โ—โ—โ—โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ—โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ— โ—โ—โ—โ—โ—โ— โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ— โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ— โ— โ—โ— โ—โ— โ— โ—โ—
https://arxiv.org/abs/2504.19138v1
โ—โ—โ— โ— โ—โ—โ— โ— โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ—โ— โ—โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ—โ—โ—โ— โ—โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ—โ—โ— โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ— โ— โ—โ— โ—โ—โ— โ— โ— 
โ—โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ— โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— 
โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—
https://arxiv.org/abs/2504.19138v1
โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ— โ— โ— โ—โ— โ— โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ— โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ— โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— 
โ— โ— โ—โ—โ—โ— โ— โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ— โ— โ— โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ— โ— โ—โ— โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—
https://arxiv.org/abs/2504.19138v1
โ—โ— โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ— โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ— โ— โ—โ— โ—โ—โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ—โ— โ—โ—โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ—โ—โ— โ— โ— โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— 
โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ— โ—โ—โ—โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ—โ— โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ— โ— โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ— โ—โ—โ—โ—โ— โ—โ—โ—โ—โ— โ—โ—โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ—โ—โ— โ—โ—โ—โ—โ— โ— โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ—โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ—โ— โ— โ—โ— โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ— โ—โ— โ—โ— โ—โ— โ— โ—โ— โ— โ—โ— โ—โ—โ— โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ—โ—โ— โ— โ— โ— โ—โ— โ— โ—โ— โ—โ—โ—โ— โ—โ— 
โ—โ—โ—โ— โ—โ— โ—โ— โ—โ—
https://arxiv.org/abs/2504.19138v1
โ— โ— โ—โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ— โ—โ— โ— โ—โ— โ— โ—โ—โ— โ—โ— โ—โ—โ— โ—โ— โ—โ— โ— โ—โ—โ—โ— โ—โ— โ—โ— โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ— โ— โ—โ— โ—โ— โ—โ— โ—โ—โ— โ—โ—โ— โ— โ—โ—โ—โ— โ—โ—โ— โ—โ— โ—โ— โ—โ—โ—โ— โ— โ—โ—โ— โ—โ— โ—โ— โˆ’4 โˆ’2 0 2 4โˆ’0.006 โˆ’0.004 โˆ’0.002 0.000 0.002 0.004 0.006Normal Qโˆ’Q Plot Theoretical QuantilesSample Quantiles Figure 7: Histogram and normal quantile-quantile plot of ห† ยตEโˆ’ยตunder RLS when m= 18. 35 Pr(ห†ยตE> ยต) to 1 /2 is markedly slower than in the one-dimensional case, with RLS and CRD exhibiting comparable rates. Figure 5 and 6 compare the 90th percentile interval lengths and empirical coverage levels, respectively. While quantile intervals outperform t-intervals under CRD, the opposite is true under RLS, due to the fact that ห† ยตEโˆ’ยตunder RLS is approximately normal for the range of mwe are testing. Although our theory predicts the distribution of ห†ยตEโˆ’ยตbecomes concentrated and heavy-tailed asymptotically, the curse of di- mensionality delays these effects. At m= 18, RLS errors exhibit only marginally heavier tails than a normal distribution (Figure 7). 6 Discussion Our analysis has so far focused on infinitely differentiable integrands. A main obstacle in extending our results to finitely differentiable integrands is the decay of Walsh coefficients is insufficient to decompose ห† ยตโˆžโˆ’ยตin a manner compat- ible with Lemma 3. To illustrate this, consider the case where fhas square- integrable dominating mixed derivatives of order 1 ( f|ฮบ|(x) for|ฮบ| โˆˆ { 0,1}s). By Corollary 3 of [19] with ฮฑ= 0, ฮป= 1, for any โ„“= (โ„“1, . . . , โ„“ s)โˆˆNs โˆ—, X kโˆˆBโ„“,s|ห†f(k)|2โฉฝCf,s4โˆ’Ps j=1โ„“j, where Bโ„“,s={kโˆˆNs โˆ—| โŒˆฮบjโŒ‰=โ„“jforjโˆˆ1:s}andCf,sis a constant depending onfands. 
Mimicking our proof strategy in Section 3, one might set $K_m=Q'_{N_m}$ with
$$Q'_N=\Big\{\boldsymbol{k}\in\mathbb{N}_*^s\;\Big|\;\boldsymbol{k}\in B_{\boldsymbol{\ell},s}\text{ for some }\boldsymbol{\ell}\in\mathbb{N}_*^s\text{ satisfying }\sum_{j=1}^s\ell_j\leqslant N\Big\}$$
and attempt to tune $N_m$ to satisfy Lemma 3. However, unlike the original $Q_N$, the set $Q'_N$ is rich in additive relations; in particular, when $s=1$, $Q'_N$ forms an $\mathbb{F}_2$-vector space. This restricts our choice of $N_m$ and makes the condition $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|)=0$ harder to satisfy. Thus, our proof strategy cannot be naively applied to finitely differentiable integrands.

A second critical limitation is the curse of dimensionality, which our asymptotic analysis does not fully resolve. While $N_m\sim\lambda m^2/s$ ensures $N_m\gg m$ in the limit, practical high-dimensional settings may yield $N_m<m$. In such cases, $\mathrm{SUM}_1$ is close to an empty sum and the bounds in Corollary 2 are non-informative. A finite-sample analysis is therefore required to explain phenomena like the rapid descent in Figure 4 for small $m$. One promising direction, inspired by Section 6 of [21], is to replace the dimension $s$ in the analysis by a finite-sample effective dimension that captures the integrand's low-dimensional structure. How to adapt such a framework to our setting is an interesting question for future research.

A natural follow-up question concerns the limiting distribution of $\hat\mu_\infty-\mu$. By Theorem 2 and Corollary 2, we can replace $\hat\mu_\infty-\mu$ by $\mathrm{SUM}'_1$ when studying its limiting distribution. The difficulty lies in the joint dependencies among $Z(\boldsymbol{k})$. For
large $m$, we conjecture that $\mathrm{SUM}'_1$ can be approximated by
$$\mathrm{SUM}''_1=\sum_{\boldsymbol{k}\in Q_{N_m}}Z'(\boldsymbol{k})S'(\boldsymbol{k})\hat f(\boldsymbol{k}),$$
where each $Z'(\boldsymbol{k})$ is sampled independently from a Bernoulli distribution with success probability $2^{-m}$. This approximation holds rigorously for polynomial integrands, where the support of the non-zero Walsh coefficients is particularly sparse. How to extend this result to general integrands is another challenging question for future research.

A critical limitation of quantile-based confidence intervals lies in their finite-sample coverage guarantees. When $r$ is odd and $\ell=r-u$, the coverage probability of $[\hat\mu_E^{(\ell)},\hat\mu_E^{(u)}]$ is structurally bounded above by the nominal level. In applications where undercoverage poses significant risks, the conventional $t$-interval, despite its slower convergence rate, may remain preferable due to its conservative bias. It remains an open problem how to design intervals that simultaneously achieve adaptive convergence rates and robust finite-sample coverage.

Acknowledgments

The author acknowledges the support of the Austrian Science Fund (FWF) Project DOI 10.55776/P34808. For open access purposes, the author has applied a CC BY public copyright license to any author accepted manuscript version arising from this submission. This paper is developed from a chapter of the author's thesis. I would like to thank Professor Art Owen for his mentorship and many helpful suggestions.

References

[1] K. Basu and R. Mukherjee. Asymptotic normality of scrambled geometric net quadrature. The Annals of Statistics, 45(4):1759-1788, 2017.
[2] N. Clancy, Y. Ding, C. Hamilton, F. J. Hickernell, and Y. Zhang. The cost of deterministic, adaptive, automatic algorithms: Cones, not balls. Journal of Complexity, 30(1):21-45, 2014.
[3] J. Dick. Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands. The Annals of Statistics, 39(3):1372-1398, 2011.
[4] J. Dick and F. Pillichshammer.
Digital sequences, discrepancy and quasi-Monte Carlo integration. Cambridge University Press, Cambridge, 2010.
[5] P. Erdős. On a lemma of Littlewood and Offord. Bulletin of the American Mathematical Society, 51(12):898-902, 1945.
[6] B. Fristedt. The structure of random partitions of large integers. Transactions of the American Mathematical Society, 337(2):703-735, 1993.
[7] M. Gnewuch, P. Kritzer, A. B. Owen, and Z. Pan. Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy. Information and Inference: A Journal of the IMA, 13(3):iaae021, 2024.
[8] E. Gobet, M. Lerasle, and D. Métivier. Mean estimation for randomized quasi-Monte Carlo method. Hal preprint hal-03631879v2, 2022.
[9] A. Griewank, F. Y. Kuo, H. Leövey, and I. H. Sloan. High dimensional integration of kinks and jumps: smoothing by preintegration. Journal of Computational and Applied Mathematics, 344:259-274, 2018.
[10] S. Joe and F. Y. Kuo. Constructing Sobol' sequences with better two-dimensional projections. SIAM Journal on Scientific Computing, 30(5):2635-2654, 2008.
[11] S. G. Krantz and H. R. Parks. A primer of real analytic functions. Springer Science & Business Media, 2002.
[12] P. L'Ecuyer, M. K. Nakayama, A. B. Owen, and B. Tuffin. Confidence intervals for randomized quasi-Monte Carlo estimators. Technical report, hal-04088085, 2023.
[13] S. Liu and A. B. Owen. Preintegration via active subspace.
SIAM Journal on Numerical Analysis, 61(2):495-514, 2023.
[14] W.-L. Loh. On the asymptotic distribution of scrambled net quadrature. The Annals of Statistics, 31(4):1282-1324, 2003.
[15] J. Matoušek. On the L2-discrepancy for anchored boxes. Journal of Complexity, 14:527-556, 1998.
[16] M. K. Nakayama and B. Tuffin. Sufficient conditions for central limit theorems and confidence intervals for randomized quasi-Monte Carlo methods. ACM Transactions on Modeling and Computer Simulation, 34(3):1-38, 2024.
[17] A. B. Owen. Randomly permuted (t, m, s)-nets and (t, s)-sequences. In H. Niederreiter and P. J.-S. Shiue, editors, Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, pages 299-317, New York, 1995. Springer-Verlag.
[18] A. B. Owen. Error estimation for quasi-Monte Carlo. Preprint, 2025.
[19] Z. Pan. Automatic optimal-rate convergence of randomized nets using median-of-means. Mathematics of Computation, 2025.
[20] Z. Pan and A. B. Owen. Skewness of a randomized quasi-Monte Carlo estimate. Preprint, 2024.
[21] Z. Pan and A. B. Owen. Super-polynomial accuracy of multidimensional randomized nets using the median-of-means. Mathematics of Computation, 93(349):2265-2289, 2024.
[22] I. M. Sobol'. The distribution of points in a cube and the accurate evaluation of integrals (in Russian). Zh. Vychisl. Mat. i Mat. Phys., 7:784-802, 1967.
[23] K. Suzuki and T. Yoshiki. Formulas for the Walsh coefficients of smooth functions and their application to bounds on the Walsh coefficients. Journal of Approximation Theory, 205:1-24, 2016.
[24] J. Wiart, C. Lemieux, and G. Y. Dong. On the dependence structure and quality of scrambled (t, m, s)-nets. Monte Carlo Methods and Applications, 27(1):1-26, 2021.

Appendix

This appendix contains the proofs of Lemmas 5, 7 and 9. The proof strategy is inspired by [6].
To simplify the notation, we write $a_N=O(b_N)$ if $\limsup_{N\to\infty}|a_N|/|b_N|<C$ for some constant $C>0$ and $a_N=o(b_N)$ if $\lim_{N\to\infty}|a_N|/|b_N|=0$.

We first construct an importance sampling measure on $\boldsymbol{k}\in\mathbb{N}_0^s$. Recall that each $k\in\mathbb{N}_0$ corresponds to $\kappa\subseteq\mathbb{N}$ through $k=\sum_{\ell\in\kappa}2^{\ell-1}$. Let $L_N(\boldsymbol{k})$ be the likelihood function of $\boldsymbol{k}$ under the importance sampling measure described by
$$L_N(\boldsymbol{k})=\prod_{j=1}^s\prod_{\ell=1}^\infty\Big(\frac{q_N^\ell}{1+q_N^\ell}\Big)^{1\{\ell\in\kappa_j\}}\Big(\frac{1}{1+q_N^\ell}\Big)^{1\{\ell\notin\kappa_j\}}=\frac{q_N^{\|\boldsymbol{k}\|_1}}{\prod_{\ell=1}^\infty(1+q_N^\ell)^s}$$
with $q_N=\exp(-\pi\sqrt{s/12N})$. The value of $q_N$ is chosen so that $L_N(\boldsymbol{k})$ closely approximates $U(Q_N)$ with $Q_N$ defined by equation (13). Under $L_N(\boldsymbol{k})$, it is clear that $X_{j\ell}=1\{\ell\in\kappa_j\}$ equals 1 with probability $q_N^\ell/(1+q_N^\ell)$ and $\{X_{j\ell},j\in 1{:}s,\ell\in\mathbb{N}\}$ are jointly independent. We use $\Pr$, $\mathbb{E}$, $\mathrm{Var}$ to denote the probability, expectation and variance when $\boldsymbol{k}$ follows a $U(Q_N)$ distribution and $\Pr_L$, $\mathbb{E}_L$, $\mathrm{Var}_L$ to denote those under the importance sampling measure $L_N(\boldsymbol{k})$.

Suppose we are interested in $\Pr(\boldsymbol{k}\in A)=|A|/|Q_N|$ for a subset $A\subseteq Q_N$. We can compute it under the importance sampling measure by
$$\Pr(\boldsymbol{k}\in A)=\mathbb{E}_L\Big[\frac{1(\boldsymbol{k}\in A)}{|Q_N|L(\boldsymbol{k})}\Big]=\mathbb{E}_L\Big[\frac{1(\boldsymbol{k}\in A)}{|Q_N|\,q_N^{\|\boldsymbol{k}\|_1}}\prod_{\ell=1}^\infty(1+q_N^\ell)^s\Big].\tag{53}$$
Since $\boldsymbol{k}\in A\subseteq Q_N$ implies $\|\boldsymbol{k}\|_1\leqslant N$,
$$\Pr(\boldsymbol{k}\in A)\leqslant\mathbb{E}_L\Big[\frac{1(\boldsymbol{k}\in A)}{|Q_N|\,q_N^{N}}\prod_{\ell=1}^\infty(1+q_N^\ell)^s\Big]=\frac{\Pr_L(\boldsymbol{k}\in A)}{|Q_N|\,q_N^N}\prod_{\ell=1}^\infty(1+q_N^\ell)^s.\tag{54}$$
Hence, we can bound $\Pr(\boldsymbol{k}\in A)$ by $\Pr_L(\boldsymbol{k}\in A)$ times a factor depending only on $N$ and $s$, which is further bounded by the following lemma:

Lemma 11. When $N\geqslant 1$,
$$\frac{1}{|Q_N|\,q_N^N}\prod_{\ell=1}^\infty(1+q_N^\ell)^s\leqslant A_sN^{1/4}$$
with $A_s$ a constant depending
on $s$.

Proof. First we write
$$\prod_{\ell=1}^\infty(1+q_N^\ell)^s=\exp\Big(s\sum_{\ell=1}^\infty\log(1+q_N^\ell)\Big).$$
Because $\log(1+q_N^\ell)$ is monotonically decreasing in $\ell$,
$$\sum_{\ell=1}^\infty\log(1+q_N^\ell)\leqslant\int_0^\infty\log\big(1+\exp(-\pi\ell\sqrt{s/12N})\big)\,\mathrm{d}\ell=\frac{1}{\pi}\sqrt{\frac{12N}{s}}\int_0^\infty\log\big(1+\exp(-\ell)\big)\,\mathrm{d}\ell=\pi\sqrt{\frac{N}{12s}}.$$
Hence
$$\prod_{\ell=1}^\infty(1+q_N^\ell)^s\leqslant\exp\Big(\pi\sqrt{\frac{sN}{12}}\Big).$$
Our conclusion then follows from equation (15) and $q_N^N=\exp(-\pi\sqrt{sN/12})$.

Now we are ready to prove Lemma 7.

Proof of Lemma 7. Equation (54) and Lemma 11 imply
$$\Pr\Big(\Big|\frac{|\kappa_j|}{\sqrt{\lambda N/s}}-2\Big|>\epsilon\Big)\leqslant A_sN^{1/4}\Pr_L\Big(\Big|\frac{|\kappa_j|}{\sqrt{\lambda N/s}}-2\Big|>\epsilon\Big).\tag{55}$$
Because $|\kappa_j|=\sum_{\ell\in\mathbb{N}}1\{\ell\in\kappa_j\}=\sum_{\ell\in\mathbb{N}}X_{j\ell}$, we have
$$\mathbb{E}_L[|\kappa_j|]=\sum_{\ell\in\mathbb{N}}\frac{q_N^\ell}{1+q_N^\ell}=\sum_{\ell\in\mathbb{N}}\frac{\exp(-\pi\ell\sqrt{s/12N})}{1+\exp(-\pi\ell\sqrt{s/12N})},$$
$$\mathrm{Var}_L(|\kappa_j|)=\sum_{\ell\in\mathbb{N}}\frac{q_N^\ell}{(1+q_N^\ell)^2}=\sum_{\ell\in\mathbb{N}}\frac{\exp(-\pi\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}.$$
Since $q_N^\ell/(1+q_N^\ell)$ is monotonically decreasing in $\ell$,
$$\int_1^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{1+\exp(-\pi\ell\sqrt{s/12N})}\,\mathrm{d}\ell\leqslant\mathbb{E}_L[|\kappa_j|]\leqslant\int_0^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{1+\exp(-\pi\ell\sqrt{s/12N})}\,\mathrm{d}\ell.$$
Recall that $\lambda=3\log(2)^2/\pi^2$. The difference between the above two integrals is $O(1)$ and
$$\int_0^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{1+\exp(-\pi\ell\sqrt{s/12N})}\,\mathrm{d}\ell=\frac{\log(2)}{\pi}\sqrt{\frac{12N}{s}}=2\sqrt{\frac{\lambda N}{s}},$$
so
$$\mathbb{E}_L[|\kappa_j|]\sim 2\sqrt{\frac{\lambda N}{s}}+O(1).\tag{56}$$
Similarly, because $q_N^\ell<1$ and $x/(1+x)^2$ is monotonically increasing in $x$ over $[0,1]$, we know $q_N^\ell/(1+q_N^\ell)^2$ is monotonically decreasing in $\ell$ over $\ell\geqslant 0$ and
$$\int_1^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}\,\mathrm{d}\ell\leqslant\mathrm{Var}_L(|\kappa_j|)\leqslant\int_0^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}\,\mathrm{d}\ell.$$
The difference is again $O(1)$ and
$$\int_0^\infty\frac{\exp(-\pi\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}\,\mathrm{d}\ell=\frac{1}{\pi}\sqrt{\frac{3N}{s}},$$
so
$$\mathrm{Var}_L(|\kappa_j|)\sim\frac{1}{\pi}\sqrt{\frac{3N}{s}}+O(1).\tag{57}$$
By Bernstein's inequality,
$$\Pr_L\big(\big||\kappa_j|-\mathbb{E}_L[|\kappa_j|]\big|>t\big)\leqslant 2\exp\Big(\frac{-t^2/2}{\mathrm{Var}_L(|\kappa_j|)+t/3}\Big)$$
for any $t>0$. Setting $t=\epsilon\sqrt{\lambda N/4s}$, we get
$$\Pr_L\Big(\big||\kappa_j|-\mathbb{E}_L[|\kappa_j|]\big|>\epsilon\sqrt{\frac{\lambda N}{4s}}\Big)\leqslant 2\exp\Big(\frac{-\epsilon^2\lambda N/8s}{\mathrm{Var}_L(|\kappa_j|)+\epsilon\sqrt{\lambda N/4s}/3}\Big)$$
or equivalently
$$\Pr_L\Big(\Big|\frac{|\kappa_j|}{\sqrt{\lambda N/s}}-\frac{\mathbb{E}_L[|\kappa_j|]}{\sqrt{\lambda N/s}}\Big|>\frac{\epsilon}{2}\Big)\leqslant 2\exp\Big(-\sqrt{\frac{\lambda N}{s}}\cdot\frac{\epsilon^2/8}{\mathrm{Var}_L(|\kappa_j|)/\sqrt{\lambda N/s}+\epsilon/6}\Big).$$
Because $\epsilon < 1$ and $\mathrm{Var}_L(|\kappa_j|) \sim \pi^{-1}\sqrt{3N/s}$, the right-hand side can be bounded by $2\exp(-B_s\epsilon^2\sqrt{N})$ for some $B_s > 0$. If further $|\mathbb{E}_L[|\kappa_j|]/\sqrt{\lambda N/s} - 2| < \epsilon/2$, then
\[
\Pr_L\Big( \Big| \frac{|\kappa_j|}{\sqrt{\lambda N/s}} - 2 \Big| > \epsilon \Big) \leqslant \Pr_L\Big( \Big| \frac{|\kappa_j|}{\sqrt{\lambda N/s}} - \frac{\mathbb{E}_L[|\kappa_j|]}{\sqrt{\lambda N/s}} \Big| > \frac{\epsilon}{2} \Big) \leqslant 2\exp(-B_s\epsilon^2\sqrt{N})
\]
and we have proven equation (23) in view of equation (55). On the other hand, if $|\mathbb{E}_L[|\kappa_j|]/\sqrt{\lambda N/s} - 2| \geqslant \epsilon/2$, equation (56) implies
\[
\epsilon^2\sqrt{N} \leqslant 4\sqrt{\frac{s}{\lambda}}\,\Big| \mathbb{E}_L[|\kappa_j|] - 2\sqrt{\frac{\lambda N}{s}} \Big| = O(\sqrt{s}).
\]
So by decreasing $B_s$ if necessary, we can assume $B_s\epsilon^2\sqrt{N} \leqslant 1$ for any $\epsilon$ satisfying $|\mathbb{E}_L[|\kappa_j|]/\sqrt{\lambda N/s} - 2| \geqslant \epsilon/2$. After increasing $A_s$ if necessary so that $A_s \geqslant \exp(1)$,
\[
A_s N^{1/4} \exp(-B_s\epsilon^2\sqrt{N}) \geqslant A_s \exp(-1) \geqslant 1
\]
and equation (23) is trivially true.

The proofs of Lemma 5 and Lemma 9 are similar. By an abuse of notation, we let $\Pr$ and $\Pr_L$ be the probability when $k_1, \ldots, k_r$ are sampled independently from $U(Q_N)$ and $L(k)$, respectively. An analogous argument using the importance sampling trick shows that for any subset $A \subseteq (Q_N)^r$,
\[
\Pr((k_1,\ldots,k_r)\in A) \leqslant \Pr_L((k_1,\ldots,k_r)\in A) \Big( \frac{1}{|Q_N| q_N^N} \prod_{\ell=1}^{\infty}(1+q_N^{\ell})^s \Big)^r. \tag{58}
\]

Proof of Lemma 5. To simplify our notation, we write $k^{\oplus} = \oplus_{i=1}^r k_i$ with components $(k^{\oplus}_1, \ldots, k^{\oplus}_s)$. By equation (58) and Lemma 11,
\[
\Pr\big(k^{\oplus} \in Q_N\big) \leqslant A_s^r N^{r/4} \Pr_L\big(k^{\oplus} \in Q_N\big) \leqslant A_s^r N^{r/4} \Pr_L\big(\|k^{\oplus}\|_1 \leqslant N\big). \tag{59}
\]
By the definition of $k^{\oplus}$, $X^{\oplus}_{j\ell} = \mathbf{1}\{\ell\in\kappa^{\oplus}_j\}$ equals 1 if and only if $\ell \in \kappa_j$ for an odd number of $k$ among $k_1, \ldots, k_r$. Since the number of such $k$ follows a binomial distribution with success probability $q_N^{\ell}/(1+q_N^{\ell})$,
\[
\Pr_L(X^{\oplus}_{j\ell} = 1) = \sum_{i=1}^{\lceil r/2\rceil} \binom{r}{2i-1}\Big(\frac{q_N^{\ell}}{1+q_N^{\ell}}\Big)^{2i-1}\Big(\frac{1}{1+q_N^{\ell}}\Big)^{r-2i+1} = \frac12 - \frac12\Big(\frac{1-q_N^{\ell}}{1+q_N^{\ell}}\Big)^r. \tag{60}
\]
Also notice that $\{X^{\oplus}_{j\ell},\ j\in 1{:}s,\ \ell\in\mathbb{N}\}$ are jointly independent under $L(k)$ and
\[
\|k^{\oplus}\|_1 = \sum_{j=1}^{s} \sum_{\ell\in\mathbb{N}} \ell X^{\oplus}_{j\ell}.
\]
By Markov's inequality, for any $t>0$,
\[
\Pr_L\big(\|k^{\oplus}\|_1 \leqslant N\big) = \Pr_L\Big( \exp\Big(-t\sum_{j=1}^s\sum_{\ell\in\mathbb{N}} \ell X^{\oplus}_{j\ell}\Big) \geqslant e^{-tN} \Big) \leqslant e^{tN}\, \mathbb{E}_L\Big[\exp\Big(-t\sum_{j=1}^s\sum_{\ell\in\mathbb{N}}\ell X^{\oplus}_{j\ell}\Big)\Big]
\]
\[
= e^{tN} \prod_{j=1}^{s}\prod_{\ell\in\mathbb{N}} \Big(1 - \Pr_L(X^{\oplus}_{j\ell}=1)(1-e^{-t\ell})\Big) \leqslant \exp\Big( tN - s\sum_{\ell\in\mathbb{N}} \Pr_L(X^{\oplus}_{j\ell}=1)(1-e^{-t\ell}) \Big). \tag{61}
\]
Because $\Pr_L(X^{\oplus}_{j\ell}=1)$ is monotonically increasing in $r$ and $r \geqslant 2$,
\[
\sum_{\ell\in\mathbb{N}} \Pr_L(X^{\oplus}_{j\ell}=1)(1-e^{-t\ell}) \geqslant \sum_{\ell\in\mathbb{N}} \Big( \frac12 - \frac12\Big(\frac{1-q_N^{\ell}}{1+q_N^{\ell}}\Big)^2 \Big)(1-e^{-t\ell}) = \sum_{\ell\in\mathbb{N}} \frac12(1-e^{-t\ell}) \frac{(1+q_N^{\ell})^2-(1-q_N^{\ell})^2}{(1+q_N^{\ell})^2} = \sum_{\ell\in\mathbb{N}} \frac{2(1-e^{-t\ell}) q_N^{\ell}}{(1+q_N^{\ell})^2}.
\]
Setting $t = -\alpha\log(q_N)$ for $\alpha>0$ that we will tune later, we have
\[
\sum_{\ell\in\mathbb{N}} \frac{2(1-e^{-t\ell})q_N^{\ell}}{(1+q_N^{\ell})^2} = 2\sum_{\ell\in\mathbb{N}} \frac{q_N^{\ell}}{(1+q_N^{\ell})^2} - 2\sum_{\ell\in\mathbb{N}} \frac{q_N^{\alpha\ell} q_N^{\ell}}{(1+q_N^{\ell})^2}.
\]
Similar to equation (57), because both $q_N^{\ell}/(1+q_N^{\ell})^2$ and $q_N^{\alpha\ell}q_N^{\ell}/(1+q_N^{\ell})^2$ are monotonically decreasing in $\ell$ over $\ell\geqslant 0$,
\[
\sum_{\ell\in\mathbb{N}} \frac{q_N^{\ell}}{(1+q_N^{\ell})^2} = \int_0^{\infty} \frac{\exp(-\pi\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}\, d\ell + O(1) = \frac{1}{\pi}\sqrt{\frac{12N}{s}} \int_0^{\infty} \frac{\exp(-\ell)}{(1+\exp(-\ell))^2}\, d\ell + O(1)
\]
and
\[
\sum_{\ell\in\mathbb{N}} \frac{q_N^{\alpha\ell}q_N^{\ell}}{(1+q_N^{\ell})^2} = \int_0^{\infty} \frac{\exp(-\pi(\alpha+1)\ell\sqrt{s/12N})}{(1+\exp(-\pi\ell\sqrt{s/12N}))^2}\, d\ell + O(1) = \frac{1}{\pi}\sqrt{\frac{12N}{s}} \int_0^{\infty} \frac{\exp(-(\alpha+1)\ell)}{(1+\exp(-\ell))^2}\, d\ell + O(1).
\]
Combining $t = -\alpha\log(q_N) = \alpha\pi\sqrt{s/12N}$ with the above equations, we get
\[
tN - s\sum_{\ell\in\mathbb{N}} \Pr_L(X^{\oplus}_{j\ell}=1)(1-e^{-t\ell}) \leqslant tN - 2s\sum_{\ell\in\mathbb{N}} \frac{q_N^{\ell} - q_N^{\alpha\ell}q_N^{\ell}}{(1+q_N^{\ell})^2} = c(\alpha)\sqrt{sN} + O(s)
\]
if $c(\alpha)\neq 0$, with
\[
c(\alpha) = \frac{\alpha\pi}{\sqrt{12}} - \frac{4\sqrt{3}}{\pi} \int_0^{\infty} \frac{\exp(-\ell) - \exp(-(\alpha+1)\ell)}{(1+\exp(-\ell))^2}\, d\ell.
\]
Because $c(\alpha)\to\infty$ as $\alpha\to\infty$ and
\[
c'(\alpha) = \frac{\pi}{\sqrt{12}} - \frac{4\sqrt{3}}{\pi}\int_0^{\infty} \frac{\ell\exp(-(\alpha+1)\ell)}{(1+\exp(-\ell))^2}\, d\ell
\]
is strictly increasing in $\alpha$, we see that $c(\alpha)$ has a unique minimum $\alpha^*$ over $\alpha\geqslant 0$. Furthermore,
\[
c'(0) = \frac{\pi}{\sqrt{12}} - \frac{4\sqrt{3}}{\pi}\int_0^{\infty} \frac{\ell\exp(-\ell)}{(1+\exp(-\ell))^2}\, d\ell = \frac{\pi}{\sqrt{12}} - \frac{4\sqrt{3}\log(2)}{\pi} < 0,
\]
so $\alpha^* > 0$ and $c(\alpha^*) < 0$. A numerical approximation using Mathematica shows $\alpha^* \approx 0.24$ and $c(\alpha^*) < -0.066$. By choosing $t = -\alpha^*\log(q_N)$, we have shown
\[
\exp\Big( tN - s\sum_{\ell\in\mathbb{N}} \Pr_L(X^{\oplus}_{j\ell}=1)(1-e^{-t\ell}) \Big) \leqslant \exp\big( c(\alpha^*)\sqrt{sN} + O(s) \big).
\]
Putting together equation (59) and equation (61), we get
\[
\Pr\big(k^{\oplus}\in Q_N\big) \leqslant A_s^r N^{r/4} \exp\big( c(\alpha^*)\sqrt{sN} + O(s) \big).
\]
For a threshold $R_s\geqslant 2$ that we will determine later, we can choose $B_s$ small enough so that $B_s\log(R_s) \leqslant -c(\alpha^*)\sqrt{s}$. By increasing $A_s$ if necessary to account for the $\exp(O(s))$ term, equation (18) holds for all $N\geqslant 1$ and $r\leqslant R_s$.

It remains to show equation (18) holds for some $A_s, B_s > 0$ when $r > R_s$ for some threshold $R_s$. Let $\ell^*$ be the largest $\ell\in\mathbb{N}$ for which $\Pr_L(X^{\oplus}_{j\ell}=1) \geqslant 1/4$. We conventionally set $\ell^* = 0$ if $\Pr_L(X^{\oplus}_{j\ell}=1) < 1/4$ for all $\ell\in\mathbb{N}$. By equation (60),
\[
\Pr_L(X^{\oplus}_{j\ell}=1) = \frac12 - \frac12\Big(\frac{1-q_N^{\ell}}{1+q_N^{\ell}}\Big)^r = \frac12 - \frac12\Big(\frac{1-\exp(-\pi\ell\sqrt{s/12N})}{1+\exp(-\pi\ell\sqrt{s/12N})}\Big)^r.
\]
Because $\Pr_L(X^{\oplus}_{j\ell}=1)$ is monotonically decreasing in $\ell$ over $\ell\geqslant 0$, $\ell^*$ equals the floor of the solution of $\Pr_L(X^{\oplus}_{j\ell}=1) = 1/4$. A straightforward calculation gives
\[
\ell^* = \Big\lfloor \log\Big(\frac{1+2^{-1/r}}{1-2^{-1/r}}\Big) \sqrt{\frac{12N}{\pi^2 s}} \Big\rfloor.
\]
By convexity of the function $f(x) = x^{-1/r}$, $2^{-1-1/r} r^{-1} \leqslant 1 - 2^{-1/r} \leqslant r^{-1}$. Hence
\[
r \leqslant \frac{1+2^{-1/r}}{1-2^{-1/r}} \leqslant (1+2^{-1/r})\, 2^{1+1/r}\, r \leqslant 8r
\]
and
\[
\log(r)\sqrt{\frac{12N}{\pi^2 s}} - 1 \leqslant \ell^* \leqslant \log(8r)\sqrt{\frac{12N}{\pi^2 s}}. \tag{62}
\]
Using the inequality $1 - \exp(-x) \geqslant x/(1+x)$ for $x\geqslant 0$, equation (61) becomes
\[
\Pr_L\big(\|k^{\oplus}\|_1 \leqslant N\big) \leqslant \exp\Big( tN - s\sum_{\ell\in\mathbb{N}} \Pr_L(X^{\oplus}_{j\ell}=1)\frac{t\ell}{1+t\ell} \Big) \leqslant \exp\Big( tN - s\sum_{\ell\in\mathbb{N}} \mathbf{1}\{\ell\leqslant\ell^*\}\frac{t\ell}{4(1+t\ell)} \Big) = \exp\Big( tN - \frac{s t \ell^*(\ell^*+1)}{8(1+t\ell^*)} \Big). \tag{63}
\]
Setting $t=\sqrt{s/N}$, we derive from equation (62) that there exists a large enough $R_s$ so that for all $r\geqslant R_s$,
\[
1 + t\ell^* \leqslant 1 + \log(8r)\sqrt{\frac{12}{\pi^2}} < 2\log(r)\sqrt{\frac{12}{\pi^2}}
\]
and
\[
st\ell^*(\ell^*+1) \geqslant \sqrt{sN}\log(r)\sqrt{\frac{12}{\pi^2}}\Big( \log(r)\sqrt{\frac{12}{\pi^2}} - \sqrt{\frac{s}{N}} \Big) > \sqrt{sN}\log(r)^2\,\frac{6}{\pi^2}.
\]
By increasing $R_s$ if necessary, we further have for all $r\geqslant R_s$
\[
tN - \frac{st\ell^*(\ell^*+1)}{8(1+t\ell^*)} \leqslant \sqrt{sN} - \sqrt{sN}\log(r)\frac{\sqrt{3}}{16\pi} < -\sqrt{sN}\log(r)\frac{\sqrt{3}}{32\pi}.
\]
Putting together equation (59) and equation (63), we get for $r\geqslant R_s$
\[
\Pr\big(k^{\oplus}\in Q_N\big) \leqslant A_s^r N^{r/4} \exp\Big( -\sqrt{sN}\log(r)\frac{\sqrt{3}}{32\pi} \Big),
\]
which completes the proof.

Proof of Lemma 9. By equation (54) and Lemma 11,
\[
\Pr\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \emptyset \big) \leqslant A_s N^{1/4} \Pr_L\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \emptyset \big).
\]
Because $\kappa^{>\rho\sqrt{N}}_{j,1} = \emptyset$ if and only if $\ell \notin \kappa_{j,1}$ for all $\ell > \rho\sqrt{N}$,
\[
\Pr_L\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \emptyset \big) = \prod_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \frac{1}{1+q_N^{\ell}} = \exp\Big( -\sum_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \log(1+q_N^{\ell}) \Big).
\]
Because $\log(1+q_N^{\ell})$ is monotonically decreasing in $\ell$,
\[
\sum_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \log(1+q_N^{\ell}) \geqslant \int_{\lceil\rho\sqrt{N}\rceil}^{\infty} \log\big(1+\exp(-\pi\ell\sqrt{s/12N})\big)\, d\ell \geqslant c_{\rho,s}\sqrt{N} - \log(2) \tag{64}
\]
for
\[
c_{\rho,s} = \frac{1}{\pi}\sqrt{\frac{12}{s}} \int_{\pi\rho\sqrt{s/12}}^{\infty} \log\big(1+\exp(-\ell)\big)\, d\ell.
\]
Hence
\[
\Pr\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \emptyset \big) \leqslant 2 A_s N^{1/4} \exp\big( -c_{\rho,s}\sqrt{N} \big).
\]
Similarly, by equation (58) and Lemma 11,
\[
\Pr\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \kappa^{>\rho\sqrt{N}}_{j,2} \big) \leqslant A_s^2 N^{1/2} \Pr_L\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \kappa^{>\rho\sqrt{N}}_{j,2} \big).
\]
Because $\kappa^{>\rho\sqrt{N}}_{j,1} = \kappa^{>\rho\sqrt{N}}_{j,2}$ if and only if each $\ell > \rho\sqrt{N}$ appears in either both or neither of $\kappa_{j,1}, \kappa_{j,2}$,
\[
\Pr_L\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \kappa^{>\rho\sqrt{N}}_{j,2} \big) = \prod_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \frac{1+q_N^{2\ell}}{(1+q_N^{\ell})^2} = \exp\Big( \sum_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \log(1+q_N^{2\ell}) - \sum_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} 2\log(1+q_N^{\ell}) \Big).
\]
Again by monotonicity of $\log(1+q_N^{2\ell})$,
\[
\sum_{\ell=\lceil\rho\sqrt{N}\rceil}^{\infty} \log(1+q_N^{2\ell}) \leqslant \int_{\lceil\rho\sqrt{N}\rceil-1}^{\infty} \log\big(1+\exp(-2\pi\ell\sqrt{s/12N})\big)\, d\ell \leqslant c'_{\rho,s}\sqrt{N} + \log(2)
\]
for
\[
c'_{\rho,s} = \frac{1}{2\pi}\sqrt{\frac{12}{s}} \int_{2\pi\rho\sqrt{s/12}}^{\infty} \log\big(1+\exp(-\ell)\big)\, d\ell.
\]
Notice that $c'_{\rho,s} < c_{\rho,s}/2$. Along with equation (64), we get the bound
\[
\Pr\big( \kappa^{>\rho\sqrt{N}}_{j,1} = \kappa^{>\rho\sqrt{N}}_{j,2} \big) \leqslant 8 A_s^2 N^{1/2} \exp\big( -2(c_{\rho,s} - \tfrac12 c'_{\rho,s})\sqrt{N} \big).
\]
Our conclusion follows by taking $A_{\rho,s} = 2\sqrt{2}\, A_s$ and $B_{\rho,s} = c_{\rho,s} - c'_{\rho,s}/2 > (3/4)\, c_{\rho,s}$.
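Several of the explicit constants and identities invoked in these proofs can be reproduced numerically. The sketch below is an independent re-computation with NumPy, not the authors' Mathematica code: it checks the closed-form integrals behind equations (56) and (57), the parity identity (60), the minimiser $\alpha^*\approx 0.24$ with $c(\alpha^*) < -0.066$, and the inequality $c'_{\rho,s} < c_{\rho,s}/2$ for a few arbitrarily chosen $(\rho, s)$ pairs.

```python
import math
import numpy as np

# Fine grid for trapezoidal quadrature; all integrands decay like exp(-l),
# so truncating at l = 60 loses only a negligible tail.
l = np.linspace(1e-12, 60.0, 200_001)
dx = l[1] - l[0]

def trapz(y):
    return dx * (y[:-1] + y[1:]).sum() / 2.0

# Closed-form integrals used for the bound on q_N^N and for (56)-(57).
assert abs(trapz(np.log1p(np.exp(-l))) - np.pi**2 / 12) < 1e-6
assert abs(trapz(np.exp(-l) / (1 + np.exp(-l))) - math.log(2)) < 1e-6
assert abs(trapz(np.exp(-l) / (1 + np.exp(-l))**2) - 0.5) < 1e-6

# Parity identity (60): odd-count probability of a Binomial(r, p), p = q/(1+q).
def odd_prob(r, p):
    return sum(math.comb(r, j) * p**j * (1 - p)**(r - j) for j in range(1, r + 1, 2))

for r in (2, 3, 7, 20):
    for q in (0.05, 0.3, 0.9):
        p = q / (1 + q)
        assert abs(odd_prob(r, p) - (0.5 - 0.5 * ((1 - q) / (1 + q))**r)) < 1e-12

# Minimum of c(alpha) from the proof of Lemma 5, by grid search.
def c(alpha):
    integrand = (np.exp(-l) - np.exp(-(alpha + 1) * l)) / (1 + np.exp(-l))**2
    return alpha * np.pi / np.sqrt(12) - 4 * np.sqrt(3) / np.pi * trapz(integrand)

alphas = np.linspace(0.0, 1.0, 201)
vals = np.array([c(a) for a in alphas])
alpha_star = alphas[vals.argmin()]
print(alpha_star, vals.min())   # approximately 0.24, and below -0.066

# c'_{rho,s} < c_{rho,s}/2 from the proof of Lemma 9.
def tail_log_integral(a):
    ll = np.linspace(a, a + 60.0, 200_001)
    y = np.log1p(np.exp(-ll))
    return (ll[1] - ll[0]) * (y[:-1] + y[1:]).sum() / 2.0

for rho in (0.5, 1.0, 2.0):
    for s in (1, 4, 16):
        c_rs = np.sqrt(12 / s) / np.pi * tail_log_integral(np.pi * rho * np.sqrt(s / 12))
        cp_rs = np.sqrt(12 / s) / (2 * np.pi) * tail_log_integral(2 * np.pi * rho * np.sqrt(s / 12))
        assert cp_rs < c_rs / 2  # hence B_{rho,s} = c_{rho,s} - c'_{rho,s}/2 > (3/4) c_{rho,s}
```

The quadrature grid and the sampled $(\rho, s)$ values are arbitrary; any sufficiently fine grid reproduces the same constants.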
Optimal experimental design for parameter estimation in the presence of observation noise

Jie Qi¹ and Ruth E. Baker²

¹College of Information Science and Technology, Donghua University, Shanghai, China
²Mathematical Institute, University of Oxford, Oxford, United Kingdom

Abstract

Using mathematical models to assist in the interpretation of experiments is becoming increasingly important in research across applied mathematics, and in particular in biology and ecology. In this context, accurate parameter estimation is crucial; model parameters are used to quantify observed behaviour, characterise behaviours that cannot be directly measured, and make quantitative predictions. The extent to which parameter estimates are constrained by the quality and quantity of available data is known as parameter identifiability, and it is widely understood that for many dynamical models the uncertainty in parameter estimates can vary over orders of magnitude as the time points at which data are collected are varied. Here, we use both local sensitivity measures derived from the Fisher information matrix and global measures derived from Sobol' indices to explore how parameter uncertainty changes as the number of measurements, and their placement in time, are varied. We use these measures within an optimisation algorithm to determine the observation times that give rise to the lowest uncertainty in parameter estimates. Applying our framework to models in which the observation noise is both correlated and uncorrelated demonstrates that correlations in observation noise can significantly impact the optimal time points for observing a system, and highlights that proper consideration of observation noise should be a crucial part of the experimental design process.
arXiv:2504.19233v1 [math.ST] 27 Apr 2025

1 Introduction

Mathematical modelling serves as a crucial technique in the interpretation of experiments and offers conceptual insights into the mechanisms underlying complex systems across various fields, particularly in biology and ecology. In this context, accurate parameter estimation is essential; model parameters are now routinely employed to quantify observed behaviours, characterise behaviours that cannot be directly measured, and make quantitative predictions. The extent to which parameter estimates are constrained by the quality and quantity of available data is termed parameter identifiability. For many dynamical models, the uncertainty in parameter estimates can vary over orders of magnitude as the time points at which data are collected are varied. Consequently, optimising data collection strategies to improve parameter identifiability is a crucial aspect of experimental design, particularly for biological systems that frequently exhibit complex and non-linear behaviours.

Optimal experimental design methodologies provide a means to optimise experimental protocols to, for example, maximise information acquisition or minimise uncertainty in parameter estimates under resource constraints. The choice of objective function in optimal experimental design is crucial as it sets the specific optimality criterion, and sensitivity measures are commonly employed. In particular, local sensitivity measures incorporated in the Fisher information matrix [1, 2, 3, 4] are often used because the inverse of the Fisher information matrix provides an estimate of the lower bound of the covariance matrix of the estimated parameters, based on the Cramér-Rao inequality [5]. Local sensitivity measures assume a linear relationship between parameter perturbations and model responses locally in the region of the specified parameter values. A drawback is therefore that local sensitivity measures can lead to inefficient experimental designs if the input parameters are not close to their "true" values. To overcome the limitations of local sensitivity measures, global sensitivity analyses have been increasingly adopted in optimal experimental design [6]. These measures consider nonlinear effects, non-monotonic behaviour and dependencies between multiple parameters [7], making them more suited to the complexities of biological systems. In addition, global sensitivities take parameter ranges as inputs, enhancing the robustness of the experimental design process [8, 9].

A key unexplored question in the literature is the extent to which changes in the characteristics of the measurement/observation noise process impact the output of optimal experimental design protocols. As such, the goal of this paper is to apply optimal experimental design methodologies to a widely used model in mathematical biology to determine optimal observation schemes, specifically focussing on the time points at which data are collected and how different types of noise influence optimal data collection strategies. We will explore the ability of both local and global measures, used within experimental design approaches, to reduce the uncertainty in parameter estimates in the face of observation noise. The model we consider is an ordinary differential equation model, as is commonly used to describe dynamical systems in biology, and the observations are assumed to consist of the solution of this model plus an "observation noise". In this work, our assumption is that the observation noise accounts for factors not explicitly included in the model, for example, stochasticity in the underlying processes or measurement errors. Most modelling studies in mathematical biology assume that the observation noise is independent Gaussian distributed.
This assumption simplifies much statistical analysis, but it does not hold universally across all applications [10, 11]. Although there has been some research into the impact of autocorrelated observation noise processes on parameter estimation [10, 12, 13], the extent to which it affects optimal experimental design remains under-explored. As such, the overarching aim of this work is to understand the extent to which changes in the observation noise process impact our ability to estimate model parameters and the "optimal" experimental set-up that minimises the uncertainty in parameter estimates.

In particular, in many applications there may be correlations between adjacent observation error terms in time series models; these may be caused by, for example, measurement equipment biases or model misspecification [10, 11]. Autocorrelated noise can be represented both through discrete-time models, such as autoregressive, moving average, autoregressive moving average and autoregressive integrated moving average processes [14], and continuous-time processes, such as fractional Brownian motion [15], Cox-Ingersoll-Ross processes [16] and Ornstein-Uhlenbeck (OU) processes [17]. The OU process, a continuous-time analogue of the discrete-time autoregressive process of order 1, is particularly suited for analysing the impact of autocorrelated noise on parameter estimation because the extent of the autocorrelation between time points depends solely on the magnitude of the time between them. Therefore, in this study, we use the OU process to model autocorrelated observation noise.

This paper explores optimal experimental design within the context of both uncorrelated (independent, identically distributed (IID)) and autocorrelated (OU distributed) observation noise, specifically focussing on the logistic model, which is ubiquitously applied in the modelling of biological systems. We suggest an approach to optimal experimental design to minimise parameter uncertainty, utilising both local sensitivity measures derived from the Fisher information matrix and global sensitivity measures derived from Sobol' indices as objective functions, and we use our approach to show that changes in the structure of the observation error model can significantly impact the optimal experimental design.

The paper is structured as follows. Section 2 introduces the relevant methodology, including the logistic model and the information measures and noise processes to be used in this work. It also describes the formulation of the optimal experimental design problem and gives an overview of the profile-likelihood approach for estimating parameters and confidence intervals. The main results, including the impact of the specific noise process on parameter estimates and the optimal observation schemes, are given in Section 3. The paper concludes with a brief discussion in Section 4.

2 Methods

In this section, we outline the mathematical model and relevant statistical techniques used throughout the paper. All relevant code to implement the techniques and reproduce the figures in this work is provided on Github at https://github.com/Jane0917728/Optimal-Experimental-Design-and-Parameter-Estimation-with-Autocorrelated-Observation-Noise .

2.1 Mathematical model

We consider the logistic model for population growth,
\[
\frac{dC(t)}{dt} = r C(t)\Big(1 - \frac{C(t)}{K}\Big), \tag{1}
\]
where $C(t)$ is the population at time $t$, $r > 0$ is the growth rate, $K > 0$ is the carrying capacity, with $K = \lim_{t\to\infty} C(t)$, and $C(0) = C_0 > 0$ is the initial population size.
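As a quick illustrative check (not part of the paper), Equation (1) can be integrated numerically with the parameter values used throughout this work and compared against the well-known closed-form logistic solution $C(t) = C_0 K /((K - C_0)e^{-rt} + C_0)$:

```python
import numpy as np

r, K, C0, t_final = 0.2, 50.0, 4.5, 80.0   # parameter values used in this work

def logistic_exact(t):
    # Closed-form solution of dC/dt = r C (1 - C/K), C(0) = C0.
    return C0 * K / ((K - C0) * np.exp(-r * t) + C0)

# Fourth-order Runge-Kutta integration of Equation (1).
f = lambda C: r * C * (1 - C / K)
dt, C = 0.01, C0
for _ in range(int(t_final / dt)):
    k1 = f(C); k2 = f(C + dt * k1 / 2); k3 = f(C + dt * k2 / 2); k4 = f(C + dt * k3)
    C += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(C, logistic_exact(t_final))   # both very close to the carrying capacity K = 50
```

By $t = 80$ the solution is within about $10^{-4}$ of the carrying capacity, which is why this final time is described as sufficient for the population to be very close to $K$.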
For $C_0 \ll K$, population growth is characterised by an initial exponential growth phase in which resources are assumed abundant, a deceleration phase where competition for resources becomes important, and a steady state phase in which the population stabilises at the carrying capacity. The logistic model can be solved explicitly to give
\[
C(t) = \frac{C_0 K}{(K - C_0)e^{-rt} + C_0}. \tag{2}
\]
In the model, three parameters $\theta = (\theta_1, \theta_2, \theta_3) = (r, K, C_0)$ are to be estimated, and we will often write $C(t;\theta)$ to highlight the dependence of the model output on the parameters. The "true" parameters are denoted by $\theta^* = (r^*, K^*, C_0^*)$. Throughout this work, we set $r = 0.2$, $K = 50$ and $C_0 = 4.5$, and we consider the model up to a final time $t_{\mathrm{final}} = 80$, which is sufficient for the population to be very close to carrying capacity.

We assume that it is possible to observe the state of the system at $n_s$ (strictly positive) discrete observation times $t_1, \ldots, t_{n_s}$, and that observations are of the form
\[
Y = C(\theta^*) + \epsilon, \tag{3}
\]
where
\[
Y = [Y(t_1), \ldots, Y(t_{n_s})], \tag{4}
\]
\[
C(\theta^*) = [C(t_1;\theta^*), \ldots, C(t_{n_s};\theta^*)], \tag{5}
\]
\[
\epsilon = [\epsilon(t_1), \ldots, \epsilon(t_{n_s})], \tag{6}
\]
represent the vector forms of the observations, model outputs and measurement noise at the discrete observation times $t_1, \ldots, t_{n_s}$, respectively.

2.2 Measures of information

To quantify the amount of information about the parameters contained in a dataset, we will consider both the Fisher information matrix, which provides a local measure of uncertainty, and Sobol' indices, which provide a global measure of uncertainty.

Fisher information matrix. The Fisher information matrix can be defined using the expectation of the Hessian of the log-likelihood function as [1, 2]
\[
F = \{F_{ij}\} = \Big\{ -\mathbb{E}\Big[ \frac{\partial^2 L(\theta|Y)}{\partial\log(\theta_i)\,\partial\log(\theta_j)} \Big] \Big\}, \tag{7}
\]
where $\mathbb{E}[\cdot]$ refers to the expectation operator and $L(\theta|Y)$ is the log-likelihood of parameters $\theta$ given data $Y$. The inverse of the Fisher information matrix offers a lower bound on the covariance matrix for unbiased estimators via the Cramér-Rao bound [5]. As such, maximising an objective function based on the Fisher information matrix is equivalent to minimising the uncertainty of the estimated parameters governed by the covariance matrix.

Sobol' indices. Sobol' indices are based on the idea of variance decomposition and evaluate the contribution of each parameter's variance to the total variance of the measurements [6, 18, 19]. They are particularly suitable for measuring the sensitivity of complex models where multiple parameters influence the measurements, and the relationships between them are non-linear or involve interactions among parameters.

Given a model of the form $M = M(t;\theta)$, where $\theta = (\theta_1, \ldots, \theta_p)$ (recall that here $p = 3$ with $\theta_1 = r$, $\theta_2 = K$ and $\theta_3 = C_0$), the total-effect Sobol' index is defined, at time $t$, as
\[
S_i(t) = \frac{\mathbb{E}_{\theta_{\sim i}}\big[ V_{\theta_i}[ M(t;\theta) \mid \theta_{\sim i} ] \big]}{V[M(t;\theta)]}, \quad \text{for } i = 1, \ldots, p, \tag{8}
\]
where $V$ is the total variance, $V_{\theta_i}[M(t;\theta)\mid\theta_{\sim i}]$ denotes the variance of $M(t;\theta)$ over all possible values of $\theta_i$ with $\theta_{\sim i}$, the set of parameters excluding $\theta_i$, held fixed, and the outer expectation $\mathbb{E}_{\theta_{\sim i}}$ averages this quantity over all possible values of $\theta_{\sim i}$. $S_i(t)$ measures the total effect of $\theta_i$ on the model output $M(t;\theta)$, including both its direct influence and its interactions with other parameters, and it is normalised such that $S_i(t)\in[0,1]$. A high value of $S_i(t)$ indicates that the $i$th parameter has a strong effect on the output.

2.3 Independent and identically distributed observation noise

For IID Gaussian observation noise we have $\epsilon \sim \mathcal{N}(0, \sigma^2_{\mathrm{IID}} I)$, where $\sigma^2_{\mathrm{IID}} > 0$ is the noise variance, so that $Y \sim \mathcal{N}(C(\theta^*), \sigma^2_{\mathrm{IID}} I)$ and the log-likelihood function is given by
\[
L(\theta|Y) = -\frac{n_s}{2}\log(2\pi\sigma^2_{\mathrm{IID}}) - \frac{1}{2\sigma^2_{\mathrm{IID}}}(Y - C(\theta))(Y - C(\theta))^T. \tag{9}
\]
Applying maximum likelihood estimation to estimate the noise variance gives
\[
\hat{\sigma}^2_{\mathrm{IID}} = \frac{1}{n_s}(Y - C(\theta))(Y - C(\theta))^T = \frac{1}{n_s}\sum_{s=1}^{n_s}\big(Y(t_s) - C(t_s;\theta)\big)^2. \tag{10}
\]
Throughout this work we will fix the estimate for $\sigma^2_{\mathrm{IID}}$ to be the maximum likelihood estimate (MLE) $\hat{\sigma}^2_{\mathrm{IID}}$ given in Equation (10), and focus on optimising the measurement times to best estimate the model parameters $\theta$.

To calculate the Fisher information matrix, $F$, we first calculate
\[
\frac{\partial^2 L(\theta|Y)}{\partial\log(\theta_i)\,\partial\log(\theta_j)} = -\frac{1}{\sigma^2_{\mathrm{IID}}}\theta_i\theta_j \frac{\partial C}{\partial\theta_i}\Big(\frac{\partial C}{\partial\theta_j}\Big)^T + \frac{1}{\sigma^2_{\mathrm{IID}}}\theta_i\theta_j (Y - C(\theta))\Big(\frac{\partial^2 C}{\partial\theta_i\partial\theta_j}\Big)^T. \tag{11}
\]
Exploiting the fact that $\mathbb{E}[Y - C(\theta)] = 0$, the elements of the Fisher information matrix can be written
\[
F_{ij} = \frac{1}{\hat{\sigma}^2_{\mathrm{IID}}}\theta_i\theta_j \frac{\partial C}{\partial\theta_i}\Big(\frac{\partial C}{\partial\theta_j}\Big)^T. \tag{12}
\]
We construct a global information matrix, $G$, for IID Gaussian observation noise based on the total-effect Sobol' indices, with entries
\[
G_{ij} = S_i S_j^T = \sum_{s=1}^{n_s} S_i(t_s) S_j(t_s), \tag{13}
\]
where $S_i = [S_i(t_1), \ldots, S_i(t_{n_s})]$ is a row vector composed of the total-effect Sobol' indices of parameter $i$ at each observation time $t_s$, $s = 1, \ldots, n_s$, as defined in Equation (8). Sobol' indices are usually computed using Monte Carlo simulation [8, 19]. In this paper, we compute the Sobol' indices directly using the Global Sensitivity Analysis Toolbox for Matlab [20].

2.4 Autocorrelated observation noise

We use an OU process, as defined by the stochastic differential equation
\[
d\epsilon(t) = -\phi\,\epsilon(t)\,dt + \sigma_{\mathrm{OU}}\,dW(t), \quad \epsilon(0) = \epsilon_0, \tag{14}
\]
to model autocorrelated measurement noise, with $\epsilon_0 \sim \mathcal{N}(0, \sigma^2_{\mathrm{OU}}/(2\phi))$. In Equation (14), $\phi > 0$ is the mean-reversion rate and $\sigma_{\mathrm{OU}} > 0$, often known as the volatility coefficient, controls the intensity of the random fluctuations introduced by the Wiener process $W(t)$. The solution to Equation (14) is given by
\[
\epsilon(t) = \epsilon_0 e^{-\phi t} + \sigma_{\mathrm{OU}} \int_0^t e^{-\phi(t-\tau)}\, dW(\tau), \tag{15}
\]
where the second term follows $\mathcal{N}(0, \sigma^2_{\mathrm{OU}}(1 - e^{-2\phi t})/(2\phi))$. Hence, $\epsilon(t)$ is normally distributed with
\[
\mathbb{E}[\epsilon(t)] = \mathbb{E}[\epsilon_0] e^{-\phi t} = 0, \tag{16}
\]
\[
V[\epsilon(t)] = e^{-2\phi t} V[\epsilon_0] + V\Big[ \sigma_{\mathrm{OU}}\int_0^t e^{-\phi(t-\tau)}\, dW(\tau) \Big] = \frac{\sigma^2_{\mathrm{OU}}}{2\phi}. \tag{17}
\]
Note that the choice of $\epsilon(0) \sim \mathcal{N}(0, \sigma^2_{\mathrm{OU}}/(2\phi))$ means that $\epsilon(t)$ follows the stationary distribution at all times.

We use the OU process described above to model autocorrelated observation noise with data collected at observation time points $t_1, \ldots, t_{n_s}$. We set $\epsilon = [\epsilon(t_1), \epsilon(t_2), \ldots, \epsilon(t_{n_s})]$ and use the fact that $\epsilon(t_1) \sim \mathcal{N}(0, \sigma^2_{\mathrm{OU}}/(2\phi))$, with Equation (14) describing the noise at subsequent times $t_2, \ldots, t_{n_s}$. Given Equation (15), it follows that for $i = 1, \ldots, n_s - 1$ the conditional distribution of $\epsilon(t_{i+1})$ given $\epsilon(t_i)$ is
\[
\epsilon(t_{i+1}) \mid \epsilon(t_i) \sim \mathcal{N}\Big( \epsilon(t_i) e^{-\phi\Delta_i},\ \frac{\sigma^2_{\mathrm{OU}}}{2\phi}\big(1 - e^{-2\phi\Delta_i}\big) \Big), \tag{18}
\]
where $\Delta_i = t_{i+1} - t_i$ is the time between consecutive observations. The autocorrelation is given by $AC(\Delta_i) = e^{-\phi\Delta_i}$ for time lag $\Delta_i$.
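The conditional distribution in Equation (18) provides an exact simulation scheme for the noise at the observation times. The following sketch (illustrative sample sizes and seed; the variance is chosen to match the $\sigma^2_{\mathrm{OU}}/(2\phi) = 9$ used in the results) checks the stationary variance from Equation (17) and the lag-$\Delta$ autocorrelation empirically:

```python
import numpy as np

phi, sigma2_ou, delta = 0.02, 0.36, 8.0   # so that sigma2_ou / (2 phi) = 9
n_paths, n_steps = 20000, 50
rng = np.random.default_rng(0)

var_stat = sigma2_ou / (2 * phi)          # stationary variance, Equation (17)
a = np.exp(-phi * delta)                  # autocorrelation over one gap, AC(delta)
cond_sd = np.sqrt(var_stat * (1 - a**2))  # conditional sd from Equation (18)

eps = rng.normal(0.0, np.sqrt(var_stat), size=n_paths)  # stationary initial condition
for _ in range(n_steps):
    prev = eps
    eps = a * prev + cond_sd * rng.normal(size=n_paths)  # exact OU update

print(eps.var())                      # close to 9
print(np.corrcoef(prev, eps)[0, 1])   # close to exp(-0.16), about 0.85
```

Because the update uses the exact transition density rather than an Euler discretisation, the empirical variance and lag-one correlation match the theoretical values up to Monte Carlo error.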
Hence we see that the autocorrelation diminishes as the time interval between observations increases, and that the smaller the value of $\phi$, the stronger the autocorrelation.

The likelihood function can be expressed as
\[
L(\epsilon) = -\frac{n_s}{2}\log\Big(\frac{\pi\sigma^2_{\mathrm{OU}}}{\phi}\Big) - \sum_{i=1}^{n_s-1}\frac12\log\big(1 - e^{-2\phi\Delta_i}\big) - \frac{\phi}{\sigma^2_{\mathrm{OU}}}\epsilon^2(t_1) - \frac{\phi}{\sigma^2_{\mathrm{OU}}}\sum_{i=1}^{n_s-1} \frac{\big(\epsilon(t_{i+1}) - \epsilon(t_i)e^{-\phi\Delta_i}\big)^2}{1 - e^{-2\phi\Delta_i}}, \tag{19}
\]
or, equivalently,
\[
L(\epsilon) = -\frac{n_s}{2}\log(2\pi) - \frac12\log(\det(\Sigma)) - \frac12\epsilon\Sigma^{-1}\epsilon^T, \tag{20}
\]
where $\Sigma \in \mathbb{R}^{n_s\times n_s}$ is the covariance matrix of the OU process, with entries
\[
\Sigma_{ij} = \mathrm{Cov}[\epsilon(t_i), \epsilon(t_j)] = \frac{\sigma^2_{\mathrm{OU}}}{2\phi} e^{-\phi|t_i - t_j|}, \tag{21}
\]
for $i, j = 1, \ldots, n_s$. Given that $\epsilon = Y - C(\theta)$, we can write
\[
L(\theta|Y) = -\frac{n_s}{2}\log(2\pi) - \frac12\log(\det(\Sigma)) - \frac12(Y - C(\theta))\Sigma^{-1}(Y - C(\theta))^T. \tag{22}
\]
To avoid issues with practical identifiability, we assume that $\phi$ is known, and we apply maximum likelihood estimation to estimate the volatility coefficient as
\[
\hat{\sigma}^2_{\mathrm{OU}} = \frac{2\phi}{n_s}(Y - C(\theta))\tilde{\Sigma}^{-1}(Y - C(\theta))^T, \tag{23}
\]
where the entries of $\tilde{\Sigma}$ are
\[
\tilde{\Sigma}_{ij} = e^{-\phi|t_i - t_j|}. \tag{24}
\]
We note that $\sigma^2_{\mathrm{OU}}$ does not affect the process of optimising the experimental design as it serves merely as a scaling factor in the objective function.

The entries of the Fisher information matrix, $F$, can be written
\[
F_{ij} = \theta_i\theta_j \frac{\partial C}{\partial\theta_i}\Sigma^{-1}\Big(\frac{\partial C}{\partial\theta_j}\Big)^T, \tag{25}
\]
and entries of the global sensitivity matrix, $G$, are defined as
\[
G_{ij} = S_i \Sigma^{-1} S_j^T, \tag{26}
\]
where $\Sigma$ is as defined in Equation (21) and, as before, $S_i = [S_i(t_1), \ldots, S_i(t_{n_s})]$ is a row vector composed of the total-effect Sobol' indices of parameter $i$ at each time $t_s$, $s = 1, \ldots, n_s$, as defined in Equation (8). It is worth noting that the inverse of $\Sigma$, also known as the precision matrix, contains information about the conditional dependencies between different time points, and so its inclusion in the sensitivity matrices $F$ and $G$ enables quantification of how parameter sensitivities are influenced by temporal correlations in the data.

2.5 Optimal experimental design

The objective of optimal experimental design in this work is to improve the reliability of parameter estimation through optimising the experimental conditions; in this work, we will focus on optimising the observation times $t_1, \ldots, t_{n_s}$.

Fisher information matrix. In the literature, there are three main quantities calculated from the Fisher information matrix that are routinely used to measure the quality of parameter estimates, namely the trace, the determinant and the minimum expected information gain [21, 22]. We found these metrics to yield similar results when used for optimising the selection of observation times. Therefore, we choose to present results generated using the determinant of the Fisher information matrix. We denote the set of measurement times as $\mathbf{t} = [t_1, \ldots, t_{n_s}]$, so that the aim is to optimise
\[
\max_{\mathbf{t}} f = \max_{\mathbf{t}} \big[ \det(F(\theta,\mathbf{t})) \big], \tag{27}
\]
where $F(\theta,\mathbf{t})$ is the Fisher information matrix evaluated for parameters $\theta$ at times $\mathbf{t}$. We impose the following constraints on $\mathbf{t}$:
\[
t_1 \geq 0, \quad t_{n_s} \leq t_{\mathrm{final}}, \quad t_{s+1} - t_s \geq \Delta t \ \text{ for } s = 1, \ldots, n_s - 1, \tag{28}
\]
where $\Delta t$ is the minimum time interval between observations that is practical for experimental design (throughout this work we take $\Delta t = 2.0$). This optimal experimental design problem is a non-convex, continuous nonlinear programming problem [23]. We solve it using the fmincon function in Matlab with the interior-point method.
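To make the objective in Equation (27) concrete, the sketch below evaluates $\det(F)$ for the logistic model at a uniform design under both noise models, with the sensitivities $\partial C/\partial\theta_i$ approximated by central finite differences; the uniform design and the step sizes are illustrative choices, not the paper's implementation:

```python
import numpy as np

theta = np.array([0.2, 50.0, 4.5])   # (r, K, C0)

def model(t, th):
    r, K, C0 = th
    return C0 * K / ((K - C0) * np.exp(-r * t) + C0)

def sensitivities(t, th, h=1e-6):
    # dC/dtheta_i at each observation time, by central differences.
    J = np.empty((3, len(t)))
    for i in range(3):
        d = np.zeros(3)
        d[i] = h * th[i]
        J[i] = (model(t, th + d) - model(t, th - d)) / (2 * d[i])
    return J

def det_F(t, Sigma):
    # Log-parameterised Fisher matrix: F_ij = theta_i theta_j dC/dtheta_i Sigma^{-1} (dC/dtheta_j)^T.
    J = sensitivities(t, theta)
    F = np.outer(theta, theta) * (J @ np.linalg.inv(Sigma) @ J.T)
    return np.linalg.det(F)

t_uniform = np.linspace(0.0, 80.0, 11)

det_iid = det_F(t_uniform, 9.0 * np.eye(11))   # IID noise, Equation (12)
phi = 0.02                                      # OU noise, Equations (21) and (25)
Sigma_ou = 9.0 * np.exp(-phi * np.abs(t_uniform[:, None] - t_uniform[None, :]))
det_ou = det_F(t_uniform, Sigma_ou)
print(det_iid, det_ou)   # both positive; the correlated model changes the objective
```

Under Equation (12) the IID case corresponds to $\Sigma = \hat{\sigma}^2_{\mathrm{IID}} I$, so the same routine covers both Equations (12) and (25); an optimiser over $\mathbf{t}$ (fmincon in the paper) would maximise this determinant subject to the constraints in Equation (28).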
To mitigate the risk of the algorithm converging to local optima, we implement a random restart technique, which involves generating 50 different random initial guesses to serve as starting points for fmincon.

Global information matrix. Since Sobol' indices need to be calculated using a Monte Carlo method, it is prohibitively time-consuming to consider the same optimisation problem as that for the Fisher information matrix. To make progress, we instead formulate a mixed-integer nonlinear programming problem. We first, as for the Fisher information matrix, define $\Delta t$ to be the minimum time interval between observations that is practical for experimental design (again, taking $\Delta t = 2.0$ throughout this work). We then define a fine grid of $k$ potential observation times $[t^p_1, \ldots, t^p_k]$, with $t^p_i = t^p_1 + (i-1)\Delta t$ for $i = 1, \ldots, k-1$, and $t^p_k = t_{\mathrm{final}}$, where $k$ is the largest integer such that $t^p_{k-1} \leq t_{\mathrm{final}}$. The optimal experimental design aim then is to select the $n_s < k$ time points that give rise to the optimal objective function, which is now $\det(G)$, where $G$ is the global information matrix defined in Equation (13) (uncorrelated noise) or Equation (26) (correlated noise). We formulate the optimisation problem using the decision variables $I = [I_1, \ldots, I_k]$, where $I_i \in \{0,1\}$ for $i = 1, \ldots, k$ indicates whether an observation is made at time point $t^p_i$ or not. The aim is then to optimise
\[
\max_{I} g = \max_{I} \big[ \det(G(\theta, I)) \big], \tag{29}
\]
subject to the constraint
\[
\sum_{i=1}^{k} I_i = n_s. \tag{30}
\]
We solve the optimisation problem in Equations (29)-(30) using PlatEMO, an evolutionary multi-objective optimisation platform for Matlab [24, 25]; in particular, we use the competitive swarm optimizer. Throughout this work we use the parameter ranges $r \in [0.14, 0.26]$, $K \in [35, 65]$ and $C_0 \in [3.15, 5.85]$.

2.6 Profile likelihood approach

In order to assess the performance of the observation time points selected by solving the optimal experimental design problem, we calculate the profile likelihood. The practical identifiability of a parameter can be determined by examining the shape and width of the profile likelihood function [26]. A parameter is considered practically identifiable if its profile likelihood is well-defined, meaning it is unimodal with a single peak and a finite confidence interval. Conversely, a flat or broad profile likelihood suggests that a wide range of parameter values yield similar fits to the data, thereby indicating that the parameter is not practically identifiable.

To calculate the univariate profile likelihoods, we partition the parameter vector $\theta$ into the parameter of interest, $\theta_i$, and nuisance parameters, $\theta_{\sim i}$. We define the univariate, normalised profile log-likelihood function for the parameter of interest $\theta_i$ as
\[
l_p(\theta_i | Y) = \sup_{\theta_{\sim i}} L(\theta|Y) - \sup_{\theta} L(\theta|Y). \tag{31}
\]
Given this definition, we can use the threshold $l_p(\theta_i|Y) = -1.92$ to define an approximate 95% confidence interval for $\theta_i$ [27], as follows:
\[
\mathrm{CI} = \big\{ \theta_i \,\big|\, l_p(\theta_i|Y) \geq -\chi^2_{0.95;1}/2 = -1.92 \big\}, \tag{32}
\]
where $\chi^2_{0.95;1}$ represents the critical value of the chi-square distribution with one degree of freedom at the 95% confidence level. We use the fmincon function in Matlab to calculate the MLE, the profile likelihood and 95% confidence interval for each parameter.
We use the 95% confidence intervals to generate prediction intervals for the model [28]. The prediction intervals are calculated through parameter sampling and log-likelihood evaluation: parameters are uniformly sampled and, for each sample, the normalised log-likelihood is com- puted and compared to a threshold of ฯ‡2 0.95;3/2. Parameters that exceed this threshold are retained, ensuring they lie within the 95% confidence region. From the valid samples, Mpop- ulation trajectories are generated at time points t= 1, . . . , n s, and the upper and lower bounds of these trajectories are calculated. Noise model bounds, based on the 5% and 95% quantiles of the relevant distribution, are then added into the upper and lower bounds, respectively, so as to form the prediction interval. 3 Results In this section we present the optimal experimental design results and the impact of optimally choosing measurement times on the parameter estimates. 3.1 Parameter identifiability under the different noise models We first explore the extent to which the model parameters can be estimated under different noise models when the data are sampled at regular time intervals, with 11 observations at times 11 Figure 1: Parameter identifiability under the different noise models, with data collected at 11 regularly spaced time points t= 0,8,16, . . . , 80. The left column shows the sampled data, the underlying model solution using the input parameters, and the solution generated using the MLE parameters. The shaded region shows the 95% prediction
https://arxiv.org/abs/2504.19233v1
interval generated using the 95% confidence intervals from the profile likelihoods. The right-most three panels show the profile likelihoods for each parameter. The results in the top row are generated using uncorrelated Gaussian noise with ฯƒ²_IID = 9.0. The results in the middle row are generated using correlated OU noise with ฯ• = 0.02 and ฯƒ²_OU/(2ฯ•) = 9.0. The results in the bottom row are generated using a misspecified noise model: the data are generated using correlated OU noise with ฯ• = 0.02 and ฯƒ²_OU/(2ฯ•) = 9.0, whilst the analysis is carried out assuming IID Gaussian noise with ฯƒ²_IID = 9.0.

Results for ฯ• = 0.02, with observations at t = 0, 8, 16, . . . , 80, are shown in Figure 1, with similar results for ฯ• = 0.1 (less correlated noise) shown in Supplementary Figure S1. The left column shows the sampled data (circles) along with the underlying model solution using the input parameters (dashed line) and the solution generated using the MLE parameters (solid line), and the shaded region shows the 95% prediction interval. The top row shows the results from the uncorrelated noise model, the second row shows the results from the correlated noise model, whilst the bottom row shows the results in the case of misspecification in the noise model, where correlated noise was used to generate the observations but the profile likelihoods were generated using the uncorrelated noise model. To enable a comparison of the effects of uncorrelated and correlated observation noise on parameter inference under consistent variance, we choose parameters such that ฯƒ²_IID = ฯƒ²_OU/(2ฯ•). We see that in all cases all model parameters are practically identifiable, with the widths of the confidence intervals depending on both the noise model and the data. A key point to note is that incorrectly assuming uncorrelated noise can impact the parameters in different ways. For example, the carrying capacity, K, is predicted to be more confidently estimated, whilst the uncertainty in the growth rate is predicted to be much higher in the misspecified case.

Figure 2: Change in the mean 95% confidence interval for parameters r, K and C_0 as the variance increases under both correlated and uncorrelated noise, and when the noise is misspecified (as per Figure 1). Top row: ฯƒ² = ฯƒ²_IID = ฯƒ²_OU/(2ฯ•) with ฯ• = 0.02 so that the variance is equivalent across the different noise models. Bottom row: ฯƒ² = ฯƒ²_IID = ฯƒ²_OU with ฯ• = 0.02 so that the variance is larger in the correlated noise process than in the uncorrelated noise process. In all cases, results were generated by averaging the confidence interval width from 1000 simulations for each noise model and parameter set, and 11 observations were used.

Figure 2 shows how the width of the confidence intervals changes with the variance for both uncorrelated and correlated noise, as well as misspecified noise¹, in each case using 11 regularly spaced observation time points, t = 0, 8, 16, . . . , 80. The top row shows results with ฯƒ² = ฯƒ²_IID = ฯƒ²_OU/(2ฯ•), and the mean reversion rate of the OU process held constant at ฯ• = 0.02

¹Recall that in the misspecified noise case, the data are generated using correlated
data but the confidence intervals are generated assuming the data are uncorrelated. so that the noise variance is equivalent across both noise models. We see that, for all values of ฯƒ², the confidence intervals for all parameters are almost identical for uncorrelated noise and misspecified noise, with greater confidence in the parameter estimate for the carrying capacity, K, and less confidence in the estimates for the growth rate, r, and the initial condition, C_0, compared to the correlated noise case. This plot highlights the potential pitfalls of incorrect assumptions about the noise model: they can lead a modeller to have either too little, or too much, confidence in parameter estimates. The bottom row of Figure 2 illustrates that setting ฯƒ²_IID = ฯƒ²_OU with significant noise correlation (ฯ• = 0.02) results in greatly reduced confidence in the parameter estimates for the correlated noise case compared to the uncorrelated and misspecified cases. This is to be expected as, in this case, the variance of the correlated noise process is 25 times larger than that of the uncorrelated noise process.

3.2 Optimal experimental design

In this section, we use the framework outlined in Section 2.5 to explore how the optimal time points for observations change as the total number of observations, n_s, is varied, and also how correlations in the observation noise influence the optimal measurement protocol. We use both the local measures obtained using the Fisher information matrix, and the global measures obtained using the total Sobol' index, as outlined in Section 2.2. All results are generated using ฯ• = 0.02, which entails a significant degree of correlation in the noise process. Figure 3 shows the results of optimising the experimental design for uncorrelated noise, detailing the placement of the observations as the total number of observations is varied from three to ten.
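The criterion being maximised here, det(G(ฮธ, I)) in Equation (29), can be sketched for the uncorrelated case as follows. The paper uses PlatEMO's competitive swarm optimizer in Matlab; this Python sketch instead brute-forces det(JᵀJ/ฯƒ²) over five-point subsets of a coarse candidate grid, with finite-difference sensitivities. The nominal parameters are illustrative mid-range values, and the grid and search method are simplifications of the paper's setup.

```python
import numpy as np
from itertools import combinations

def logistic(t, r, K, C0):
    return K * C0 * np.exp(r * t) / (K + C0 * (np.exp(r * t) - 1.0))

def sensitivities(t, theta, h=1e-6):
    # J[k, j] = dC(t_k)/dtheta_j via central finite differences
    J = np.empty((t.size, len(theta)))
    for j in range(len(theta)):
        up = list(theta); dn = list(theta)
        up[j] += h * theta[j]; dn[j] -= h * theta[j]
        J[:, j] = (logistic(t, *up) - logistic(t, *dn)) / (2.0 * h * theta[j])
    return J

def d_optimal(candidates, theta, n_obs, sigma2=9.0):
    # D-optimality: maximise det F with F = J^T J / sigma^2 over all
    # n_obs-subsets of the candidate observation times
    best_idx, best_det = None, -np.inf
    for idx in combinations(range(candidates.size), n_obs):
        J = sensitivities(candidates[list(idx)], theta)
        d = np.linalg.det(J.T @ J / sigma2)
        if d > best_det:
            best_idx, best_det = idx, d
    return candidates[list(best_idx)], best_det

theta = (0.2, 50.0, 4.5)              # illustrative mid-range (r, K, C0)
cand = np.linspace(0.0, 80.0, 21)     # candidate grid, spacing 4
times, det_F = d_optimal(cand, theta, n_obs=5)
print("D-optimal times:", times)
```

Exhaustive search is only feasible for small grids; the evolutionary optimiser used in the paper scales to the continuous, higher-dimensional version of the problem.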
It also compares the confidence in parameter estimates under both optimised and evenly spread observations when five observations are made in total. We see that both the Fisher information matrix and the global information matrix lead to experimental designs that place the observations into three groups, with the first two groups of observations lying in the first 20 time units, which corresponds to the exponential growth phase, and the third group gathered towards the final time, which corresponds to the process being at steady state (carrying capacity). These results make sense intuitively, with the first two groups of time points enabling accurate inference of the growth rate, r, and the initial condition, C_0, and the third group enabling accurate inference of the carrying capacity, K. These intuitive explanations are further supported by the plots in Figure 4, which show how the sensitivity of the Fisher information matrix and global information matrix measures to variation in each parameter changes over the course of the experiment. Our results also highlight how careful placement of the observations can enable accurate inferences to be made with fewer time points: the right-hand three plots

Figure 3: The results of optimising experimental design in the uncorrelated noise case as the number of observations, n_s, is
varied from three to ten. The top row shows the results from evenly distributed observations, whilst the second row shows the optimal experimental design derived using the Fisher information matrix, and the bottom row shows the optimal experimental design derived using the global information matrix. In each case, the left-hand plots show the observation time points, whilst the right-hand three plots show the profile likelihoods obtained from five observations. In all plots, ฯƒ²_IID = 9.0.

highlight that under the optimised protocols, all parameters can be accurately estimated using just five observations, whereas neither the growth rate, r, nor the initial condition, C_0, can be accurately inferred using five equally spaced observations. When the observation noise is correlated (see Figure 5), the optimisation results change dramatically: the majority of the observation points are evenly distributed during the first half of the experiment, until approximately 40 time units, with only a single observation in the second half of the experiment, which is generally placed at, or very close to, the terminal time, t_final. This change in experimental design is intuitive: the significant correlation in the noise process (small ฯ•) means that less information about the parameters is gained from two closely placed observations. Once again, we see that the Fisher information matrix and the global

Figure 4: (a) Fisher information sensitivity and (b) global information sensitivity, formulated in terms of the gradient of the output, C(t; ฮธ), with respect to the individual parameters. We evaluate the sensitivity at t_k = 0, 2, 4, . . . , 80, with the Fisher information sensitivity calculated as ฮธ_i ∂C(t_k)/∂ฮธ_i and the global information sensitivity calculated as S_i(t_k), as defined in Equation (8).

information matrix provide very similar results in terms of optimal placement of the observation time points.
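Under correlated noise the design criterion must account for the noise covariance. A minimal sketch of the correlated-noise information matrix is given below, assuming the standard Gaussian-likelihood form F = Jᵀฮฃ⁻¹J with the stationary OU covariance ฮฃ_ij = [ฯƒ²_OU/(2ฯ•)] e^{−ฯ•|t_i − t_j|}; the paper's exact G(ฮธ, I) may differ in detail, and the design and parameter values here are illustrative.

```python
import numpy as np

def logistic(t, r, K, C0):
    return K * C0 * np.exp(r * t) / (K + C0 * (np.exp(r * t) - 1.0))

def sensitivities(t, theta=(0.2, 50.0, 4.5), h=1e-6):
    # Finite-difference sensitivities dC(t_k)/dtheta_j
    J = np.empty((t.size, 3))
    for j in range(3):
        up = list(theta); dn = list(theta)
        up[j] += h * theta[j]; dn[j] -= h * theta[j]
        J[:, j] = (logistic(t, *up) - logistic(t, *dn)) / (2.0 * h * theta[j])
    return J

def fim_ou(t, phi, var=9.0):
    # Correlated-noise information matrix F = J^T Sigma^{-1} J, with the
    # stationary OU covariance Sigma_ij = var * exp(-phi * |t_i - t_j|),
    # where var = sigma_OU^2 / (2 phi) is held fixed as phi varies
    Sigma = var * np.exp(-phi * np.abs(t[:, None] - t[None, :]))
    J = sensitivities(t)
    return J.T @ np.linalg.solve(Sigma, J)

design = np.array([0.0, 13.0, 26.0, 39.0, 80.0])   # spread-out 5-point design
for phi in (0.02, 0.1, 2.0):
    F = fim_ou(design, phi)
    print("phi = %.2f  det(F) = %.3g" % (phi, np.linalg.det(F)))
```

As ฯ• grows the off-diagonal entries of ฮฃ decay and F reduces to the uncorrelated case JᵀJ/ฯƒ²; comparing det(F) across candidate designs at small ฯ• is what drives the spread-out placements described above.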
We also note that with correlated observation noise, all parameters can in fact be inferred from five observations without any optimisation (evenly spacing the time points provides closed confidence intervals for all model parameters), although optimisation clearly reduces the uncertainty in the parameter estimates.

3.3 Impact of optimal experimental design upon parameter identifiability

In this section, we explore how changes in the number of observation time points impact the confidence in parameter estimates. The mean parameter confidence intervals, calculated using the profile likelihood method presented in Section 2.6, are shown in Figure 6. The top row displays results for the uncorrelated noise model, the second row for the correlated noise model, and the bottom row for the case of model misspecification, where correlated noise was used to generate the observations but the profile likelihoods were computed assuming an uncorrelated noise model. Additional results, for different values of ฯ•, are shown in Supplementary Information Figure S2. Figure 6 shows the intuitive result that, in all cases, parameter estimates become more confident as the number of observation time points is increased, and that optimising the timing of observations generally improves the confidence in parameter estimates compared to the case of evenly distributed observation time points. The exception to this rule is the carrying capacity parameter, K, for which the confidence intervals are often marginally smaller for evenly spaced
time points. This is a direct result of the fact that the optimisation algorithms place the majority of the observations at early times in order to accurately estimate the growth rate, r,

Figure 5: The results of optimising experimental design in the correlated noise case as the number of observations, n_s, is varied from three to ten. The top row shows the results from evenly distributed observations, whilst the second row shows the optimal experimental design derived using the Fisher information matrix, and the bottom row shows the optimal experimental design derived using the global information matrix. In each case, the left-hand plots show the observation time points, whilst the right-hand three plots show the profile likelihoods obtained from five observations. In all plots, ฯƒ²_OU/(2ฯ•) = 9.0.

and the initial condition, C_0. Note that for small numbers of observation time points (n_s = 3, 4) the confidence intervals are half open when the observation points are evenly distributed, hence we do not display results in those cases. In addition, note that when the number of evenly distributed points increases from five to seven in Figure 6, the width of the confidence interval for the parameter r increases. One possible explanation is that among the five evenly distributed points, some provide more information for estimating the parameter r. Close investigation reveals that this is indeed the case: observations placed close to t = 20 are important for the estimation of r (see Supplementary Figure S3). It is worth noting that for the uncorrelated noise model, the mean confidence intervals for the carrying capacity, K, generated using the local and global approaches to optimising the observation time points overlap. This is because the selected

Figure 6: Mean parameter confidence intervals in the case of uncorrelated noise (top row), correlated noise (middle row) and misspecified noise (bottom row).
In each case, we plot the mean confidence interval as a function of the number of observation points, n_s. The plots in each row are generated using 1000 simulations with ฯƒ_IID = 0.8 (top row), and ฯƒ_OU = 0.16 and ฯ• = 0.02 (middle and bottom rows), so that ฯƒ²_IID = ฯƒ²_OU/(2ฯ•) = 0.64.

observation points lie in the saturation region and are nearly identical across the methods (as shown in the left column of Figure 3).

3.4 Impact of the autocorrelation level on experimental design

In this section, we investigate how the mean-reversion rate, ฯ•, reflecting the level of autocorrelation in the observation noise, influences the optimised distribution of observation times. Figure 7 shows how the optimised observation times (generated using the Fisher information matrix) change as ฯ• is increased (which corresponds to decreasing the autocorrelation).

Figure 7: The optimal observation time points as a function of the mean-reversion rate, ฯ•. Note that the degree of autocorrelation decreases as ฯ• increases.

For very high autocorrelation, the majority of measurement points are placed within the first half of the experiment in order that the growth rate, r, and the initial condition, C_0, can be accurately measured. As the degree of autocorrelation diminishes (ฯ• is increased), a measurement point is placed at,
or close to, the terminal time, which allows estimation of the carrying capacity, K. With more total observations, n_s, further points are added at the end of the time interval for larger values of ฯ•, which increases the accuracy of estimation of the carrying capacity, K. Figure 8 demonstrates how the parameter confidence intervals vary as a function of the degree of autocorrelation. In the top row the variance, ฯƒ²_OU/(2ฯ•), is held constant as ฯ• is varied, and we see that there is a complicated relationship between the confidence interval width and ฯ•. For very low values of ฯ•, which correspond to highly correlated observation noise, the majority of the observation points are placed early in the experiment, which leads to accurate estimates for the growth rate, r, and the initial condition, C_0, and far less confidence in estimates of the carrying capacity, K. As the autocorrelation decreases (with increasing ฯ•), increased numbers of observations are placed in the second half of the experiment and confidence in estimates of the carrying capacity, K, increases significantly. The bottom row demonstrates the intuitive result that, for ฯƒ_OU = 0.3 held constant and increasing ฯ• (which corresponds to decreasing the variance, ฯƒ²_OU/(2ฯ•), of the noise process), the mean confidence interval widths for all parameters decrease.

Figure 8: The mean confidence interval width for parameters estimated using 11 optimised observation time points as the mean-reversion rate, ฯ•, increases from 0.01 to 2.0. In the top row the variance is held constant at ฯƒ²_OU/(2ฯ•) = 4.0, whilst in the bottom row the variance changes with ฯƒ_OU = 0.3 held constant. In all cases, results were generated using 1000 simulations.

4 Discussion

This paper investigates the influence of different types of observation noise on optimal experimental design, where the aim is to determine the optimal observation times for accurate and confident parameter inference.
Specifically, we analyse both uncorrelated (IID, Gaussian) and correlated (OU) noise, and apply local techniques based on the Fisher information matrix, as well as global sensitivity approaches based on Sobol' indices, to optimise observation times for the logistic growth model. Our results highlight several key points. First, the uncertainty in parameter estimates depends on the type of observation noise, with some model parameters becoming more identifiable, and others less so, as the noise model is varied. Second, we demonstrate that, under both uncorrelated and correlated noise, optimising the placement of observation time points reduces the uncertainty in parameter estimates compared to naïve placement of measurement points, and that, in many cases, this allows the practitioner to confidently estimate parameter values using fewer observations. Third, our results show that the optimal observation time points are sensitive to the degree of noise correlation, with high autocorrelation favouring placement of the majority of time points in the first half of the experiment. On the whole, our results are relatively insensitive to the choice of objective function, though we highlight that use of the Fisher information matrix requires selection of parameter values a priori, whereas the global information matrix takes parameter ranges
as inputs, which means that it may be more appropriate for cases in which parameter values are relatively uncertain. The trade-off is the increased computational complexity of the global method compared to that of the local method. Our methodology is very general, in the sense that it can be applied in any context where it is possible to explicitly write down the likelihood and evaluate the associated Fisher and global information matrices. For example, it can easily be applied in the context of ordinary and partial differential equation-based models, and with a huge range of observation noise models. It would also be possible to integrate a cost-benefit analysis into the framework, for example using multi-objective optimisation to balance the costs of each observation against the quality of parameter estimates. In the future, we aim to extend our methodology to other models in mathematical biology to uncover more general insights regarding the effects of autocorrelated measurement processes on parameter estimates for differential equation-based models. Developing a method to diagnose whether noise is correlated or independent would also be beneficial. Additionally, it would be promising to optimise other experimental conditions, such as external inputs, beyond just measurement points, or to consider multi-objective optimal experimental design problems.

Acknowledgements

J.Q. would like to thank the Mathematical Institute, University of Oxford, for their support and hospitality during a visit to Oxford. R.E.B. acknowledges support from the Simons Foundation (MP-SIP-00001828).

References

[1] Gutenkunst R, Waterfall J, Casey F, Brown K, Myers C, Sethna J. 2007 Universally sloppy parameter sensitivities in systems biology. PLOS Computational Biology 3, 1871–1878.

[2] Chis OT, Villaverde AF, Banga JR, Balsa-Canto E. 2016 On the relationship between sloppiness and identifiability. Mathematical Biosciences 282, 147–161.

[3] Telen D, Vercammen D, Logist F, Van Impe J.
2014 Robustifying optimal experiment design for nonlinear, dynamic (bio)chemical systems. Computers & Chemical Engineering 71, 415–425.

[4] Zhang JF, Papanikolaou NE, Kypraios T, Drovandi CC. 2018 Optimal experimental design for predator–prey functional response experiments. Journal of The Royal Society Interface 15, 20180186.

[5] Walter E, Pronzato L, Norton J. 1997 Identification of Parametric Models From Experimental Data vol. 1. Springer.

[6] Schenkendorf R, Xie X, Rehbein M, Scholl S, Krewer U. 2018 The impact of global sensitivities and design measures in model-based optimal experimental design. Processes 6, 27.

[7] Iooss B, Lemaître P. 2015 A review on global sensitivity analysis methods. In Dellino G, Meloni C, editors, Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications, pp. 101–122. Springer.

[8] Pozzi A, Xie X, Raimondo DM, Schenkendorf R. 2020 Global sensitivity methods for design of experiments in lithium-ion battery context. IFAC-PapersOnLine 53, 7248–7255.

[9] Chu Y, Hahn J. 2013 Necessary condition for applying experimental design criteria to global sensitivity analysis results. Computers & Chemical Engineering 48, 280–292.

[10] Lambert B, Lei CL, Robinson M, Clerx M, Creswell R, Ghosh S, Tavener S, Gavaghan DJ. 2023 Autocorrelated measurement processes and inference for ordinary differential equation models of biological systems. Journal of the Royal Society Interface 20, 20220725.

[11] Fu X, Patel HP, Coppola S,
Xu L, Cao Z, Lenstra TL, Grima R. 2022 Quantifying how post-transcriptional noise and gene copy number variation bias transcriptional parameter inference from mRNA distributions. eLife 11, e82493.

[12] Kuhlmann H. 2001 Importance of autocorrelation for parameter estimation in regression models. In Proceedings of the 10th FIG International Symposium on Deformation Measurements, Orange, California. Citeseer.

[13] Simoen E, Papadimitriou C, Lombaert G. 2013 On prediction error correlation in Bayesian model updating. Journal of Sound and Vibration 332, 4136–4152.

[14] Durbin J, Koopman SJ. 2012 Time Series Analysis by State Space Methods vol. 38 Oxford Statistical Science Series. Oxford Academic, 2nd edition.

[15] Biagini F, Hu Y, Øksendal B, Zhang T. 2008 Stochastic Calculus for Fractional Brownian Motion and Applications. Springer Science & Business Media.

[16] Dereich S, Neuenkirch A, Szpruch L. 2012 An Euler-type method for the strong approximation of the Cox–Ingersoll–Ross process. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 468, 1105–1115.

[17] Maller RA, Müller G, Szimayer A. 2009 Ornstein–Uhlenbeck processes and extensions. Handbook of Financial Time Series pp. 421–437.

[18] Borgonovo E, Plischke E. 2016 Sensitivity analysis: A review of recent advances. European Journal of Operational Research 248, 869–887.

[19] Saltelli A, Sobol' IM. 1995 Sensitivity analysis for nonlinear mathematical models: numerical experience. Matematicheskoe Modelirovanie 7, 16–28.

[20] Cannavò F. 2012 Sensitivity analysis for volcanic source modeling quality assessment and model selection. Computers & Geosciences 44, 52–59.

[21] Banks HT, Holm K, Kappel F. 2011 Comparison of optimal design methods in inverse problems. Inverse Problems 27, 075002.

[22] Bauer I, Bock HG, Körkel S, Schlöder JP. 2000 Numerical methods for optimum experimental design in DAE systems.
Journal of Computational and Applied Mathematics 120, 1–25.

[23] Danilova M, Dvurechensky P, Gasnikov A, Gorbunov E, Guminov S, Kamzolov D, Shibaev I. 2022 Recent theoretical advances in non-convex optimization. In High-Dimensional Optimization and Probability: With a View Towards Data Science, pp. 79–163. Springer.

[24] Tian Y, Cheng R, Zhang X, Jin Y. 2017 PlatEMO: A MATLAB platform for evolutionary multi-objective optimization. IEEE Computational Intelligence Magazine 12, 73–87.

[25] Tian Y, Zhu W, Zhang X, Jin Y. 2023 A practical tutorial on solving optimization problems via PlatEMO. Neurocomputing 518, 190–205.

[26] Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmüller U, Timmer J. 2009 Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics 25, 1923–1929.

[27] Royston P. 2007 Profile likelihood for estimation and confidence intervals. The Stata Journal 7, 376–387.

[28] Simpson MJ, Baker RE. 2024 Parameter identifiability, parameter estimation and model prediction for differential equation models. arXiv preprint arXiv:2405.08177.

Supplementary Information

Optimal experimental design for parameter estimation in the presence of autocorrelated observation noise

Jie Qi1 and Ruth E. Baker2

1College of Information Science and Technology, Donghua University, Shanghai, China
2Mathematical Institute, University of Oxford, Oxford, United Kingdom

arXiv:2504.19233v1 [math.ST] 27 Apr 2025

S1 Parameter identifiability under different noise models

Figure S1 shows equivalent results to those shown in Figure 1 of
the main text, but with decreased noise correlation (ฯ• = 0.1, compared to ฯ• = 0.02 in the main text).

Figure S1: Parameter identifiability under the different noise models, with data collected at 11 regularly spaced time points t = 0, 8, 16, . . . , 80. The left column shows the sampled data, the underlying model solution using the input parameters, and the solution generated using the MLE parameters. The shaded region shows the 95% prediction interval generated using the 95% confidence intervals from the profile likelihoods. The right-most three panels show the profile likelihoods for each parameter. The results in the top row are generated using uncorrelated Gaussian noise with ฯƒ²_IID = 9.0. The results in the middle row are generated using correlated OU noise with ฯ• = 0.1 and ฯƒ²_OU/(2ฯ•) = 9.0. The results in the bottom row are generated using a misspecified noise model: the data are generated using correlated OU noise with ฯ• = 0.1 and ฯƒ²_OU/(2ฯ•) = 9.0, whilst the analysis is carried out assuming IID Gaussian noise with ฯƒ²_IID = 9.0.

S2 Confidence interval widths for autocorrelated noise

We perform parameter inference using measurement times optimised under both IID and OU noise assumptions, where the actual measurement noise follows an OU process. The resulting mean parameter confidence interval widths are shown in Figure S2, illustrating the impact of different autocorrelation levels in the OU noise.

Figure S2: Mean confidence interval widths generated using 1000 simulations, under OU noise with ฯƒ²_OU/(2ฯ•) = 4.0 and (from top to bottom row) ฯ• = 0.02, 0.12, 0.30. We perform parameter inference using the measurement times optimised with the FIM and Sobol' indices under both IID and OU noise assumptions.
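Reproducing experiments like these requires simulating OU observation noise with a prescribed mean-reversion rate ฯ• and stationary variance ฯƒ²_OU/(2ฯ•). The paper does not give its simulation scheme; the sketch below uses the standard exact discretisation of a stationary OU process, with illustrative values ฯƒ_OU = 0.6 and ฯ• = 0.02 (stationary variance 9.0).

```python
import numpy as np

def ou_noise(times, phi, sigma_ou, rng):
    # Exact update for a stationary OU process sampled at the given times:
    #   eps_k = a * eps_{k-1} + sqrt(var * (1 - a^2)) * xi_k,  a = exp(-phi*dt),
    # where var = sigma_ou^2 / (2*phi) is the stationary variance.
    var = sigma_ou ** 2 / (2.0 * phi)
    eps = np.empty(times.size)
    eps[0] = rng.normal(0.0, np.sqrt(var))       # start in stationarity
    for k in range(1, times.size):
        a = np.exp(-phi * (times[k] - times[k - 1]))
        eps[k] = a * eps[k - 1] + rng.normal(0.0, np.sqrt(var * (1.0 - a * a)))
    return eps

rng = np.random.default_rng(0)
t = np.arange(0.0, 88.0, 8.0)                    # 11 observation times
draws = np.array([ou_noise(t, 0.02, 0.6, rng) for _ in range(10000)])
print("variance: %.3f (target %.3f)" % (draws.var(), 0.6 ** 2 / 0.04))
lag1 = np.mean(draws[:, :-1] * draws[:, 1:]) / draws.var()
print("lag-1 autocorrelation: %.3f (target %.3f)" % (lag1, np.exp(-0.16)))
```

Because the update is exact for the OU transition density, it is valid for arbitrary (including irregular) observation times, which matters when the design places points non-uniformly.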
S3 Sensitivity of the confidence intervals to small changes in measurement times

We assessed the change in the confidence intervals under small changes in the measurement times, with the measurement times given by the rows of the following matrix:

t = [ 0, 20, 40, 60, 80;
      0, 20, 40, 60, 64, 80;
      0, 13.3, 20, 40, 60, 80;
      0, 11.4, 20, 40, 45.7, 60, 68.6, 80;
      0, 10, 20, 30, 40, 50, 60, 70, 80 ].  (S1)

The results are shown in Figure S3, and demonstrate that small changes in the measurement points can significantly affect the confidence intervals for the growth rate, r.

Figure S3: Variation of the confidence interval width as the placement of the measurement points is varied according to Equation (S1). In each case, we plot the mean confidence interval as a function of the number of observation points, n_s. The plots in each row are generated using 1000 simulations and ฯƒ²_IID = 0.64.
arXiv:2504.19331v1 [math.ST] 27 Apr 2025

Bahadur asymptotic efficiency in the zone of moderate deviation probabilities

Mikhail Ermakov

April 2025

Institute of Problems of Mechanical Engineering RAS, and St. Petersburg State University

(The research was supported by the Ministry of Science and Higher Education of the Russian Federation, project 124041500008-1.)

Abstract

For a sequence of independent identically distributed random variables having a distribution function with an unknown parameter from a set ฮ˜ ⊂ R^d, we prove an analogue of the lower bound of Bahadur asymptotic efficiency for the zone of moderate deviation probabilities. The assumptions coincide with the conditions under which the locally asymptotically minimax lower bound of Hajek-Le Cam was proved. The lower bound for local Bahadur asymptotic efficiency is a special case of this lower bound.

1 Introduction

Let X_1, . . . , X_n be independent identically distributed random variables having probability measure P_ฮธ, ฮธ ∈ ฮ˜. The probability measures P_ฮธ, ฮธ ∈ ฮ˜, are defined on a ฯƒ-algebra B of a set S. The set ฮ˜ is an open bounded subset of R^d. The value of the parameter ฮธ is unknown. We are interested in lower bounds on the asymptotic efficiency of estimators of the parameter ฮธ. There are two approaches to the study of asymptotic efficiency. One of them is a locally asymptotically minimax lower bound. The locally asymptotically minimax lower bound of Hajek-Le Cam [7, 9, 8, 11] specifies a lower bound on the asymptotic efficiency of estimators ฮธฬ‚_n = ฮธฬ‚_n(X_1, . . . , X_n) that deviate from the true value of the parameter ฮธ by an order of n^{−1/2}. In this setting we get a lower bound on the supremum of the risk of statistical estimators over any arbitrarily small neighbourhood of the true value of the parameter. Such a lower bound on asymptotic efficiency does not provide precise information about the behaviour of the risk of a statistical estimator for a specific value of the parameter.
The lower bound for Bahadur asymptotic efficiency [1, 2, 3, 8], proved for probabilities of large deviations of statistical estimators, provides lower bounds on the risks of statistical estimators at each specific point of the set of possible parameter values. In this case, only consistency of the estimators is assumed. In the zone of moderate deviation probabilities, bounds for the locally asymptotically minimax risk of statistical estimators have been established in [4, 5, 10]. The interest in this issue is due to the fact that the construction of the boundaries of confidence intervals is based on the tails of estimator distributions and can therefore be treated as a problem of moderate deviation probabilities. For logarithmic asymptotics, locally asymptotically minimax lower bounds for the moderate deviation probabilities were obtained under the same assumptions [4, 10] as the lower bound for the local asymptotic efficiency of Hajek-Le Cam [7, 9, 8, 11]. For the strong asymptotics of moderate deviation probabilities of statistical estimators, the lower bound on the asymptotically minimax risk has been proved under not very strong additional assumptions [4, 5]. In the zone of moderate deviation probabilities, one can prove lower bounds for asymptotic efficiency both in the asymptotically minimax setting and in
https://arxiv.org/abs/2504.19331v1
the Bahadur setting. For the asymptotically minimax setting the lower bounds have been proved in [4, 5, 10]. The goal of the present paper is to obtain an analogue of the Bahadur lower bound in the zone of moderate deviation probabilities, assuming the same conditions as for the lower bound on the Hajek-Le Cam local asymptotic efficiency. Note that a straightforward application of Bahadur's method of proof leads to significant additional conditions [6, 8] even for establishing the lower bound for the local Bahadur asymptotic efficiency. The lower bound for the local Bahadur asymptotic efficiency is a special case of the lower bound on the asymptotic efficiency in the zone of moderate deviation probabilities established in the present paper. For ฮ˜ ⊂ R^1, we also prove an analogue of the Bahadur lower bound separately on each side of the exterior of the interval to which the estimator should not belong. A multidimensional analogue of these "one-sided" lower bounds was also obtained.

The paper is organized as follows. All results and proofs are collected in Section 2. In subsection 2.1 we introduce the condition under which the main results are proved. This condition is the same one under which the locally asymptotically minimax Hajek-Le Cam risk bound is established. In subsection 2.2, for the sake of completeness of presentation and better understanding of further results, a locally asymptotically minimax risk bound for moderate deviation probabilities of statistical estimators is presented. In subsection 2.3, an analogue of the lower bound for Bahadur asymptotic efficiency of statistical estimators is provided in the zone of moderate deviation probabilities. The lower bounds for Bahadur local asymptotic efficiency are particular cases of these ones.
In subsection 2.4 we give a generalization of the Bahadur lower bounds to the multivariate case, covering "one-sided" lower bounds, and show that its proof is no different from the proof of the traditional Bahadur lower bounds on asymptotic efficiency [3, 8]. For the case of a multidimensional parameter ฮธ ∈ ฮ˜ ⊂ R^d, d > 1, we shall denote by Greek letters ฮธ, ฯ„, ฯ†, . . . vectors or vector functions. We denote by 1(A) the indicator of an event A.

2 Main Results

2.1 Main condition

Suppose the probability measures P_ฮธ, ฮธ ∈ ฮ˜, are absolutely continuous with respect to a probability measure ฮฝ, defined on the same ฯƒ-algebra B of the set S, and have the densities

f(x, ฮธ) = dP_ฮธ/dฮฝ(x),  x ∈ S.  (2.1)

For any ฮธ, ฮธ_0 ∈ ฮ˜, denote by P^a_{ฮธฮธ_0} and P^s_{ฮธฮธ_0} the absolutely continuous and singular components of the probability measure P_ฮธ with respect to the probability measure P_{ฮธ_0}. For all ฮธ_0, ฮธ_0 + ฯ„ ∈ ฮ˜ define the function

g(x, ฯ„) = g(x, ฮธ_0, ฮธ_0 + ฯ„) = ( f(x, ฮธ_0 + ฯ„)/f(x, ฮธ_0) )^{1/2} − 1  (2.2)

for all x belonging to the support of the measure P^a_{ฮธ_0+ฯ„, ฮธ_0}, and set it equal to zero otherwise. We say that the statistical experiment E = {(S, B), P_ฮธ, ฮธ ∈ ฮ˜} has finite Fisher information at the point ฮธ_0 ∈ ฮ˜ if there exists a vector function ฯ†: S → R^d such that

∫_S ( g(x, ฯ„) − ฯ„^T ฯ† )² dP_{ฮธ_0} = o(|ฯ„|²),  P^s_{ฮธ_0, ฮธ_0+ฯ„}(S) = o(|ฯ„|²)  (2.3)

as ฯ„ → 0, and the components of the matrix

I(ฮธ_0) = 4 ∫_S ฯ† ฯ†^T dP_{ฮธ_0}  (2.4)

take on finite values. The matrix I(ฮธ_0) is called the Fisher information matrix. Let u_n > 0 be a sequence with u_n → 0 and n u_n² → ∞ as n → ∞. We say that an estimator ฮธฬ‚_n = ฮธฬ‚_n(X_1, . . . , X_n) of the parameter ฮธ ∈ ฮ˜ is u_n-consistent if for any ฮธ_0 ∈ ฮ˜ there
https://arxiv.org/abs/2504.19331v1
is a neighborhood U of ฮธ_0 such that, for any ฮด > 0,

lim_{nโ†’โˆž} sup_{ฮธโˆˆU} P_ฮธ(|ฮธฬ‚_n โˆ’ ฮธ| > ฮด u_n) = 0.   (2.5)

2.2 Locally asymptotically minimax lower bound

The locally asymptotically minimax lower bound on risks in the zone of moderate deviation probabilities does not require any consistency conditions.

Theorem 2.1. Let the statistical experiment E = {(S, B), P_ฮธ, ฮธ โˆˆ ฮ˜} have finite Fisher information at a point ฮธ_0 โˆˆ ฮ˜. Then, for any estimator ฮธฬ‚_n and the points ฮธ_0, ฮธ_n = ฮธ_0 + 2u_n โˆˆ ฮ˜, we have

liminf_{nโ†’โˆž} sup_{ฮธ = ฮธ_0, ฮธ_n} (n u_n^2 I(ฮธ)/2)^{โˆ’1} log P_ฮธ(|ฮธฬ‚_n โˆ’ ฮธ| > u_n) โ‰ฅ โˆ’1.   (2.6)

For the proof, we consider the problem of testing the hypothesis H_0 : ฮธ = ฮธ_0 versus the alternative H_n : ฮธ = ฮธ_0 + v_n, v_n = 2u_n. Define the test K_n = K_n(X_1, ..., X_n) = 1(ฮธฬ‚_n โˆ’ ฮธ_0 > u_n). Denote by ฮฑ(K_n) and ฮฒ(K_n) the type I and type II error probabilities of the test K_n, respectively. By Theorem 2.2 in [4], if ฮฑ(K_n) < c < 1 and ฮฒ(K_n) < c < 1, we have

limsup_{nโ†’โˆž} (n v_n^2)^{โˆ’1/2} ( |2 log ฮฑ(K_n)|^{1/2} + |2 log ฮฒ(K_n)|^{1/2} ) โ‰ค 1.   (2.7)

The proof of (2.7) is based on an extension of a version of the local asymptotic normality statement to the zone of moderate deviation probabilities, together with the fact that (2.7) holds for testing the analogous hypothesis about the normal distribution, by virtue of the Neyman-Pearson lemma. We have

ฮฑ(K_n) โ‰ค P_{ฮธ_0}(|ฮธฬ‚_n โˆ’ ฮธ_0| > u_n)   (2.8)

and

ฮฒ(K_n) = P_{ฮธ_n}(ฮธฬ‚_n โˆ’ ฮธ_0 < u_n) = P_{ฮธ_n}(ฮธฬ‚_n โˆ’ ฮธ_0 โˆ’ 2u_n < โˆ’u_n) โ‰ค P_{ฮธ_n}(|ฮธฬ‚_n โˆ’ ฮธ_n| > u_n).   (2.9)

By (2.7)-(2.9), we get (2.6).

2.3 Bahadur efficiency in the zone of moderate deviation probabilities

We now present an analogue of the lower bound for Bahadur efficiency in the zone of moderate deviation probabilities for the case d = 1.

Theorem 2.2. Let the statistical experiment E = {(S, B), P_ฮธ, ฮธ โˆˆ ฮ˜} have finite Fisher information at a point ฮธ_0 โˆˆ ฮ˜ โŠ‚ R^1, and let the estimator ฮธฬ‚_n be u_n-consistent. Then we have

liminf_{nโ†’โˆž} (n u_n^2 I(ฮธ)/2)^{โˆ’1} log P_ฮธ(|ฮธฬ‚_n โˆ’ ฮธ| > u_n) โ‰ฅ โˆ’1.
(2.10)

Moreover, we have

liminf_{nโ†’โˆž} (n u_n^2 I(ฮธ)/2)^{โˆ’1} log P_ฮธ(ฮธฬ‚_n โˆ’ ฮธ > u_n) โ‰ฅ โˆ’1.   (2.11)

For the proof, we consider the problem of testing the hypothesis H_0 : ฮธ = ฮธ_0 versus the alternatives H_n : ฮธ = ฮธ_0 + v_n, v_n = r u_n, r > 1. Define the tests K_n = K_n(X_1, ..., X_n) = 1(|ฮธฬ‚_n โˆ’ ฮธ_0| > u_n). Since the estimator ฮธฬ‚_n is u_n-consistent, we can apply (2.7) and obtain

limsup_{nโ†’โˆž} (n v_n^2)^{โˆ’1/2} ( |2 log P_{ฮธ_0}(|ฮธฬ‚_n โˆ’ ฮธ_0| > u_n)|^{1/2} + |2 log P_{ฮธ_0+v_n}(|ฮธฬ‚_n โˆ’ ฮธ_0| < u_n)|^{1/2} ) โ‰ค 1.   (2.12)

Since the estimator ฮธฬ‚_n is u_n-consistent, we also have

lim_{nโ†’โˆž} P_{ฮธ_0+v_n}(|ฮธฬ‚_n โˆ’ ฮธ_0| < u_n) = 0.   (2.13)

By (2.12) and (2.13), we get

limsup_{nโ†’โˆž} (n r^2 u_n^2)^{โˆ’1/2} |2 log P_{ฮธ_0}(|ฮธฬ‚_n โˆ’ ฮธ_0| > u_n)|^{1/2} โ‰ค 1.   (2.14)

Since the choice of r > 1 is arbitrary, we get (2.10). Inequality (2.11) is proved similarly: it suffices to apply (2.7) to the test K_n = 1(ฮธฬ‚_n โˆ’ ฮธ_0 > u_n). The details are omitted.

The multidimensional version of Theorem 2.2 is provided below. In the following Theorems 2.3 and 2.5 the asymptotic efficiency varies in different directions; this is their main distinguishing feature. Let V be a bounded convex open set in R^d with 0 โˆˆ V, and denote by Vฬ„ the complement of V.

Theorem 2.3. Let the statistical experiment E = {(S, B), P_ฮธ, ฮธ โˆˆ ฮ˜} have finite Fisher information at a point ฮธ_0 โˆˆ ฮ˜ โŠ‚ R^d, and let the estimator ฮธฬ‚_n be u_n-consistent. Then we have

liminf_{nโ†’โˆž} (n u_n^2)^{โˆ’1} log P_ฮธ(ฮธฬ‚_n โˆ’ ฮธ โˆˆ u_n Vฬ„) โ‰ฅ โˆ’(1/2) inf_{ฯ„โˆˆVฬ„} ฯ„^T I(ฮธ) ฯ„.   (2.15)

Moreover, for any cone K with vertex at zero and nonempty interior,

liminf_{nโ†’โˆž} (n u_n^2)^{โˆ’1} log P_ฮธ(ฮธฬ‚_n โˆ’ ฮธ โˆˆ u_n Vฬ„ โˆฉ K) โ‰ฅ โˆ’(1/2) inf_{ฯ„โˆˆVฬ„โˆฉK} ฯ„^T I(ฮธ) ฯ„.   (2.16)

The case ฮธ โˆˆ ฮ˜ โŠ‚ R^d is similar to the case ฮธ โˆˆ ฮ˜ โŠ‚ R^1 and also reduces to a two-point parameter estimation problem. The following lower bound for the local asymptotic Bahadur efficiency
is deduced from Theorem 2.3. We call an estimator ฮธฬ‚_n locally uniformly consistent if for any ฮธ_0 โˆˆ ฮ˜ there is a neighborhood U of the point ฮธ_0 such that, for any ฮต > 0,

lim_{nโ†’โˆž} sup_{ฮธโˆˆU} P_ฮธ(|ฮธฬ‚_n โˆ’ ฮธ| > ฮต) = 0.   (2.17)

Theorem 2.4. Let the statistical experiment E = {(S, B), P_ฮธ, ฮธ โˆˆ ฮ˜} have finite Fisher information at a point ฮธ โˆˆ ฮ˜ โŠ‚ R^d, and let the estimator ฮธฬ‚_n be locally uniformly consistent. Then we have

liminf_{uโ†’0} lim_{nโ†’โˆž} (n u^2)^{โˆ’1} log P_ฮธ(ฮธฬ‚_n โˆ’ ฮธ โˆˆ u Vฬ„) โ‰ฅ โˆ’(1/2) inf_{ฯ„โˆˆVฬ„} ฯ„^T I(ฮธ) ฯ„.   (2.18)

Moreover, for any cone K with vertex at zero and nonempty interior,

liminf_{uโ†’0} lim_{nโ†’โˆž} (n u^2)^{โˆ’1} log P_ฮธ(ฮธฬ‚_n โˆ’ ฮธ โˆˆ u Vฬ„ โˆฉ K) โ‰ฅ โˆ’(1/2) inf_{ฯ„โˆˆVฬ„โˆฉK} ฯ„^T I(ฮธ) ฯ„.   (2.19)

The requirement of local uniform consistency replaces the consistency requirement under which the lower bound of Bahadur asymptotic efficiency is usually proved. However, under the consistency condition, the proof of (2.16) requires additional regularity conditions on the family of probability measures P_ฮธ, ฮธ โˆˆ ฮ˜ (Theorem 9.3, Ch. 1 in [8], and [6]).

Note that the requirement (2.3) of differentiability of the function g in L_2 can be replaced by the weaker conditions (2.5)-(2.7) of [4] on the behavior of the function g itself.

2.4 Multidimensional lower bound for Bahadur asymptotic efficiency

An estimator ฮธฬ‚_n is called a consistent estimator of the parameter ฮธ โˆˆ ฮ˜ if, for any ฮธ โˆˆ ฮ˜ and any ฮต > 0,

lim_{nโ†’โˆž} P_ฮธ(|ฮธฬ‚_n โˆ’ ฮธ| > ฮต) = 0.   (2.20)

Theorem 2.5. Let ฮธฬ‚_n be a consistent estimator of the parameter ฮธ โˆˆ ฮ˜. Let ฮธ_0 โˆˆ ฮ˜, and let ฮฉ โŠ‚ R^d be an open set such that 0 โˆ‰ ฮฉ and ฮธ_0 + ฮฉ โŠ‚ ฮ˜. Then, for any ฮธฬƒ โˆˆ ฮธ_0 + ฮฉ, we have

lim_{nโ†’โˆž} (1/n) log P_{ฮธ_0}(ฮธฬ‚_n โˆ’ ฮธ_0 โˆˆ ฮฉ) โ‰ฅ โˆ’โˆซ_S log( f(x, ฮธฬƒ) / f(x, ฮธ_0) ) f(x, ฮธฬƒ) ฮฝ(dx).   (2.21)

The proof is akin to [3, 8]. Denote by โˆ’K the right-hand side of (2.21). By Jensen's inequality, the right-hand side of (2.21) is nonpositive.
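As a concrete illustration of the bound (2.21), consider a Gaussian location model X_i ~ N(ฮธ, 1) with ฮธ_0 = 0 and ฮฉ = (a, โˆž) โ€” an assumed special case chosen for tractability, not the theorem's level of generality. For ฮธฬƒ โˆˆ ฮฉ the constant K equals ฮธฬƒ^2/2, and for the sample mean the left-hand side of (2.21) tends to โˆ’a^2/2, which dominates โˆ’K for every ฮธฬƒ > a:

```python
import math

# Sketch of Theorem 2.5 in an assumed Gaussian location model X_i ~ N(theta, 1):
# theta_0 = 0, Omega = (a, infinity), and K = KL(N(tilde_theta, 1) || N(0, 1))
# = tilde_theta^2 / 2 for tilde_theta in Omega.  The normalized log-probability
# (1/n) log P_0(sample mean in Omega) should asymptotically stay above -K.

def log_tail_rate(n: int, a: float) -> float:
    """(1/n) * log P_0(sample mean > a) for i.i.d. N(0, 1) observations."""
    p = 0.5 * math.erfc(a * math.sqrt(n / 2.0))  # exact P(N(0, 1/n) > a)
    return math.log(p) / n

a = 0.5
rates = [log_tail_rate(n, a) for n in (100, 500, 1000)]
K = 0.6 ** 2 / 2   # K for the choice tilde_theta = 0.6 in Omega = (0.5, inf)
print(rates, -K)
```

The limiting rate โˆ’a^2/2 is exactly the infimum of the Kullbackโ€“Leibler divergence over ฮธฬƒ โˆˆ ฮฉ, so the bound (2.21) is sharp in this toy model.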
We define the indicator ฮป_n = ฮป_n(ฮธฬ‚_n โˆ’ ฮธ_0) = 1 if ฮธฬ‚_n โˆ’ ฮธ_0 โˆˆ ฮฉ, and ฮป_n = ฮป_n(ฮธฬ‚_n โˆ’ ฮธ_0) = 0 if ฮธฬ‚_n โˆ’ ฮธ_0 โˆ‰ ฮฉ. We put r = n(K + ฮด) with ฮด > 0. Denote

G_n = G_n(X_1, ..., X_n, ฮธ_0, ฮธฬƒ) = โˆ_{j=1}^{n} f(X_j, ฮธฬƒ) / f(X_j, ฮธ_0).   (2.22)

We have

P_{ฮธ_0}(ฮธฬ‚_n โˆ’ ฮธ_0 โˆˆ ฮฉ) = E_{ฮธ_0} ฮป_n โ‰ฅ E_{ฮธ_0}( ฮป_n 1(G_n < exp{r}) ) โ‰ฅ exp{โˆ’r} E_{ฮธฬƒ}{ ฮป_n 1(G_n < exp{r}) } โ‰ฅ exp{โˆ’r} ( P_{ฮธฬƒ}(ฮธฬ‚_n โˆ’ ฮธ_0 โˆˆ ฮฉ) โˆ’ P_{ฮธฬƒ}(G_n > exp{r}) ).   (2.23)

Since ฮธฬ‚_n is a consistent estimator, we have

lim_{nโ†’โˆž} P_{ฮธฬƒ}(ฮธฬ‚_n โˆ’ ฮธ_0 โˆˆ ฮฉ) = 1.   (2.24)

By the Law of Large Numbers, we have

lim_{nโ†’โˆž} P_{ฮธฬƒ}( n^{โˆ’1} |log G_n โˆ’ nK| > ฮด/2 ) = 0.   (2.25)

By (2.23)-(2.25), we get (2.21).

References

[1] Bahadur, R.R. (1960). Asymptotic efficiency of tests and estimates. Sankhya 22, 229-252.
[2] Bahadur, R.R. (1967). Rates of convergence of estimates and test statistics. Ann. Math. Statist. 38, 303-324.
[3] Bahadur, R.R., Gupta, J.C. and Zabell, S.L. (1980). Large deviations of tests and estimates. In: Asymptotic Theory of Statistical Tests and Estimation (I.M. Chakravarti, ed.), 33-64. Academic Press, New York.
[4] Ermakov, M.S. (2003). Asymptotically efficient statistical inference for moderate deviation probabilities. Theory Probab. Appl. 48(4), 676-700.
[5] Ermakov, M.S. (2012). The sharp lower bounds of asymptotic efficiency of estimators in the zone of moderate deviation probabilities. Electronic Journal of Statistics 6, 2150-2184.
[6] Fu, J.C. (1973). On a theorem of Bahadur on the rate of convergence of point estimators. Ann. Statist. 4, 745-749.
[7] Hajek, J. (1972). Local asymptotic minimax and admissibility in estimation. Proc. Sixth Berkeley Symp. on Math. Statist. and Probab. 1, 175-194. Univ. of California Press, Berkeley.
[8] Ibragimov, I.A. and Hasminskii, R.Z. (1981). Statistical Estimation: Asymptotic Theory. Springer, Berlin.
[9] Le Cam, L. (1972). Limits of experiments. Proc. Sixth Berkeley Symp. on Math. Statist. and Probab. 1, 245-261.
arXiv:2504.19337v1 [math.ST] 27 Apr 2025

Submitted to Bernoulli

Frequency Domain Resampling for Gridded Spatial Data

SOUVICK BERA1,a, DANIEL J. NORDMAN2,c and SOUTIR BANDYOPADHYAY1,b

1 Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, CO 80401, USA, a berasouvick@mines.edu, b sbandyopadhyay@mines.edu
2 Department of Statistics, Iowa State University, Ames, IA 50011, USA, c dnordman@iastate.edu

In frequency domain analysis of spatial data, spectral averages based on the periodogram often play an important role in understanding spatial covariance structure, but they also have complicated sampling distributions owing to the complex variances of aggregated periodograms. Resampling can be useful for nonparametrically approximating these sampling distributions for purposes of inference, but previous developments in the spatial bootstrap have faced challenges in the scope of their validity, specifically due to difficulties in capturing the complex variances of spatial spectral averages. As a consequence, existing frequency domain bootstraps for spatial data are highly restricted in application to only special processes (e.g., Gaussian) or certain spatial statistics. To address this limitation and to approximate a wide range of spatial spectral averages, we propose a practical hybrid-resampling approach that combines two different resampling techniques, spatial subsampling and spatial bootstrap. Subsampling helps to capture the variance of spectral averages, while the bootstrap captures the distributional shape. The hybrid resampling procedure can then accurately quantify uncertainty in spectral inference under mild spatial assumptions. Moreover, compared to the more studied time series setting, this work fills a gap in the theory of subsampling/bootstrap for spatial data regarding spectral average statistics.

Keywords: Spatial Frequency Domain Bootstrap; Periodogram; Spectral Mean

1.
Introduction

In recent years, there has been a marked increase in research on the analysis of spatial data via the frequency domain approach (see, for example, (Bandyopadhyay, Jentsch and Rao, 2017; Bandyopadhyay and Lahiri, 2010; Bandyopadhyay, Lahiri and Nordman, 2015; Bandyopadhyay and Rao, 2016; Fuentes, 2006, 2007; Hall, Fisher and Hoffman, 1994; Im, Stein and Zhu, 2007; Matsuda and Yajima, 2009; Van Hala et al., 2017, 2020)). As a benefit, analysis in the frequency domain allows inference about covariance structures through a data transformation (i.e., a Fourier or periodogram-based transform), without the need for a full probability model for the spatial data. Resampling methods for spatial data, such as subsampling and the bootstrap, have gained popularity in recent decades because these approaches can often allow for uncertainty quantification and distributional approximations for complicated spatial statistics in a nonparametric or model-free fashion; see (Davison and Hinkley, 1997; Sherman and Carlstein, 1994; Sherman, 1998). In a recent study, (Ng, Yau and Chen, 2021) introduced a type of frequency domain bootstrap (FDB) for Gaussian spatial data on a grid. Although this work offers a significant contribution to nonparametric spatial approximations, the resampling methodology comes with certain hard limitations in application. Firstly, the FDB of (Ng, Yau and Chen, 2021) is generally not valid for approximating many spatial spectral averages of interest, because the latter have complex variance structures that this FDB proposal cannot correctly estimate. In particular, the challenge is that spectral mean statistics have variances that depend both on
spatial covariances as well as higher-order (i.e., fourth) process cumulants, which arise due to covariances between the spatial periodogram ordinates. Technically, it is the variance contributions related to higher-order cumulants that are generally missed or ignored in the FDB approach of (Ng, Yau and Chen, 2021), unless the spatial data are Gaussian (i.e., so that these variance components become zero). Consequently and secondly, the spatial bootstrap results of (Ng, Yau and Chen, 2021) are established under assumptions of Gaussianity, which can be stringent and may not suitably reflect the distributional nature of spectral average statistics in general practice. These aspects motivate a need to modify the spatial FDB of (Ng, Yau and Chen, 2021) to handle a broader range of spectral statistics with application to potentially non-Gaussian spatial processes.

To provide improved and more universally valid distributional approximations for spatial spectral statistics, we introduce a resampling technique called the spatial Hybrid Frequency Domain Bootstrap (HFDB), which merges two different resampling schemes, subsampling and bootstrap, into one overall approach in the frequency domain. The idea is that spatial subsampling can be used to correctly estimate the variances of complicated spectral averages, while the bootstrap can be used to re-create the shape of the sampling distributions. Essentially then, subsampling plays a role in appropriately modifying the spreads of FDB distributional approximations, where this bootstrap would otherwise be invalid without such scaling adjustments.
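The scaling idea just described can be caricatured in a few lines of hypothetical code (a toy sketch, not the paper's HFDB algorithm): bootstrap replicates supply a distributional shape, and a subsampling variance estimate is used to rescale their spread.

```python
import numpy as np

# Toy sketch of the hybrid principle: recenter/rescale bootstrap draws so that
# their variance matches a subsampling variance estimate.  The numbers here
# (spread 0.4, sigma2_sub = 1.44) are invented purely for illustration.
rng = np.random.default_rng(2)
boot = rng.standard_normal(5000) * 0.4   # bootstrap draws with a wrong spread
sigma2_sub = 1.44                        # subsampling variance estimate (toy)

# Keep the bootstrap shape; fix the scale to match the subsampling variance.
hybrid = boot.mean() + (boot - boot.mean()) * np.sqrt(sigma2_sub) / boot.std()
print(hybrid.std())                      # now sqrt(1.44) = 1.2 by construction
```

Quantiles of `hybrid` would then calibrate tests or intervals in place of the raw bootstrap quantiles, which is the role the subsampling adjustment plays in the HFDB.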
The advantage of the proposed HFDB method is that it can accurately capture the uncertainty in spatial spectral statistics for calibrating tests or confidence intervals without requiring stringent conditions on the spatial processes or assumptions of Gaussianity; in particular, the conditions needed to validate the HFDB basically amount to mild assumptions ensuring that spectral averages have limit distributions at all. This work thus intends to close some gaps in the scope of applying resampling approximations for spatial data, which is practically valuable. Additionally, our work also bridges some gaps in the theory of subsampling and bootstrap for spatial data, and extends some recent theoretical findings for spectral inference with time series to the spatial setting.

We conclude this section by briefly overviewing the FDB for time series and providing connections to the spatial bootstrap considered here. In time series analysis, the FDB has received much attention and interest toward approximating the distributions of spectral statistics derived from the periodogram (cf. (Kreiss and Paparoditis, 2012; Kreiss and Lahiri, 2012; Lahiri, 2003; Politis and McElroy, 2019)). The key concept behind this method is to mimic a type of asymptotic independence that exists in the periodogram in order to generate versions of periodogram ordinates in the bootstrap world by independent resampling (cf. (Dahlhaus and Janas, 1996; Jentsch and Kreiss, 2010; Kreiss and Paparoditis, 2003, 2012, 2023; Kreiss, Paparoditis and Politis, 2011)). In the last three decades, there has been a progression of FDB methods, often relying on specific assumptions about the underlying time series (e.g., linear processes) or differing cases of
spectral mean parameters. Recently, a major advancement was achieved by (Meyer, Paparoditis and Kreiss, 2020), who introduced an innovative bootstrap method, called the hybrid periodogram bootstrap (HPB), that represents the state-of-the-art approach for time series inference concerning spectral means. Related to this, (Yu, Kaiser and Nordman, 2023) recently studied subsampling for use in conjunction with the HPB, where subsampling helps to justify the latter bootstrap approach under weaker assumptions than (Meyer, Paparoditis and Kreiss, 2020), so that the HPB extends to a wide range of applications with various time processes. Unlike bootstraps that aim to re-create statistics at the same data level as the original sample, subsampling is a different resampling approach that generates smaller-scale copies of a statistic; because of this distinction, subsampling often applies under weaker conditions than the bootstrap, though the bootstrap can provide more accurate distributional approximations when valid (cf. (Politis, Romano and Wolf, 1999)). The spatial FDB proposed by (Ng, Yau and Chen, 2021) is the spatial equivalent of one of the original FDB methods for time series (cf. (Dahlhaus and Janas, 1996; Jentsch and Kreiss, 2010; Kreiss and Paparoditis, 2003)), while the spatial HFDB proposed here is intended as a spatial analog of the HPB for time series as studied in (Meyer, Paparoditis and Kreiss, 2020), combined with a subsampling extension as in (Yu, Kaiser and Nordman, 2023). This development is non-trivial for spatial data and, similar to the time series setting, is crucial for avoiding restrictive process and moment conditions for dependent data.
Finally, it is worth mentioning that, similar to (Ng, Yau and Chen, 2021), we focus our resampling presentation on spatial observations lying on a lattice in two-dimensional space, though the findings can be extended to gridded data in more general spatial sampling dimensions d โ‰ฅ 2.

The remainder of the paper is organized as follows. Section 2 introduces the distributions of spectral mean statistics, which form the basis for inference, along with the associated framework. Section 3 presents a spatial subsampling framework in the frequency domain and establishes its consistency, forming the foundation for the proposed bootstrap procedure. A detailed discussion of the hybrid bootstrap method for spatial data is provided in Section 4. Section 5 presents numerical studies to evaluate the accuracy of the proposed procedure, and Section 6 demonstrates an application for calibrating spatial isotropy tests. Finally, Section 7 concludes with key insights. Additional technical details and simulation results can be found in the Appendix and Supplementary Material, respectively.

2. Distributions of Spectral Mean Statistics

2.1. Spatial process and sampling design

Throughout this paper, we follow a spatial sampling framework similar to (Bandyopadhyay and Lahiri, 2010) and (Ng, Yau and Chen, 2021). Let {X(s) : s โˆˆ Z^2} denote a real-valued second-order stationary process located on the regular integer grid Z^2. In this fashion, potential observations lie on a spatial lattice with a constant separation in each coordinate direction (which we take to be 1, though other scalings are possible as well). We assume that
the random process X(s) is observed at n โ‰ก n_1 n_2 sampling sites defined by

{s_1, ..., s_n} = {s โˆˆ Z^2 : s โˆˆ D(n_1 ร— n_2)} = D(n_1 ร— n_2) โˆฉ Z^2,

given by those locations on the grid Z^2 that lie within a rectangular sampling region D(n_1 ร— n_2) โ‰ก [1, n_1] ร— [1, n_2] for integers n_1, n_2 โ‰ฅ 1.

For spatial data analysis, the above sampling framework corresponds to a so-called pure increasing domain for developing spatial results (cf. (Cressie, 1993)). That is, in this scheme, more spatial observations become available (i.e., n โ†’ โˆž) as the spatial sampling region expands in size in each direction (i.e., n_k โ†’ โˆž for k = 1, 2).

2.2. Spectral mean parameters

Suppose that the second-order stationary spatial process {X(s) : s โˆˆ Z^2} has a spectral density f(ฮป) : ฮ ^2 โ†’ [0, โˆž), where

f(ฮป) โ‰ก (2ฯ€)^{โˆ’2} ฮฃ_{h โˆˆ Z^2} Cov(X(0), X(h)) exp(โˆ’i h^T ฮป),

for i โ‰ก โˆšโˆ’1 and ฮป โˆˆ ฮ ^2 โ‰ก [โˆ’ฯ€, ฯ€]^2. Then the target spatial parameter for inference is a spectral mean parameter defined as an integral

ฮธ(g) = โˆซ_{ฮ ^2} g(ฮป) f(ฮป) dฮป,   (1)

involving f and some chosen/given function g : ฮ ^2 โ†’ R of bounded variation. Spectral mean parameters are common in spatial data analysis for studying the spatial covariance structure of data with an unknown distribution. Such quantities have a well-established history for both time series and spatial data (see (Politis and McElroy, 2019) for more details), dating as far back as (Parzen, 1957), and many spatial parameters of interest can be expressed in the form of a spectral mean. Standard examples of spectral means include autocovariances and spectral distributions, as described below, depending on the choice of g above. Next we give some examples of spectral mean parameters arising in frequency domain analysis (cf. (Bandyopadhyay, Lahiri and Nordman, 2015; Dahlhaus, 1985; Dahlhaus and Janas, 1996; Parzen, 1957; Subba Rao, 2018; Van Hala et al., 2017, 2020) and the references therein).
Examples of spectral mean parameters

1. Covariance function. For a lag vector h โˆˆ Z^2, taking the function g(ฮป) = cos(h^T ฮป) in (1) leads to the autocovariance function r(h) = Cov(X(0), X(h)), which can thus be viewed as a spectral mean ฮธ(g). Note that, for testing the special hypothesis H_0: X(s) is white noise (constant f), a test based on the process autocovariances can be applied, similar to some Portmanteau tests (cf. (Li and McLeod, 1986; Ljung and Box, 1978)), using the fact that the spectral means ฮธ(g) are zero under this H_0 for the function choice g(ฮป) โ‰ก [cos(h_1^T ฮป), ..., cos(h_q^T ฮป)]^T with some integer lags h_1, ..., h_q.

2. Spectral distribution function. For a vector t โˆˆ R^2, the spectral distribution function is defined as F(t) = โˆซ_{ฮ ^2} I_{(โˆ’โˆž, t]}(ฮป) f(ฮป) dฮป, which corresponds to the spectral mean parameter defined by g(ฮป) = I_{(โˆ’โˆž, t]}(ฮป) in (1), where I(ยท) is the indicator function and (โˆ’โˆž, t] โ‰ก (โˆ’โˆž, t_1] ร— (โˆ’โˆž, t_2]. The spectral distribution function plays an important role in determining the smoothness of the sample paths of the spatial process X(ยท) (cf. (Stein, 1999)).

3. Assessment of spatial covariance structures. When analyzing spatial data, it can be useful to assess spatial covariance structures in a nonparametric manner, without requiring specific distributional assumptions about the data. General testing methods can be developed for evaluating different hypotheses about spatial covariances, such as tests of isotropy or separability, by examining ฮธ(g) with appropriate choices of the function g(ฮป). For more details, refer to (Van Hala et al., 2017, 2020).

4. Whittle estimation. Suppose {f_ฮท} represents a parametric
family of spectral densities indexed by ฮท. Whittle estimation aims to identify the member f_ฮท closest to the true density f (cf. (Taniguchi, 1979)). Assuming a real-valued ฮท for illustration, the solution of the spectral mean equation ฮธ(g) = 0 identifies the appropriate f_ฮท under certain conditions, using the function g(ฮป) โ‰ก [1 โˆ’ f_ฮท(ฮป)/f(ฮป)] (d/dฮท) f_ฮท^{โˆ’1}(ฮป), ฮป โˆˆ ฮ ^2, defined through the derivative of f_ฮท^{โˆ’1}(ฮป) โ‰ก 1/f_ฮท(ฮป).

5. Goodness-of-fit tests. There has been increasing interest in frequency domain based tests to assess model adequacy (cf. (Crujeiras and Fernandez-Casal, 2010; Van Hala et al., 2020; Weller and Hoeting, 2020)). Consider a test involving a simple null hypothesis H_0: f = f_0 against the alternative H_1: f โ‰  f_0 for some candidate spectral density f_0. One immediate test of H_0 is based on the function g(ฮป) = 1/f_0(ฮป), with ฮธ(g) a constant under this simple H_0. To test the composite hypothesis H_0: f โˆˆ F for a specified parametric class F, several frequency domain tests have been proposed in time series (cf. (Beran, 1992), (Milhรธj, 1981), (Paparoditis, 2000), (Nordman and Lahiri, 2006)). These tests can use Whittle estimation to choose the "best" fitting model from F and then compare the fitted density to the periodogram across all ordinates. One can adapt strategies for goodness-of-fit tests with time series (cf. (Nordman and Lahiri, 2006)), which combine aspects of model fitting and model comparison into a choice of estimating function g, to spatial processes.

6. Variogram model fitting. A popular approach to fitting a parametric variogram model to spatial data is through the method of least squares; see (Cressie, 1993). Let {2ฮณ(ยท; ฮท) : ฮท โˆˆ ฮ˜}, ฮ˜ โŠ‚ R^p, denote a class of valid variogram models for the true variogram 2ฮณ(h) โ‰ก Var(X(h) โˆ’ X(0)), h โˆˆ Z^2, of the spatial process. Let 2ฮณฬ‚_n(h) denote the sample variogram at lag h based on X(s_1), ..., X(s_n) (cf. Chapter 2 of (Cressie, 1993)).
Then one can fit the variogram model by estimating ฮท through minimizing the least squares criterion

ฮทฬ‚_n = argmin{ ฮฃ_{i=1}^{M} ( 2ฮณฬ‚_n(h_i) โˆ’ 2ฮณ(h_i; ฮท) )^2 : ฮท โˆˆ ฮ˜ }

for a given set of lags h_1, ..., h_M. Expressing the variogram in terms of the spectral density function, we obtain an equivalent spectral estimating equation ฮธ(g_ฮท) = 0 for identifying ฮท, where

ฮธ(g_ฮท) โ‰ก โˆซ_{ฮ ^2} [ ฮฃ_{i=1}^{M} { 1 โˆ’ cos(h_i^T ฮป) โˆ’ ฮณ(h_i; ฮท) } โˆ‡[2ฮณ(h_i; ฮท)] ] f(ฮป) dฮป,

using g_ฮท(ฮป) โ‰ก ฮฃ_{i=1}^{M} { 1 โˆ’ cos(h_i^T ฮป) โˆ’ ฮณ(h_i; ฮท) } โˆ‡[2ฮณ(h_i; ฮท)].

2.3. Spectral mean statistics and sampling distributions

Based on the stationary spatial process X(ยท) observed on the sampling region D(n_1 ร— n_2), where n = n_1 n_2 is the number of observations, we define I_n(ฮป), the periodogram at a frequency ฮป โˆˆ ฮ ^2, as

I_n(ฮป) โ‰ก (2ฯ€)^{โˆ’2} n^{โˆ’1} | ฮฃ_{t_1=1}^{n_1} ฮฃ_{t_2=1}^{n_2} X(s) exp(โˆ’i s^T ฮป) |^2,  s โ‰ก (t_1, t_2)^T,  where i = โˆšโˆ’1.

Using I_n(ยท) in place of f(ยท) in (1), a standard estimator of the spectral mean parameter ฮธ(g) is then the spectral mean statistic, or spectral average, given in Riemann sum form by

ฮธฬ‚_n(g) โ‰ก (2ฯ€)^2 n^{โˆ’1} ฮฃ_{j โˆˆ J_n} g(ฮป_{j,n}) I_n(ฮป_{j,n}),

using discrete frequencies ฮป_{j,n} โ‰ก (2ฯ€ j_1/n_1, 2ฯ€ j_2/n_2) defined by j โ‰ก (j_1, j_2) โˆˆ J_n, with the index set J_n โ‰ก {โŒŠโˆ’(n_1 โˆ’ 1)/2โŒ‹, ..., n_1 โˆ’ โŒŠn_1/2โŒ‹} ร— {โŒŠโˆ’(n_2 โˆ’ 1)/2โŒ‹, ..., n_2 โˆ’ โŒŠn_2/2โŒ‹} \ {(0, 0)} denoting the (nonzero) discrete Fourier frequency grid.
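To make the definitions above concrete, the following numpy sketch (illustrative code with assumed names, not from the paper) computes the periodogram on the Fourier grid of a simulated field and evaluates the spectral mean statistic for g(ฮป) = cos(h^T ฮป); as in Example 1, this choice targets the autocovariance at lag h:

```python
import numpy as np

# Sketch: periodogram and spectral mean statistic for g(lambda) = cos(h' lambda).
rng = np.random.default_rng(0)
n1, n2 = 24, 20                      # sampling region [1, n1] x [1, n2]
n = n1 * n2
X = rng.standard_normal((n1, n2))    # toy stationary field (white noise)

# Periodogram over the full 2-D discrete Fourier frequency grid.
D = np.fft.fft2(X)
I = np.abs(D) ** 2 / ((2 * np.pi) ** 2 * n)

# Spectral mean statistic: (2 pi)^2 n^{-1} * sum_j g(lambda_j) I(lambda_j),
# excluding the zero frequency, with g(lambda) = cos(h' lambda) for lag h.
h = (1, 0)
lam1 = 2 * np.pi * np.arange(n1) / n1
lam2 = 2 * np.pi * np.arange(n2) / n2
g = np.cos(h[0] * lam1[:, None] + h[1] * lam2[None, :])
mask = np.ones((n1, n2))
mask[0, 0] = 0.0                     # drop j = (0, 0)
theta_hat = (2 * np.pi) ** 2 / n * np.sum(g * I * mask)

# For comparison: the centered circular sample autocovariance at lag h.
Xc = X - X.mean()
r_hat = np.mean(Xc * np.roll(np.roll(Xc, -h[0], axis=0), -h[1], axis=1))
print(theta_hat, r_hat)              # agree up to rounding on the full grid
```

On the full Fourier grid, the cosine-weighted periodogram average coincides algebraically with the centered circular sample autocovariance, which is a convenient correctness check when implementing spectral averages.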
For calibrating tests and confidence intervals, a spectral mean statistic has a large-sample normal distribution under mild conditions (Brillinger, 2001; Dahlhaus, 1985), given by

T_n(g) โ‰ก n^{1/2} { ฮธฬ‚_n(g) โˆ’ ฮธ(g) } โ†’^d N(0, ฯƒ^2 = ฯƒ_1^2 + ฯƒ_2^2)  as n โ†’ โˆž,   (2)

with

ฯƒ_1^2 โ‰ก ฯƒ_1^2(g) = (2ฯ€)^2 โˆซ_{ฮ ^2} g(ฮป) [ g(ฮป) + g(โˆ’ฮป) ] f^2(ฮป) dฮป,

ฯƒ_2^2 โ‰ก ฯƒ_2^2(g) = (2ฯ€)^2 โˆซ_{ฮ ^2} โˆซ_{ฮ ^2} g(ฮป_1) g(ฮป_2) f_4(ฮป_1, ฮป_2, โˆ’ฮป_2) dฮป_1 dฮป_2,

where, for ฮป_1, ฮป_2, ฮป_3 โˆˆ ฮ ^2,

f_4(ฮป_1, ฮป_2, ฮป_3) = (2ฯ€)^{โˆ’3} ฮฃ_{h_1, h_2, h_3 โˆˆ Z^2} cum( X(0), X(h_1), X(h_2), X(h_3) ) exp( โˆ’i ฮฃ_{โ„“=1}^{3} h_โ„“^T ฮป_โ„“ )

denotes the fourth-order cumulant spectral
density. Given the complicated structure of the limit distribution in (2), one possible approach is to use resampling to approximate the distribution of T_n(g) nonparametrically. However, estimation of the variance component ฯƒ_2^2, in particular, poses significant challenges for bootstrap approximations to the distribution of T_n(g), as needed for inference on spectral mean parameters. Specifically, the spatial FDB introduced in (Ng, Yau and Chen, 2021) cannot capture this component in general, unless the underlying process is assumed to be Gaussian, in which case the variance component ฯƒ_2^2 = 0 vanishes. As a consequence, in order to improve bootstrap inference for spatial data, one must first correctly estimate all variance components in (2) within the resampling mechanism used. For use in conjunction with the spatial bootstrap, we next consider spatial subsampling for the purpose of such variance estimation, as described in the following section.

3. Spatial Subsampling in the Frequency Domain

3.1. Spatial subsampling variance estimators

Based on the observed spatial data {X(s) : s โˆˆ D(n_1 ร— n_2) โˆฉ Z^2} lying within the spatial sampling region D(n_1 ร— n_2) โ‰ก [1, n_1] ร— [1, n_2], subsampling aims to create several "smaller scale copies" of the original spatial data by using data blocks of smaller size b_n โ‰ก n_1^{(b)} n_2^{(b)}, with n_k^{(b)} < n_k, k = 1, 2 (cf. (Lahiri, 2003; Sherman and Carlstein, 1994; Sherman, 1996, 1998)), compared to n = n_1 n_2, where the block size n_k^{(b)} and the original sample size n_k in each direction satisfy n_k^{(b)} โ†’ โˆž and n_k^{โˆ’1} n_k^{(b)} โ†’ 0 as n โ†’ โˆž. In particular, we can define the data blocks as all integer translates of a region D(n_1^{(b)} ร— n_2^{(b)}) โ‰ก [1, n_1^{(b)}] ร— [1, n_2^{(b)}] that lie inside the original sampling region D(n_1 ร— n_2) โ‰ก [1, n_1] ร— [1, n_2]; that is, each data block is of the form j + D(n_1^{(b)} ร— n_2^{(b)}) for an integer vector j โ‰ก (j_1, j_2) with components j_k = 0, ..., n_k โˆ’ n_k^{(b)}, k = 1, 2.
In total there are, say, K โ‰ก (n_1 โˆ’ n_1^{(b)} + 1)(n_2 โˆ’ n_2^{(b)} + 1) such data blocks and, for simplicity, we denote the โ„“-th data block by B_โ„“ for โ„“ = 1, ..., K. We can then define a subsample periodogram for the โ„“-th block as

I_sub^{(โ„“)}(ฮป) โ‰ก (2ฯ€)^{โˆ’2} b_n^{โˆ’1} | ฮฃ_{s โˆˆ B_โ„“ โˆฉ Z^2} X(s) e^{โˆ’i s^T ฮป} |^2,  ฮป โˆˆ ฮ ^2,  โ„“ = 1, ..., K.

The corresponding subsample analogue of the spectral mean statistic ฮธฬ‚_n(g) can then be defined as

ฮธฬ‚_sub^{(โ„“)}(g) โ‰ก (2ฯ€)^2 b_n^{โˆ’1} ฮฃ_{j โˆˆ J_b} g(ฮป_{j,b}) I_sub^{(โ„“)}(ฮป_{j,b}),

where ฮป_{j,b} and J_b are defined in the same manner as ฮป_{j,n} and J_n earlier, but based on the subsample sizes n_1^{(b)}, n_2^{(b)} rather than the original data size n. These lead to a collection of subsample versions T_sub^{(โ„“)}(g) of the target spectral quantity T_n(g) โ‰ก n^{1/2} { ฮธฬ‚_n(g) โˆ’ ฮธ(g) }, namely

T_sub^{(โ„“)}(g) โ‰ก b_n^{1/2} { ฮธฬ‚_sub^{(โ„“)}(g) โˆ’ ฮธฬƒ_n(g) },  โ„“ = 1, ..., K,

using the subsample mean ฮธฬƒ_n(g) = K^{โˆ’1} ฮฃ_{โ„“=1}^{K} ฮธฬ‚_sub^{(โ„“)}(g), which substitutes for the unknown parameter ฮธ(g). Recall that spatial variance estimation is especially important here because the variance ฯƒ^2 of spectral mean statistics in (2) can be quite complex or difficult to determine accurately, which is our motivation for considering subsampling. Consequently, we can define the subsampling variance estimator of ฯƒ^2 as

ฯƒฬ‚_n^2 = ฯƒฬ‚_n^2(g) โ‰ก K^{โˆ’1} ฮฃ_{โ„“=1}^{K} b_n { ฮธฬ‚_sub^{(โ„“)}(g) โˆ’ ฮธฬƒ_n(g) }^2,   (3)

which corresponds to the sample variance of the subsample statistics { b_n^{1/2} ฮธฬ‚_sub^{(โ„“)}(g) }_{โ„“=1}^{K}.
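As an illustrative sketch of the estimator (3) (with assumed helper names and toy choices, not the paper's implementation), one can slide a block over the observed grid, compute the subsample spectral mean statistic on each block, and take the sample variance of the rescaled statistics. For unit-variance Gaussian white noise and g(ฮป) = cos(ฮป_1), a direct calculation from the variance formulas in (2) gives a limit variance of 1, so the estimate should land near that value:

```python
import numpy as np

# Sketch of the subsampling variance estimator (3) on a toy white-noise field.

def spectral_mean(X, h):
    """Spectral mean statistic for g(lambda) = cos(h' lambda), computed from
    the periodogram of the field X with the zero frequency excluded."""
    n1, n2 = X.shape
    n = n1 * n2
    I = np.abs(np.fft.fft2(X)) ** 2 / ((2 * np.pi) ** 2 * n)
    lam1 = 2 * np.pi * np.arange(n1) / n1
    lam2 = 2 * np.pi * np.arange(n2) / n2
    g = np.cos(h[0] * lam1[:, None] + h[1] * lam2[None, :])
    gI = g * I
    gI[0, 0] = 0.0                   # drop j = (0, 0)
    return (2 * np.pi) ** 2 / n * gI.sum()

rng = np.random.default_rng(1)
n1 = n2 = 72
X = rng.standard_normal((n1, n2))    # toy field; limit variance here is 1
m1 = m2 = 12                         # block sizes n_k^(b)
b = m1 * m2

# All K = (n1 - m1 + 1)(n2 - m2 + 1) overlapping blocks and their statistics.
stats = np.array([spectral_mean(X[i:i + m1, j:j + m2], (1, 0))
                  for i in range(n1 - m1 + 1)
                  for j in range(n2 - m2 + 1)])
sigma2_hat = np.mean(b * (stats - stats.mean()) ** 2)   # estimator (3)
print(sigma2_hat)
```

The estimate is crude at these toy sample sizes but should fall in the right range of the target variance; consistency requires the block sizes to grow while remaining of smaller order than the sample sizes.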
Further, using subsampling, we also introduce an estimator for just the first variance component ฯƒ_1^2(g) in (2) as

ฯƒฬ‚_{1,n}^2 โ‰ก b_n^{โˆ’1} (4ฯ€^2)^2 ฮฃ_{j โˆˆ J_b} g(ฮป_{j,b}) ( g(ฮป_{j,b}) + g(ฮป_{โˆ’j,b}) ) K^{โˆ’1} ฮฃ_{โ„“=1}^{K} ( I_sub^{(โ„“)}(ฮป_{j,b}) โˆ’ Iฬƒ_n(ฮป_{j,b}) )^2,   (4)

where Iฬƒ_n(ฮป_{j,b}) โ‰ก K^{โˆ’1} ฮฃ_{โ„“=1}^{K} I_sub^{(โ„“)}(ฮป_{j,b}). In (4), we are isolating the component of the overall subsampling variance estimator ฯƒฬ‚_n^2 in (3) that arises from the marginal variances of the subsample periodograms { I_sub^{(โ„“)}(ฮป_{j,b}) }_{โ„“=1}^{K}. This leads to a second subsampling estimator

ฯƒฬ‚_{2,n}^2 โ‰ก ฯƒฬ‚_n^2 โˆ’ ฯƒฬ‚_{1,n}^2   (5)

to approximate the second, remaining component ฯƒ_2^2(g) in (2). Spatial subsampling thus enables valid estimation of both variance components ฯƒ_1^2 and ฯƒ_2^2, which contribute to the target distribution of T_n(g) with limit variance ฯƒ^2 โ‰ก ฯƒ_1^2 + ฯƒ_2^2.

3.2. Consistency of spatial subsampling estimators

To establish formal consistency properties of the subsampling variance estimators for spectral mean statistics, we use the following mild assumptions, related to the existence of limit laws.

Assumption 1. {X(s) : s โˆˆ Z^2} is fourth-order stationary.

Assumption 2. The limiting law in (2) exists for any g of bounded variation.

For two sequences j^{(n)} โ‰ก (j_1^{(n)}, j_2^{(n)}), k^{(n)} โ‰ก (k_1^{(n)}, k_2^{(n)}) โˆˆ Z^2 of integer pairs, define T_{j^{(n)},n}(g) and T_{k^{(n)},n}(g) as versions of T_n(g) computed from the translated spatial regions j^{(n)} + D(n_1 ร— n_2) and k^{(n)} + D(n_1 ร— n_2), instead of the region D(n_1 ร— n_2) โ‰ก [1, n_1] ร— [1, n_2] used for T_n(g) in (2).

Assumption 3. For any two integer sequences j^{(n)} and k^{(n)} with |j_i^{(n)} โˆ’ k_i^{(n)}|/n_i โ†’ โˆž for i = 1, 2 as n โ‰ก n_1 n_2 โ†’ โˆž, the following quantities are asymptotically normal and independent:

( T_{j^{(n)},n}(g), T_{k^{(n)},n}(g) )^T โ†’^d N( (0, 0)^T, ฯƒ^2(g) I_2 )  as n โ†’ โˆž,

where I_2 denotes the 2 ร— 2 identity matrix.
For two integer sequences $\mathbf{j}(n), \mathbf{k}(n) \in \mathbb{Z}^2$, we use one further assumption, stated below, concerning the periodograms, say $I_{\mathbf{j}(n),n}(\lambda)$ and $I_{\mathbf{k}(n),n}(\lambda)$, computed on the translated sampling regions $\mathbf{j}(n)+D(n_1\times n_2)$ and $\mathbf{k}(n)+D(n_1\times n_2)$, respectively, for $\lambda \in \Pi^2 \equiv [-\pi,\pi]^2$.

Assumption 4. Let $\mathbf{m}_n \equiv (m_{n,1}, m_{n,2}) \in \mathbb{Z}^2$ with $0 \le |m_{n,i}| \le \lfloor n_i/2 \rfloor$, $i=1,2$, be an integer sequence such that the discrete Fourier frequencies $\lambda_{\mathbf{m}_n,n}$ converge to some limit $\lambda \equiv (\lambda_1, \lambda_2)$ with $0 < |\lambda_i| < \pi$, $i=1,2$, as $n \equiv n_1\times n_2 \to \infty$. For any two integer sequences $\mathbf{j}(n)$ and $\mathbf{k}(n)$ for which $|j_i^{(n)}-k_i^{(n)}|/n_i \to \infty$ holds for $i=1,2$ as $n\to\infty$, the following quantities are asymptotically exponential and independent:
\[
\begin{pmatrix} I_{\mathbf{j}(n),n}(\lambda_{\mathbf{m}_n,n}) \\ I_{\mathbf{k}(n),n}(\lambda_{\mathbf{m}_n,n}) \end{pmatrix} \xrightarrow{d} f(\lambda)\begin{pmatrix} E_1 \\ E_2 \end{pmatrix} \quad \text{as } n\to\infty,
\]
where $E_1, E_2$ denote i.i.d. standard exponential random variables.

To comment on the conditions: Assumptions 1-2 together guarantee that $T_n(g)$ has a valid limiting law, while $\sigma^2(g)$ is finite by Assumption 1. Assumption 3 is a mild characterization of weak dependence in terms of the limit distributions of well-separated statistics and is also strongly connected to Assumptions 1-2. This condition states that any two copies $T_{\mathbf{j}(n),n}(g)$ and $T_{\mathbf{k}(n),n}(g)$ of the statistic of interest $T_n(g)$, when defined on size-$n \equiv n_1\times n_2$ regions that are distantly separated, should be normal and asymptotically independent as $n \to \infty$; in particular, the spatial regions $\mathbf{j}(n)+D(n_1\times n_2)$ and $\mathbf{k}(n)+D(n_1\times n_2)$ defining $T_{\mathbf{j}(n),n}(g)$ and $T_{\mathbf{k}(n),n}(g)$ are indeed distantly separated in the sense that the $i$th component of the difference $\mathbf{j}(n)-\mathbf{k}(n)$ diverges faster than the sample size $n_i$ in each direction, $i=1,2$. Assumption 4 is analogous to Assumption 3 in spirit but generally weaker.
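As a quick numerical illustration of the content of Assumption 4: for a Gaussian white-noise field, the periodogram ordinate at a fixed nonzero Fourier frequency is (exactly, in this special case) the constant spectral density $f = \sigma^2/(2\pi)^2$ times a standard exponential variable. The sketch below checks this empirically; the periodogram normalization $|\mathrm{DFT}|^2/((2\pi)^2 n)$ is our assumed convention, and the helper name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def periodogram_ordinate(Z, j1, j2):
    # 2-D periodogram I_n(lambda_j) = |DFT(Z)[j1, j2]|^2 / ((2*pi)^2 * n),
    # evaluated at the Fourier index (j1, j2)
    n = Z.size
    return np.abs(np.fft.fft2(Z)[j1, j2]) ** 2 / ((2 * np.pi) ** 2 * n)

# For Gaussian white noise the spectral density is flat, f = 1 / (2*pi)^2,
# and the ordinate at a fixed nonzero frequency is f times an Exp(1) draw:
vals = np.array([periodogram_ordinate(rng.standard_normal((30, 30)), 3, 7)
                 for _ in range(2000)])
f_true = 1.0 / (2 * np.pi) ** 2
# Exponential scaling implies mean(vals) ~ f_true and var(vals) ~ f_true**2.
```

For dependent fields the exponential limit holds only asymptotically, which is exactly what Assumption 4 formalizes.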
Assumption 4 says that spatial periodograms computed from distant regions should be asymptotically independent and exponentially distributed up to scaling by the spectral density $f$; this is also a mild condition related to weak spatial dependence. Assumptions 1-4 are intended to hold for many spatial processes, without requiring more specific characterizations of spatial dependence in terms of linearity or forms of mixing (cf. (Lahiri, 2003; Yu, Kaiser and Nordman, 2023)). We can next state a formal result on the consistency of subsampling variance estimators for spatial
spectral mean statistics.

Theorem 3.1. Suppose Assumptions 1-4 hold and the subsample size $b_n \equiv n_1^{(b)} \times n_2^{(b)}$ satisfies $1/n_i^{(b)} + n_i^{-1} n_i^{(b)} \to 0$ for $i=1,2$ as $n \equiv n_1\times n_2 \to \infty$. Then the spatial subsampling estimators of the variance components are consistent:
\[
\widehat\sigma^2_{1,n} \xrightarrow{p} \sigma_1^2, \quad \widehat\sigma^2_{2,n} \xrightarrow{p} \sigma_2^2, \quad \text{and} \quad \widehat\sigma^2_n \xrightarrow{p} \sigma^2 \equiv \sigma_1^2+\sigma_2^2 \quad \text{as } n\to\infty.
\]

The theorem above provides a device to modify and improve spatial bootstraps for approximating spectral mean statistics, particularly when such bootstraps fail to appropriately reflect the spreads of these statistics. This leads to the spatial hybrid bootstrap procedure proposed in the next section.

4. Spatial Hybrid Frequency Domain Bootstrap (HFDB)

We may now describe our main resampling approach for approximating the distribution of spatial spectral mean statistics. Recall that, as explained in the Introduction, (Ng, Yau and Chen, 2021) proposed a bootstrap for spatial data which is generally invalid for inference about spectral averages and requires modification to capture the complicated variance of spectral averages. For reference, we term their bootstrap method a Frequency Domain Wild Bootstrap (FDWB). Recently, for time series data, (Meyer, Paparoditis and Kreiss, 2020) and (Yu, Kaiser and Nordman, 2023) described a hybrid periodogram bootstrap (HPB) as the most advanced resampling scheme for spectral mean inference with time series in terms of broad validity and performance. Our hybrid resampling approach for spatial data, termed the Hybrid Frequency Domain Bootstrap (HFDB), extends both the FDWB and HPB methods. That is, the spatial HFDB aims to combine the spatial bootstrap (i.e., FDWB) with spatial subsampling (Sec. 3.1) in order to correct scaling issues in the bootstrap for spatial data.
This also produces a spatial analog of the HPB from time series, as explained further in the following.

To set distributional approximations for spatial spectral mean statistics, both the HFDB and FDWB approaches start with a common step: mimicking the distribution of spectral statistics by independently resampling, or re-creating, periodogram values based on an estimator $\widehat f_n$ of the spectral density $f$. To this end, let $\widehat f_n$ denote a consistent estimator of the spectral density $f$ across the spatial discrete Fourier frequencies, i.e.,
\[
\max_{j\in J_n} \bigl|\widehat f_n(\lambda_{j,n}) - f(\lambda_{j,n})\bigr| = o_p(1) \quad \text{as } n\to\infty. \tag{6}
\]
Note that all FDB approaches involve spectral density estimation; this is true for time series (cf. (Dahlhaus and Janas, 1996; Jentsch and Kreiss, 2010; Kreiss and Paparoditis, 2003; Meyer, Paparoditis and Kreiss, 2020)) as well as for the FDWB with spatial data as in (Ng, Yau and Chen, 2021). We then define an initial bootstrap re-creation of the target spectral mean quantity $T_n(g)$ from (2) as
\[
T^*_{\mathrm{FDWB},n}(g) \equiv n^{1/2}\Bigl\{(2\pi)^2 n^{-1} \sum_{j\in J_n} g(\lambda_{j,n})\, \widehat f_n(\lambda_{j,n})\,(E^*_j - 1)\Bigr\} \tag{7}
\]
by drawing $E^*_j$ as i.i.d. standard exponential random variables for indices $j \equiv (j_1,j_2) \in J_n$ with $j_1 > 0$, or with $j_1 = 0$ and $j_2 > 0$, and then setting $E^*_{-j} \equiv E^*_j$ for the remaining indices $j \in J_n$ (i.e., where $j_1 < 0$, or where $j_1 = 0$ and $j_2 < 0$). Note that the bootstrap approximation in (7) corresponds to the FDWB of (Ng, Yau and Chen, 2021) for spatial data and also represents a basic step in any FDB for time series (cf. (Meyer, Paparoditis and Kreiss, 2020)). This bootstrap
form aims to produce bootstrap versions $\{E^*_j \widehat f_n(\lambda_{j,n})\}$ of the periodogram ordinates $\{I_n(\lambda_{j,n})\}$ (indexed over $j \in J_n$), by treating the latter as approximately independent exponential variables with respective means $\{f(\lambda_{j,n})\}$.

However, the problem with this bootstrap re-creation is that the dependence among the periodogram ordinates $I_n(\lambda_{j,n})$, $j \in J_n$, is generally not ignorable, so the bootstrap quantity $T^*_{\mathrm{FDWB},n}(g)$ cannot correctly capture the true variance $\sigma^2 \equiv \sigma_1^2+\sigma_2^2$ of $T_n(g)$ in (2). Essentially, by independently resampling or re-constructing spatial periodogram values as $\{E^*_j \widehat f_n(\lambda_{j,n})\}$, the resulting FDWB in (7) estimates the second variance component $\sigma_2^2$ to be zero, which may not hold in general for the underlying spatial process (i.e., unless the process is assumed to be Gaussian).

To overcome this shortcoming of the FDWB, a scaling adjustment based on spatial subsampling can be made to define a hybrid FDB (HFDB) rendition of $T_n(g)$ as
\[
T^*_{\mathrm{HFDB},n}(g) \equiv \bigl[\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\} + \widehat\sigma^2_{2,n}\bigr]^{1/2}\, \frac{T^*_{\mathrm{FDWB},n}(g)}{[\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\}]^{1/2}}, \tag{8}
\]
where we first re-scale (7) to have unit variance at the bootstrap level using the bootstrap variance of $T^*_{\mathrm{FDWB},n}(g)$, given by
\[
\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\} = n^{-1}(4\pi^2)^2 \sum_{j\in J_n} g(\lambda_{j,n})\bigl\{g(\lambda_{j,n})+g(\lambda_{-j,n})\bigr\}\,\widehat f^2_n(\lambda_{j,n}),
\]
and then introduce the correction $\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\} + \widehat\sigma^2_{2,n}$ to estimate the correct target variance $\sigma^2 \equiv \sigma_1^2+\sigma_2^2$. This scaling correction combines the original bootstrap variance $\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\}$, as an approximation of the first variance component $\sigma_1^2$, with the subsampling variance estimator $\widehat\sigma^2_{2,n} = \widehat\sigma^2_n - \widehat\sigma^2_{1,n}$ from (5), which approximates the second component $\sigma_2^2$.
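The FDWB draw in (7) and the HFDB re-scaling in (8) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frequency bookkeeping assumes the ordinates are passed already arranged in $\pm$ pairs, and the variance inputs (`var_star`, `sigma2_2_hat`) are supplied externally (analytically, `var_star` is the displayed sum, and `sigma2_2_hat` comes from (5)).

```python
import numpy as np

def fdwb_draws(g_vals, f_hat, n, num_boot, rng):
    # FDWB replicates (7): T* = n^{1/2} (2*pi)^2 n^{-1} sum_j g * f_hat * (E*_j - 1),
    # with the symmetry E*_{-j} = E*_j enforced by drawing one exponential per pair.
    # Assumes g_vals/f_hat are ordered so that index m-1-i is the negative of index i.
    m = len(g_vals)
    half = m // 2
    w = (2 * np.pi) ** 2 * g_vals * f_hat / np.sqrt(n)
    draws = np.empty(num_boot)
    for r in range(num_boot):
        e_half = rng.exponential(size=half)
        e = np.concatenate([e_half, e_half[::-1]])  # E*_{-j} = E*_j
        draws[r] = np.sum(w * (e - 1.0))
    return draws

def hfdb_draws(t_fdwb, var_star, sigma2_2_hat):
    # HFDB re-scaling (8): inflate the FDWB spread from Var_* to Var_* + sigma2_2_hat
    scale = np.sqrt((var_star + sigma2_2_hat) / var_star)
    return scale * t_fdwb
```

Since (8) is a deterministic re-scaling of the FDWB draws, the HFDB sample variance is exactly $(\mathrm{Var}_* + \widehat\sigma^2_{2,n})/\mathrm{Var}_*$ times the FDWB sample variance.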
In this manner, the hybrid bootstrap approximation $T^*_{\mathrm{HFDB},n}(g)$ in (8) can capture the correct spread, in addition to the shape, of the target sampling distribution of $T_n(g)$ for spectral inference. In the HFDB method, spatial subsampling plays the specific role of estimating the variance component $\sigma_2^2$ that would otherwise be missed by the FDWB, while the bootstrap variance $\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\}$ from the FDWB continues to estimate the first component $\sigma_1^2$, as implicit in (7). The next result provides a broad theoretical justification for the HFDB method.

Theorem 4.1. Suppose the assumptions of Theorem 3.1 hold along with (6). Then, for the HFDB version $T^*_{\mathrm{HFDB},n}(g)$ of $T_n(g)$:

(a) the HFDB spread is consistent for the limit variance $\sigma^2 \equiv \sigma_1^2+\sigma_2^2$ of $T_n(g)$:
\[
\mathrm{Var}_*\{T^*_{\mathrm{HFDB},n}(g)\} = \mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\} + \widehat\sigma^2_{2,n} \xrightarrow{p} \sigma^2 \quad \text{as } n\to\infty;
\]

(b) the HFDB approximation is consistent for the target distribution of $T_n(g)$:
\[
\sup_{x\in\mathbb{R}} \bigl| P_*\bigl(T^*_{\mathrm{HFDB},n}(g) \le x\bigr) - P\bigl(T_n(g) \le x\bigr) \bigr| \xrightarrow{p} 0 \quad \text{as } n\to\infty,
\]
where $P_*(\cdot)$ denotes the bootstrap probability induced by resampling.

Because the HFDB approach combines the bootstrap with spatial subsampling, Theorem 4.1 shows that the HFDB achieves consistent distributional approximations of spectral mean statistics under less stringent moment assumptions than previous spatial bootstraps (cf. (Ng, Yau and Chen, 2021)). Furthermore, the HFDB methodology presented above applies to both Gaussian and non-Gaussian processes, while the original FDWB is valid only for Gaussian spatial data. Numerical results in Section 5 further demonstrate that the HFDB can perform well, including in cases where previous bootstrap approximations (FDWB) can be seen to fail.

Remark 1.
As a further observation, we can decompose the target quantity $T_n(g)$ as
\[
T_n(g) \equiv n^{1/2}\{\widehat M_n(g) - M(g)\} = n^{1/2}\{\widehat M_n(g) - E(\widehat M_n(g))\} + n^{1/2}\{E(\widehat M_n(g)) - M(g)\},
\]
where the first part of the decomposition is a stochastic quantity with mean zero, while the second part can be treated as a non-stochastic
bias. While this bias decreases to zero with increasing spatial sample size $n$, the bias term can potentially impact approximations at small sample sizes. Therefore, in small samples, it is also possible to use spatial subsampling to estimate this bias part as
\[
\widehat{\mathrm{Bias}}_{\mathrm{sub}} = K^{-1}\sum_{\ell=1}^{K} b_n^{1/2}\bigl(\widehat M^{(\ell)}_{\mathrm{sub}}(g) - \widehat M_n(g)\bigr).
\]
In that case, we can use $T^*_{\mathrm{HFDB},n} + \widehat{\mathrm{Bias}}_{\mathrm{sub}}$ as a bootstrap approximation to $T_n(g)$, which uses subsampling to adjust the bootstrap approximations $T^*_{\mathrm{FDWB},n}(g)$ from (7) for both variance, as in (8), and bias.

Remark 2. For selecting the spatial block size $b_n$ in practice, one may use the minimum volatility method suggested by (Politis, Romano and Wolf, 1999). The idea is that bootstrap confidence intervals, over an "appropriate" range of block sizes, should remain stable when computed as a function of the block size $b_n$. With this approach, one computes bootstrap intervals for a number of block sizes and looks for a region where the intervals do not change substantially according to some criterion (e.g., length). One example is a procedure based on minimizing a running standard deviation, as suggested by (Politis, Romano and Wolf, 1999). Further, one can also examine a range of block sizes with blocks $D(n_1^{(b)} \times n_2^{(b)})$ defined by scaling $n_i^{(b)} \approx C n^{1/4} = C(n_1 n_2)^{1/4}$ for some $C > 0$, which can be an optimal order for the spatial subsample size, as suggested by (Nordman and Lahiri, 2004).

5. Numerical Studies of Accuracy

In this section, we demonstrate the finite-sample performance of the HFDB approximation for spatial spectral statistics. We compare it to the FDWB approximation of (Ng, Yau and Chen, 2021) for spectral mean inference (cf. Section 4) and discuss the relative usefulness of the proposed HFDB method in applications.
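Remark 2's minimum-volatility rule can be sketched as a small routine: given bootstrap interval lengths computed over a grid of candidate block sizes, pick the block size where the lengths are most stable under a running standard deviation. The function name and window choice below are illustrative, not a prescription from the text.

```python
import numpy as np

def min_volatility_block(block_sizes, ci_lengths, window=3):
    # For each interior candidate, compute the standard deviation of the
    # bootstrap CI lengths over a window of neighboring block sizes, and
    # return the block size minimizing this running volatility
    # (cf. Politis, Romano and Wolf, 1999).
    half = window // 2
    vol = np.full(len(block_sizes), np.inf)
    for i in range(half, len(block_sizes) - half):
        vol[i] = np.std(ci_lengths[i - half:i + half + 1])
    return block_sizes[int(np.argmin(vol))]
```

A natural candidate grid follows the $n^{1/4}$ scaling noted above, e.g., block side lengths near $C(n_1 n_2)^{1/4}$ for a few values of $C$.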
Our theoretical results in Sections 3 and 4 indicate that the HFDB should perform well regardless of whether the underlying process is Gaussian. That is, the HFDB is intended to remain valid even when the variance component $\sigma_2^2$ from (2) is significantly greater than zero, a condition that can be encountered in practice for non-Gaussian data. In cases where $\sigma_2^2$ is exactly zero (as occurs for a Gaussian process) or approximately zero, the HFDB approach is anticipated to perform similarly to the FDWB. In the following, we present numerical results for scenarios including a Gaussian process as well as a non-Gaussian process with $\sigma_2^2$ differing from zero.

To assess the performance of the HFDB and FDWB approximations, we consider the coverage accuracy of 90% two-tailed confidence intervals for the covariance parameter $r(\mathbf{h})$ with $\mathbf{h} = (1,0)^T$, defined as
\[
\bigl( \widehat r(\mathbf{h}) - \widehat q_{0.95}\, n^{-1/2},\; \widehat r(\mathbf{h}) - \widehat q_{0.05}\, n^{-1/2} \bigr),
\]
based on the sample covariance estimator $\widehat r(\mathbf{h})$ as a case of a spectral mean statistic. The confidence intervals are constructed using bootstrap quantiles $\widehat q_{0.05}$ and $\widehat q_{0.95}$ approximated from 500 Monte Carlo draws for each simulated spatial data set. To estimate the spectral density $f$ in (7) for bootstrap purposes, we used a kernel-based estimator, as detailed in (Crujeiras and Fernandez-Casal, 2010). For each sample size and type of spatial process considered, we performed 1000 Monte
Carlo simulations to evaluate coverage and used 500 bootstrap draws per simulated dataset to approximate bootstrap distributions. To aid understanding of the coverage accuracy of both the FDWB and HFDB methods, we separate the numerical findings below into Gaussian and non-Gaussian processes.

5.1. Results for Gaussian processes

For this study, we considered weakly stationary spatial processes with a Matérn covariance function. The parametric model for the spectral density of stationary processes in the Matérn class (cf. (Stein, 1999)) is given by
\[
f(\lambda) = \phi\,(\alpha^2 + \|\lambda\|^2)^{-\nu-1},
\]
where $\phi > 0$ is a positive constant, $\alpha$ is the inverse of the range parameter, $\nu$ is the smoothness parameter, and $\|\cdot\|$ denotes the $\ell_2$ norm. For the simulations, we used $\nu = 1$, as this is often a practical choice for many climate applications. The range parameter $\alpha$ and the constant $\phi$ were controlled to ensure that the process variance is very close to 1, thereby avoiding any nugget variance. This setup allows us to simulate realistic spatial processes and accurately evaluate the performance of our proposed methods. We generated rectangular spatial datasets of sizes $30\times30$ ($n=900$), $50\times50$ ($n=2500$), and $70\times70$ ($n=4900$).

Figure 1 presents the coverage accuracy of 90% two-tailed confidence intervals for the covariance parameter. From Figure 1, we observe that the HFDB and FDWB methods perform similarly for Gaussian processes ($\sigma_2^2 = 0$), as perhaps expected. This similarity is anticipated because the HFDB involves a scaling correction to the FDWB that is specifically intended to matter when $\sigma_2^2$ in (2) is non-zero. Moreover, as the overall spatial sample size $n$ increases, the coverage accuracy improves across the different range parameters in Figure 1, indicating that both the HFDB and FDWB can be effective for Gaussian processes.

Figure 1.
Coverages of 90% intervals for the covariance parameter $r(\mathbf{h})$, $\mathbf{h}=(1,0)^T$, based on either the non-corrected FDWB version ($T^*_{\mathrm{FDWB},n}(g)$, blue line) or the corrected HFDB version ($T^*_{\mathrm{HFDB},n}(g)$, green line), for different subsample sizes $b_n$, range parameters, and sample sizes $n$.

5.2. Results for non-Gaussian processes

In the non-Gaussian case, we first consider a separable spatial covariance structure by defining a spatial process indexed in the plane as the product of two independent processes indexed in one dimension. This simplification facilitates analysis, providing a clearer understanding of how existing bootstrap methods extend from temporal to spatial settings, particularly with respect to higher-order cumulants and spectral density behavior. Consider two independent weakly stationary time series $\{X_i\}$ and $\{Y_j\}$; we generate the observation $Z(\mathbf{s}_{i,j})$ at each spatial location $\mathbf{s}_{i,j} = (i,j)$ on a grid $i,j = 1,\ldots,\sqrt{n}$ (for $\sqrt{n} = 30, 50, 70$) using $Z(\mathbf{s}_{i,j}) = X_i \cdot Y_j$. In this study, we used the processes $X_t = 0.2 X_{t-1} + \epsilon_t$ and $Y_t = -0.7\delta_{t-1} + \delta_t$ based on i.i.d. innovations $\{\epsilon_t\}$ and $\{\delta_t\}$.

Figure 2 presents coverage results under two distributional scenarios for these innovations. The top row of Figure 2 takes the innovations $\epsilon_t$ as Gaussian, while the $\delta_t$ follow a standard exponential distribution; the bottom row takes both $\epsilon_t$ and $\delta_t$ as standard exponential variables. Using the described model, the columns of Figure 2 give coverages over sample sizes $30\times30$ ($n=900$), $50\times50$ ($n=2500$), and $70\times70$ ($n=4900$). We note that in Figure 2 the coverage results are based on the bias-corrected HFDB estimator, as described in Remark 1. From this figure, it is evident that the HFDB method significantly outperforms the FDWB estimator in terms of coverage accuracy across all sample sizes,
achieving the best coverage across block sizes and showing further improvement as the sample size $n$ increases. The bias correction also helps achieve improved coverage for the HFDB at smaller sample sizes. In contrast, the performance of the FDWB deteriorates with increasing sample size.

Figure 2. Coverages of 90% intervals for the covariance parameter $r(\mathbf{h})$, $\mathbf{h}=(1,0)^T$, based on either the non-corrected ($T^*_{\mathrm{FDWB},n}(g)$, blue line) or corrected ($T^*_{\mathrm{HFDB},n}(g)$, green line) HFDB versions, for different subsample sizes $b_n$, innovations, and sample sizes $n$.

This issue with the FDWB arises from an incorrect estimation of the limiting variance $\sigma^2$ (or $\sigma_2^2$ specifically) in (2), so that this bootstrap leads to incorrect intervals, exhibiting extreme undercoverage in this case involving non-Gaussian processes with non-zero values of $\sigma_2^2$.

We next consider another illustration of coverage accuracy using a non-Gaussian process, which serves to illustrate the behavior of spatial processes obtained by a nonlinear transformation of a Gaussian process. That is, we first generated a Gaussian process using a Matérn covariance function (cf. (Stein, 1999)), as described earlier in Section 5.1, and applied a quartic transformation to the data to generate a non-Gaussian process with a positive variance component $\sigma_2^2$. The quartic transformation ensures the resulting process deviates from normality while retaining structured dependence through the underlying Matérn covariance function, allowing for a controlled study of higher-order cumulants (i.e., those impacting $\sigma_2^2$) and their effect on frequency domain bootstrap methods.

Figure 3.
Coverages of 90% intervals for the covariance parameter $r(\mathbf{h})$, $\mathbf{h}=(1,0)^T$, based on either the non-corrected ($T^*_{\mathrm{FDWB},n}(g)$, blue line) or corrected ($T^*_{\mathrm{HFDB},n}(g)$, green line) HFDB versions, for different subsample sizes $b_n$ and sample sizes $n$.

From Figure 3, we note that the FDWB performs very poorly when the underlying data are non-Gaussian. In contrast, the HFDB consistently outperforms the FDWB across all sample sizes and block sizes. This superior performance of the HFDB over the FDWB is again due to the fact that the FDWB can fail to capture important variance components in the distribution of spectral mean statistics. By incorporating subsample variance estimation in (8), the HFDB achieves significantly better results than the FDWB and attains the desired confidence level as the sample size increases. For non-Gaussian processes with $\sigma_2^2 \approx 0$, our simulation study shows that the performance of the two methods is comparable, as may be expected in this case. To save space, the detailed results for these cases are presented in the Supplementary Material.

6. Application to Calibrating Spatial Isotropy Tests

Here we examine the performance of bootstrap methods when applied to tests of spatial isotropy (i.e., tests of whether the covariance function of the stationary spatial process is rotationally invariant). To construct a test, we follow the setup of (Guan, Sherman and Calvin, 2004), which has also been adopted in (Ng, Yau and Chen, 2021). In particular, we consider testing the hypothesis
\[
H_0:\ 2\gamma(\mathbf{h}_i) = 2\gamma(\mathbf{h}_j), \quad \text{for all } \mathbf{h}_i, \mathbf{h}_j \in \Lambda,\ \mathbf{h}_i \ne \mathbf{h}_j,\ \text{and } \|\mathbf{h}_i\| = \|\mathbf{h}_j\|,
\]
where $\Lambda \equiv \{\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_m\}$ is a prespecified set of lags and
ฤ„(h) โ‰กฤ(ฤ–(0)โˆ’ฤ–(h))2is the variogram at lag h; also above ||h|| โ‰กโˆš hฤh. This null hypothesis can also be written in terms of a spectral mean assessment of whether ฤ‰(ฤ‡)=0 holds for various spectral mean parametered de๏ฌned by ฤ‡(ฤ) โ‰ก {2 cos(hฤ ฤ ฤ) โˆ’2 cos(hฤ ฤŸฤ)}. Lettingฤƒโ‰ก (2ฤ„(h1),...,2ฤ„(hฤฃ))denote a vector of variograms in ฮ›, it holds that, under ฤ„0, there exists a full row rank matrix รฝsuch thatรฝฤƒ=0. For example, if ฮ› = {(1,0),(0,1)}, thenฤƒ=(2ฤ„(1,0),2ฤ„(0,1)), and we may set รฝ=/bracketleftbig1โˆ’1/bracketrightbig . Exploiting this property, (Guan, Sherman and Calvin ,2004 ) proposed a test statistic ฤฤฤคโ‰กฤคร— (รฝห†ฤƒฤค)ฤ(รฝห†ฮฃฤŽรฝฤ)โˆ’1(รฝห†ฤƒฤค), where ห†ฤƒฤคis the vector of sample variogram estimates at lag hโˆˆฮ›,ฤคis number of locations, and ห†ฮฃฤŽis an estimator of the covariance matrix ฮฃฤŽof sample variograms. Under ฤ„0, (Guan, Sherman and Calvin , 2004 ) derived that ฤฤฤคฤ€โ†’ฤ†2 ฤกasฤคโ†’โˆž , whereฤกis the row rank of รฝ. Due to the slow convergence of the test statistic, ( Guan, Sherman and Calvin ,2004 ) further considered a subsampling method to determine the p-value of the test. However, since this test statistic involves spectral mean statistics in the form of variogram (or related covariance) estimators, we may study the proposed bootstrap approach in HFDB for approximating the p-values of the test. For purposes of simulating spatial data to examine size control and power in testing, we use a mean zero Gaussian process with a spherical covariance function, ฤ€(h) โ‰ก๏ฃฑ๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃณฤ‚2/parenleftbigg 1โˆ’3ฤจ 2ฤ+ฤจ3 2ฤ3/parenrightbigg +ฤI{h=0} if 0fฤจfฤ, ฤI{h=0} otherwise,(9) whereฤ‚2is the partial sill parameter, ฤis the range parameter, ฤis the nugget effect, and ฤจโ‰กโˆš hฤรพh is a distance related to a matrix รพ, described next, as geometric anisotropy transformation. 
Given an anisotropy angle $\theta_a$ and anisotropy ratio $\rho$, define the rotation matrix $R_\theta$ and shrinking matrix $S$ as
\[
R_\theta \equiv \begin{bmatrix} \cos(\theta_a) & \sin(\theta_a) \\ -\sin(\theta_a) & \cos(\theta_a) \end{bmatrix} \quad \text{and} \quad S \equiv \begin{bmatrix} 1 & 0 \\ 0 & \rho \end{bmatrix};
\]
then $B \equiv R_\theta' S' S R_\theta$ is a $2\times2$ positive definite matrix representing a geometric anisotropy transformation. A random field with spherical covariance function as in (9) is generally anisotropic unless $\rho = 1$ holds, which corresponds to the isotropic case. Also, if $\theta_a = 0$ holds, then the main anisotropic axes are aligned with the standard coordinate axes $(x,y)$ in the plane, making $\theta_a = 0$ a helpful choice for interpretation. In particular, we consider the model parameters $(c, \sigma_s^2, R, \theta_a, \rho) = (0, 1, 5, 0, \rho)$ for different anisotropy ratios $\rho$. Also, we set $\Lambda = \{\mathbf{h}_1, \mathbf{h}_2\}$ for $\mathbf{h}_1 \equiv (1,0)$ and $\mathbf{h}_2 \equiv (0,1)$, along with $G = (2\gamma(\mathbf{h}_1), 2\gamma(\mathbf{h}_2))^T$ and $A = [\,1\ \ {-1}\,]$, so that the test statistic may be written as $TS_n \equiv n \times [2\widehat\gamma(\mathbf{h}_1) - 2\widehat\gamma(\mathbf{h}_2)]^2 (A\widehat\Sigma A^T)^{-1}$, where $\widehat\gamma(\mathbf{h}_i)$, $i=1,2$, are sample variograms and $\widehat\Sigma$ may be determined by subsampling or some other device. However, since $(A\widehat\Sigma A^T)^{-1}$ is only a real-valued factor, we may consider the related, but simpler, test statistic
\[
TS_n = n \times \bigl(2\widehat\gamma(1,0) - 2\widehat\gamma(0,1)\bigr)^2 = n\,\widehat M^2_n(g)
\]
to assess the null hypothesis of isotropy, which involves examining the square $\widehat M^2_n(g)$ of a spectral mean statistic $\widehat M_n(g)$ based on $g(\lambda) \equiv 2\cos(\mathbf{h}_1^T\lambda) - 2\cos(\mathbf{h}_2^T\lambda)$, which should be near zero under isotropy. To consider different resampling approximations for calibrating this test statistic, we examine subsampling as in (Guan, Sherman and Calvin, 2004), the FDWB from (Ng, Yau and Chen, 2021), and the proposed HFDB
here. Both FDWB and HFDB versions of this test statistic are computed as $[T^*_{\mathrm{FDWB},n}(g)]^2$ and $[T^*_{\mathrm{HFDB},n}(g)]^2$, respectively, upon applying $g(\lambda) \equiv 2\cos(\mathbf{h}_1^T\lambda) - 2\cos(\mathbf{h}_2^T\lambda)$ in (7) and (8); note that the bootstrap statistics are then centered to have mean zero, which mimics how $T_n(g) \equiv \sqrt{n}\,[\widehat M_n(g) - M(g)]$ behaves under the null hypothesis with $M(g) = 0$.

In the simulation study, we used the spherical covariance function in (9) to generate a mean-zero Gaussian spatial process, following (Guan, Sherman and Calvin, 2004) and (Ng, Yau and Chen, 2021). To construct a mean-zero non-Gaussian spatial process, we use the Matérn covariance with parameters $(\alpha, \nu) = (1/3, 1)$, as described in Section 5.1. Specifically, we generate the field by multiplying the Cholesky factor of the covariance matrix with a vector of standard exponential random variables shifted to have mean zero, thereby introducing non-Gaussianity while preserving the desired spatial dependence structure.

Tables 1-2 display observed rejection probabilities (based on 1000 simulations) for the Gaussian and non-Gaussian processes, respectively, for sample sizes of $50\times50$ at a nominal testing level of 10%; similar results for a $30\times30$ sample region are presented in the Supplementary Material to save space. We used block sizes of $9\times9$ for the Gaussian process and $5\times5$ for the non-Gaussian processes to account for differences in spatial dependence and marginal behavior. The larger block size captures the smoother structure of the Gaussian data, while the smaller size helps preserve features such as skewness and heavy tails. Table 1 demonstrates that all methods perform well when the underlying distribution is Gaussian, and the FDWB and HFDB perform similarly. In particular, both maintain size close to the nominal level of 10% when the null hypothesis of isotropy is true (i.e., $\rho = 1$), and both show increasing power as $\rho$ deviates from 1.
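The simplified isotropy statistic $TS_n = n\,(2\widehat\gamma(1,0) - 2\widehat\gamma(0,1))^2$ can be sketched directly from a gridded field; the moment-based variogram estimator below (averaging squared differences over all grid pairs at a given lag) is a standard choice and an assumption on our part.

```python
import numpy as np

def variogram_2(Z, h1, h2):
    # Sample variogram 2*gamma_hat(h): average of (Z(s) - Z(s + h))^2
    # over all grid pairs separated by the lag h = (h1, h2), h1, h2 >= 0.
    n1, n2 = Z.shape
    A = Z[0:n1 - h1, 0:n2 - h2]
    B = Z[h1:n1, h2:n2]
    return np.mean((A - B) ** 2)

def isotropy_stat(Z):
    # TS_n = n * (2*gamma_hat(1,0) - 2*gamma_hat(0,1))^2, near zero under isotropy
    n = Z.size
    return n * (variogram_2(Z, 1, 0) - variogram_2(Z, 0, 1)) ** 2
```

A p-value is then obtained by comparing $TS_n$ to the squared, centered bootstrap draws $[T^*_{\cdot,n}(g)]^2$ described above.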
However, under non-Gaussianity, the FDWB fails to maintain size under isotropy, to the extent that it is unsuitable for the power comparison in the non-Gaussian scenarios of Table 2. In contrast, both the HFDB and subsampling remain effective under non-Gaussianity, with the HFDB showing some advantage over subsampling as the anisotropy ratio increases.

Table 1. Rejection rates for Gaussian data.

ρ     Subsampling   FDWB    HFDB
1     0.084         0.103   0.103
1.1   0.135         0.223   0.223
1.2   0.424         0.537   0.537
1.3   0.787         0.880   0.880
1.4   0.912         0.983   0.983
1.5   0.996         1.000   1.000

Table 2. Rejection rates for non-Gaussian data.*

ρ     Subsampling   HFDB
1     0.078         0.100
1.2   0.112         0.170
1.3   0.391         0.497
1.4   0.693         0.770
1.5   0.907         0.972
1.6   0.993         1.000

*For non-Gaussian data, the FDWB fails to maintain size at the 10% level, with a rejection rate of 0.243 for ρ = 1; this bootstrap is inappropriate for non-Gaussian data and is therefore excluded from Table 2.

7. Concluding Remarks

The proposed spatial resampling method, the Hybrid Frequency Domain Bootstrap (HFDB), combines two resampling approaches, subsampling and the bootstrap, in order to validly approximate the distribution of spatial spectral mean statistics. Spectral means are fundamental spatial parameters, yet spectral statistics have complicated distributions owing to complex variances that are difficult to estimate. Previous bootstrap methods, such
as the FDWB proposed by (Ng, Yau and Chen, 2021), rely on specific distributional assumptions and can fail to provide valid distributional approximations for non-Gaussian spatial data, due to issues in correctly capturing the variances of spectral statistics for such data. In contrast, the HFDB overcomes these limitations by incorporating a scaling adjustment from spatial subsampling to correct the spread of bootstrap approximations in the frequency domain. This leads to more accurate and general estimation of the sampling distribution of spatial spectral mean statistics.

The numerical results in Sections 5-6 illustrate the finite-sample performance of the HFDB method for testing and interval estimation, while the theoretical results in Sections 3-4 provide formal guarantees of consistency for HFDB distributional approximations. Importantly, our results hold under mild spatial conditions, which allows a broad scope for applying the HFDB to spectral problems of assessing spatial covariance structure, with both Gaussian and non-Gaussian spatial processes. Main results, along with further technical results establishing subsampling and bootstrap properties for spatial data, are provided in the Appendix.

Appendix

For notational simplicity, $b$ will be used instead of $b_n$ in the following proofs. All proofs are carried out within the framework outlined in Section 2. To prove Theorem 3.1 and Theorem 4.1, we use the following lemmas, which summarize all the subsampling results for spectral means used repeatedly in later proofs. Define $I^*_n(\lambda_{j,n}) \coloneqq \widehat f_n(\lambda_{j,n})\, E^*_j$, where the $E^*_j$ are i.i.d. standard exponential variables drawn using the exponential resampling mechanism. We begin by presenting the lemmas and proposition necessary for proving the theorems; the proofs of the lemmas are provided last.

Lemma A.1.
Suppose Assumptions 1-3 hold for a generic function $g : \Pi^2 \to \mathbb{R}$ of bounded variation with $T_n(g) \equiv n^{1/2}\{\widehat M_n(g) - M(g)\} \xrightarrow{d} N(0, \sigma^2)$, where $\widehat M_n(g)$, $M(g)$, and $\sigma^2$ are defined analogously as in Sections 2.2 and 2.3. Then, if the subsample size $b$ satisfies $1/n_i^{(b)} + n_i^{-1} n_i^{(b)} \to 0$ for $i=1,2$ as $n\to\infty$:

(i) $\sup_{x\in\mathbb{R}} \bigl| K^{-1}\sum_{\ell=1}^{K} I\{b^{1/2}(\widehat M^{(\ell)}_{\mathrm{sub}}(g) - M(g)) \le x\} - P(T_n(g) \le x) \bigr| \xrightarrow{p} 0$;

(ii) $\sup_{x\in\mathbb{R}} \bigl| K^{-1}\sum_{\ell=1}^{K} I\{b^{1/2}(\widehat M^{(\ell)}_{\mathrm{sub}}(g) - \widetilde M_n(g)) \le x\} - P(T_n(g) \le x) \bigr| \xrightarrow{p} 0$;

(iii) $b^{1/2}(\widetilde M_n(g) - M(g)) \xrightarrow{p} 0$;

(iv) $K^{-1}\sum_{\ell=1}^{K} b\,(\widehat M^{(\ell)}_{\mathrm{sub}}(g) - M(g))^2 \xrightarrow{p} \sigma^2(g)$;

(v) $K^{-1}\sum_{\ell=1}^{K} b\,(\widehat M^{(\ell)}_{\mathrm{sub}}(g) - \widetilde M_n(g))^2 \xrightarrow{p} \sigma^2(g)$,

where $\widehat M^{(\ell)}_{\mathrm{sub}}(g)$ and $\widetilde M_n(g)$ are defined analogously as in Section 3.1.

Lemma A.2. Suppose Assumptions 1-3 and 4 hold. Then, for a generic function $g : \Pi^2 \to \mathbb{R}$ of bounded variation, if the subsample size $b$ satisfies $1/n_i^{(b)} + n_i^{-1} n_i^{(b)} \to 0$ for $i=1,2$ as $n\to\infty$, then
\[
4\pi^2\, b^{-1} \sum_{j\in J_b} g(\lambda_{j,b})\, K^{-1} \sum_{\ell=1}^{K} \Bigl( I^{(\ell)}_{\mathrm{sub}}(\lambda_{j,b}) - \widetilde I_n(\lambda_{j,b}) \Bigr)^2 \xrightarrow{p} \int_{\Pi^2} g(\lambda)\, f^2(\lambda)\, d\lambda.
\]

Proposition A.3. For $T^*_{\mathrm{FDWB},n}(g)$ and $\sigma_1^2(g)$ as defined in Section 4 and Section 2.3, the following hold:

(i) $\mathrm{Var}_*\{T^*_{\mathrm{FDWB},n}(g)\} \xrightarrow{p} \sigma_1^2(g)$;

(ii) $\sup_{x\in\mathbb{R}} \bigl| P_*\bigl(T^*_{\mathrm{FDWB},n}(g) \le x\bigr) - \Phi(x/\sigma_1(g)) \bigr| = o_p(1)$,

where $P_*$ is the probability distribution induced by resampling, $\mathrm{Var}_*$ is the resampling variance, and $\Phi(\cdot)$ is the standard normal distribution function.

Proof of Proposition A.3. By the exponential resampling mechanism, we have $\mathrm{Cov}_*(E^*_j, E^*_k) = I\{j = \pm k\}$.
Hence, with $\widehat f_n$ being an even function, it follows that $\mathrm{Cov}_*\bigl(I^*_n(\lambda_{j,n}), I^*_n(\lambda_{k,n})\bigr) = \widehat f^2_n(\lambda_{j,n})\, I\{j = \pm k\}$ and
\begin{align*}
\mathrm{Var}_*\bigl(T^*_{\mathrm{FDWB},n}(g)\bigr) &= \frac{(4\pi^2)^2}{n} \sum_{j,k\in J_n} g(\lambda_{j,n})\, g(\lambda_{k,n})\, \mathrm{Cov}_*\bigl(I^*_n(\lambda_{j,n}), I^*_n(\lambda_{k,n})\bigr) \\
&= (2\pi)^2 \sum_{j\in J_n} \frac{(2\pi)^2}{n}\, g(\lambda_{j,n}) \bigl\{g(\lambda_{j,n}) + g(\lambda_{-j,n})\bigr\}\, f^2(\lambda_{j,n}) \\
&\quad + \frac{(4\pi^2)^2}{n} \sum_{j\in J_n} g(\lambda_{j,n}) \bigl\{g(\lambda_{j,n}) + g(\lambda_{-j,n})\bigr\} \bigl(\widehat f^2_n(\lambda_{j,n}) - f^2(\lambda_{j,n})\bigr) \\
&=: D_1 + D_2.
\end{align*}
Since $D_1$ is a Riemann sum, it immediately follows that $D_1 = \sigma_1^2(g) + o(1)$. Using $|J_n| = O(n)$ and Eq. (6), we have $D_2 = o_p(1)$, which completes the proof of A.3(i).

For notational ease, define $J^+_n \equiv \{ j \equiv (j_1,j_2) \in J_n : \text{either } j_1 > 0, \text{ or } j_1 = 0 \text{ and } j_2 > 0 \}$. Since $\{E^*_j, j \in J^+_n\}$ are i.i.d. standard exponentially distributed and $E^*_j \equiv E^*_{-j}$ for $j \in J_n$ with $-j \in J^+_n$,
https://arxiv.org/abs/2504.19337v1
ฤค, 18 thenฤโˆ— ฤ‚ฤ€ฤ“รพ,ฤค(ฤ‡)can be written as ฤโˆ— ฤ‚ฤ€ฤ“รพ,ฤค=ฤคโˆ’1/2/summationdisplay.1 jโˆˆJ+ฤคฤ’โˆ— j,ฤค, (10) where, ฤ’โˆ— j,ฤค=(2รฟ)2(ฤ‡(ฤj,ฤค)/hatwideฤœฤค(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/hatwideฤœฤค(โˆ’ฤj,ฤค))(ฤ‘โˆ— jโˆ’1),jโˆˆ J+ ฤค. Since,|J+ ฤค|โ†’โˆž , asฤคโ†’โˆž , a conditional version of Lyapunovโ€™s CLT can be applied to Eq. ( 10) (using the established convergence in probability of Var โˆ—(ฤโˆ— ฤ‚ฤ€ฤ“รพ,ฤค(ฤ‡))to a positive constant from (i)), i.e., we have to ๏ฌnd a ฤ„>0 such that ฤคโˆ’(1+ฤ„/2)/summationdisplay.1 jโˆˆJ+ฤคฤโˆ—/parenleftBig |ฤ’โˆ— j,ฤค|2+ฤ„/parenrightBig =ฤฅฤŒ(1). (11) Choosingฤ„=1, we have /summationdisplay.1 jโˆˆJ+ฤค|ฤ‡(ฤj,ฤค)/hatwideฤœฤค(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/hatwideฤœฤค(โˆ’ฤj,ฤค)| fฤคยทsup ฤโˆˆฮ |ฤ‡(ฤ)|ยทsup ฤโˆˆฮ |/hatwideฤœฤค(ฤ)|=ฤ‹ฤŒ(ฤค), since bothฤ‡andฤœ(due to absolutely summable autocovariance function) are bounded, and due to Eq. ( 6). Hence, it holds that /summationdisplay.1 jโˆˆJ+ฤค/barex/barex/barexฤ‡(ฤj,ฤค)/hatwideฤœฤค(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/hatwideฤœฤค(โˆ’ฤj,ฤค)/barex/barex/barex3 =ฤ‹ฤŒ(ฤค). (12) Since,ฤ‘โˆ— jare standard exponential we have ฤโˆ—(|ฤ‘โˆ— jโˆ’1|3)=12ฤ›โˆ’1โˆ’2=ฤ‡(Say)gives us /summationdisplay.1 jโˆˆJ+ฤคEโˆ—/parenleftBig |ฤ’โˆ— j,ฤค|2+ฤ„/parenrightBig =(4รฟ2)3ฤ‡/summationdisplay.1 jโˆˆJ+ฤค/barex/barex/barexฤ‡(ฤj,ฤค)/hatwideฤœฤค(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/hatwideฤœฤค(โˆ’ฤj,ฤค)/barex/barex/barex3 =ฤ‹ฤŒ(ฤค), due to Eq. ( 12). This shows Lyapunov condition holds for Eq. ( 11) whenฤ„=1. Together with A.3(i), the assertion in A.3(ii) follows. Proof of Theorem 3.1 Using Lemma A.2, we have /hatwideฤ‚2 1,ฤค(ฤ‡)ฤŒโ†’ฤ‚2 1(ฤ‡). Since,ฤ‡(ยท)is of bounded variation, so is ฤ‡(ฤ)(ฤ‡(ฤ)+ ฤ‡(โˆ’ฤ)). Using Slutskyโ€™s theorem and Lemma A.1(v) we have /hatwideฤ‚2 2,ฤค(ฤ‡)=/hatwideฤ‚2 ฤค(ฤ‡)โˆ’/hatwideฤ‚2 1,ฤค(ฤ‡)ฤŒโ†’ฤ‚2(ฤ‡)โˆ’ ฤ‚2 1(ฤ‡)=ฤ‚2 2(ฤ‡). Then the statement of Theorem 3.1follows from (v). 
Proof of Theorem 4.1. (a) Proposition A.3(i) and Slutsky's theorem give $\operatorname{Var}_*\{T^*_{\mathrm{FDWB},N}(g)\}+\widehat{\sigma}^2_{2,N}(g)\stackrel{p}{\to}\sigma^2(g)$, which completes the proof.

(b) Next we prove the consistency of the distribution of the HFDB estimator. Consider an arbitrary subsequence $\{N_m\}$ of $\{N\}$. It follows from Proposition A.3(i) and (ii) that there exists a further subsequence $\{N_{m_r}\}$ of $\{N_m\}$ such that
$$\sup_{x\in\mathbb{R}}\big|P_*(T^*_{\mathrm{FDWB},N_{m_r}}(g)\le x)-\Phi(x/\sigma_1(g))\big|\stackrel{a.s.}{\to}0;\qquad \operatorname{Var}_*\{T^*_{\mathrm{FDWB},N_{m_r}}(g)\}\stackrel{a.s.}{\to}\sigma^2_1(g).$$
It follows from Corollary 3.1(b) that $\operatorname{Var}_*\{T^*_{\mathrm{FDWB},N_{m_r}}(g)\}+\widehat{\sigma}^2_{2,N_{m_r}}(g)\stackrel{a.s.}{\to}\sigma^2(g)$ as $N_{m_r}\to\infty$. Now, along the subsequence $\{N_{m_r}\}$, we have $T^*_{\mathrm{FDWB},N_{m_r}}(g)\stackrel{d}{\to}\mathcal{N}(0,\sigma^2_1(g))$. Thus, Slutsky's theorem gives $T^*_{\mathrm{HFDB},N_{m_r}}(g)\stackrel{d}{\to}\mathcal{N}(0,\sigma^2(g))$. Therefore, we have
$$\sup_{x\in\mathbb{R}}\big|P_*(T^*_{\mathrm{HFDB},N_{m_r}}(g)\le x)-\Phi(x/\sigma(g))\big|\stackrel{a.s.}{\to}0\quad\text{as } N_{m_r}\to\infty.$$
Since the choice of the subsequence was arbitrary, we have
$$\sup_{x\in\mathbb{R}}\big|P_*(T^*_{\mathrm{HFDB},N}(g)\le x)-\Phi(x/\sigma(g))\big|\stackrel{p}{\to}0\quad\text{as } N\to\infty.$$
Now, using $\sup_{x\in\mathbb{R}}|P(T_N(g)\le x)-\Phi(x/\sigma(g))|=o(1)$ and the triangle inequality, we have
$$\sup_{x\in\mathbb{R}}\big|P_*(T^*_{\mathrm{HFDB},N}(g)\le x)-P(T_N(g)\le x)\big|\stackrel{p}{\to}0\quad\text{as } N\to\infty,$$
which completes the proof.
Proof of Lemma A.1. To formalize our proofs, we introduce the following notation: for $k=1,2$ and $x\in\mathbb{R}$,
$$T_N\equiv T_N(g);\quad Z\sim\mathcal{N}(0,\sigma^2(g));\quad m^{(k)}_N\equiv E(T^k_N);\quad m^{(k)}_{N,c}\equiv E\big(T_N-E(T_N)\big)^k;$$
$$G_N(x)\equiv P(T_N\le x);\quad G(x)\equiv P(Z\le x);$$
$$\widehat{G}_N(x)\equiv K^{-1}\sum_{\ell=1}^{K}\mathbb{1}\big[b^{1/2}\big(\widehat{M}^{(\ell)}_{\mathrm{sub}}(g)-M(g)\big)\le x\big];\quad \widehat{G}_{N,c}(x)\equiv K^{-1}\sum_{\ell=1}^{K}\mathbb{1}\big[b^{1/2}\big(\widehat{M}^{(\ell)}_{\mathrm{sub}}(g)-\widetilde{M}_N(g)\big)\le x\big];$$
$$\widehat{m}^{(k)}_N\equiv K^{-1}\sum_{\ell=1}^{K}\big\{b^{1/2}\big(\widehat{M}^{(\ell)}_{\mathrm{sub}}(g)-M(g)\big)\big\}^k;\quad \widehat{m}^{(k)}_{N,c}\equiv K^{-1}\sum_{\ell=1}^{K}\big\{b^{1/2}\big(\widehat{M}^{(\ell)}_{\mathrm{sub}}(g)-\widetilde{M}_N(g)\big)\big\}^k.$$
These definitions establish the key quantities used to prove the asymptotic validity of the bootstrap procedures. They distinguish between population-level targets (e.g., $M(g)$) and sample-based estimates (e.g., $\widetilde{M}_N(g)$) to carefully track bias and variability in the resampling framework: we define $k$-th order moments and empirical distributions centered at both the true and the estimated targets.

First we prove statement A.1(i). With the notation above, we have to show $\sup_{x\in\mathbb{R}}|\widehat{G}_N(x)-G(x)|\stackrel{p}{\to}0$ as $N\to\infty$. Let $\mathcal{C}\subseteq\mathbb{R}$ be the set of continuity points of $G$. It suffices to show $\widehat{G}_N(x_0)\stackrel{p}{\to}G(x_0)$ as $N\to\infty$ for an arbitrary fixed $x_0\in\mathcal{C}$. From this, one may take a countable collection $\{x_q:q\ge1\}\subseteq\mathcal{C}$, dense in $\mathbb{R}$, and, for any subsequence $\{N_m\}$ of $\{N\}$, extract a further subsequence $\{N_{m_r}\}\equiv\{N_r\}$ along which, almost surely (a.s.), $\widehat{G}_{N_{m_r}}(x_q)\to G(x_q)$ as $N_{m_r}\to\infty$ for each $q\ge1$; the latter implies $\sup_{x\in\mathbb{R}}|\widehat{G}_{N_{m_r}}(x)-G(x)|\stackrel{a.s.}{\to}0$, which is equivalent
to $\sup_{x\in\mathbb{R}}|\widehat{G}_N(x)-G(x)|\stackrel{p}{\to}0$, as $\{N_m\}$ was arbitrary. Statement (i) then follows from a triangle inequality. Now, due to boundedness, $\widehat{G}_N(x_0)\stackrel{p}{\to}G(x_0)$ is equivalent to $E(\widehat{G}_N(x_0)-G(x_0))^2\to0$, and we show the latter.

To control this, fix $\epsilon\in(0,1)$, where $\epsilon$ sets a threshold for spatial proximity, and, for integer vectors $\mathbf{j}=(j_1,j_2)$, $\mathbf{k}=(k_1,k_2)$ in the plane (the $i$-th component ranging over $0,1,\ldots,N_i-N^{(b)}_i$ for $i=1,2$), split the double sum of covariances in the MSE into index pairs $\mathbf{j},\mathbf{k}$ such that:
(a) $|j_i-k_i|<\epsilon N_i$ for some $i=1,2$,
(b) and those for which $|j_i-k_i|\ge\epsilon N_i$ for both $i=1,2$.
In case (a), since the indicator covariances are uniformly bounded, the contribution of these (dependent) index pairs to the MSE is bounded above by a constant multiple of $N\epsilon/K$. In case (b), since $|j_i-k_i|\ge\epsilon N_i$ and $N^{(b)}_i/N_i\to0$ (equivalently $N_i/N^{(b)}_i\to\infty$), we have $|j_i-k_i|/N^{(b)}_i\to\infty$ for $i=1,2$, which implies that the covariance between $\mathbb{1}(T_{\mathbf{j},b}\le x_0)$ and $\mathbb{1}(T_{\mathbf{k},b}\le x_0)$ must vanish asymptotically. That is,
$$\sup_{\substack{|j_i-k_i|\ge\epsilon N_i\\ \forall i=1,2}}\operatorname{Cov}\big(\mathbb{1}(T_{\mathbf{j},b}\le x_0),\mathbb{1}(T_{\mathbf{k},b}\le x_0)\big)=\sup_{\substack{|j_i-k_i|\ge\epsilon N_i\\ \forall i=1,2}}\big|P(T_{\mathbf{j},b}\le x_0,\,T_{\mathbf{k},b}\le x_0)-P(T_{\mathbf{j},b}\le x_0)P(T_{\mathbf{k},b}\le x_0)\big|\to0,$$
by Assumption 3. This guarantees
$$\Delta_{1,N}(\epsilon):=\sup_{\substack{|j_i-k_i|\ge\epsilon N_i\\ \forall i=1,2}}\big|P(T_{\mathbf{j},b}\le x_0,\,T_{\mathbf{k},b}\le x_0)-G(x_0)^2\big|\to0,$$
$$\Delta_{2,N}(\epsilon):=\sup_{\substack{j_i\in\{0,1,\ldots,N_i-N^{(b)}_i\}\\ \forall i=1,2}}\big|P(T_{\mathbf{j},b}\le x_0)\big(G(x_0)-P(T_{\mathbf{j},b}\le x_0)\big)\big|\to0.$$
Combining, we obtain
$$E\big(\widehat{G}_N(x_0)-G(x_0)\big)^2\le \frac{N\epsilon}{K}+\Delta_{1,N}(\epsilon)+\Delta_{2,N}(\epsilon),$$
whose limit superior is at most a constant multiple of $\epsilon$. Since $\epsilon>0$ is arbitrary, we conclude $E(\widehat{G}_N(x_0)-G(x_0))^2\to0$, completing the proof of Lemma A.1(i). Next, we prove statements (iii)–(v) together.
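The conclusion of Lemma A.1(i) can be illustrated in a toy setting. This is a simplified 1-D sketch (an assumption made for brevity; the lemma concerns 2-D stationary random fields, and practical implementations use overlapping spatial blocks): the empirical distribution of $\sqrt{b}$-scaled subsample means tracks the limiting normal law in Kolmogorov distance.

```python
import math
import numpy as np

# Toy check of subsampling distribution consistency: i.i.d. N(0,1) data,
# non-overlapping subsamples of size b, statistic = sqrt(b) * subsample mean.
rng = np.random.default_rng(2)
N, b = 200_000, 200
x = rng.standard_normal(N)

starts = np.arange(0, N - b + 1, b)  # disjoint subsamples for simplicity
stats = np.array([math.sqrt(b) * x[s:s + b].mean() for s in starts])

# Kolmogorov distance between the subsampling empirical CDF and Phi.
grid = np.linspace(-3.0, 3.0, 301)
ecdf = (stats[None, :] <= grid[:, None]).mean(axis=1)
Phi = np.array([0.5 * (1 + math.erf(t / math.sqrt(2))) for t in grid])
kdist = np.max(np.abs(ecdf - Phi))

assert kdist < 0.08  # sup-distance is small, as in Lemma A.1(i)
```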
First we would like to prove the following statement: if $\lim_{N\to\infty}E(T^2_{\mathbf{j}(N),N})=E(Z^2)$ holds for any sequence of integer vectors $\mathbf{j}(N)$, then
$$\widehat{m}^{(1)}_N\stackrel{p}{\to}m^{(1)}_N,\quad \widehat{m}^{(2)}_N\stackrel{p}{\to}m^{(2)}_N,\quad\text{and}\quad \widehat{m}^{(2)}_{N,c}\stackrel{p}{\to}m^{(2)}_{N,c}\quad\text{as } N\to\infty.\qquad(13)$$
We show $\widehat{m}^{(k)}_N\stackrel{p}{\to}m^{(k)}_N$ for $k=1,2$; we would then also have $\widehat{m}^{(2)}_{N,c}\stackrel{p}{\to}m^{(2)}_{N,c}$ by Slutsky's theorem. For any $C>0$ such that $\pm C\in\mathcal{C}$, it follows from statement (i) that
$$\widehat{m}^{(k)}_N(C)\equiv\frac1K\sum_{\ell=1}^{K}T^k_{\mathbf{j}(\ell),b}\,\mathbb{1}\{|T_{\mathbf{j}(\ell),b}|\le C\}=\int y^k\,\mathbb{1}\{|y|\le C\}\,d\widehat{G}_N(y)\stackrel{p}{\to}E\big(Z^k\,\mathbb{1}\{|Z|\le C\}\big).\qquad(14)$$
That is, by statement (i), for any subsequence $\{N_s\}$ of $\{N\}$ there exists a further subsequence $\{N_r\}\subseteq\{N_s\}$ along which $\sup_{x\in\mathbb{R}}|\widehat{G}_{N_r}(x)-G(x)|\stackrel{a.s.}{\to}0$ as $N_r\to\infty$. Then, using the continuous mapping theorem, for a random variable $Z^*_{N_r}\sim\widehat{G}_{N_r}$ we have $(Z^*_{N_r})^k\mathbb{1}\{|Z^*_{N_r}|\le C\}\stackrel{d}{\to}Z^k\mathbb{1}\{|Z|\le C\}$, and so the corresponding (bounded) expected values converge: $\int y^k\,\mathbb{1}\{|y|\le C\}\,d\widehat{G}_{N_r}(y)\stackrel{a.s.}{\to}E(Z^k\mathbb{1}\{|Z|\le C\})$ as $N_r\to\infty$, implying Eq. (14). For given $\eta>0$ and $\epsilon\in(0,1)$, pick and fix $C$ such that $E(|Z|^k\mathbb{1}\{|Z|>C\})<\eta\epsilon/3$. Then it suffices to show
$$\limsup_{N\to\infty}P\big(|\widehat{m}^{(k)}_N-\widehat{m}^{(k)}_N(C)|>\eta/3\big)\le\epsilon,\qquad(15)$$
from which
$$\limsup_{N\to\infty}P\big(|\widehat{m}^{(k)}_N-E(Z^k)|>\eta\big)\le \limsup_{N\to\infty}P\big(|\widehat{m}^{(k)}_N(C)-E(Z^k\mathbb{1}\{|Z|\le C\})|>\eta/3\big)+\limsup_{N\to\infty}P\big(|\widehat{m}^{(k)}_N-\widehat{m}^{(k)}_N(C)|>\eta/3\big)\le\epsilon$$
follows by Eqs. (14)–(15) and $P\big(E(Z^k\mathbb{1}\{|Z|>C\})>\eta/3\big)=0$, which gives us $\widehat{m}^{(k)}_N\stackrel{p}{\to}E(Z^k)$.
Since we have assumed that, for any sequence of integer vectors $\mathbf{j}(N)$, $E(T^2_{\mathbf{j}(N),N})\to E(Z^2)$ as $N\to\infty$, we have $m^{(2)}_N\equiv E(T^2_N)\to E(Z^2)=\sigma^2(g)$ and, by the dominated convergence theorem (DCT), $E(|T_N|)\to E(|Z|)$ and $m^{(1)}_N\equiv E(T_N)\to E(Z)=0$ as $N\to\infty$. Thus, we have established $\widehat{m}^{(k)}_N\stackrel{p}{\to}m^{(k)}_N$ for $k=1,2$. By Markov's inequality, for $k=1,2$, we have
$$P\big(|\widehat{m}^{(k)}_N-\widehat{m}^{(k)}_N(C)|>\eta/3\big)\le\frac{3}{\eta}\max_{1\le\ell\le K}E\big(|T_{\mathbf{j}(\ell),b}|^k\,\mathbb{1}\{|T_{\mathbf{j}(\ell),b}|>C\}\big),$$
so that Eq. (15) follows from $\limsup_{N\to\infty}\max_{1\le\ell\le K}E\big(|T_{\mathbf{j}(\ell),b}|^k\mathbb{1}\{|T_{\mathbf{j}(\ell),b}|>C\}\big)\le\eta\epsilon/3$, which is implied by the moment conditions. Note that the contrary result,
$$\limsup_{N\to\infty}\max_{1\le\ell\le K}E\big(|T_{\mathbf{j}(\ell),b}|^k\mathbb{1}\{|T_{\mathbf{j}(\ell),b}|>C\}\big)>\eta\epsilon/3,$$
would imply the existence of a subsequence $\{N_s\}$ and positive integers $\ell_{N_s}$ such that
$$E\big(|T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}|^k\,\mathbb{1}\{|T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}|>C\}\big)>\eta\epsilon/3$$
holds for each $N_s$, which contradicts
$$\lim_{N_s\to\infty}E\big(|T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}|^k\,\mathbb{1}\{|T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}|>C\}\big)=E\big(|Z|^k\mathbb{1}\{|Z|>C\}\big)<\eta\epsilon/3,$$
which follows by the DCT from $T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}\stackrel{d}{\to}Z$ along with $E(|T_{\mathbf{j}(\ell_{N_s}),b_{N_s}}|^k)\to E(|Z|^k)$ from the assumed moment conditions.

Now, it remains to show $\lim_{N\to\infty}E\big(T^2_{\mathbf{j}(N),N}\big)=E(Z^2)$. By fourth-order stationarity, we only need to prove $\lim_{N\to\infty}E(T^2_N)=\sigma^2(g)$. By Assumption 1, (Brillinger, 2001), and (Fuentes, 2002) we have
ฤ(Iฤค(ฤ))=ฤœ(ฤ)+ฤ‹(ฤคโˆ’1),ฤโˆˆฮ 2(16) and forฤ1,ฤ2โˆˆฮ 2, andฤŸ=1,2, Cov(Iฤค(ฤ1),Iฤค(ฤ2))=๏ฃฑ๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃณฤœ2(ฤ1)+ฤ‹(ฤคโˆ’1) if|ฤˆ1ฤŸ|=|ฤˆ2ฤŸ|(โ‰ 0,โˆ€ฤŸ=1,2) (2รฟ)2 ฤคฤœ4(ฤ1,ฤ2,โˆ’ฤ2)+ฤ‹(ฤคโˆ’1) if|ฤˆ1ฤŸ|โ‰ |ฤˆ2ฤŸ|.(17) The error terms are uniform in ฤ. Hence, forฤŸ=1,2, Var(ฤ„ฤค(ฤ‡))=ฤคโˆ’1(4รฟ2)2/summationdisplay.1 j,kโˆˆJฤคฤ‡(ฤj,ฤค)ฤ‡(ฤk,ฤค)Cov(Iฤค(ฤj,ฤค),Iฤค(ฤk,ฤค)) =ฤคโˆ’1(4รฟ2)2๏ฃฎ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฐ/summationdisplay.1 jโˆˆJฤคฤ‡(ฤj,ฤค)/parenleftbigฤ‡(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/parenrightbigVar(Iฤค(ฤj,ฤค)) +/summationdisplay.1 j,kโˆˆJฤค |ฤฤ ฤŸ|โ‰ |ฤฤกฤŸ|ฤ‡(ฤj,ฤค)ฤ‡(ฤk,ฤค)Cov(Iฤค(ฤj,ฤค),Iฤค(ฤk,ฤค))๏ฃน๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃป =:ฤ1+ฤ2. By Eq. ( 17) and Riemann sum form, it holds that ฤ1=(2รฟ)2ฤคโˆ’14รฟ2/summationdisplay.1 jโˆˆJฤคฤ‡(ฤj,ฤค)/parenleftbigฤ‡(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค)/parenrightbigฤœ2(ฤj,ฤค)+ฤ‹(ฤคโˆ’1) =(2รฟ)2/uni222B.dsp ฮ 2ฤ‡(ฤ)(ฤ‡(ฤ)+ฤ‡(โˆ’ฤ))ฤœ2(ฤ)ฤšฤ+ฤ‹(ฤคโˆ’1)=ฤ‚2 1(ฤ‡)+ฤ‹(ฤคโˆ’1). Similarly, we have that ฤ2=ฤคโˆ’1(4รฟ2)2/summationdisplay.1 j,kโˆˆJฤค |ฤฤ ฤŸ|โ‰ |ฤฤกฤŸ|ฤ‡(ฤj,ฤค)ฤ‡(ฤk,ฤค)/parenleftbigg(2รฟ)2 ฤคฤœ4(ฤj,ฤค,ฤk,ฤค,โˆ’ฤk,ฤค)+ฤ‹(ฤคโˆ’1)/parenrightbigg =(2รฟ)2ฤคโˆ’24รฟ2๏ฃฎ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฐ/summationdisplay.1 j,kโˆˆJฤคฤ‡(ฤj,ฤค)ฤ‡(ฤk,ฤค)ฤœ4(ฤj,ฤค,ฤk,ฤค,โˆ’ฤk,ฤค) โˆ’/summationdisplay.1 jโˆˆJฤคฤ‡(ฤj,ฤค)(ฤ‡(ฤj,ฤค)+ฤ‡(โˆ’ฤj,ฤค))ฤœ4(ฤj,ฤค,ฤj,ฤค,โˆ’ฤj,ฤค)+ฤ‹(ฤคโˆ’1)๏ฃน๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃป =(2รฟ)2/uni222B.dsp ฮ 2/uni222B.dsp ฮ 2ฤ‡(ฤ1)ฤ‡(ฤ2)ฤœ4(ฤ1,ฤ2,โˆ’ฤ2)ฤšฤ1ฤšฤ2+ฤ‹(ฤคโˆ’1)=ฤ‚2 2(ฤ‡)+ฤ‹(ฤคโˆ’1). Spatial Frequency Domain Bootstrap 23 Now, we have Var (ฤ„ฤค(ฤ‡))=ฤ‚2 1(ฤ‡) +ฤ‚2 2(ฤ‡) +ฤ‹(ฤคโˆ’1)=ฤ‚2(ฤ‡) +ฤ‹(ฤคโˆ’1). By Eq. ( 16) and bounded variation ofฤ‡(ยท), we also have ฤ(ฤ„ฤค(ฤ‡))=ฤค1/2/parenlefttpA/parenleftexA /parenleftbtA(2รฟ)2 ฤค/summationdisplay.1 jโˆˆJฤคฤ‡(ฤj,ฤค){ฤœ(ฤj,ฤค)+ฤ‹(ฤคโˆ’1)}โˆ’/uni222B.dsp ฮ 2ฤ‡(ฤ)ฤœ(ฤˆ)ฤšฤˆ/parenrighttpA/parenrightexA /parenrightbtA=ฤ‹(ฤคโˆ’1/2). Hence, we conclude that ฤ(ฤ„ฤค(ฤ‡))2=Var(ฤ„ฤค(ฤ‡))+ฤ2(ฤ„ฤค(ฤ‡))=ฤ‚2(ฤ‡)+ฤ‹(ฤคโˆ’1). 
Finally, we prove statement (ii); i.e., we will show $\sup_{x\in\mathbb{R}}\big|\widehat{G}_{N,c}(x)-G_N(x)\big|\stackrel{p}{\to}0$ as $N\to\infty$. Let $Z^*_N$ denote a random variable with distribution function $\widehat{G}_N$; then $Z^*_N-\widehat{m}^{(1)}_N$ is a random variable with distribution function $\widehat{G}_{N,c}$. Because $\sup_{x\in\mathbb{R}}|\widehat{G}_N(x)-G(x)|\stackrel{p}{\to}0$ and $\widehat{m}^{(1)}_N\stackrel{p}{\to}m^{(1)}_N$, with $m^{(1)}_N\to E(Z)=0$ guaranteed by the moment assumption, we have that, for any subsequence $\{N_s\}\subseteq\{N\}$, there exists a further subsequence $\{N_r\}\subseteq\{N_s\}$ such that $Z^*_{N_r}\stackrel{d}{\to}Z$ and $\widehat{m}^{(1)}_{N_r}\to0$ hold as $N_r\to\infty$ almost surely (a.s.). By Slutsky's theorem, $Z^*_{N_r}-\widehat{m}^{(1)}_{N_r}\stackrel{d}{\to}Z$ a.s., which, because $\{N_s\}$ was arbitrary, implies $\sup_{x\in\mathbb{R}}|\widehat{G}_{N,c}(x)-G(x)|\stackrel{p}{\to}0$. Statement (ii) then follows from a triangle inequality.

Proof of Lemma A.2. Note that
$$(2\pi)^2 b^{-1}\sum_{\mathbf{j}\in J_b}g(\omega_{\mathbf{j},b})\,\frac1K\sum_{\ell=1}^{K}\big(I^{(\ell)}_{\mathrm{sub}}(\omega_{\mathbf{j},b})-\widetilde{I}_N(\omega_{\mathbf{j},b})\big)^2$$
$$=(2\pi)^2 b^{-1}\Big[\sum_{\mathbf{j}\in J_b}g(\omega_{\mathbf{j},b})\Big(\frac1K\sum_{\ell=1}^{K}\big(I^{(\ell)}_{\mathrm{sub}}(\omega_{\mathbf{j},b})\big)^2-2f^2(\omega_{\mathbf{j},b})\Big)+\sum_{\mathbf{j}\in J_b}g(\omega_{\mathbf{j},b})\Big(f^2(\omega_{\mathbf{j},b})-\big(\widetilde{I}_N(\omega_{\mathbf{j},b})\big)^2\Big)+\sum_{\mathbf{j}\in J_b}g(\omega_{\mathbf{j},b})f^2(\omega_{\mathbf{j},b})\Big]=:C_{1N}+C_{2N}+C_{3N}.$$
It is immediate from the Riemann-sum expression that $C_{3N}\to\int_{\Pi^2}g(\omega)f^2(\omega)\,d\omega$ as $N\to\infty$. Now, proceeding in the same way as (Yu, 2023), using the Cauchy–Schwarz inequality and the boundedness of $g(\cdot)$, we have $|C_{2N}|=o_P(1)$. Also, using the moment conditions, we have $C_{1N}\stackrel{p}{\to}0$, which gives the statement of Lemma A.2.

Supplementary Material

Supplementary Material for Frequency Domain Resampling for Gridded Spatial Data.
In this supplementary material we present additional simulation results for finite samples in support of the proposed method.

References

BANDYOPADHYAY, S., JENTSCH, C. and RAO, S. S. (2017). A spectral domain test for stationarity of spatio-temporal data. Journal of Time Series Analysis 38 326–351.
BANDYOPADHYAY, S. and LAHIRI, S. N. (2010). Asymptotic properties of discrete Fourier transforms for spatial data. Sankhya 71 221–259.
BANDYOPADHYAY, S., LAHIRI, S. N. and NORDMAN, D. J. (2015). A frequency domain empirical likelihood method for irregularly spaced spatial data. The Annals of Statistics 43 519–545.
BANDYOPADHYAY, S. and RAO, S. S. (2016). A test for stationarity for irregularly spaced spatial data. Journal of the Royal Statistical Society: Series B 79 95–123.
BERAN, J. (1992). A goodness-of-fit test for time series with long range dependence. Journal of the Royal Statistical Society Series B: Statistical Methodology 54 749–760.
BRILLINGER, D. R. (2001). Time Series. Society for Industrial and Applied Mathematics, Philadelphia, PA.
CRESSIE, N. A. C. (1993). Statistics for Spatial Data, revised ed. Wiley.
CRUJEIRAS, R. M. and FERNANDEZ-CASAL, R. (2010). On the estimation of the spectral density for continuous spatial processes. Statistics 44 587–600.
DAHLHAUS, R. (1985). Asymptotic normality of spectral estimates. Journal of Multivariate Analysis 16 412–431.
DAHLHAUS, R. and JANAS, D. (1996). A frequency domain bootstrap for ratio statistics in time series
analysis. The Annals of Statistics 24 1934–1963.
DAVISON, A. C. and HINKLEY, D. V. (1997). Bootstrap Methods and Their Application. Cambridge University Press.
FUENTES, M. (2002). Spectral methods for nonstationary spatial processes. Biometrika 89 197–210.
FUENTES, M. (2006). Testing for separability of spatial-temporal covariance functions. Journal of Statistical Planning and Inference 136 447–466.
FUENTES, M. (2007). Approximate likelihood for large irregularly spaced spatial data. Journal of the American Statistical Association 102 321–331.
GUAN, Y., SHERMAN, M. and CALVIN, J. A. (2004). A nonparametric test for spatial isotropy using subsampling. Journal of the American Statistical Association 99 810–821.
HALL, P., FISHER, N. I. and HOFFMAN, B. (1994). On the nonparametric estimation of covariance functions. Annals of Statistics 22 2115–2134.
IM, H. K., STEIN, M. L. and ZHU, Z. (2007). Semiparametric estimation of spectral density with irregular observations. Journal of the American Statistical Association 102 726–735.
JENTSCH, C. and KREISS, J.-P. (2010). The multiple hybrid bootstrap—resampling multivariate linear processes. Journal of Multivariate Analysis 101 2320–2345.
KREISS, J.-P. and LAHIRI, S. N. (2012). Bootstrap methods for time series. In Handbook of Statistics 30 3–26. Elsevier.
KREISS, J.-P. and PAPARODITIS, E. (2003). Autoregressive-aided periodogram bootstrap for time series. The Annals of Statistics 31 1923–1955.
KREISS, J.-P., PAPARODITIS, E. and POLITIS, D. N. (2011). On the range of validity of the autoregressive sieve bootstrap. Annals of Statistics 2103–2130.
KREISS, J.-P. and PAPARODITIS, E. (2012). The hybrid wild bootstrap for time series. Journal of the American Statistical Association 107 1073–1084.
KREISS, J.-P. and PAPARODITIS, E. (2023). Bootstrapping Whittle estimators. Biometrika 110 499–518.
LAHIRI, S. N. (2003). Resampling Methods for Dependent Data.
Springer Science & Business Media.
LI, W. and MCLEOD, A. (1986). Fractional time series differencing. Biometrika 73 217–221.
LJUNG, G. M. and BOX, G. E. (1978). On a measure of lack of fit in time series models. Biometrika 65 297–303.
MATSUDA, Y. and YAJIMA, Y. (2009). Fourier analysis of irregularly spaced data on R^d. Journal of the Royal Statistical Society: Series B 71 191–217.
MEYER, M., PAPARODITIS, E. and KREISS, J.-P. (2020). Extending the validity of frequency domain bootstrap methods to general stationary processes. The Annals of Statistics 48 2404–2427.
MILHØJ, A. (1981). A test of fit in time series models. Biometrika 68 177–187.
NG, W. L., YAU, C. Y. and CHEN, X. (2021). Frequency domain bootstrap methods for random fields. Electronic Journal of Statistics 15 6586–6632.
NORDMAN, D. J. and LAHIRI, S. N. (2004). On optimal spatial subsample size for variance estimation. The Annals of Statistics 32 1981–2027.
NORDMAN, D. J. and LAHIRI, S. N. (2006). A frequency domain empirical likelihood for short- and long-range dependence. The Annals of Statistics 3019–3050.
PAPARODITIS, E. (2000). Spectral density based goodness-of-fit tests for time series models. Scandinavian Journal of Statistics 27 143–176.
PARZEN, E. (1957). On consistent estimates of the spectrum of a
stationary time series. The Annals of Mathematical Statistics 329–348.
POLITIS, D. N. and MCELROY, T. S. (2019). Time Series: A First Course with Bootstrap Starter. CRC Press.
POLITIS, D. N., ROMANO, J. P. and WOLF, M. (1999). Subsampling. Springer Science & Business Media.
SHERMAN, M. (1996). Variance estimation for statistics computed from spatial lattice data. Journal of the Royal Statistical Society Series B: Statistical Methodology 58 509–523.
SHERMAN, M. (1998). Efficiency and robustness in subsampling for dependent data. Journal of Statistical Planning and Inference 75 133–146.
SHERMAN, M. and CARLSTEIN, E. (1994). Nonparametric estimation of the moments of a general statistic computed from spatial data. Journal of the American Statistical Association 89 496–500.
STEIN, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media.
SUBBA RAO, S. (2018). Statistical inference for spatial statistics defined in the Fourier domain. The Annals of Statistics 46 469–499.
TANIGUCHI, M. (1979). On estimation of parameters of Gaussian stationary processes. Journal of Applied Probability 16 575–591.
VAN HALA, M., BANDYOPADHYAY, S., LAHIRI, S. N. and NORDMAN, D. J. (2017). On the non-standard distribution of empirical likelihood estimators with spatial data. Journal of Statistical Planning and Inference 187 109–114.
VAN HALA, M., BANDYOPADHYAY, S., LAHIRI, S. N. and NORDMAN, D. J. (2020). A general frequency domain method for assessing spatial covariance structures. Bernoulli 26 2463–2487.
WELLER, Z. D. and HOETING, J. A. (2020). A nonparametric spectral domain test of spatial isotropy. Journal of Statistical Planning and Inference 204 177–186.
YU, H. (2023). Resampling-based inference for time series in the frequency domain. PhD thesis, Iowa State University.
YU, H., KAISER, M. S. and NORDMAN, D. J. (2023).
A subsampling perspective for extending the validity of state-of-the-art bootstraps in the frequency domain. Biometrika asad006.

Submitted to Bernoulli

Supplementary Material for Frequency Domain Resampling for Gridded Spatial Data

SOUVICK BERA1,a, DANIEL J. NORDMAN2,c and SOUTIR BANDYOPADHYAY1,b
1Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, CO 80401, USA, aberasouvick@mines.edu, bsbandyopadhyay@mines.edu
2Department of Statistics, Iowa State University, Ames, IA 50011, USA, cdnordman@iastate.edu

Keywords: Spatial Frequency Domain Bootstrap; Periodogram; Spectral Mean

1. Numerical Studies of Accuracy

For a comprehensive comparison, and to complete the simulation study presented in Section 5, we further investigate the performance of the FDWB and HFDB methods for inferring spectral means of non-Gaussian spatial processes, focusing particularly on scenarios where $\sigma^2_2\approx0$. This comparison aims to highlight the effectiveness of both approaches in cases where $\sigma^2_2$ is very close to zero. By examining these scenarios, our overall goal is to demonstrate the robustness and accuracy of the HFDB method over a wider range of conditions. Moreover, we seek to confirm that the HFDB-based estimator performs adequately in situations where $\sigma^2_2$ is small, as expected theoretically. This detailed analysis further ensures that our findings are thorough and applicable to various practical situations involving non-Gaussian spatial processes.

1.1. Results for Non-Gaussian Processes

For
non-Gaussian processes with small $\sigma^2_2$ values, we considered the Gaussian-log-Gaussian process as described in ?. We generated datasets of sizes $30\times30$ ($N=900$), $50\times50$ ($N=2500$), and $70\times70$ ($N=4900$) on regularly spaced, square lattice regions. For each dataset size, we ran 500 Monte Carlo simulations for coverage and 500 simulations to generate the bootstrap distributions. This process was examined for different range parameters with the scalar parameter set to 0.01 (see ? for more details on the rationale behind this choice of the scalar parameter). This setup allows us to evaluate the performance of our proposed methods in a context where $\sigma^2_2$ is small but not zero, providing insight into their effectiveness under these specific conditions.

Figure 1. Coverages of 90% HFDB intervals for the covariance parameter $C(\mathbf{h})$, $\mathbf{h}=(1,0)^\top$, based on either the non-corrected ($T^*_{\mathrm{FDWB},N}(g)$, red line) or the corrected ($T^*_{\mathrm{HFDB},N}(g)$, green line) HFDB versions with different subsample sizes $b_N$, ranges, and sample sizes $N$.

We note from Figure 1 that both methods perform similarly when $\sigma^2_2$ is very small. This contrasts with the results in Figure 2 of Section 5, where there is a significant difference in the performance of the two methods due to a higher value of $\sigma^2_2$. We further observe that, as the subsample size $b_N$ and the overall sample size $N$ increase, the coverage accuracy improves across the different range parameters.

2. Application to Calibrating Spatial Isotropy Tests

As described in Section 6, the simulation results for a $30\times30$ sample region, using the same parameters as before for both Gaussian and non-Gaussian processes, are based on 1000 simulations at a 10% nominal level and are as follows.

Table 1. Rejection rates for Gaussian data.
Ratio   Subsampling   FDWB    HFDB
1       0.071         0.108   0.108
1.1     0.107         0.14    0.14
1.2     0.236         0.324   0.324
1.3     0.563         0.646   0.646
1.4     0.845         0.92    0.92
1.5     0.914         0.966   0.966
1.6     0.992         1       1

Table 2. Rejection rates for non-Gaussian data.a

Ratio   Subsampling   HFDB
1       0.066         0.105
1.2     0.101         0.127
1.4     0.325         0.464
1.6     0.62          0.743
1.8     0.873         0.922
2       0.987         1

a For non-Gaussian data, FDWB fails to maintain the size at the 10% level, with a rejection rate of 0.277 for ratio 1, indicating its inappropriateness for such data.

The tables indicate that all frequency domain resampling methods perform well when the underlying distribution is Gaussian, as expected. However, under non-Gaussianity, FDWB fails to maintain size under isotropy, making it unsuitable for such scenarios. In contrast, both HFDB and subsampling remain effective even with smaller sample sizes.
SIGNAL DETECTION FROM SPIKED NOISE VIA ASYMMETRIZATION

BY ZHIGANG BAO1,a, KHA MAN CHEONG2,c AND YUJI LI1,b
1University of Hong Kong, azgbao@hku.hk; bu3011732@connect.hku.hk
2Hong Kong University of Science and Technology, ckmcheong@connect.ust.hk

The signal plus noise model $H=S+Y$ is a fundamental model in signal detection, where a low-rank signal $S$ is polluted by noise $Y$. In the high-dimensional setting, one often uses the leading singular values and corresponding singular vectors of $H$ to conduct statistical inference on the signal $S$. In particular, when $Y$ consists of iid random entries, the singular values of $S$ can be estimated from those of $H$ as long as the signal $S$ is strong enough. However, when the $Y$ entries are heteroscedastic or correlated, this standard approach may fail. In this work, we consider a situation that can easily arise with heteroscedastic noise but is particularly difficult to address using the singular value approach, namely, when the noise $Y$ itself may create spiked singular values. It has been a recurring question how to distinguish the signal $S$ from the spikes in $Y$, as this seems impossible by examining the leading singular values of $H$. Inspired by the work [27], we turn to study the eigenvalues of an asymmetrized model when two samples $H_1=S+Y_1$ and $H_2=S+Y_2$ are available. We show that, by looking into the leading eigenvalues (in magnitude) of the asymmetrized model $H_1H_2^*$, one can easily detect $S$. Unlike [27], we show that even if the spikes from $Y$ are much larger than the strength of $S$, and thus the operator norm of $Y$ is much larger than that of $S$, the detection is still effective. Second, we establish the precise detection threshold. Third, we do not require any structural assumption on the singular vectors of $S$. Finally, we derive the precise limiting behaviour of the leading eigenvalues of the asymmetrized model. Based on the limiting results, we propose a completely data-based approach for the detection of $S$.
Simulation studies further show that our detection remains robust even in the heavy-tailed scenario, where only the second moments of the noise matrix entries exist. This has been widely believed to be impossible using the singular value approach.

1. Introduction. In this paper, we consider the following signal-plus-noise model
$$H=S+Y\equiv S+\Sigma X,$$
where $X=(x_{ij})\in\mathbb{R}^{p\times n}$ is a random matrix with independent mean-$0$ entries. We further denote the variance profile of the matrix by $T=(t_{ij}):=n(\operatorname{Var}(x_{ij}))$, the matrix whose entries are the variances of the $\sqrt{n}x_{ij}$'s. Throughout the paper, we make the following assumption on the variance profile: there exist two positive constants $t_*\le t^*$ such that
$$t_*\le\min_{i,j}t_{ij}\le\max_{i,j}t_{ij}\le t^*.\qquad(1.1)$$

MSC2020 subject classifications: Primary 60B20, 62G10; secondary 62H10.
Keywords and phrases: signal-plus-noise model, spiked model, signal detection, asymmetrization, outlying eigenvalues.
arXiv:2504.19450v1 [math.ST] 28 Apr 2025

For simplicity, we further assume that all moments of the $x_{ij}$'s exist, although this condition can be relaxed by a standard truncation technique. Here we assume that $S\in\mathbb{R}^{p\times n}$ is a fixed-rank matrix and $\Sigma\in\mathbb{R}^{p\times p}$ is a fixed-rank perturbation of the identity, i.e.,
$$S=\sum_{i=1}^{k}d_i u_i v_i^*=:UDV^*,\qquad \Sigma=I+\sum_{j=1}^{r}\sigma_j\xi_j\theta_j^*=:I+\Xi\Delta\Theta^*,\qquad(1.2)$$
where $k$ and $r$ are fixed nonnegative integers, and $\{u_i\}$, $\{v_i\}$, $\{\xi_j\}$ and $\{\theta_j\}$ are four classes of deterministic orthonormal vectors. Here $D=\operatorname{diag}(d_1,\ldots,d_k)$ and $\Delta=\operatorname{diag}(\sigma_1,\ldots,\sigma_r)$, where the $\{d_i\}$'s and $\{\sigma_i\}$'s are two collections of nonnegative numbers, both
https://arxiv.org/abs/2504.19450v1
ordered in descending order. Throughout the paper, we always assume that $\Sigma$ is invertible and, further, that $\|\Sigma^{-1}\|_{op}\le K$ for some constant $K>0$. We interpret $S$ as a signal, which is polluted by the noise part $\Sigma X$. Let $\mathbf{1}$ be the all-one matrix. Differently from the classical setting in much of the previous literature, where $(\Sigma,T)=(I,\mathbf{1})$, here we consider the general setting in which the noise part $\Sigma X$ may be heteroscedastic or even correlated. We remark here that, in case $\Sigma$ is diagonal, in principle we can simply rewrite $\Sigma X$ as $\widetilde{X}$, where the latter still has independent entries and a general variance profile $\widetilde{T}$, analogous to $T$. Even in this case, we prefer to keep the form $\Sigma X$, as this gives us the flexibility of choosing the $\sigma_i$'s to be even $n$-dependent, so that the $\widetilde{T}$-entries no longer satisfy assumption (1.1).

In the case $(\Sigma,T)=(I,\mathbf{1})$, a standard approach to detecting $S$ from $H$ is to investigate the leading singular values of $H$, as natural estimators of their counterparts for $S$. In the high-dimensional setting, when $p$ and $n$ are proportional, there is a vast literature on the singular value approach; see [32, 11, 23, 26, 19, 47, 36, 28, 43, 37, 42] for instance. Specifically, as a prominent example of the famous Baik–Ben Arous–Péché (BBP) phase transition phenomenon [9], one knows that the $i$-th leading singular value of $H$ will jump out of the support of the Marchenko–Pastur law and converge to a limiting location that is a function of $d_i$, provided $d_i$ is sufficiently large. From the limiting location of the leading singular value of $H$, one can recover the value of $d_i$. We also refer to [10, 18, 7, 26, 50, 14, 24, 25, 12, 46, 45, 15] and the references therein for the study of the BBP transition in other models, such as deformed Wigner matrices and spiked covariance matrices.
The BBP transition for the signal-plus-noise model can be further extended to the case when $\Sigma$ is general but does not itself create any outliers in the singular value distribution of $\Sigma X$; see [33] for instance. In this case, if $d_i$ is sufficiently large, one can still observe outliers in the singular value distribution of $H$. Nevertheless, this time the limiting location of the leading singular value will depend on the parameters in $\Sigma$ and $T$ as well, which are often unknown and hard to estimate in applications. This prevents one from using these limiting locations to estimate the $d_i$'s. As the signal-plus-noise model with heteroscedastic noise is ubiquitous (see [47, 22, 30, 31, 39, 52, 55] for instance), an alternative approach to detecting the signal in such general models is in high demand.

In order to solve the signal detection problem in the heteroscedastic case, the authors of [27] proposed an asymmetrized non-Hermitian random matrix model for this purpose, applicable when two samples of the data are available. The work [27] primarily focuses on deformed Wigner matrices, but the strategy can be generalized to the signal-plus-noise model as well, as mentioned in [27]. More specifically, if one has two independent samples $H_1=S+\Sigma X_1$ and $H_2=S+\Sigma X_2$, one
https://arxiv.org/abs/2504.19450v1
can turn to the random matrix model $H_1H_2^*$, or a linearization of it. Under the assumption that the operator norm of the noise part $\Sigma X$ is significantly smaller than that of the signal part $S$, the authors of [27] find that the leading eigenvalue (in magnitude) of the non-Hermitian model contains the precise information of the $d_i$'s, while the heteroscedasticity of $\Sigma X$ only shows up at a subleading order; thus the unknown parameters do not matter if one aims for a first order estimate of the signals. From the mathematical point of view, the closeness between the leading eigenvalues of the signal plus non-Hermitian noise matrix model and those of its signal part was previously revealed in [54], where the outliers of a low rank deformation of a non-Hermitian square matrix with iid entries are studied. In particular, it reveals a striking difference from the Hermitian case, where the outlier exhibits an order-1 bias relative to the true signal. We also refer to [20, 16, 51, 38] for further studies on the outliers of deformed non-Hermitian random matrices. The work [27] provides a very interesting application of this closeness and also provides a non-asymptotic analysis under general assumptions on $T$. In this work, we continue this line of research to show that the asymmetrization technique is powerful even when various conditions in [27] are not satisfied, and that it can be used to tackle other challenging questions in signal detection. Especially, we answer a recurring question on how to detect the signal part $S$ from $H$ when $\Sigma$ itself may create large spiked singular values. Although distinguishing the outlying singular values of $H$ caused by the signal part $S$ from those caused by the noise part $\Sigma$ is difficult, we find that, again, the asymmetrization approach is effective for this purpose.
Unlike the situation in [27], where detecting the existence of the signal is not an issue and only obtaining a precise estimate via the singular value approach is challenging, in our case even detecting the existence of the signal via the singular value approach is difficult. In addition, differently from [27], we show that even when the spikes in $\Sigma$ are much larger than the $d_i$'s in $S$, so that the operator norm of the noise part is much larger than that of $S$, one can still effectively detect $S$. More specifically, we provide the optimal detectability threshold, $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}$, for the $d_i$'s, as long as the spikes in $\Sigma$ are not $n^{1/4}$ times larger than the $d_i$'s. Even when $\Sigma=I$, our threshold in terms of $T$ is more precise than the sufficiently large signal-to-noise ratio imposed in [27]. It particularly shows that the operator norm of the noise matrix $\Sigma X$ itself is not essential for the detectability of the $d_i$'s. Instead, the essential threshold is given by $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}$, which can be significantly smaller than $\|\Sigma X\|_{\mathrm{op}}$. We then take a step further to identify the precise fluctuation of the outliers of $H_1H_2^*$ around the limiting location. More specifically, we will work with the following non-Hermitian random matrix
$$Y=\begin{pmatrix}0 & H_1\\ H_2^* & 0\end{pmatrix}=\begin{pmatrix}0 & \Sigma X_1\\ (\Sigma X_2)^* & 0\end{pmatrix}+\begin{pmatrix}0 & S\\ S^* & 0\end{pmatrix}=:\mathcal{X}+\mathcal{S},\qquad(1.3)$$
which is a linearization of $H_1H_2^*$. Note that, due to the block structure,
the eigenvalues of $Y$ come in pairs. We denote the non-zero eigenvalues of $Y$ by $\lambda_{\pm i}\equiv\pm\lambda_i$, $i=1,\dots,n\wedge p$. We make the convention that $\lambda_1,\dots,\lambda_{n\wedge p}$ are those eigenvalues with arguments in $(-\pi/2,\pi/2]$. Further, $\lambda_1,\dots,\lambda_{n\wedge p}$ are in descending order (in magnitude). Since we are considering real matrices, we have another symmetry of the eigenvalues: all non-real eigenvalues in $\{\lambda_1,\dots,\lambda_{n\wedge p}\}$ can find their complex conjugates in this collection. Hence, regarding the ordering according to magnitude, we further make the convention that the eigenvalue with argument in $(0,\pi/2)$ is followed by its complex conjugate with argument in $(-\pi/2,0)$. Throughout the paper, we will work under the following assumption.

ASSUMPTION 1.1. We make the following assumptions.

(i) (On dimensionality): We assume that $p\equiv p(n)$ and $n$ are comparable, i.e., there exists a constant $c$ such that $c_n\equiv\frac{p}{n}\to c\in(0,\infty)$ as $n\to\infty$.

(ii) (On the signal $S$): We assume that $S$ is a low rank matrix with rank $k$, admitting the singular value decomposition
$$S=\sum_{i=1}^{k}d_iu_iv_i^*=:UDV^*.$$
Here $k\ge0$ is fixed, $D=\mathrm{diag}(d_1,\dots,d_k)$ with $C>d_1\ge\dots\ge d_k\ge0$ for some constant $C>0$, and the $u_i$'s and $v_i$'s are the associated unit left and right singular vectors, respectively. For simplicity, we further assume that the $d_i$'s are well separated, i.e.,
$$\min_{i\neq j}|d_i-d_j|\ge c$$
for some small constant $c>0$.

(iii) (On $\Sigma$): We assume that $\Sigma$ is a rank $r$ perturbation of the identity, i.e.,
$$\Sigma=I+\sum_{j=1}^{r}\sigma_j\xi_j\theta_j^*=:I+\Xi\Delta\Theta^*,$$
where $r\ge0$ is fixed; $\Delta=\mathrm{diag}(\sigma_1,\dots,\sigma_r)$ with $\|\Delta\|_{\mathrm{op}}\le n^{1/4-\varepsilon_0}$ for some small but fixed $\varepsilon_0>0$; and the $\xi_i$'s and $\theta_i$'s are the associated unit left and right singular vectors of the low rank matrix $\Sigma-I$, respectively. In particular, $\|\Sigma\|_{\mathrm{op}}$ can be much larger than constant order. Finally, we always assume $\Sigma$ to be invertible and $\|\Sigma^{-1}\|_{\mathrm{op}}\le K$ for some constant $K>0$.

(iv) (On the matrix $X$): We assume that $X=(x_{ij})$ has independent entries and a general variance profile $\frac1nT=\frac1n(t_{ij})$.
Specifically, the entries $x_{ij}$ are real random variables with
$$\mathbb{E}x_{ij}=0,\qquad\mathbb{E}x_{ij}^2=\frac{t_{ij}}{n}.$$
We restate (1.1) below: there exist two positive constants $t_*\le t^*$ such that
$$t_*\le\min_{i,j}t_{ij}\le\max_{i,j}t_{ij}\le t^*.$$
For simplicity, we further assume that all moments of the $x_{ij}$'s exist, i.e., for any integer $p\ge3$ there exists a constant $C_p>0$ such that
$$\max_{i,j}\mathbb{E}|x_{ij}|^p\le C_p<\infty.$$

REMARK 1.2. Here, we remark on several possible extensions of our assumptions. First, our analysis can be extended to the case where some of the $d_i$'s are singular values with multiplicity greater than one. For brevity, however, we focus on the simpler case where the $d_i$'s are all distinct and well separated. Second, another popular setting considered in the literature is $(\Sigma,T)=(\text{general},\mathbf{1})$. Here, by "general" we mean that $\Sigma$ is not necessarily a fixed-rank perturbation of $I$; see [49], for instance. We expect similar phenomena to occur under this setting; however, technically, it requires a separate derivation. We choose to work under Assumption 1.1 mainly to facilitate a more direct comparison with the results in [27]. Finally, it is also possible to extend our discussion to the case where the noise components of the two samples have different $(\Sigma,T)$, which can occur in reality when the variance profile of the noise depends on the sampling. We leave all these extensions for future discussion. For simplicity, in the sequel, we denote
$$\sigma_{\max}:=\|\Sigma\|_{\mathrm{op}}.\qquad(1.4)$$
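To illustrate the model (1.3) and the eigenvalue pairing discussed above, here is a small NumPy sketch; the dimensions, the rank-one signal, and the seed are our own illustrative choices (not the paper's simulation settings), and we take $\Sigma=I$ with a flat variance profile:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 30, 50                                  # illustrative sizes
d1 = 2.0                                       # a single signal strength

S = np.zeros((p, n)); S[0, 0] = d1             # rank-one signal, Sigma = I here
X1 = rng.normal(size=(p, n)) / np.sqrt(n)      # flat variance profile t_ij = 1
X2 = rng.normal(size=(p, n)) / np.sqrt(n)
H1, H2 = S + X1, S + X2

# linearization Y = [[0, H1], [H2^*, 0]] of H1 H2^*
Y = np.block([[np.zeros((p, p)), H1], [H2.T, np.zeros((n, n))]])
eigs = np.linalg.eigvals(Y)

# eigenvalues come in +/- pairs: each lambda has a partner close to -lambda
pair_dist = np.abs(eigs[:, None] + eigs[None, :]).min(axis=1)
print(pair_dist.max())                         # numerically ~ 0
```

Since $d_1$ is above the threshold $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}\approx0.88$ here, the leading eigenvalue in magnitude sits near $d_1$, in line with Theorem 1.4.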
Throughout this paper, we adopt the notion of stochastic domination introduced in [35], which allows a loss of boundedness, up to a small power of $N$, with high probability.

DEFINITION 1.3 (Stochastic domination). Let
$$X=\big(X_N(u):N\in\mathbb{N},\,u\in U_N\big),\qquad Y=\big(Y_N(u):N\in\mathbb{N},\,u\in U_N\big)$$
be two families of real random variables, where $Y$ is nonnegative and $U_N$ is a possibly $N$-dependent parameter set. We say that $X$ is stochastically dominated by $Y$, uniformly in $u$, if for arbitrarily small $\epsilon>0$ and large $D>0$,
$$\sup_{u\in U_N}\mathbb{P}\big(|X_N(u)|>N^{\epsilon}Y_N(u)\big)\le N^{-D}$$
for large $N\ge N_0(\epsilon,D)$. We write $X=O_\prec(Y)$ or $X\prec Y$ when $X$ is stochastically dominated by $Y$ uniformly in $u$. Note that in the special case when $X$ and $Y$ are deterministic, $X\prec Y$ means that for any given $\epsilon>0$, $|X_N(u)|\le N^{\epsilon}Y_N(u)$ uniformly in $u$, for all sufficiently large $N\ge N_0(\epsilon)$. In addition, we say that an event $\mathcal{E}\equiv\mathcal{E}(N)$ holds with high probability if $\mathbb{P}(\mathcal{E})\ge1-N^{-D}$ for any large constant $D>0$ when $N$ is large enough.

Our main results are stated as follows.

THEOREM 1.4 (First order limit). Suppose that Assumption 1.1 holds. If there exists a positive integer $\tilde k\le k$ such that $d_1\ge\dots\ge d_{\tilde k}\ge\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$ for any small (but fixed) $\delta>0$, then the spectrum of $Y$ has $\tilde k$ pairs of outliers. Additionally, these outlying eigenvalues converge in probability, in magnitude, to the signal strengths $d_i$. Specifically, for $i=1,\dots,\tilde k$, with high probability, we have for any $\epsilon>0$,
$$|\lambda_i-d_i|\le n^{-\frac12+\epsilon}\sigma_{\max}^2.$$

REMARK 1.5. In the null case when $S=0$ and $\Sigma=I$, we have a random matrix $Y$ with independent entries and two diagonal $0$ blocks. For a non-Hermitian matrix with a general flat variance profile, i.e., when the variances of all entries are comparable to order $n^{-1}$, it is known from [2] that the spectral radius of the matrix is given by the square root of the spectral radius of the variance profile.
If this result extended to the case with $0$ diagonal blocks, we would see that the spectral radius of $Y$ converges exactly to $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}$ in the null case. This indicates that our threshold is optimal. It is possible to prove the convergence of the spectral radius of $Y$ to $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}$ by adapting the discussions in [2, 3], for instance. But for our results on outliers, it is enough to show an upper bound on the spectral radius in the null case.

In the sequel, we denote by $\vec T_{k\cdot}$ the $k$-th row of $T$, and by $\vec T_{\cdot k}$ the $k$-th column of $T$.

THEOREM 1.6 (Second order fluctuation). Suppose that Assumption 1.1 holds. Recall that $u_i$ and $v_i$ are the left and right singular vectors of the signal $S$ associated with the singular value $d_i$. If $d_i\ge\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$, we have the expansion to the fluctuation order
$$\lambda_i=d_i+u_i^*\Sigma X_1v_i+v_i^*(\Sigma X_2)^*u_i+\frac{1}{\sqrt n}g\,(1+o_p(1)).\qquad(1.5)$$
Here $g$ is independent of $u_i^*\Sigma X_1v_i+v_i^*(\Sigma X_2)^*u_i$, and it is a centered Gaussian random variable with variance
$$\mathrm{Var}(g)=\frac{n}{d_i^4}\sum_{\alpha,\beta}\bigg[\frac{d_i^2}{n^2}V(\Sigma^*u_iu_i^*\Sigma,\Sigma^*u_iu_i^*\Sigma)_{\alpha,\beta}\,M(T,\alpha,\beta)+\frac{2}{n^3}V(\Sigma^*u_iu_i^*\Sigma,v_iv_i^*)_{\alpha,\beta}\,N(TT^*,\alpha,\beta)+\frac{d_i^2}{n^2}V(v_iv_i^*,v_iv_i^*)_{\alpha,\beta}\,M(T^*,\alpha,\beta)\bigg],$$
where
$$M(T,\alpha,\beta)=\vec T_{\alpha\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}T^*T\Big]^{-1}\vec T_{\beta\cdot},\qquad N(TT^*,\alpha,\beta)=\bigg(TT^*\Big[I-\frac{1}{n^2|z|^2}TT^*\Big]^{-1}\vec T_{\cdot\beta}\bigg)_{\alpha},\qquad V(pq^*,rs^*)_{\alpha,\beta}=(pq^*)_{\alpha\alpha}(rs^*)_{\beta\beta}$$
for vectors $p,q,r,s\in\mathbb{R}^p$.

REMARK 1.7. Note that the fluctuation order is not necessarily $1/\sqrt n$, due to the possible $n$-dependence of the $\sigma_i$'s. It could be as large as $n^{-1/2}\sigma_{\max}^2$, depending
on ฮฃโˆ—ui. The above theorem reveals a non-universal feature of the limiting distribution of the outlier, which has been previously observed in other Hermitian or non-Hermitian model; see [24, 45, 51] for instance. Particularly, the distribution of the outlier is (asymptotically) a convolution of the distribution of a linear combination of X1,X2entries and a Gaussian, which may not be Gaussian in case ui,vi,ฮฃโˆ—uiare all localized, i.e., only a fixed number of components of them are nonzero. But apart from this case, the limiting distribution is still Gaussian by CLT. REMARK 1.8. Our discussion can also be easily extended to the case when there are some multiple di, i.e, some of diโ€™s are equal. In this case, a supercritical diwith multiplicity kiwill create kicorresponding outliers ฮปiโ€™s. The joint distribution of these kieigenvalues is given by that of the eigenvalues of an kibykirandom matrix, whose entry distribution can also be analyzed by our derivations. As a consequence, on fluctuation level, some of these ฮปiโ€™s will be truly complex, i.e., the imaginary part is not 0. It can also be seen in our simulation study in Section 2. For brevity, we leave the detailed discussion to future study. 1.1. Proof Strategy. In this section, we briefly describe our proof strategy for the main results. We will start with the null case (S,ฮฃ) = (0 ,I). We shall first show that, in this case, there is no outlier. More specifically, we denote by X0= X1 Xโˆ— 2 , (1.6) which can be regarded as a linearization of X1Xโˆ— 2. The variance profile of X0is V=1 n T Tโˆ— . (1.7) This non-Hermitian matrix model can be regarded as a special case of the models consid- ered in [1], where the authors consider a general Kronecker random matrix, which is a linear combination of Kronecker products of deterministic matrices and random matrices with inde- pendent entries but general variance profile. 
The results in [1] show that, under rather general assumptions, the spectrum of the Kronecker random matrix is contained in the self-consistent $\tau$-pseudospectrum for any $\tau>0$. In our case, the self-consistent $\tau$-pseudospectrum is defined as
$$\mathbb{D}_\tau:=\{z\in\mathbb{C}:\mathrm{dist}(0,\mathrm{supp}\,\rho^z)\le\tau\},$$
where $\rho^z$ is the so-called self-consistent density of states, which is the deterministic approximation of the spectral distribution of the Hermitization of $\mathcal{X}_0-z$, i.e.,
$$H^z=\begin{pmatrix}0 & \mathcal{X}_0-z\\ \mathcal{X}_0^*-\bar z & 0\end{pmatrix}.$$
One calls $\mathrm{supp}\,\rho^z$ the self-consistent spectrum of $H^z$. Hence, in order to prove that for any $\delta>0$ the spectral radius of $\mathcal{X}_0$ is bounded by $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$, and thus that of $X_1X_2^*$ is bounded by $n^{-1}\|T\|_{\mathrm{op}}+\delta$, it suffices to show that $z\notin\mathbb{D}_\tau$ for some $\tau>0$, given that $|z|\ge\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$. Such a conclusion can be obtained by analyzing the Hermitized Dyson equation following the strategy in [2]. In our case, the Dyson equation boils down to a system of four vector equations. The analysis of the Hermitized random matrix $H^z$ not only leads to the precise upper bound on the spectral radii of $\mathcal{X}_0$ and $X_1X_2^*$, it also provides the high probability upper bound on the operator norm of the Green function,
$$\|G(\lambda)\|_{\mathrm{op}}\equiv\|(X_1X_2^*-\lambda)^{-1}\|_{\mathrm{op}}\le C,\qquad\text{if }|\lambda|\ge n^{-1}\|T\|_{\mathrm{op}}+\delta.$$
We then proceed with
studying the case when $S=0$ but $\Sigma$ carries the deformation defined in (1.2). In this case, we consider the noise part $\mathcal{X}$ in (1.3), which can be regarded as a multiplicative deformation of $\mathcal{X}_0$. Our aim is to show that, as long as $\sigma_{\max}\le n^{1/4-\varepsilon_0}$ for some small constant $\varepsilon_0>0$, the results proved for $\mathcal{X}_0$, including the upper bound on the spectral radius and the upper bound on the Green function, still hold. That is, the multiplicative deformation does not change the spectrum of $\mathcal{X}_0$ significantly. In particular, we show that the spectral radius of $\mathcal{X}$ is also bounded by $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$ w.h.p., and the operator norm of the Green function of $\Sigma X_1X_2^*\Sigma^*$ is of order $1$. To this end, we shall prove that $\det(\mathcal{X}-z)$ is uniformly nonzero for all $|z|\ge\sqrt{n^{-1}\|T\|_{\mathrm{op}}}+\delta$. Based on the result for $\mathcal{X}_0$, it will be sufficient to show the smallness of the following centered quadratic form of the Green function of $X_1X_2^*$: for any deterministic unit vectors $u$ and $v$, any $|\lambda|\ge n^{-1}\|T\|_{\mathrm{op}}+\delta$, and any small constant $\varepsilon>0$,
$$|u^*\mathcal{G}(\lambda)v|\prec n^{-1/2+\varepsilon},\qquad\mathcal{G}(\lambda)=G(\lambda)+\frac{1}{\lambda}.\qquad(1.8)$$
The above type of estimate has been considered in the previous literature, such as [54, 51, 20, 27], for other non-Hermitian random matrix models, under various assumptions and with varying levels of precision. But all of them are carried out by a rather delicate combinatorial moment method after applying a Neumann expansion. In our work, we prove (1.8) via a rather robust cumulant expansion approach, which is applied to the Green function directly. The cumulant expansion approach has been widely used to study Green functions in random matrix theory; see [44, 48, 40, 34] for instance. It is a bit surprising that it has never been used to study the limiting behaviour of the outliers of non-Hermitian matrices in the literature, despite the fact that it has been widely used for the Hermitian counterpart; see the recent survey [13] and the references therein.
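The estimate (1.8) can be illustrated numerically. The following sketch is our own toy check (flat variance profile, $\Sigma=I$): it evaluates the centered quadratic form at a point $z$ outside the limiting spectrum and observes the $n^{-1/2}$-type smallness:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, z = 200, 500, 2.0           # |z| well above n^{-1}||T||_op for T = all-ones

X1 = rng.normal(size=(p, n)) / np.sqrt(n)
X2 = rng.normal(size=(p, n)) / np.sqrt(n)
G = np.linalg.inv(X1 @ X2.T - z * np.eye(p))   # G(z) = (X1 X2^* - z)^{-1}

u, v = rng.normal(size=(2, p))
u /= np.linalg.norm(u); v /= np.linalg.norm(v)

quad = u @ (G + np.eye(p) / z) @ v             # u^* (G(z) + 1/z) v, cf. (1.8)
print(abs(quad))                               # small, of order n^{-1/2}
```

The uncentered form $u^*G(z)v$ would instead concentrate around $-u^*v/z$, which is why the $1/z$ centering is essential in (1.8).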
After establishing these bounds for $\mathcal{X}_0$ and $\mathcal{X}$, we then turn to investigate the outliers of our model $Y$, created by the signal part $\mathcal{S}$. The asymptotic behaviour of the outliers of $Y$ eventually boils down to the analysis of the quadratic form $u^*\mathcal{G}(\lambda)v$. From the estimate in (1.8), one can conclude Theorem 1.4 via a standard argument from [54]. Regarding the fluctuation, we find that $\lambda_i-d_i$ can be written as a linear combination of Green function quadratic forms of the form $u^*\mathcal{G}(\lambda)v$ and of linear forms $\psi^*X_a\phi$, $a=1,2$, for various choices of $u,v,\psi$ and $\phi$. A key fact is that $\psi^*X_a\phi$ is not necessarily asymptotically Gaussian, depending on the structure of $\psi$ and $\phi$, but $u^*\mathcal{G}(\lambda)v$ is asymptotically Gaussian. Hence, we shall show both the asymptotic Gaussianity of the linear combinations of the $u^*\mathcal{G}(\lambda)v$'s and the asymptotic independence between them and the $\psi^*X_a\phi$ terms. To this end, we study the joint characteristic function of the $u^*\mathcal{G}(\lambda)v$'s and the $\psi^*X_a\phi$'s and its limit. Again, the limiting characteristic function is obtained via the robust cumulant expansion approach. In contrast, even in the simpler case of an iid non-Hermitian random matrix, the distribution of the outlier was previously obtained in [51, 20] via a rather involved moment method. Our approach is much more straightforward,
and robust under the general variance profile assumption, thanks to the inputs from [1].

1.2. Notation. Throughout this paper, we regard $n$ as our fundamental large parameter. Any quantities that are not explicit constants or fixed may depend on $n$; we often omit the argument $n$ from our notation. We use $\|A\|_{\mathrm{op}}$ for the operator norm of a matrix $A$. We use $C$ or $K$ to denote a generic (large) positive constant. For any positive integer $m$, let $\llbracket m\rrbracket$ denote the set $\{1,\dots,m\}$. We use $\mathbf{1}$ to denote the all-one vector, whose dimension is often apparent from the context and is thus omitted from the notation. For any vector $u\in\mathbb{C}^m$, we denote by $\langle u\rangle$ the average of its components.

1.3. Organization. The rest of the paper is organized as follows. In Section 2, we propose an approach to detect the signal and present a simulation study. In Section 3, we prove Theorem 1.4, based on Propositions 3.1 and 3.2. In Section 4, we prove Theorem 1.6, based on Proposition 4.1. The proofs of Propositions 3.1, 3.2 and 4.1 are postponed to Sections 6, 7 and 5, respectively.

2. Simulation study. In this section, we present a simulation study in order to compare the singular value approach based on the symmetric model with the eigenvalue approach based on the asymmetric model. We may apply our result in the following way. We define the following random domain, which is purely data based:
$$\mathbb{D}_N:=\Big\{z\in\mathbb{C}:\ \mathrm{Re}\,z\ge\lambda^{s}_{\max}+N^{-1/2}\Big\},$$
where we used the notation
$$\lambda^{s}_{\max}:=\max_i|\lambda_i|\,\mathbf{1}\Big(\arg(\lambda_i)\in\Big[\frac{\pi}{\log N},\frac{\pi}{2}\Big]\Big).\qquad(2.1)$$
Our detection criterion is as follows: if $\lambda_i\in\mathbb{D}_N$, we identify $d_iu_iv_i^*$ as a signal. The rough idea is to check whether a leading eigenvalue (in magnitude), $\lambda_i$, is significantly away from the threshold. Here we use $\lambda^{s}_{\max}$ as a random approximation of the limiting spectral radius, $\sqrt{n^{-1}\|T\|_{\mathrm{op}}}$, which is unknown in reality.
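The criterion above can be phrased as a short routine; a minimal sketch, where the function and variable names are ours:

```python
import numpy as np

def detect_signals(eigs, N):
    """Data-based rule: lambda_s_max is the largest |lambda_i| over eigenvalues
    with argument in [pi/log N, pi/2] (cf. (2.1)); an eigenvalue is flagged as
    a signal if Re(lambda_i) >= lambda_s_max + N**-0.5."""
    args = np.angle(eigs)
    band = (args >= np.pi / np.log(N)) & (args <= np.pi / 2)
    lam_s_max = np.abs(eigs[band]).max() if band.any() else 0.0
    flagged = eigs[eigs.real >= lam_s_max + N ** -0.5]
    return lam_s_max, flagged

# toy spectrum: two 'bulk' eigenvalues inside the angular band, one real outlier
toy = np.array([0.5 * np.exp(0.8j), 0.9 * np.exp(1.0j), 3.0 + 0j, -3.0 + 0j])
lam, flagged = detect_signals(toy, N=1000)
assert abs(lam - 0.9) < 1e-9          # threshold from the bulk
assert flagged.shape == (1,)          # only the outlier at 3 is flagged
```

In practice one would feed in the eigenvalues of $Y$ with $N$ of the order of the sample size; the toy spectrum here is purely illustrative.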
Such an approximation is possible since the limiting spectral distribution of $\mathcal{X}_0$ is rotationally symmetric, a consequence of the fact that our Matrix/Vector Dyson equation in (6.6) depends on $|z|^2$ only; see the discussion in Section 6. Hence, $\lambda^{s}_{\max}$ serves as a random approximation of the true threshold. Our choice of the additional shift $N^{-1/2}$ is inspired by the recent result on the iid random matrix [29]. In particular, it is known from [29] that the fluctuation of the spectral radius of an iid random matrix (without deformations) is of order $o(N^{-1/2})$. It is not clear whether this fluctuation order still applies to our model, with its block structure and multiplicative perturbation $\Sigma$, but we expect it to remain true at least when $\sigma_{\max}\sim1$. Certainly, if $d_i$ is sufficiently far from the threshold, by a constant order distance $\epsilon>0$ as assumed in our main theorems, we do not have to choose a correction as delicate as $N^{-1/2}$. Such a choice is mainly for the detection of weak signals close to the threshold. The reason why we start from $\pi/\log N$ in the definition of $\lambda^{s}_{\max}$ is that, at the fluctuation level, the outliers can indeed have nonzero imaginary part when we
have some multiple $d_i$'s or close $d_i$'s. But in any case, the fluctuation order is no larger than $N^{-\epsilon}$, according to our assumption on $\sigma_{\max}$. Hence, the choice of the lower phase bound $\pi/\log N$ in (2.1) is enough to distinguish the true signals from the other eigenvalues. In the simulation study, we fix $(p,n)=(800,2000)$. We consider two settings of the variance profile $T$,
$$T_1=\mathbf{1},\qquad T_2=(I_{p/2}\oplus1.5\,I_{p/2})\mathbf{1}.$$
We further consider the following two distribution types for $x_{ij}$: (i) $\sqrt n\,x_{ij}\sim N(0,t_{ij})$; (ii) $\sqrt n\,x_{ij}$ follows Student's t distribution with $\nu=2.2$ degrees of freedom, normalized to have mean $0$ and variance $t_{ij}$. For the choices of $S$, we primarily consider the case of simple $d_i$, but we also perform a simulation for multiple $d_i$. Specifically, we choose
$$S=d_1e_3\tilde e_4^*+d_2e_4\tilde e_5^*+d_3e_5\tilde e_6^*$$
and consider various choices of $(d_1,d_2,d_3)$. Here $e_i\in\mathbb{R}^p$ and $\tilde e_j\in\mathbb{R}^n$ represent the standard bases of the respective dimensions. For $\Sigma$, we consider
$$\Sigma=I+\sigma_1e_1e_1^*+\sigma_2e_2e_2^*.$$
In all figures, we simply call all singular values/eigenvalues that are close to the support of the limiting singular value/eigenvalue distributions the singular values/eigenvalues, and we call those away from the support of the limiting distributions the outlying singular values/outlying eigenvalues. We determine whether an eigenvalue is an outlying eigenvalue by the criterion $\lambda_i\in\mathbb{D}_N$, and we also color its negative copy. In all eigenvalue figures, we denote by $\mathrm{thre}_s$ the random threshold $\lambda^{s}_{\max}$. We determine whether a singular value is an outlying one by simply checking whether it is away from the theoretical right end point of the (general) Marchenko–Pastur law. In Fig 1, we plot the singular values of $H$ under the choice $(T,X)=(T_1,\text{Gaussian})$, and in Fig 2, we plot the eigenvalues of $Y$ (cf. (1.3)) under the same choice of $(T,X)$. In both figures, we choose simple $d_i$'s. The results for multiple $d_i$'s are presented in Fig 3 and 4.
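The simulation setting above can be generated as follows; this is our own sketch, and the specific values of $(d_1,d_2,d_3)$ and $(\sigma_1,\sigma_2)$ below are illustrative choices (the paper considers various choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, nu = 800, 2000, 2.2

# variance profiles: T1 = all-ones, T2 = (I_{p/2} (+) 1.5 I_{p/2}) 1
T1 = np.ones((p, n))
T2 = np.vstack([np.ones((p // 2, n)), 1.5 * np.ones((p // 2, n))])

def sample_X(T):
    """Gaussian case: sqrt(n) x_ij ~ N(0, t_ij)."""
    return np.sqrt(T / n) * rng.normal(size=T.shape)

def sample_X_t(T):
    """Student's t case: sqrt(n) x_ij ~ t_nu, normalized to mean 0, variance t_ij."""
    Z = rng.standard_t(nu, size=T.shape) / np.sqrt(nu / (nu - 2))
    return np.sqrt(T / n) * Z

# S = d1 e3 e4~* + d2 e4 e5~* + d3 e5 e6~*  and  Sigma = I + s1 e1 e1* + s2 e2 e2*
d1, d2, d3 = 3.0, 2.0, 1.5        # illustrative signal strengths (our choice)
s1, s2 = 5.0, 4.0                 # illustrative spikes in Sigma (our choice)
S = np.zeros((p, n))
S[2, 3], S[3, 4], S[4, 5] = d1, d2, d3
Sigma = np.eye(p)
Sigma[0, 0] += s1; Sigma[1, 1] += s2

H1 = S + Sigma @ sample_X(T2)     # two independent samples
H2 = S + Sigma @ sample_X(T2)
```

One would then compare the singular values of $H_1$ with the eigenvalues of the linearization $Y$ built from $H_1, H_2$.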
In Fig 5 and 6, we consider the setting $(T,X)=(T_2,\text{Gaussian})$. From these figures, we can clearly see that the eigenvalue approach can always detect the signals, and that the locations of the outlying eigenvalues precisely give the values of the $d_i$ that are above the threshold. In contrast, the spiked $\Sigma$ can create additional outliers when one uses the singular value approach, which could thus be falsely detected as signals.

Fig 1

Although our main theorems do not cover the heavy-tailed regime, we also present the simulation results in Fig. 7 and Fig. 8 under the choice $(T,X)=(T_1,\text{Student's t})$ to illustrate the robustness of our approach in the heavy-tailed setting. The figure for the case $(T,X)=(T_2,\text{Student's t})$ is similar to that of $(T,X)=(T_2,\text{Gaussian})$; we omit it for brevity. In this case, an additional challenge arises: even when $\Sigma=I$, there are many outlying singular values due to the heavy tail of the distribution. We can also view them as spiked singular values, although they are not created by $\Sigma$. Detecting the signal among all these outliers seems impossible. In contrast, we observe that the eigenvalue approach remains robust in this
case,

Fig 2

Fig 3

Fig 4

Fig 5

Fig 6

as long as the second moments of the matrix entries exist. This distinction between the extreme singular values and the extreme eigenvalues is supported by results from more classical random matrix models; see, for instance, [8, 53, 5, 21, 38].

Fig 7

Fig 8

3. First order: Proof of Theorem 1.4. Recall the matrix model $Y$ from (1.3). Denote the eigendecomposition of $\mathcal{S}$ by
$$\mathcal{S}=\mathcal{W}\mathcal{D}\mathcal{W}^*:=\sum_{i=\pm1,\dots,\pm k}d_iw_iw_i^*,\qquad\text{where}\quad d_{\pm i}=\pm d_i,\quad w_{\pm i}=\frac{1}{\sqrt2}\begin{pmatrix}u_i\\ \pm v_i\end{pmatrix},\quad i=1,\dots,k.$$
By considering the characteristic polynomial, if $\lambda$ is an eigenvalue of $Y$ but not an eigenvalue of $\mathcal{X}$, we can derive from
$$\det\big(\mathcal{X}+\mathcal{W}\mathcal{D}\mathcal{W}^*-\lambda\big)=\det\big(\mathcal{X}-\lambda\big)\det\big(I+\mathcal{D}\mathcal{W}^*(\mathcal{X}-\lambda)^{-1}\mathcal{W}\big)=0$$
the following identity:
$$\det\big(I+\mathcal{D}\mathcal{W}^*(\mathcal{X}-\lambda)^{-1}\mathcal{W}\big)=0.\qquad(3.1)$$
We further write
$$(\mathcal{X}-\lambda)^{-1}=\begin{pmatrix}\lambda G^{\sigma}(\lambda^2) & \Sigma X_1\widetilde G^{\sigma}(\lambda^2)\\ \widetilde G^{\sigma}(\lambda^2)X_2^*\Sigma^* & \lambda\widetilde G^{\sigma}(\lambda^2)\end{pmatrix},\qquad(3.2)$$
where
$$G^{\sigma}(z):=(\Sigma X_1X_2^*\Sigma^*-z)^{-1},\qquad\widetilde G^{\sigma}(z):=(X_2^*\Sigma^*\Sigma X_1-z)^{-1}.$$
We further set
$$G(z):=(X_1X_2^*-z)^{-1},\qquad\widetilde G(z):=(X_2^*X_1-z)^{-1}.$$
We denote by $\rho(A)$ the spectral radius of a square matrix $A$. Recall the notion "with high probability" from Definition 1.3. We have the following proposition regarding the spectral radii and Green functions of $X_1X_2^*$ and $\Sigma X_1X_2^*\Sigma^*$.

PROPOSITION 3.1. Under Assumption 1.1, for any small constant $\delta>0$, we have
$$\rho(X_1X_2^*)\le n^{-1}\|T\|_{\mathrm{op}}+\delta,\qquad\rho(\Sigma X_1X_2^*\Sigma^*)\le n^{-1}\|T\|_{\mathrm{op}}+\delta$$
with high probability. Further, uniformly in $z$ with $|z|\ge n^{-1}\|T\|_{\mathrm{op}}+\delta$, we have for some large constant $C>0$,
$$\|G(z)\|_{\mathrm{op}},\ \|G^{\sigma}(z)\|_{\mathrm{op}},\ \|\widetilde G(z)\|_{\mathrm{op}},\ \|\widetilde G^{\sigma}(z)\|_{\mathrm{op}}\le C$$
with high probability.

The proof of the above proposition is deferred to Section 6. We then introduce the centered Green functions
$$\mathcal{G}(z):=G(z)+z^{-1},\qquad\widetilde{\mathcal{G}}(z):=\widetilde G(z)+z^{-1}.\qquad(3.3)$$
We set, for any small but fixed $\varepsilon>0$,
$$q_n\equiv q_n(\varepsilon):=n^{\frac12-\varepsilon}.\qquad(3.4)$$
Here $\varepsilon$ shall always be chosen sufficiently small according to $\varepsilon_0$ in Assumption 1.1 (iii).
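The block identity (3.2) is a finite-dimensional algebraic fact and can be verified directly; here is our own toy check, with $\Sigma=I$ (so that $G^{\sigma}=G$ and $\widetilde G^{\sigma}=\widetilde G$):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, lam = 4, 6, 2.5                      # tiny sizes; lam^2 outside the spectrum

X1 = rng.normal(size=(p, n)) / np.sqrt(n)
X2 = rng.normal(size=(p, n)) / np.sqrt(n)
calX = np.block([[np.zeros((p, p)), X1], [X2.T, np.zeros((n, n))]])

z = lam ** 2
G  = np.linalg.inv(X1 @ X2.T - z * np.eye(p))   # G(z)  = (X1 X2^* - z)^{-1}
Gt = np.linalg.inv(X2.T @ X1 - z * np.eye(n))   # G~(z) = (X2^* X1 - z)^{-1}

# right-hand side of (3.2) with Sigma = I
blocks = np.block([[lam * G, X1 @ Gt], [Gt @ X2.T, lam * Gt]])
lhs = np.linalg.inv(calX - lam * np.eye(p + n))
assert np.allclose(lhs, blocks)                 # the two sides agree
```

The check also makes the $\pm\lambda$ pairing transparent: $(\mathcal{X}-\lambda)^{-1}$ only involves the resolvents of $X_1X_2^*$ and $X_2^*X_1$ at $z=\lambda^2$.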
The following proposition provides estimates on the quadratic forms of the centered $\mathcal{G}$ and $\widetilde{\mathcal{G}}$, which will be one of our main technical inputs.

PROPOSITION 3.2. Under Assumption 1.1, for any $|z|\ge n^{-1}\|T\|_{\mathrm{op}}+\delta$ and any deterministic vectors $u,v\in\mathbb{S}^{p-1}_{\mathbb{C}}$ and $\psi,\phi\in\mathbb{S}^{n-1}_{\mathbb{C}}$, we have
$$\langle u,\mathcal{G}(z)v\rangle=O_\prec(q_n^{-1}),\qquad\langle\psi,\widetilde{\mathcal{G}}(z)\phi\rangle=O_\prec(q_n^{-1}),$$
$$\langle u,X_1\widetilde{\mathcal{G}}(z)\phi\rangle=O_\prec(q_n^{-1}),\qquad\langle\psi,\widetilde{\mathcal{G}}(z)X_2^*v\rangle=O_\prec(q_n^{-1}).\qquad(3.5)$$

The proof of Proposition 3.2 is deferred to Section 7. We then proceed with the proof of Theorem 1.4. Notice that $\Sigma^{-1}$ is also a low rank perturbation of $I$. We then first write
$$G^{\sigma}(z)=(\Sigma^*)^{-1}\Big(X_1X_2^*-z+z\big(I-(\Sigma^*\Sigma)^{-1}\big)\Big)^{-1}\Sigma^{-1}=:G(z)+E(z).$$
The low rank matrix $I-(\Sigma^*\Sigma)^{-1}$ admits a spectral decomposition
$$I-(\Sigma^*\Sigma)^{-1}=:L\Gamma L^*,\qquad(3.6)$$
where $\Gamma$ is a low rank diagonal matrix consisting of the non-zero eigenvalues of $I-(\Sigma^*\Sigma)^{-1}$. Then, by the Woodbury matrix identity, we can write
$$\big(X_1X_2^*-z+zL\Gamma L^*\big)^{-1}=G-zGL\big(I+z\Gamma L^*GL\big)^{-1}\Gamma L^*G=G-zGL\big(I-\Gamma+z\Gamma L^*\mathcal{G}L\big)^{-1}\Gamma L^*G,$$
which gives
$$E(z)=\big((\Sigma^*)^{-1}-I\big)G(z)+G(z)\big(\Sigma^{-1}-I\big)+\big((\Sigma^*)^{-1}-I\big)G(z)\big(\Sigma^{-1}-I\big)-z(\Sigma^*)^{-1}GL\big[I-\Gamma+z\Gamma L^*\mathcal{G}L\big]^{-1}\Gamma L^*G\Sigma^{-1}.$$
We remark here that if we replace $G(z)$ by $-1/z$ in the definition of $E(z)$, the expression vanishes. Similarly, we can also write
$$\widetilde G^{\sigma}(z)=\widetilde G(z)-\widetilde G(z)X_2^*L\Lambda\big(I-zL^*GL\Lambda\big)^{-1}L^*X_1\widetilde G(z)=:\widetilde G(z)+F(z),$$
where $\Lambda=(I-\Gamma)^{-1}-I$. Hence, with the above notation, we can rewrite (3.2) as
$$(\mathcal{X}-\lambda)^{-1}=\begin{pmatrix}\lambda G(\lambda^2) & \Sigma X_1\widetilde G(\lambda^2)\\ \widetilde G(\lambda^2)X_2^*\Sigma^* & \lambda\widetilde G(\lambda^2)\end{pmatrix}+\begin{pmatrix}\lambda E(\lambda^2) & \Sigma X_1F(\lambda^2)\\ F(\lambda^2)X_2^*\Sigma^* & \lambda F(\lambda^2)\end{pmatrix}.\qquad(3.7)$$
Recall the
definition of $\sigma_{\max}$ from (1.4). Further, we introduce the notation $R_i(z)$, $i=1,2$, to collect all the matrices in the remainder terms after applying the expansion, which satisfy, for any given unit vectors $u,v$,
$$\langle u,R_i(z)v\rangle=O_\prec(q_n^{-i}\sigma_{\max}^{2i}).\qquad(3.8)$$
By our assumption on $\Sigma$, it is easy to check that $\sigma_{\max}^2\le Cn^{1/2-\varepsilon_0}$ for some $\varepsilon_0>\varepsilon>0$. Hence, the eigenvalues of $I-\Gamma$ are no smaller than $n^{-1/2+\varepsilon_0}$. Meanwhile, the entries of $L^*\mathcal{G}L$ are $O_\prec(q_n^{-1})$ according to Proposition 3.2. Consequently, $\|(I-\Gamma)^{-1}\Gamma L^*\mathcal{G}L\|_{\mathrm{op}}=o(1)$ with high probability; we can thus express $\big(I-\Gamma+z\Gamma L^*\mathcal{G}L\big)^{-1}$ as a Neumann series and bound the remainder terms, i.e.,
$$z(\Sigma^*)^{-1}GL\big(I-\Gamma+z\Gamma L^*\mathcal{G}L\big)^{-1}\Gamma L^*G\Sigma^{-1}=z(\Sigma^*)^{-1}G(\Sigma^*\Sigma)\big(I-(\Sigma^*\Sigma)^{-1}\big)G\Sigma^{-1}$$
$$-z^2(\Sigma^*)^{-1}G(\Sigma^*\Sigma)\big(I-(\Sigma^*\Sigma)^{-1}\big)\mathcal{G}(\Sigma^*\Sigma)\big(I-(\Sigma^*\Sigma)^{-1}\big)G\Sigma^{-1}+R_2(z),\qquad(3.9)$$
where $R_2(z)$ is as in (3.8). With the above expansion and Proposition 3.2, we can expand $G(\lambda^2)$ and $\widetilde G(\lambda^2)$ around $-1/\lambda^2$ in (3.7) and obtain
$$\mathcal{W}^*(\mathcal{X}-\lambda)^{-1}\mathcal{W}=-\frac{1}{\lambda}+\mathcal{W}^*\begin{pmatrix}\lambda\mathcal{G}(\lambda^2) & \Sigma X_1\widetilde G(\lambda^2)\\ \widetilde G(\lambda^2)X_2^*\Sigma^* & \lambda\widetilde{\mathcal{G}}(\lambda^2)\end{pmatrix}\mathcal{W}+\mathcal{W}^*\begin{pmatrix}\lambda E(\lambda^2) & 0\\ 0 & 0\end{pmatrix}\mathcal{W}+\mathcal{W}^*R_2(\lambda^2)\mathcal{W},\qquad(3.10)$$
where
$$E(z)=A^*\mathcal{G}+\mathcal{G}A+A^*\mathcal{G}A+B^*\mathcal{G}\Sigma^{-1}+(\Sigma^*)^{-1}\mathcal{G}B+B^*\mathcal{G}B+R(z),\qquad(3.11)$$
$$R(z)=-z(\Sigma^*)^{-1}\mathcal{G}B\Sigma\mathcal{G}B\Sigma\mathcal{G}\Sigma^{-1}-z(\Sigma^*)^{-1}\mathcal{G}B\Sigma\mathcal{G}\Sigma^{-1}-z(\Sigma^*)^{-1}\mathcal{G}B\Sigma\mathcal{G}B-zB^*\mathcal{G}B\Sigma\mathcal{G}\Sigma^{-1}+R_2(z).$$
Here we introduced the shorthand notation
$$A:=\Sigma^{-1}-I,\qquad B:=(\Sigma^*\Sigma)\big(I-(\Sigma^*\Sigma)^{-1}\big)\Sigma^{-1}=\Sigma^*-\Sigma^{-1}.$$
Note that Proposition 3.2 and (3.9) imply that the first six terms in (3.11) are $R_1(z)$, while $R(z)$ is an $R_2(z)$. Then, we have
$$\mathcal{W}^*\Big[(\mathcal{X}-\lambda)^{-1}+\frac{1}{\lambda}+\frac{\mathcal{X}}{\lambda^2}\Big]\mathcal{W}=\mathcal{W}^*\begin{pmatrix}\lambda\mathcal{G}(\lambda^2) & \Sigma X_1\widetilde{\mathcal{G}}(\lambda^2)\\ \widetilde{\mathcal{G}}(\lambda^2)X_2^*\Sigma^* & \lambda\widetilde{\mathcal{G}}(\lambda^2)\end{pmatrix}\mathcal{W}+\mathcal{W}^*\begin{pmatrix}\lambda E(\lambda^2) & 0\\ 0 & 0\end{pmatrix}\mathcal{W}+\mathcal{W}^*R_2(\lambda^2)\mathcal{W}.\qquad(3.12)$$
With Proposition 3.2, one can then follow the argument in [54] to show that
$$\lambda_{\pm i}(Y)=\pm\lambda_i(Y)=\pm d_i+O_\prec(q_n^{-1}\sigma_{\max}^2).$$
This concludes the proof of Theorem 1.4.

4. Second order: Proof of Theorem 1.6. In this section, we take a step further and study the fluctuation of $\lambda_i$.
Based on (3.1) and Proposition 3.1, we further expand $\lambda_i$ around $d_i$:
$$0=\det\big(I+\mathcal{D}\mathcal{W}^*(\mathcal{X}-\lambda_i)^{-1}\mathcal{W}\big)$$
$$=\det\big(I+\mathcal{D}\mathcal{W}^*(\mathcal{X}-d_i)^{-1}\mathcal{W}+(d_i-\lambda_i)\mathcal{D}\mathcal{W}^*(\mathcal{X}-d_i)^{-2}\mathcal{W}+O(|\lambda_i-d_i|^2)\big)$$
$$=\det\big(I+\mathcal{D}\mathcal{W}^*(\mathcal{X}-d_i)^{-1}\mathcal{W}+(d_i-\lambda_i)d_i^{-2}\mathcal{D}+O(|\lambda_i-d_i|q_n^{-1}\sigma_{\max}^2)\big)$$
$$=\Big[1+d_iw_i^*(\mathcal{X}-d_i)^{-1}w_i+(d_i-\lambda_i)d_i^{-1}\Big]\prod_{j\neq i}\Big(1-\frac{d_j}{d_i}\Big)+O(|\lambda_i-d_i|q_n^{-1}\sigma_{\max}^2)+O\big(q_n^{-2}\sigma_{\max}^2\|u_i^*\Sigma\|_2^2\big),\qquad(4.1)$$
where in the third step the estimate of $\mathcal{W}^*(\mathcal{X}-d_i)^{-2}\mathcal{W}$ follows simply from that of $\mathcal{W}^*(\mathcal{X}-d_i)^{-1}\mathcal{W}$ by applying a contour integral around $d_i$, and the last step follows from the expansion of the determinant and the fact that the contribution from the off-diagonal entries is small. For simplicity, we write the error term in (4.1) as
$$\mathcal{E}_i:=O(|\lambda_i-d_i|q_n^{-1}\sigma_{\max}^2)+O\big(q_n^{-2}\sigma_{\max}^2\|u_i^*\Sigma\|_2^2\big).$$
In the sequel, we will also use $\mathcal{E}_i$ to denote any generic term of the above order. From (4.1), we have
$$\lambda_i-d_i=d_i\big(1+d_iw_i^*(\mathcal{X}-d_i)^{-1}w_i\big)+\mathcal{E}_i=d_i^2w_i^*\Big[(\mathcal{X}-d_i)^{-1}+\frac{1}{d_i}+\frac{\mathcal{X}}{d_i^2}\Big]w_i-w_i^*\mathcal{X}w_i+\mathcal{E}_i.\qquad(4.2)$$
By further using (3.10), we have
$$\lambda_i-d_i=d_i^2w_i^*\begin{pmatrix}d_i\mathcal{G}(d_i^2) & \Sigma X_1\widetilde{\mathcal{G}}(d_i^2)\\ \widetilde{\mathcal{G}}(d_i^2)X_2^*\Sigma^* & d_i\widetilde{\mathcal{G}}(d_i^2)\end{pmatrix}w_i+d_i^2w_i^*\begin{pmatrix}d_iE(d_i^2) & 0\\ 0 & 0\end{pmatrix}w_i-w_i^*\mathcal{X}w_i+\mathcal{E}_i.\qquad(4.3)$$
Hence, our main task is to establish the joint distribution of the quadratic forms
$$u^*\mathcal{G}(z)v,\qquad\psi^*\widetilde{\mathcal{G}}(z)\phi,\qquad s^*X_1\widetilde{\mathcal{G}}(z)\zeta,\qquad r^*\widetilde{\mathcal{G}}(z)X_2^*\eta,$$
and also the linear (in $\mathcal{X}$) term $w_i^*\mathcal{X}w_i$. We then have the following key proposition.

PROPOSITION 4.1. Let $|z|\ge n^{-1}\|T\|_{\mathrm{op}}+\delta$ for any small (but fixed) $\delta>0$. Let $m_i$, $i=1,\dots,4$, be fixed positive integers. For any collection of unit vectors $u_i,v_i,\psi_j,\phi_j,q_k,\gamma_k,\eta_\ell,r_\ell$ with $i\in\llbracket m_1\rrbracket$, $j\in\llbracket m_2\rrbracket$, $k\in\llbracket m_3\rrbracket$, $\ell\in\llbracket m_4\rrbracket$, the collection of random variables
$$\Big\{\sqrt n\,u_i^*\mathcal{G}(z)v_i,\ \sqrt n\,\psi_j^*\widetilde{\mathcal{G}}(z)\phi_j,\ \sqrt n\,q_k^*X_1\widetilde{\mathcal{G}}(z)\gamma_k,\ \sqrt n\,\eta_\ell^*\widetilde{\mathcal{G}}(z)X_2^*r_\ell:\ i\in\llbracket m_1\rrbracket,\ j\in\llbracket m_2\rrbracket,\ k\in\llbracket m_3\rrbracket,\ \ell\in\llbracket m_4\rrbracket\Big\}\qquad(4.4)$$
converges to the collection of jointly Gaussian variables
$\{A_i,B_j,C_k,D_\ell:\ i\in\llbracket m_1\rrbracket,\ j\in\llbracket m_2\rrbracket,\ k\in\llbracket m_3\rrbracket,\ \ell\in\llbracket m_4\rrbracket\}$ with mean $0$ and the covariance structure given by
$$\mathrm{Cov}(A_i,A_j)=\frac{1}{n|z|^4}\sum_{\alpha,\beta}u_{i\alpha}u_{j\alpha}v_{i\beta}v_{j\beta}\,\vec T_{\alpha\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}T^*T\Big]^{-1}\vec T_{\beta\cdot},$$
$$\mathrm{Cov}(B_i,B_j)=\frac{1}{n|z|^4}\sum_{\alpha,\beta}\psi_{i\alpha}\psi_{j\alpha}\phi_{i\beta}\phi_{j\beta}\,\vec T_{\cdot\alpha}^{\top}\Big[I-\frac{1}{n^2|z|^2}TT^*\Big]^{-1}\vec T_{\cdot\beta},$$
$$\mathrm{Cov}(C_i,C_j)=\frac{1}{n^2|z|^4}\sum_{\alpha,\beta}q_{i\alpha}q_{j\alpha}\gamma_{i\beta}\gamma_{j\beta}\bigg(TT^*\Big[I-\frac{1}{n^2|z|^2}TT^*\Big]^{-1}\vec T_{\cdot\beta}\bigg)_{\alpha},$$
$$\mathrm{Cov}(D_i,D_j)=\frac{1}{n^2|z|^4}\sum_{\alpha,\beta}r_{i\alpha}r_{j\alpha}\eta_{i\beta}\eta_{j\beta}\bigg(TT^*\Big[I-\frac{1}{n^2|z|^2}TT^*\Big]^{-1}\vec T_{\cdot\beta}\bigg)_{\alpha}.\qquad(4.5)$$
The collections $\{A_i\},\{B_j\},\{C_k\},\{D_\ell\}$ are mutually independent. Further, the collection (4.4) is asymptotically independent of any collection of finitely many linear terms of the form $\sqrt n\,a^*X_ib$, $i=1,2$, for any deterministic unit vectors $a,b$.

The proof of Proposition 4.1 is given in Section 5. Now, we proceed with the proof of Theorem 1.6. It amounts to estimating the variance of the Gaussian part in (4.3), as we have already shown the asymptotic independence between the Gaussian part and $w_i^*\mathcal{X}w_i$ in Proposition 4.1. Notice that we have
$$\lambda_i-d_i=d_i^2\big(d_iu_i^*\mathcal{G}(d_i^2)u_i+u_i^*\Sigma X_1\widetilde{\mathcal{G}}(d_i^2)v_i+v_i^*\widetilde{\mathcal{G}}(d_i^2)X_2^*\Sigma^*u_i+d_iv_i^*\widetilde{\mathcal{G}}(d_i^2)v_i+d_iu_i^*E(d_i^2)u_i\big)-u_i^*\Sigma X_1v_i-v_i^*X_2^*\Sigma^*u_i+\mathcal{E}_i,\qquad(4.6)$$
where $E$ is defined in (3.11). A simplification leads to
$$u_i^*E(d_i^2)u_i=(\Sigma^{-1}u_i-u_i)^*\mathcal{G}(d_i^2)u_i+u_i^*\mathcal{G}(d_i^2)(\Sigma^{-1}u_i-u_i)+(\Sigma^{-1}u_i-u_i)^*\mathcal{G}(d_i^2)(\Sigma^{-1}u_i-u_i)$$
$$+(\Sigma^*u_i-\Sigma^{-1}u_i)^*\mathcal{G}(d_i^2)\Sigma^{-1}u_i+(\Sigma^{-1}u_i)^*\mathcal{G}(d_i^2)(\Sigma^*u_i-\Sigma^{-1}u_i)+(\Sigma^*u_i-\Sigma^{-1}u_i)^*\mathcal{G}(d_i^2)(\Sigma^*u_i-\Sigma^{-1}u_i)+\mathcal{E}_i$$
$$=u_i^*\Sigma\mathcal{G}(d_i^2)\Sigma^*u_i-u_i^*\mathcal{G}(d_i^2)u_i+\mathcal{E}_i.\qquad(4.7)$$
The Gaussian part of (4.2) comes from the first line of (4.6). For notational simplicity, we set
$$M(T,\alpha,\beta)=\vec T_{\alpha\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}T^*T\Big]^{-1}\vec T_{\beta\cdot},\qquad N(TT^*,\alpha,\beta)=\bigg(TT^*\Big[I-\frac{1}{n^2|z|^2}TT^*\Big]^{-1}\vec T_{\cdot\beta}\bigg)_{\alpha},\qquad V(pq^*,rs^*)_{\alpha,\beta}=(pq^*)_{\alpha\alpha}(rs^*)_{\beta\beta},$$
for vectors $p,q,r,s\in\mathbb{R}^p$. Combining (4.6), (4.7), and (4.5) in Proposition 4.1, we conclude the proof of Theorem 1.6.

5.
Gaussianity of Green function quadratic forms: Proof of Proposition 4.1. In this section, we prove Proposition 4.1, based on Propositions 3.1 and 3.2. For brevity, we consider the following linear combination of the quadratic forms:
\[
Q := \sqrt{n}\big(z\,u^*G(z)v + z\,\psi^*\mathcal{G}(z)\varphi + q^*X_1\mathcal{G}(z)\gamma + \eta^*\mathcal{G}(z)X_2^*r\big)
= \sqrt{n}\big(u^*X_1X_2^*G(z)v + \psi^*X_2^*X_1\mathcal{G}(z)\varphi + q^*X_1\mathcal{G}(z)\gamma + \eta^*\mathcal{G}(z)X_2^*r\big).
\tag{5.1}
\]
In order to prove Proposition 4.1, we shall consider an arbitrary linear combination of all the quadratic forms in (4.4). Nevertheless, the following derivation of the CLT for the above $Q$ is sufficient to show the mechanism, and the generalization is straightforward. After establishing the CLT for $Q$, we will further comment on how to involve the linear term of $X$ and show the asymptotic independence simultaneously.

We further define the smooth cutoff function
\[
\chi_G \equiv \chi\big(n^{-1}\operatorname{Tr} GG^*\big),
\tag{5.2}
\]
where $\chi(t)$ is a smooth cutoff function which equals $1$ when $|t|\le K$ for some sufficiently large constant $K>0$, equals $0$ when $|t|\ge 2K$, and interpolates smoothly in between, so that $|\chi^{(k)}(t)|\le C_k$ for some positive constant $C_k$ for any fixed integer $k\ge 0$. By the upper bound on $\|G\|_{\mathrm{op}}$ in Proposition 3.1, we have $|n^{-1}\operatorname{Tr} GG^*|\le K$ w.h.p. when $K$ is large enough. Hence, $\chi_G=1$ w.h.p. and $\chi^{(k)}_G \equiv \chi^{(k)}(n^{-1}\operatorname{Tr} GG^*) = 0$ w.h.p. for any fixed $k\ge 1$. Further, we have the deterministic bound
\[
\|G\|_{\mathrm{op}}\,\chi^{(k)}_G \le 2C_k K n, \qquad \text{for any fixed } k\ge 0.
\tag{5.3}
\]
Note that $Q = Q\chi_G$ w.h.p. Hence, it suffices to establish the CLT for $Q\chi_G$ instead. Our aim is to establish
\[
\mathbb{E}f'(t) = -\mathsf{d}^2 t\,\mathbb{E}f(t) + o(1), \qquad \text{where } f(t) = \exp(\mathrm{i}tQ\chi_G),
\tag{5.4}
\]
for some $\mathsf{d}>0$. We will use the cumulant expansion formula [40, Lemma 1.3] to establish the above equation. The reason why we include the $\chi_G$ factor is to ensure that the $G$-factors in the derivation have a deterministic upper bound, so that we can take expectations. In the sequel, for simplicity, we omit $z$ from the notation of the Green functions. We further
write
\[
\mathbb{E}f'(t) = \mathrm{i}\,\mathbb{E}\,Q\chi_G f(t)
= \mathrm{i}\sqrt{n}\,\mathbb{E}\big(u^*X_1X_2^*Gv + \psi^*X_2^*X_1\mathcal{G}\varphi + q^*X_1\mathcal{G}\gamma + \eta^*\mathcal{G}X_2^*r\big)\chi_G f(t)
=: \mathrm{I} + \mathrm{II} + \mathrm{III} + \mathrm{IV}.
\tag{5.5}
\]
In the sequel, we show the estimate of the first term $\mathrm{I}$ in detail; the other terms can be estimated similarly. Via the cumulant expansion [40, Lemma 1.3], we write
\[
\mathrm{I} = \mathrm{i}\sqrt{n}\,\mathbb{E}\,u^*X_1X_2^*Gv\,\chi_Gf(t)
= \mathrm{i}\sqrt{n}\sum_{ij}\mathbb{E}\big[x_{1,ij}(X_2^*Gvu^*)_{ji}\chi_G f(t)\big]
= \mathrm{i}\sqrt{n}\sum_{\alpha=1}^{m}\sum_{ij}\frac{\kappa_{\alpha+1}(x_{1,ij})}{\alpha!}\,\mathbb{E}\,\partial^{\alpha}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G f(t)\big] + R,
\tag{5.6}
\]
where $R = O_{\prec}(n^{-C})$ for a large constant $C>0$ when $m$ is large enough; the estimate can be done similarly to (7.4). Since the probability that $\chi_G\neq 1$ or $\chi^{(k)}_G\neq 0$ for some $k\ge 1$ is extremely small, any term involving the derivatives of $\chi_G$ can easily be neglected in the cumulant expansion. Hence, in the sequel, we will focus on the derivatives of the other factors, and always absorb the terms involving the $\chi_G$-derivatives into the error terms without further explanation.

The first order ($\alpha=1$) term in (5.6) reads
\[
\begin{aligned}
&\mathrm{i}\sqrt{n}\sum_{ij}\frac{t_{ij}}{n}\,\mathbb{E}\,\partial^{1}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G f(t)\big]\\
&\quad= \mathrm{i}\sqrt{n}\sum_{ij}\frac{t_{ij}}{n}\,\mathbb{E}\Big[-(X_2^*G)_{ji}(X_2^*Gvu^*)_{ji}
+ \mathrm{i}t\sqrt{n}\,(X_2^*Gvu^*)_{ji}\big(-z\,u^*GE_{ij}X_2^*Gv - z\,\psi^*\mathcal{G}X_2^*E_{ij}\mathcal{G}\varphi\\
&\qquad\qquad + q^*E_{ij}\mathcal{G}\gamma - q^*X_1\mathcal{G}X_2^*E_{ij}\mathcal{G}\gamma - \eta^*\mathcal{G}X_2^*E_{ij}\mathcal{G}X_2^*r\big)\Big]\chi_G f(t)\\
&\quad=: -t\sum_{ij}t_{ij}\,\mathbb{E}\big[(X_2^*Gvu^*)_{ji}\,u^*E_{ij}X_2^*Gv\,\chi_G f(t)\big] + R_1,
\end{aligned}
\tag{5.7}
\]
where in the last step we used $-zu^*G = u^* - u^*X_1X_2^*G$ and absorbed the subleading contributions into $R_1$. Here $E_{ij} = (\delta_{ki}\delta_{\ell j})_{k,\ell}\in\mathbb{R}^{p\times n}$, i.e., $E_{ij} = e_i\tilde{e}_j^*$, where $e_i\in\mathbb{R}^p$, $\tilde{e}_j\in\mathbb{R}^n$ denote the standard basis vectors of the respective dimensions. We claim that on the RHS of the above equation, the first term is the leading term and $R_1$ is a negligible remainder. More precisely, with the aid of Proposition 3.2 and Proposition 6.1, we can easily show
\[
R_1 = O_{\prec}(q_n^{-1}).
\tag{5.8}
\]
We then proceed with showing that all higher order ($\alpha\ge 2$) terms in (5.6) are negligible.
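The cumulant expansion used in (5.6) generalizes Stein's identity: for Gaussian entries only the second cumulant survives, and the expansion reduces to $\mathbb{E}[xg(x)]=\kappa_2\,\mathbb{E}[g'(x)]$. A minimal Monte Carlo sanity check of this first-order mechanism (a sketch only, not the paper's matrix setting; the test function $g$ and sample size are arbitrary choices):

```python
import numpy as np

# Stein's identity E[x g(x)] = kappa_2 * E[g'(x)] for Gaussian x:
# this is the alpha = 1 term of the cumulant expansion; for Gaussian
# entries all alpha >= 2 terms vanish since kappa_{alpha+1} = 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(500_000)

g = lambda t: t ** 3          # arbitrary smooth test function
dg = lambda t: 3.0 * t ** 2   # its derivative

lhs = np.mean(x * g(x))       # Monte Carlo estimate of E[x g(x)], exact value 3
rhs = np.mean(dg(x))          # Monte Carlo estimate of E[g'(x)], exact value 3
print(lhs, rhs)
```

Both estimates agree up to the Monte Carlo fluctuation, which is exactly the structure exploited when only the $\alpha=1$ term of (5.6) contributes to leading order.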
Notice that the second order term is proportional to
\[
\begin{aligned}
&\sqrt{n}\sum_{ij}\kappa_3(x_{1,ij})\,\partial^{2}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G f(t)\big]\\
&\quad= \sqrt{n}\sum_{ij}\kappa_3(x_{1,ij})\Big[\partial^{2}_{1,ij}(X_2^*Gvu^*)_{ji} + 2\mathrm{i}t\,\partial^{1}_{1,ij}(X_2^*Gvu^*)_{ji}\,\partial^{1}_{1,ij}Q
+ \mathrm{i}t\,(X_2^*Gvu^*)_{ji}\,\partial^{2}_{1,ij}Q - t^2\,(X_2^*Gvu^*)_{ji}\big(\partial^{1}_{1,ij}Q\big)^2\Big]\chi_G f(t) + R_1\\
&\quad= (1) + (2) + (3) + (4) + R_1.
\end{aligned}
\tag{5.9}
\]
Using (3.2) and the fact that $\kappa_3(x_{1,ij}) = O(n^{-3/2})$, one can simply bound (1) by
\[
|(1)| \le Cn^{-1}\sum_{ij}|(X_2^*G)_{ji}|^2\,|\tilde{e}_j^*X_2^*Gv|\,|u_i| \overset{\mathrm{C.S.}}{\le} C\sqrt{n}\,q_n^{-3}.
\]
As the other terms in (5.9) all involve derivatives of $Q$, we first derive
\[
\begin{aligned}
\partial^{1}_{1,ij}Q = \sqrt{n}\Big[&u^*E_{ij}X_2^*Gv - u^*X_1X_2^*GE_{ij}X_2^*Gv
+ \psi^*X_2^*E_{ij}\mathcal{G}\varphi - \psi^*X_2^*X_1\mathcal{G}X_2^*E_{ij}\mathcal{G}\varphi\\
&+ q^*E_{ij}\mathcal{G}\gamma - q^*X_1\mathcal{G}X_2^*E_{ij}\mathcal{G}\gamma - \eta^*\mathcal{G}X_2^*E_{ij}\mathcal{G}X_2^*r\Big].
\end{aligned}
\tag{5.10}
\]
It is easy to obtain $|\partial^{1}_{1,ij}Q| = O_{\prec}(\sqrt{n}q_n^{-1})$ by Proposition 3.2. Then, for (2), by extracting a factor $(X_2^*G)_{ji}$ from $\partial^{1}_{1,ij}(X_2^*Gvu^*)_{ji}$, we have
\[
|(2)| \le Cn^{-1}\sum_{ij}\big|(X_2^*G)_{ji}\big|\,\big|(X_2^*Gvu^*)_{ji}\,\partial^{1}_{1,ij}Q\big|
\le Cn^{-1/2}q_n^{-1}\sum_{ij}|(X_2^*G)_{ji}||(X_2^*Gvu^*)_{ji}|
\le Cn^{-1/2}q_n^{-1}\sqrt{\operatorname{Tr}X_2^*GG^*X_2\cdot v^*G^*X_2X_2^*Gv} \prec q_n^{-1},
\]
where in the last two steps we used the Cauchy-Schwarz inequality and the fact that the operator norms of the matrices involved are $O_{\prec}(1)$. For the term (3), we observe that, in comparison to $\partial^{1}_{1,ij}Q$, each term of $\partial^{2}_{1,ij}Q$ carries one additional factor, which can be bounded crudely by $O(1)$. Furthermore, in each term of $\partial^{1}_{1,ij}Q$ there is a factor of the form $\theta^*e_i$ for some vector $\theta$ with $\|\theta\|_2\prec 1$, and another factor of the form $\tilde{e}_j^*\xi$ for some vector $\xi$ with $\|\xi\|_2\prec 1$. Writing $(X_2^*Gvu^*)_{ji} = \tilde{e}_j^*X_2^*Gv\cdot u_i$ and applying the Cauchy-Schwarz inequality to the $i$-sum and the $j$-sum respectively, one can easily get
\[
|(3)| \le Cn^{-1}\sum_{ij}\big|(X_2^*Gvu^*)_{ji}\,\partial^{1}_{1,ij}Q\big|
\le Cn^{-1}\sum_{ij}\big|u_i\,\theta^*e_i\,\tilde{e}_j^*X_2^*Gv\,\tilde{e}_j^*\xi\big| \prec n^{-1/2}.
\]
For the term (4), since
$(\partial^{1}_{1,ij}Q)^2$ will produce an additional factor of $n$, we need to treat this term in a finer way. Bounding one factor $\partial^{1}_{1,ij}Q$ by $\sqrt{n}q_n^{-1}$, and applying the same reasoning as for (3) to the rest, we obtain
\[
|(4)| \le Cn^{-1}\sum_{ij}\big|(X_2^*Gvu^*)_{ji}\big|\big(\partial^{1}_{1,ij}Q\big)^2
\le Cq_n^{-1}\sum_{ij}\big|u_i\,\theta^*e_i\,\tilde{e}_j^*X_2^*Gv\,\tilde{e}_j^*\xi\big| \lesssim q_n^{-1}.
\]
Altogether, the second order term can be bounded by $q_n^{-1} = o(1)$. For the higher order terms, an $o(1)$ bound can be obtained similarly, and in fact more easily, since $\kappa_{\alpha+1}(x_{1,ij})$ decays by a factor $n^{-1/2}$ each time $\alpha$ increases by $1$. Hence, we omit the details and claim
\[
\mathrm{I} = -t\sum_{ij}|u_i|^2 t_{ij}\,\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big] + O_{\prec}(q_n^{-1}).
\tag{5.11}
\]
Similarly, for the other terms in (5.5), we also have
\[
\begin{aligned}
\mathrm{II} &= -t\sum_{ij}|\varphi_j|^2 t_{ij}\,\mathbb{E}\big[(X_2\mathcal{G}^*\psi\psi^*\mathcal{G}X_2^*)_{ii}\chi_G f(t)\big] + O_{\prec}(q_n^{-1}),\\
\mathrm{III} &= -\frac{t}{|z|^2}\sum_{ijk}|q_i|^2 t_{ij}t_{kj}\,\mathbb{E}\big[(X_1\mathcal{G}\gamma\gamma^*\mathcal{G}^*X_1^*)_{kk}\chi_G f(t)\big] + O_{\prec}(q_n^{-1}),\\
\mathrm{IV} &= -\frac{t}{|z|^2}\sum_{ijk}|r_j|^2 t_{ji}t_{ki}\,\mathbb{E}\big[(X_2\mathcal{G}^*\eta\eta^*\mathcal{G}X_2^*)_{kk}\chi_G f(t)\big] + O_{\prec}(q_n^{-1}).
\end{aligned}
\tag{5.12}
\]
In the following lemma, we provide a further estimate of the leading term in (5.11); the terms in (5.12) can be estimated similarly.

LEMMA 5.1. With the above notation, we have
\[
-t\sum_{ij}|u_i|^2 t_{ij}\,\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big]
= -\frac{t}{n|z|^2}\sum_{\alpha,\beta}|u_\alpha|^2|v_\beta|^2\,\vec{T}_{\alpha\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}T^*T\Big]^{-1}\vec{T}_{\beta\cdot}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}).
\]

PROOF OF LEMMA 5.1. By cumulant expansion, we have
\[
\begin{aligned}
\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big]
&= \sum_{k}\frac{t_{kj}}{n}\,\mathbb{E}\,\partial_{2,kj}\big[(Gvv^*G^*X_2)_{kj}\chi_G f(t)\big] + O_{\prec}\Big(\frac{1}{nq_n}\Big)\\
&= \frac{1}{z}\sum_{k}\frac{t_{kj}}{n}\,\mathbb{E}\Big[\big((X_1X_2^*Gvv^*G^*)_{kk} - (vv^*G^*)_{kk}\big)\chi_G f(t)\Big] + O_{\prec}\Big(\frac{1}{nq_n}\Big),
\end{aligned}
\tag{5.13}
\]
where the error terms come from the estimates of the higher order terms in the expansion; their estimate can be done similarly as before, and thus the details are omitted.
Further, by applying another cumulant expansion, we have
\[
\begin{aligned}
\mathbb{E}\big[(X_1X_2^*Gvv^*G^*)_{kk}\chi_G f(t)\big]
&= \sum_{b}\frac{t_{kb}}{n}\,\mathbb{E}\,\partial_{1,kb}\big[(X_2^*Gvv^*G^*)_{bk}\chi_G f(t)\big]\\
&= -\sum_{b}\frac{t_{kb}}{n}\,\mathbb{E}\Big[\big(v^*G^*X_2\tilde{e}_b\tilde{e}_b^*X_2^*Ge_ke_k^*Gv + (X_2^*Gvv^*G^*X_2)_{bb}\,G^*_{kk}\big)\chi_G f(t)\Big] + O_{\prec}\Big(\frac{1}{nq_n}\Big).
\end{aligned}
\tag{5.14}
\]
Note that when we plug (5.14) into (5.13), the contribution from the first term in (5.14) is negligible, since
\[
\frac{1}{n^2}\sum_{k,b}t_{kj}t_{kb}\,v^*G^*X_2\tilde{e}_b\tilde{e}_b^*X_2^*Ge_ke_k^*Gv
= \frac{1}{n^2}\sum_{k}t_{kj}\,v^*G^*X_2T_{k\cdot}X_2^*Ge_ke_k^*Gv
= \frac{1}{n^2}\sum_k t_{kj}\, v^*A_k e_k e_k^* Gv = \frac{1}{n^2}v^*B_jv = O_{\prec}\Big(\frac{1}{n^2}\Big),
\]
where $T_{i\cdot} = \mathrm{diag}(\{t_{ij}\}_{j=1}^n)$, $T_{\cdot j} = \mathrm{diag}(\{t_{ij}\}_{i=1}^p)$, and $A_i, B_i$ are matrices depending on $T_{i\cdot}$ or $T_{\cdot i}$ with $O_{\prec}(1)$ operator norm bounds. Hence, approximating $G^*_{kk}$ in (5.14) and $(vv^*G^*)_{kk}$ in (5.13) by $-1/\bar{z}$ and $-|v_k|^2/\bar{z}$ respectively, we can derive from (5.13) that
\[
\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big]
= \frac{1}{n^2|z|^2}\sum_{k,b}t_{kj}t_{kb}\,\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{bb}\chi_G f(t)\big]
+ \frac{1}{n|z|^2}\sum_{k}t_{kj}|v_k|^2\,\mathbb{E}[f(t)] + O_{\prec}\Big(\frac{1}{nq_n}\Big).
\]
A further estimate of the variances of the $(X_2^*Gvv^*G^*X_2)_{jj}$'s via cumulant expansion leads to
\[
\begin{aligned}
\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big]
&= \mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G\big]\,\mathbb{E}[f(t)] + O_{\prec}\Big(\frac{1}{nq_n}\Big)\\
&= \frac{1}{n^2|z|^2}\sum_{k,b}t_{kj}t_{kb}\,\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{bb}\chi_G\big]\,\mathbb{E}[f(t)]
+ \frac{1}{n|z|^2}\sum_{k}t_{kj}|v_k|^2\,\mathbb{E}[f(t)] + O_{\prec}\Big(\frac{1}{nq_n}\Big).
\end{aligned}
\tag{5.15}
\]
It is also easy to see that the second equation still holds if we cancel the $\mathbb{E}[f(t)]$ factors. In light of this fact, we denote by $\vec{T}_{k\cdot}$ the $k$-th row of $T$, and by $\vec{T}_{\cdot k}$ the $k$-th column of $T$. Further, we set $\vec{M} = (M_j)_{j=1}^n$ with $M_j = n\,\mathbb{E}(X_2^*Gvv^*G^*X_2)_{jj}$. Then we have the self-consistent equation
\[
\vec{M} = \frac{1}{|z|^2}\sum_{k}|v_k|^2\Big[I-\frac{1}{n^2|z|^2}(T^*T)\Big]^{-1}\vec{T}_{k\cdot} + \vec{\varepsilon},
\tag{5.16}
\]
where $\vec{\varepsilon}$ is an error vector with $\|\vec{\varepsilon}\|_{\infty} = O_{\prec}(q_n^{-1})$. Plugging (5.16) back into (5.15), we get
\[
-t\sum_{ij}|u_i|^2t_{ij}\,\mathbb{E}\big[(X_2^*Gvv^*G^*X_2)_{jj}\chi_G f(t)\big]
= -\frac{t}{n|z|^2}\sum_{ik}|u_i|^2|v_k|^2\,\vec{T}_{i\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}(T^*T)\Big]^{-1}\vec{T}_{k\cdot}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}).
\]
This concludes the proof of Lemma 5.1.

We then proceed with the proof of Proposition 4.1 for the quadratic forms in (5.1).
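The self-consistent equation (5.16) is linear: (5.15) says the vector of diagonal expectations satisfies a fixed-point relation of the form $\vec M = K\vec M + c$ with contraction kernel $K = T^*T/(n^2|z|^2)$, whose solution is $(I-K)^{-1}c$. A small numerical sketch with a synthetic bounded profile (the size, the value of $|z|^2$, and the iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, z2 = 60, 1.5 ** 2                      # synthetic size and |z|^2 (assumptions)
T = rng.uniform(0.5, 1.0, size=(n, n))    # bounded synthetic variance profile
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

K = (T.T @ T) / (n**2 * z2)               # contraction kernel; ||K||_op < 1 here
c = (T.T @ (v**2)) / (n * z2)             # source term, as in (5.15)

M = np.zeros(n)                           # fixed-point iteration M <- K M + c
for _ in range(300):
    M = K @ M + c

M_closed = np.linalg.solve(np.eye(n) - K, c)   # closed form, as in (5.16)
print(np.max(np.abs(M - M_closed)))
```

The iteration converges geometrically because the kernel's row sums are bounded by $1/|z|^2 < 1$ for this choice of profile, mirroring why the inverse in (5.16) is well defined for $|z|$ above the spectral threshold.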
Similarly to the proof of Lemma 5.1, we can also estimate the other terms in (5.12). In summary, we have
\[
\begin{aligned}
\mathrm{I} &= -\frac{t}{n|z|^2}\sum_{ik}|u_i|^2|v_k|^2\,\vec{T}_{i\cdot}^{\top}\Big[I-\frac{1}{n^2|z|^2}(T^*T)\Big]^{-1}\vec{T}_{k\cdot}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}),\\
\mathrm{II} &= -\frac{t}{n|z|^2}\sum_{jk}|\varphi_j|^2|\psi_k|^2\,\vec{T}_{\cdot j}^{\top}\Big[I-\frac{1}{n^2|z|^2}(TT^*)\Big]^{-1}\vec{T}_{\cdot k}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}),\\
\mathrm{III} &= -\frac{t}{n^2|z|^4}\sum_{ik}|q_i|^2|\gamma_k|^2\,\Big(TT^*\Big[I-\frac{1}{n^2|z|^2}(TT^*)\Big]^{-1}\vec{T}_{\cdot k}\Big)_{i}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}),\\
\mathrm{IV} &= -\frac{t}{n^2|z|^4}\sum_{jk}|r_j|^2|\eta_k|^2\,\Big(TT^*\Big[I-\frac{1}{n^2|z|^2}(TT^*)\Big]^{-1}\vec{T}_{\cdot k}\Big)_{j}\,\mathbb{E}f(t) + O_{\prec}(q_n^{-1}).
\end{aligned}
\]
Plugging the above estimates into (5.5), we get an equation of the form (5.4), and thus a CLT follows for the particular linear combination in (5.1). In order to prove the general joint CLT for the quadratic forms in (4.4), we shall consider the characteristic function of the more general linear combination
\[
\sqrt{n}\Big(z\sum_{i=1}^{m_1}c_{1i}\,u_i^*G(z)v_i + z\sum_{j=1}^{m_2}c_{2j}\,\psi_j^*\mathcal{G}(z)\varphi_j + \sum_{k=1}^{m_3}c_{3k}\,q_k^*X_1\mathcal{G}(z)\gamma_k + \sum_{\ell=1}^{m_4}c_{4\ell}\,\eta_\ell^*\mathcal{G}(z)X_2^*r_\ell\Big).
\]
The derivation is a straightforward extension of that for $Q$ in (5.1). For brevity, we omit the details and claim (4.5).

What remains is to show the asymptotic independence of the Green function quadratic forms in (4.4) and the linear (in $X_i$) term $a^*X_ib$. This can be proved via a slight modification of the above derivation of the CLT. We illustrate the necessary modification as follows, again based on the simple linear combination $Q$ defined in (5.1). We now involve a term of the form $a^*X_1b$, for instance, and define
\[
f(t,s) = \exp\big(\mathrm{i}tQ\chi_G + \mathrm{i}s\sqrt{n}\,a^*X_1b\big).
\]
We could add a term of the form $c^*X_2d$ as well, but for brevity we restrict ourselves to the above quantity. Instead of (5.4), we now need to show
\[
\frac{\partial}{\partial t}\mathbb{E}f(t,s) = -\mathsf{d}^2t\,\mathbb{E}f(t,s) + o(1).
\tag{5.17}
\]
Solving the above equation with $\mathbb{E}f(0,s) = \mathbb{E}\exp(\mathrm{i}s\sqrt{n}\,a^*X_1b)$ gives
\[
\mathbb{E}f(t,s) = \exp\Big(-\frac{\mathsf{d}^2t^2}{2}\Big)\,\mathbb{E}\exp\big(\mathrm{i}s\sqrt{n}\,a^*X_1b\big) + o(1),
\]
which proves the asymptotic Gaussianity of $Q$ and the asymptotic independence between $Q$ and $\sqrt{n}\,a^*X_1b$ simultaneously. Hence, it suffices to illustrate how to adapt the proof of (5.17) from that of (5.4).
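The mechanism behind (5.4) and (5.17) is elementary: the ODE $\partial_t\varphi(t) = -\mathsf{d}^2t\,\varphi(t)$ with initial value $\varphi(0)=\varphi_0$ has the unique solution $\varphi(t)=\varphi_0 e^{-\mathsf{d}^2t^2/2}$, i.e., a Gaussian characteristic function times the initial value, which is exactly how Gaussianity and independence are read off. A numerical sketch (the values of $\mathsf{d}$ and $\varphi_0$ are arbitrary):

```python
import numpy as np

d, phi0 = 1.3, 0.7                  # arbitrary variance parameter and phi(0)
ts = np.linspace(0.0, 2.0, 20_001)

# integrate phi' = -d^2 t phi with a per-step integrating factor;
# the midpoint rule is exact here because the exponent's integrand is linear in t
phi = phi0
for t0, t1 in zip(ts[:-1], ts[1:]):
    phi *= np.exp(-d**2 * 0.5 * (t0 + t1) * (t1 - t0))

closed = phi0 * np.exp(-d**2 * ts[-1] ** 2 / 2)   # phi0 * exp(-d^2 t^2 / 2)
print(phi, closed)
```

The integrated solution matches the closed form up to roundoff, confirming that an equation of the form (5.17) forces a Gaussian factor in the characteristic function.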
Similarly to (5.5), we can write
\[
\frac{\partial}{\partial t}\mathbb{E}f(t,s) = \mathrm{i}\,\mathbb{E}\,Qf(t,s)
= \mathrm{i}\sqrt{n}\,\mathbb{E}\big(u^*X_1X_2^*Gv + \psi^*X_2^*X_1\mathcal{G}\varphi + q^*X_1\mathcal{G}\gamma + \eta^*\mathcal{G}X_2^*r\big)\chi_G f(t,s)
=: \widetilde{\mathrm{I}} + \widetilde{\mathrm{II}} + \widetilde{\mathrm{III}} + \widetilde{\mathrm{IV}}.
\]
It suffices to illustrate the estimate of $\widetilde{\mathrm{I}}$, as an adaptation of the estimate of $\mathrm{I}$ in (5.5); the other terms can be estimated similarly. We again apply the cumulant expansion as in (5.6). In the first order term of the cumulant expansion, we will have all the terms analogous to those in (5.7), with an additional term which involves the derivative of the newly added $\sqrt{n}\,a^*X_1b$. This term reads
\[
\mathrm{i}\sum_{ij}\mathbb{E}\big[(X_2^*Gvu^*)_{ji}a_ib_j\chi_G f(t,s)\big]
= \mathrm{i}\,\mathbb{E}\big[u^*a\cdot b^*X_2^*Gv\,\chi_G\cdot f(t,s)\big] = O_{\prec}(q_n^{-1}),
\]
where we used Proposition 3.2. The derivative of the $\sqrt{n}\,a^*X_1b$ term will also show up in the higher order terms of the cumulant expansion, but the related terms can all be estimated easily with the aid of Proposition 3.2 and the Cauchy-Schwarz inequality. Hence, we omit the details and conclude (5.17).

6. Spectral norm estimate: Proof of Proposition 3.1. In this section we prove Proposition 3.1. We first prove the result on $X_1X_2^*$ without using Proposition 3.2; see Proposition 6.1 and its proof below. Then, we prove the bounds for $\Sigma X_1X_2^*\Sigma^*$ with the aid of Proposition 3.2; see Proposition 6.3 and its proof below. We remark here that the proof of Proposition 3.2 in Section 7 will need the bounds in Proposition 6.1. Recall the linearization of $X_1X_2^*$ from (1.6) and its variance profile (1.7). For simplicity, in this section we denote
by
\[
T = n^{-1}\mathsf{T}, \qquad \mathsf{T} = (t_{ij})_{i,j},
\tag{6.1}
\]
the variance profile of $X_1$ and $X_2$, and we recall the flatness assumption in (1.1). We will follow the strategy developed in [1, 2, 3]. In particular, our matrix $X_0$ can be regarded as a special case of the general model considered in [1]. Based on the result for $X_0$, we then derive the results for the model $\mathcal{X}$ via a perturbation argument.

A standard strategy to study the spectrum of a non-Hermitian random matrix is Girko's Hermitization/linearization. Specifically, the spectrum of the $(n+p)\times(n+p)$ matrix $X_0$ can be studied by analyzing the following $2(n+p)\times 2(n+p)$ Hermitian matrix
\[
H^z = \begin{pmatrix} 0 & X_0-z\\ X_0^*-\bar{z} & 0\end{pmatrix},
\]
where $z$ is a generic complex number. It is known that the possible eigenvalues of $X_0$ around $z$ can be studied via the spectrum of $H^z$ around $0$. Heuristically, if the eigenvalues of $H^z$ are away from $0$, then the eigenvalues of $X_0$ are away from $z$. The spectrum of $H^z$ can be studied via its Green function $G^z(\omega) = (H^z-\omega)^{-1}$. Following the study in [1, 2, 3], when the variance profile $V$ is general, one needs to consider the solution of the following Matrix Dyson Equation, which shall be regarded as an approximation of the Green function $G^z(\mathrm{i}\eta)$ for any $\eta>0$:
\[
-M^z(\mathrm{i}\eta)^{-1} = \mathrm{i}\eta\,\mathbf{1} - A^z + \mathcal{V}[M^z(\mathrm{i}\eta)],
\tag{6.2}
\]
where in our case
\[
A^z = \begin{pmatrix} 0 & -z\\ -\bar{z} & 0\end{pmatrix}.
\]
Here the functional $\mathcal{V}[\cdot]$ is defined as
\[
\mathcal{V}[W] = \begin{pmatrix} \mathrm{diag}(Vw_2) & 0\\ 0 & \mathrm{diag}(V^*w_1)\end{pmatrix}
\]
for any $2(n+p)\times 2(n+p)$ matrix $W = (w_{ij})_{i,j=1}^{2(n+p)}\in\mathbb{C}^{2(n+p)\times 2(n+p)}$, where
\[
w_1 = (w_{ii})_{i=1}^{n+p}\in\mathbb{C}^{n+p},\qquad w_2 = (w_{ii})_{i=n+p+1}^{2(n+p)}\in\mathbb{C}^{n+p}.
\]
It is shown in [41] that (6.2) has a unique solution under the constraint that $\mathrm{Im}\,M^z := (M^z-(M^z)^*)/2\mathrm{i}$ is positive definite. According to [4], we define the self-consistent density of states $\rho^z$ of $H^z$ as the unique measure whose Stieltjes transform is $\frac{1}{2(p+n)}\operatorname{Tr}M^z(\omega)$. More precisely, the solution $M^z(\mathrm{i}\eta)$ of (6.2) has the Stieltjes transform representation
\[
M^z(\mathrm{i}\eta) = \int_{\mathbb{R}}\frac{\mathcal{V}(\mathrm{d}x)}{x-\mathrm{i}\eta},
\]
where $\mathcal{V}$ is a matrix-valued, compactly supported measure on $\mathbb{R}$. Then,
\[
\rho^z(\mathrm{d}x) := \frac{1}{2(n+p)}\operatorname{Tr}\mathcal{V}(\mathrm{d}x).
\]
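The Hermitization trick can be checked directly in small dimensions: the eigenvalues of $H^z$ are exactly the singular values of $X_0-z$ together with their negatives, so $X_0$ has an eigenvalue near $z$ precisely when $H^z$ has an eigenvalue near $0$. A minimal numerical sketch (the size, the iid Gaussian toy model standing in for $X_0$, and the test point $z$ are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 40                                          # stands in for n + p (assumption)
X0 = rng.standard_normal((m, m)) / np.sqrt(m)   # toy iid model in place of X_0
z = 1.7 + 0.3j

A = X0 - z * np.eye(m)
Hz = np.block([[np.zeros((m, m)), A],
               [A.conj().T, np.zeros((m, m))]])   # Girko's Hermitization H^z

eig = np.sort(np.linalg.eigvalsh(Hz))
sv = np.linalg.svd(A, compute_uv=False)
paired = np.sort(np.concatenate([sv, -sv]))       # spectrum is {+/- sigma_i(X0 - z)}
print(np.max(np.abs(eig - paired)))
```

The two sorted spectra agree to machine precision, which is the elementary identity underlying the reduction from $X_0$ to the Hermitian matrix $H^z$.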
Let $m^z_j(\mathrm{i}\eta)$ be the $j$-th diagonal entry of the matrix $M^z(\mathrm{i}\eta)$. For any $\tau>0$, define the sets
\[
D_\tau := \{z\in\mathbb{C}: \mathrm{dist}(0,\mathrm{supp}\,\rho^z)\le\tau\}
\]
and
\[
\widetilde{D}_\tau := \Big\{z: \limsup_{\eta\to 0}\frac{1}{\eta}\max_j|\mathrm{Im}\,m^z_j(\mathrm{i}\eta)|\ge\frac{1}{\tau}\Big\}.
\tag{6.3}
\]
$D_\tau$ is called the self-consistent $\tau$-pseudospectrum of $X_0$. It is shown in [1] that the eigenvalues of $X_0$ concentrate on $D_\tau$ for any fixed $\tau>0$. It is also shown in [1] that $D_\tau$ and $\widetilde{D}_\tau$ are comparable, in the sense that for any $\tau>0$ we have $D_{\tau_1}\subset\widetilde{D}_\tau\subset D_{\tau_2}$ for certain $\tau_1,\tau_2>0$.

Further, in our case, the matrix equation (6.2) implies that $M^z$ has a block structure such that its top-left, top-right, lower-left and lower-right $(n+p)\times(n+p)$ blocks are all diagonal matrices. After simplification, the Matrix Dyson Equation (6.2) admits the following form, for some vectors $u$ and $v$:
\[
M^z(\mathrm{i}\eta) = \begin{pmatrix} \mathrm{diag}(\mathrm{i}u) & -z\,\mathrm{diag}\big(\frac{u}{\eta+V^*u}\big)\\[2pt] -\bar{z}\,\mathrm{diag}\big(\frac{v}{\eta+Vv}\big) & \mathrm{diag}(\mathrm{i}v)\end{pmatrix}.
\tag{6.4}
\]
Therefore, to determine $\widetilde{D}_\tau$, as well as $D_\tau$, we only need to analyze the following coupled vector equations, derived from (6.2) and (6.4) via Schur complement:
\[
\frac{1}{u} = \eta + Vv + \frac{|z|^2}{\eta+V^*u},\qquad
\frac{1}{v} = \eta + V^*u + \frac{|z|^2}{\eta+Vv}.
\tag{6.5}
\]
Here $u, v\in\mathbb{R}^{p+n}_{+}$ are the unique solutions to the vector equations. The uniqueness and existence of the positive solutions to these vector equations is a consequence of the corresponding statement for the Matrix Dyson Equation. The bijection between the solution of the Matrix Dyson Equation (6.2) with positive definite $\mathrm{Im}\,M^z$ and the positive solutions
of the vector equation (6.5) is proved in [2]. One can check from (6.4) that the diagonal part of $M^z$ is in fact purely imaginary, and the imaginary parts $\mathrm{Im}\,m^z_j(\mathrm{i}\eta)$ form the vectors $u$ and $v$. Using the block structure of $V$, one can further write the system of vector equations (6.5) as a system of four equations, with the notation (6.1):
\[
\frac{1}{u_1} = \eta + Tv_2 + \frac{|z|^2}{\eta+Tu_2},\qquad
\frac{1}{u_2} = \eta + T^*v_1 + \frac{|z|^2}{\eta+T^*u_1},\qquad
\frac{1}{v_1} = \eta + Tu_2 + \frac{|z|^2}{\eta+Tv_2},\qquad
\frac{1}{v_2} = \eta + T^*u_1 + \frac{|z|^2}{\eta+T^*v_1}.
\tag{6.6}
\]
Here
\[
u = \begin{pmatrix}u_1\\ u_2\end{pmatrix},\qquad v = \begin{pmatrix}v_1\\ v_2\end{pmatrix},\qquad u_1, v_1\in\mathbb{R}^p_+,\quad u_2, v_2\in\mathbb{R}^n_+.
\]
According to Theorem 2.4 and Remark 2.5 (iv) in [1], to get the upper bound
\[
\rho(X_0) \le \sqrt{\rho(V)}+\delta,
\tag{6.7}
\]
it suffices to show that
\[
\limsup_{\eta\to 0}\frac{1}{\eta}\max_j|\mathrm{Im}\,m^z_j(\mathrm{i}\eta)| < \frac{1}{\delta'}
\tag{6.8}
\]
for some $\delta'>0$, given that $|z|\ge\sqrt{\rho(V)}+\delta$. If (6.8) holds, then by the equivalence of $D_\tau$ and $\widetilde{D}_\tau$, the eigenvalues of $H^z$ are away from zero by a distance of $\delta'$. We can then conclude from Theorem 4.7 in [1] that $H^z$ is invertible and the resolvent at $0$ is bounded by a constant, provided that (6.8) is granted. Specifically, we will have the following bound on the operator norm of $(X_0-z)^{-1}$ and the Green functions $G(z^2) = (X_1X_2^*-z^2)^{-1}$, $\mathcal{G}(z^2) = (X_2^*X_1-z^2)^{-1}$.

PROPOSITION 6.1. With high probability, $X_0-z$ is invertible uniformly in $|z|\ge\sqrt{\rho(V)}+\delta'$ for some $\delta'>0$, and in addition
\[
\|(X_0-z)^{-1}\|_{\mathrm{op}} < \frac{1}{\delta},\qquad
\|G(z^2)\|_{\mathrm{op}} < \frac{1}{\delta},\qquad
\|\mathcal{G}(z^2)\|_{\mathrm{op}} < \frac{1}{\delta}
\tag{6.9}
\]
hold for some constant $\delta>0$, uniformly in $|z|\ge\sqrt{\rho(V)}+\delta'$.

Here, we first give the proof of Proposition 6.1, assuming (6.8) is satisfied. Since the system of equations (6.5) is scaling invariant, in the following we may assume $\rho(V)=1$.

PROOF OF PROPOSITION 6.1. Since one can find $\delta'>0$ such that (6.8) holds, we have $z\notin\widetilde{D}_{\delta'}$. This implies, with a slight abuse of notation, $\mathrm{dist}(0,\mathrm{supp}\,\rho^z)>\delta'$. Further, Theorem 4.7 in [1] tells us that, uniformly in $z$, no eigenvalue of $H^z$ is away from $\mathrm{supp}\,\rho^z$ with high probability.
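For a concrete feel for the system (6.5)-(6.6), one can solve it numerically by damped fixed-point iteration in the simplest flat setting. The sketch below assumes the flat normalization $t_{ij}=1$ (so $T$ has entries $1/n$ and $\rho(V)\approx 1$); the damping factor and iteration count are ad hoc choices. The symmetry $\langle u\rangle=\langle v\rangle$ and the scaling $u\approx\eta/(|z|^2-1)$ away from the edge can be observed directly:

```python
import numpy as np

p = n = 50
T = np.full((p, n), 1.0 / n)                      # flat profile, rho(V) ~ 1
V = np.block([[np.zeros((p, p)), T],
              [T.T, np.zeros((n, n))]])

eta, z2 = 1e-3, 1.5 ** 2                          # eta > 0 and |z|^2 > rho(V)
u = np.full(p + n, 0.1)
v = np.full(p + n, 0.1)
for _ in range(2000):                             # damped iteration on (6.5)
    u_new = 1.0 / (eta + V @ v + z2 / (eta + V.T @ u))
    v_new = 1.0 / (eta + V.T @ u + z2 / (eta + V @ v))
    u, v = 0.5 * (u + u_new), 0.5 * (v + v_new)

res = np.max(np.abs(1.0 / u - (eta + V @ v + z2 / (eta + V.T @ u))))
print(res, u.mean(), v.mean())
```

The converged solution is positive, satisfies the first equation of (6.5) to high accuracy, and its average matches $\eta/(|z|^2-1)$ to leading order, in line with the behavior away from the edge.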
Specifically, for a fixed $z$, there exist constants $\delta_0, C>0$ such that
\[
\mathbb{P}\Big(\mathrm{Spec}(H^z)\subset\{x\in\mathbb{R}: \mathrm{dist}(x,\mathrm{supp}\,\rho^z)\le N^{-\delta_0}\}\Big) \ge 1-\frac{C}{N}.
\]
Hence, there exists $\delta>0$ such that the smallest singular value of $X_0-z$ satisfies
\[
\mathbb{P}\big(\sigma_{\min}(X_0-z)>\delta\big) \ge 1-\frac{C}{N}.
\]
Then, with high probability, $X_0-z$ is invertible and
\[
\|(X_0-z)^{-1}\|_{\mathrm{op}} < \frac{1}{\delta}.
\tag{6.10}
\]
According to Schur's complement, the diagonal blocks of $(X_0-z)^{-1}$ are
\[
\Big(-z+\frac{1}{z}X_1X_2^*\Big)^{-1}\qquad\text{and}\qquad \Big(-z+\frac{1}{z}X_2^*X_1\Big)^{-1}.
\]
Thus $zG(z^2)$ is a submatrix of $(X_0-z)^{-1}$, and by (6.10), for $|z|^2>1$,
\[
\|G(z^2)\|_{\mathrm{op}} < \frac{1}{\delta}.
\]
The uniformity of the estimates follows simply by a Neumann expansion. For instance, for sufficiently large $z$, say $|z|\ge N^C$ for some large constant $C$, one can expand $G(z)$ around $-z^{-1}$, applying a crude order-$1$ upper bound on the operator norm of $X_1X_2^*$. For those $\sqrt{\rho(V)}+\delta'\le|z|\le N^C$, we can apply a standard $\epsilon$-net argument. We can find an $\epsilon$-net of this domain with cardinality $N^{O(C)}$ such that for each $z$ in this domain, one can find a $z'$ in the $\epsilon$-net with $|z-z'|\le N^{-C}$. By the definition of stochastic dominance in Definition 1.3, one can readily conclude that the estimates in (6.9) hold uniformly on the $\epsilon$-net, with high probability. Then, for any other $z$ satisfying $\sqrt{\rho(V)}+\delta'\le|z|\le N^C$, we can expand $G(z)$ around $G(z')$ using a Neumann expansion, where $z'$ is a point in the $\epsilon$-net with $|z-z'|\le N^{-2C}$. Then by
the boundedness of the operator norm of $G(z')$, one can conclude the proof of the uniformity for all $|z|\ge\sqrt{\rho(V)}+\delta'$.

In the sequel, we prove (6.8), which, according to (6.4), follows from the lemma below.

LEMMA 6.2. The solution of (6.5) satisfies
\[
\langle u(\eta)\rangle = \langle v(\eta)\rangle
\tag{6.11}
\]
for all $\eta>0$, where $\langle a(\eta)\rangle = (n+p)^{-1}\sum_{i=1}^{n+p}a_i(\eta)$ for $a=u,v$. Uniformly in $0<\eta\le 1$ and $|z|^2>1+\delta'$, we have the estimate
\[
u(\eta) \sim v(\eta) \sim \frac{\eta}{|z|^2-1+\eta^{2/3}}.
\tag{6.12}
\]

Here we follow the proof strategy of Proposition 3.2 in [2]. The main difference is that our variance matrix $V$ has zero blocks and the entries of $V$ do not satisfy the flatness assumption, although the off-diagonal blocks do satisfy it; see (1.1). Necessary modifications will be made in the following proof.

PROOF OF LEMMA 6.2. First, multiplying both sides of the two equations in (6.5) by $\eta+V^*u$ and $\eta+Vv$ respectively, we get
\[
\frac{u}{\eta+V^*u} = \frac{v}{\eta+Vv},
\tag{6.13}
\]
which leads to
\[
0 = \eta(u-v) + (uVv - vV^*u).
\tag{6.14}
\]
Taking the average on both sides and using the fact that $\langle uVv\rangle = \langle vV^*u\rangle$, we obtain (6.11). The matrix $V$ does not satisfy the flatness assumption, namely, we cannot deduce immediately from assumption (1.1) on $S$ the estimate
\[
Vv \sim \langle v(\eta)\rangle,\qquad V^*u \sim \langle u(\eta)\rangle.
\tag{6.15}
\]
But we can still make use of the block structure of $V$ to prove (6.15).
Note that, by the definition in (1.7) and
\[
Vv = \begin{pmatrix} Tv_2\\ T^*v_1\end{pmatrix},
\tag{6.16}
\]
we immediately get from (1.1) the estimates
\[
\langle v_1\rangle \sim T^*v_1,\qquad \langle v_2\rangle \sim Tv_2,\qquad \langle u_1\rangle \sim T^*u_1,\qquad \langle u_2\rangle \sim Tu_2.
\tag{6.17}
\]
Now, together with (6.16), (6.17) and the fact that
\[
\langle u\rangle = \frac{p}{n+p}\langle u_1\rangle + \frac{n}{n+p}\langle u_2\rangle,\qquad
\langle v\rangle = \frac{p}{n+p}\langle v_1\rangle + \frac{n}{n+p}\langle v_2\rangle,
\tag{6.18}
\]
in order to prove (6.15) it suffices to show
\[
\langle u_1\rangle \sim \langle u_2\rangle,\qquad \langle v_1\rangle \sim \langle v_2\rangle.
\tag{6.19}
\]
We first need an auxiliary bound for $\langle u\rangle$ and $\langle v\rangle$:
\[
\eta \lesssim \langle u\rangle = \langle v\rangle \lesssim 1.
\tag{6.20}
\]
From the first equation in (6.6) and (6.17) we have
\[
u_1 = \frac{\eta+Tu_2}{(\eta+Tu_2)(\eta+Tv_2)+|z|^2} \sim \frac{\eta+\langle u_2\rangle}{(\eta+\langle u_2\rangle)(\eta+\langle v_2\rangle)+|z|^2}.
\tag{6.21}
\]
Suppose $\langle u\rangle = \langle v\rangle \lesssim \eta$; then $\langle u_2\rangle, \langle v_2\rangle \lesssim \eta$ follows immediately from (6.18), and (6.21) gives $\langle u_1\rangle\sim\eta$. The lower bound follows.

To show the upper bound for $\langle u\rangle$ and $\langle v\rangle$, from (6.6) one can obtain
\[
\frac{u_1}{\eta+Tu_2} = \frac{v_1}{\eta+Tv_2},\qquad
\frac{u_2}{\eta+T^*u_1} = \frac{v_2}{\eta+T^*v_1}.
\]
Multiplying the two equations by $T$ and $T^*$ respectively, and using (6.17), gives
\[
\frac{\langle u_1\rangle}{\eta+\langle u_2\rangle} \sim \frac{\langle v_1\rangle}{\eta+\langle v_2\rangle},\qquad
\frac{\langle u_2\rangle}{\eta+\langle u_1\rangle} \sim \frac{\langle v_2\rangle}{\eta+\langle v_1\rangle}.
\tag{6.22}
\]
Using the lower bound on $\langle u\rangle$ and (6.18), we may assume $\langle u_2\rangle\gtrsim\eta$. In case $\langle v_2\rangle\gtrsim\eta$ also holds, the first estimate becomes
\[
\frac{\langle u_1\rangle}{\langle u_2\rangle} \sim \frac{\langle v_1\rangle}{\langle v_2\rangle}.
\tag{6.23}
\]
If $\langle v_2\rangle\lesssim\eta$, we can easily get from (6.21) that $\langle u_1\rangle\gtrsim\eta$; together with the second equation in (6.22), this shows that (6.23) is still valid. It then follows from (6.11), (6.18) and (6.23) that
\[
\langle u_1\rangle \sim \langle v_1\rangle,\qquad \langle u_2\rangle \sim \langle v_2\rangle.
\tag{6.24}
\]
Now, from the first equation in (6.6), we know
\[
1 = \eta u_1 + u_1Tv_2 + \frac{|z|^2u_1}{\eta+Tu_2} \ge u_1Tv_2.
\tag{6.25}
\]
Taking the average gives
\[
1 \ge \langle u_1Tv_2\rangle \gtrsim \langle u_1\rangle\langle v_2\rangle.
\tag{6.26}
\]
Similarly, using the second equation in (6.6), $1\gtrsim\langle u_2\rangle\langle v_1\rangle$. If $\langle u_1\rangle,\langle v_2\rangle\lesssim 1$, then together with (6.18) and (6.24) the upper bound follows. Otherwise, suppose $\langle u_2\rangle$ is of order greater than $1$; then $\langle u_1\rangle\sim\langle v_1\rangle$ is of order smaller than $1$.
However, the second equation in (6.6) gives $\langle u_2\rangle\lesssim\eta+\langle u_1\rangle$, which leads to a contradiction. Hence, (6.20) is proved. Using (6.20), (6.21) and (6.24), and supposing $\langle u_2\rangle\gtrsim\eta$, we obtain
\[
u_1 \sim \frac{1}{\eta+\langle u_2\rangle+\frac{|z|^2}{\eta+\langle u_2\rangle}} \sim \langle u_2\rangle.
\tag{6.27}
\]
Hence, (6.19) follows by taking the average. Consequently,
we have (6.15). With (6.15), the remaining proof can be done similarly to that in [2]. Using the first equation of (6.5) gives
\[
\eta = u(\eta+Vv)(\eta+V^*u) + |z|^2u - V^*u.
\tag{6.28}
\]
By the Perron-Frobenius theorem, there exists a vector $\varrho\in\mathbb{R}^{n+p}_{+}$ such that
\[
V\varrho = \varrho,\qquad \langle\varrho\rangle = 1,\qquad \varrho\sim 1.
\tag{6.29}
\]
Note that $V$ has zero blocks, and indeed we apply the Perron-Frobenius theorem to $TT^*$ and $T^*T$ to get $\varrho\in\mathbb{R}^{n+p}_{+}$. Taking the scalar product of (6.28) with $\varrho$, we get
\[
\eta = \langle\varrho\, u(\eta+Vv)(\eta+V^*u)\rangle + (|z|^2-1)\langle\varrho\, u\rangle \sim \langle u\rangle^3 + (|z|^2-1)\langle u\rangle,
\tag{6.30}
\]
where in the last step we also used (6.11) and (6.15). Now one can conclude (6.12) for $u(\eta)$, and similarly for $v(\eta)$. This further implies (6.8), and then we can conclude the proof of Proposition 6.1.

Next, we extend the conclusions in Proposition 6.1 from the matrix $X_0$ to $\mathcal{X}$. We have the following proposition.

PROPOSITION 6.3. With high probability, $\mathcal{X}-z$ is invertible uniformly in all $|z|\ge\sqrt{\rho(V)}+\delta'$ for some $\delta'>0$.

PROOF. We first show the proof of the invertibility for one fixed $z$ which is larger than $\sqrt{\rho(V)}+\delta'$ in magnitude. Later, we will prove the uniformity. From Proposition 6.1, we know that
\[
\det(X_0-z) \neq 0
\tag{6.31}
\]
with high probability. It suffices to show that $\det(\mathcal{X}-z)\neq 0$. Recall the definition from (1.3):
\[
\mathcal{X} = \begin{pmatrix} 0 & \Sigma X_1\\ (\Sigma X_2)^* & 0\end{pmatrix}.
\]
We use the following basic identity for the determinant of a block matrix:
\[
\det\begin{pmatrix} A & B\\ C & D\end{pmatrix} = \det(A-BD^{-1}C)\cdot\det(D),
\tag{6.32}
\]
which holds when $D$ is invertible. One can then easily check that
\[
\det(\mathcal{X}-z) = (-1)^p\det(\Sigma X_1X_2^*\Sigma^*-z^2)
= (-1)^p\det(\Sigma\Sigma^*)\cdot\det\big(X_1X_2^*-z^2(\Sigma^*\Sigma)^{-1}\big).
\tag{6.33}
\]
Recall from (3.6) that $\Gamma$ is a low rank diagonal matrix which satisfies $\det(I-\Gamma) = 1/\det(\Sigma^*\Sigma)$. Since (6.31) implies that $\det(X_1X_2^*-z^2)\neq 0$, we have
\[
\det\big(X_1X_2^*-z^2(\Sigma^*\Sigma)^{-1}\big) = \det\big(X_1X_2^*-z^2+z^2L\Gamma L^*\big)
= \det(X_1X_2^*-z^2)\cdot\det\big(I+z^2\Gamma L^*(X_1X_2^*-z^2)^{-1}L\big).
\tag{6.34}
\]
By our assumption, $1-\Gamma_{ii}$ is no smaller than $\lambda^{-1}_{\max}(\Sigma^*\Sigma)$.
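The block determinant identity (6.32), and hence factorizations of the type (6.33)-(6.34), can be sanity-checked numerically on random blocks. This is a generic linear-algebra fact, not specific to the model; the sizes and the diagonal shift making $D$ invertible are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
p = n = 6
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, n))
C = rng.standard_normal((n, p))
D = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # shift keeps D invertible

M = np.block([[A, B], [C, D]])
lhs = np.linalg.det(M)
# Schur complement factorization: det(M) = det(A - B D^{-1} C) det(D)
rhs = np.linalg.det(A - B @ np.linalg.solve(D, C)) * np.linalg.det(D)
print(lhs, rhs)
```

Using `np.linalg.solve(D, C)` instead of forming $D^{-1}$ explicitly is the standard numerically stabler choice.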
Further using the fact that $\lambda_{\max}(\Sigma^*\Sigma)\cdot q_n^{-1} = o(1)$, we have that $1-\Gamma_{ii}$ is of order greater than $q_n^{-1}$ for all $i$. Moreover, using the fact that $\Gamma$ is diagonal and of low rank, we have, for some fixed constant $k$,
\[
\begin{aligned}
\det\big(I+z^2\Gamma L^*(X_1X_2^*-z^2)^{-1}L\big)
&= (1+o(1))\cdot\prod_{i=1}^{k}\big(1+z^2\Gamma_{ii}\,c_i^*(X_1X_2^*-z^2)^{-1}c_i\big)\\
&= (1+o(1))\cdot\prod_{i=1}^{k}\Big(1+z^2\Gamma_{ii}\big(-\tfrac{1}{z^2}+O_{\prec}(q_n^{-1})\big)\Big)
= (1+o(1))\cdot\prod_{i=1}^{k}\big(1-\Gamma_{ii}+O_{\prec}(q_n^{-1})\big),
\end{aligned}
\tag{6.35}
\]
where the first two steps follow from Proposition 3.2, and in particular, in the first step we used the fact that all the off-diagonal entries are of order $q_n^{-1}\ll 1-\Gamma_{ii}$. Now, together with (6.33), (6.34) and (6.35), $\det(\mathcal{X}-z)\neq 0$ holds with high probability, for a fixed $z$.

Next, we show that $\det(\mathcal{X}-z)\neq 0$ holds uniformly in $|z|\ge\sqrt{\rho(V)}+\delta'$, with high probability. It suffices to have Proposition 3.2 uniformly in $z$. Again, the uniformity of the estimates in Proposition 3.2 follows simply by a Neumann expansion. For instance, for sufficiently large $z$, say $|z|\ge N^C$ for some large constant $C$, one can expand $G(z)$ around $-z^{-1}$ and apply a crude upper bound on the operator norm of $X_1X_2^*$, to see that Proposition 3.2 holds uniformly in $|z|\ge N^C$, with high probability. For those $\sqrt{\rho(V)}+\delta'\le|z|\le N^C$, we can apply a standard $\epsilon$-net argument. We can find an $\epsilon$-net of this
domain with cardinality $N^{O(C)}$ such that for each $z$ in this domain, one can find a $z'$ in the $\epsilon$-net with $|z-z'|\le N^{-C}$. By the definition of stochastic dominance in Definition 1.3, one can readily conclude from Proposition 3.2 that the estimates therein hold uniformly on the $\epsilon$-net, with high probability. Then, for any other $z$ satisfying $\sqrt{\rho(V)}+\delta'\le|z|\le N^C$, we can expand $G(z)$ around $G(z')$ using a Neumann expansion, where $z'$ is a point in the $\epsilon$-net with $|z-z'|\le N^{-2C}$. Then, by the boundedness of the operator norm of $G(z')$ from Proposition 6.1, one can conclude the proof of the uniformity of the estimates in Proposition 3.2 for all $|z|\ge\sqrt{\rho(V)}+\delta'$, which holds with high probability.

PROOF OF PROPOSITION 3.1. With Propositions 6.1 and 6.3, we can conclude the proof of Proposition 3.1.

7. A priori bound for Green function quadratic forms: Proof of Proposition 3.2.

PROOF OF PROPOSITION 3.2. For brevity, in the sequel we show the proof of the first bound in (3.5); the other terms can be bounded in a similar manner, and thus we omit the details. Recall the definition of $q_n$ from (3.4). We denote
\[
P := q_n\,u^*X_1X_2^*Gv\,\chi_G = q_n\big(u^*v + z\,u^*Gv\big)\chi_G,
\]
where $\chi_G$ is defined in (5.2). We also refer to the discussion around (5.2)-(5.7) regarding the $\chi_G$ factor; therefore, in the sequel, we may simply regard $\chi_G$ as $1$ and neglect all the terms involving its derivatives. Notice that from the definition in (3.3) we have the trivial bound
\[
|P| = |q_nu^*v + zq_nu^*Gv|\chi_G \prec q_n,
\tag{7.1}
\]
and also the deterministic bound $|P|\le CKn$ for some constant $C>0$; see (5.3). We aim for a recursive moment estimate for $\mathbb{E}(P)^{2k}$. By the cumulant expansion formula [40, Lemma 1.3], we have
\[
\begin{aligned}
\mathbb{E}(P)^{2k} &= q_n\sum_{ij}\mathbb{E}\,x_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P)^{2k-1}\big]\\
&= q_n\sum_{\alpha=1}^{m}\sum_{ij}\frac{\kappa_{\alpha+1}(x_{1,ij})}{\alpha!}\,\mathbb{E}\,\partial^{\alpha}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P)^{2k-1}\big] + R,
\end{aligned}
\tag{7.2}
\]
where we again use $R$ to denote the remainder term, with a certain abuse of notation.
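The trivial bound (7.1) reflects the fact that, for $|z|$ well above the spectral radius of $X_1X_2^*$, the Green function satisfies $G(z)\approx -z^{-1}$, so quadratic forms $u^*G(z)u$ concentrate around $-\|u\|^2/z$. A Monte Carlo sketch with iid Gaussian entries standing in for the general profile (the size, the value of $z$, and the toy model are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
X1 = rng.standard_normal((n, n)) / np.sqrt(n)
X2 = rng.standard_normal((n, n)) / np.sqrt(n)
z = 6.0                                        # well outside the spectrum of X1 X2^*

G = np.linalg.inv(X1 @ X2.T - z * np.eye(n))   # Green function G(z)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)

quad = u @ G @ u                               # u^* G u, expected to be near -1/z
print(quad, -1.0 / z)
```

The fluctuation around $-1/z$ is of order $n^{-1/2}$ for this toy model, which is the kind of a priori smallness that Proposition 3.2 encodes via $q_n$.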
Specifically, here $R$ is bounded by
\[
\begin{aligned}
|R| &\le C\,\mathbb{E}|x_{1,ij}|^{m+2}\,\mathbb{E}\sup_{|x_{1,ij}|\le c}\Big|\partial^{m+1}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P)^{2k-1}\big]\Big|\\
&\quad + C\,\mathbb{E}\big[|x_{1,ij}|^{m+2}\mathbf{1}(|x_{1,ij}|>c)\big]\,\mathbb{E}\sup_{x_{1,ij}\in\mathbb{R}}\Big|\partial^{m+1}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P)^{2k-1}\big]\Big|.
\end{aligned}
\tag{7.3}
\]
As the derivatives can only generate matrix entries which are $O_{\prec}(1)$, and there are in total $2k-1$ $q_n$-factors from $P^{2k-1}$, we can trivially bound
\[
\Big|\partial^{m+1}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P^{2k-1})\big]\Big| \prec q_n^{2k-1}.
\]
In order to bound the remainder term in the cumulant expansion, we also need the above bound when taking the supremum over one $X$-entry. This technical point can be handled by simply truncating the matrix entries at $n^{-1/2+\varepsilon}$ at the beginning; we omit the details. Further notice that $\mathbb{E}|x_{1,ij}|^{m+2}\sim n^{-\frac{m+2}{2}}$. Hence, it is easy to show that when $m$ is sufficiently large,
\[
|R| \prec n^{-C}
\tag{7.4}
\]
for a constant $C = C(m,\varepsilon)>0$.

For the first $m$ terms in (7.2), we start with $\alpha=1$. After an elementary calculation, we arrive at
\[
\begin{aligned}
q_n\sum_{ij}\kappa_2(x_{1,ij})\,\mathbb{E}\,\partial^{1}_{1,ij}\big[(X_2^*Gvu^*)_{ji}\chi_G(P)^{2k-1}\big]
&= \frac{q_n}{n}\,\mathbb{E}\big[v^*G^*X_2X_2^*Gu\,\chi_G\cdot P^{2k-1}\big] - \frac{zq_n^2}{n}\,\mathbb{E}\big[v^*G^*X_2X_2^*Gv\cdot u^*G^*u\,\chi_G\cdot P^{2k-2}\big]\\
&= \frac{q_n}{n}\,\mathbb{E}\big[O_{\prec}(1)\,P^{2k-1}\big] + \frac{q_n^2}{n}\,\mathbb{E}\big[O_{\prec}(1)\,P^{2k-2}\big].
\end{aligned}
\]
For general $\alpha\ge 0$, we first notice that each term in $\partial^{\alpha}_{1,ij}(X_2^*Gvu^*)_{ji}$ must contain the factor $(X_2^*Gvu^*)_{ji}$, and the other factors can be simply bounded by $O_{\prec}(1)$. Further, we notice that each term in $\partial^{\alpha}_{1,ij}P$ must contain the factor $u^*Ge_i\cdot\tilde{e}_j^*X_2^*Gv$, and the
other factors can be simply bounded by $O_{\prec}(1)$. Apart from the $q_n$-factors, the $ij$-sum of all the derivatives can be simply bounded by
\[
\begin{aligned}
\sum_{ij}\big|\tilde{e}_j^*(X_2^*Gvu^*)e_i\big|\,\big|\tilde{e}_j^*X_2^*Gv\big|\,\big|u^*Ge_i\big|
&\le \sum_{i}|u^*Ge_i|\sqrt{\sum_j\big|\tilde{e}_j^*(X_2^*Gvu^*)e_i\big|^2}\sqrt{\sum_j\big|\tilde{e}_j^*X_2^*Gv\big|^2}\\
&\le \sum_{i}|u^*Ge_i|\,|u^*e_i|\,\sqrt{v^*G^*X_2X_2^*Gv}\,\sqrt{v^*G^*X_2X_2^*Gv}
\prec \sum_{i}|u^*Ge_i|\,|u^*e_i| \prec 1,
\end{aligned}
\]
where the last step follows from the Cauchy-Schwarz inequality. Then, if we further take into account the $q_n$-factors generated by the derivatives of $P$, the most dangerous terms from $\partial^{\alpha}_{1,ij}P^{2k-1}$ are of the form $(\partial^{1}_{1,ij}P)^{\alpha}P^{2k-1-\alpha}$, which create a factor $q_n^{\alpha}$. The contribution of such a term to (7.2) would be $\big(\frac{q_n}{\sqrt{n}}\big)^{\alpha+1}P^{2k-1-\alpha}$. Putting all these estimates together, we arrive at the recursive moment estimate
\[
\mathbb{E}|P|^{2k} = \sum_{\alpha=1}^{2k}\Big(\frac{q_n}{\sqrt{n}}\Big)^{\alpha}\,\mathbb{E}\big[O_{\prec}(1)\,P^{2k-\alpha}\big] + O(n^{-C}).
\]
Applying Young's inequality, we can immediately get
\[
\mathbb{E}|P|^{2k} = o(1).
\]
By the arbitrariness of $k$, we obtain the first estimate in (3.5). The other terms can be estimated similarly. Hence, we conclude the proof of Proposition 3.2.

Acknowledgement. Z. Bao is grateful to Johannes Alt and Torben Krüger for explaining the work [1] and several related works. We would also like to thank Xiucai Ding for references. Z. Bao is supported by Hong Kong RGC Grant GRF 16304724, NSFC12222121 and NSFC12271475. K. Cheong and Y. Li are supported by Hong Kong RGC Grant 16303922.

REFERENCES
[1] Alt, J., Erdős, L., Krüger, T., Nemish, Y. (2019). Location of the spectrum of Kronecker random matrices. Annales de l'I.H.P. Probabilités et Statistiques, 55(2). https://doi.org/10.1214/18-AIHP894
[2] Alt, J., Erdős, L., Krüger, T. (2018). Local inhomogeneous circular law. The Annals of Applied Probability, 28(1), 148-203. https://doi.org/10.1214/17-AAP1302
[3] Alt, J., Erdős, L., Krüger, T. (2021). Spectral radius of random matrices with independent entries. Probability and Mathematical Physics, 2(2), 221-280.
[4] Ajanki, O. H., Erdős, L., Krüger, T. (2019).
Stability of the matrix Dyson equation and random matrices with correlations. Probability Theory and Related Fields, 173, 293-373.
[5] Auffinger, A., Ben Arous, G., Péché, S. (2009). Poisson convergence for the largest eigenvalues of heavy tailed random matrices. Annales de l'IHP Probabilités et Statistiques, Vol. 45, No. 3, pp. 589-610.
[6] Bai, Z., Yao, J.-F. (2008). Central limit theorems for eigenvalues in a spiked population model. Annales de l'IHP Probabilités et Statistiques, Vol. 44, No. 3, pp. 447-474.
[7] Bai, Z., Yao, J. (2012). On sample eigenvalues in a generalized spiked population model. Journal of Multivariate Analysis, 106, 167-177.
[8] Bai, Z. D., Yin, Y. Q. (1993). Limit of the smallest eigenvalue of a large dimensional sample covariance matrix. Ann. Probab., 21(3), 1275-1294.
[9] Baik, J., Ben Arous, G., Péché, S. (2005). Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. Ann. Probab., 33(5), 1643-1697.
[10] Baik, J., Silverstein, J. W. (2006). Eigenvalues of large sample covariance matrices of spiked population models. Journal of Multivariate Analysis, 97(6), 1382-1408.
[11] Bao, Z., Ding, X., Wang, K. (2021). Singular vector and singular subspace distribution for the matrix denoising model. Ann. Statist., 49(1), 370-392.
[12] Bao, Z., Ding, X., Wang, J., Wang, K. Principal components of spiked covariance matrices in the supercritical
https://arxiv.org/abs/2504.19450v1
regime. Ann. Stat., 50(2), 1144–1169, (2022).
[13] Bao, Z., He, Y., Yang, F. Random matrix theory: Local laws and applications. Handbook of Statistics. 2024.
[14] Bao, Z., Pan, G., Zhou, W. Universality for the largest eigenvalue of sample covariance matrices with general population. Ann. Statist., 43(1):382–421, 2015.
[15] Bao, Z., Wang, D. Eigenvector distribution in the critical regime of BBP transition. Probability Theory and Related Fields. 2022;182(1):399–479.
[16] Belinschi, S., Bordenave, C., Capitaine, M., Cébron, G. Outlier eigenvalues for non-Hermitian polynomials in independent iid matrices and deterministic matrices. Electronic Journal of Probability. 2021;26:1–37.
[17] Benaych-Georges, F., Guionnet, A., Maida, M. Fluctuations of the extreme eigenvalues of finite rank deformations of random matrices. Electron. J. Probab., 16:1621–1662, 2011.
[18] Benaych-Georges, F., Nadakuditi, R. R. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Advances in Mathematics. 2011;227(1):494–521.
[19] Benaych-Georges, F., Nadakuditi, R. R. The singular values and vectors of low rank perturbations of large rectangular random matrices. Journal of Multivariate Analysis. 2012;111:120–135.
[20] Bordenave, C., Capitaine, M. (2016). Outlier eigenvalues for deformed iid random matrices. Communications on Pure and Applied Mathematics, 69(11), 2131–2194.
[21] Bordenave, C., Chafaï, D., García-Zelada, D. Convergence of the spectral radius of a random matrix through its characteristic polynomial. Probability Theory and Related Fields. 2022:1–9.
[22] Burgess, D. J. (2019). Spatial transcriptomics coming of age. Nature Reviews Genetics, 20(6), p. 317.
[23] Capitaine, M. (2018). Limiting eigenvectors of outliers for spiked information-plus-noise type matrices. Séminaire de Probabilités XLIX, 119–164.
SIGNAL DETECTION VIA ASYMMETRIZATION
[24] Capitaine, M., Donati-Martin, C., Féral, D.
The largest eigenvalues of finite rank deformation of large Wigner matrices: convergence and nonuniversality of the fluctuations. Ann. Probab., 37(1):1–47, 2009.
[25] Capitaine, M., Donati-Martin, C., Féral, D. Central limit theorems for eigenvalues of deformations of Wigner matrices. Annales de l'IHP Probabilités et Statistiques. 2012;48(1):107–133.
[26] Capitaine, M., Donati-Martin, C. Spectrum of deformed random matrices and free probability. arXiv preprint arXiv:1607.05560. 2016.
[27] Chen, Y., Cheng, C., Fan, J. Asymmetry helps: Eigenvalue and eigenvector analyses of asymmetrically perturbed low-rank matrices. Annals of Statistics. 2021;49(1):435.
[28] Choi, Y., Taylor, J., Tibshirani, R. Selecting the number of principal components: Estimation of the true rank of a noisy matrix. The Annals of Statistics. 2017:2590–2617.
[29] Cipolloni, G., Erdős, L., Xu, Y. Universality of extremal eigenvalues of large random matrices. arXiv preprint arXiv:2312.08325. 2023.
[30] Cochran, R. N., Horne, F. H. Statistically weighted principal component analysis of rapid scanning wavelength kinetics experiments. Analytical Chemistry. 1977;49(6):846–853.
[31] Foi, A. Clipped noisy images: Heteroskedastic modeling and practical denoising. Signal Processing. 2009;89(12):2609–2629.
[32] Ding, X. (2020). High dimensional deformed rectangular matrices with applications in matrix denoising. Bernoulli 26(1): 387–417.
[33] Ding, X., Yang, F. Tracy–Widom distribution for heterogeneous Gram matrices with applications in signal detection. IEEE Transactions on Information Theory. 2022;68(10):6682–6715.
[34] Erdős, L., Krüger, T., Schröder, D. Random matrices with slow
correlation decay. In Forum of Mathematics, Sigma. 2019 (Vol. 7, p. e8). Cambridge University Press.
[35] Erdős, L., Knowles, A., Yau, H.-T. Averaging fluctuations in resolvents of random band matrices. Ann. Henri Poincaré, 14(8):1837–1926, 2013.
[36] Gavish, M., Donoho, D. L. The optimal hard threshold for singular values is 4/√3. IEEE Transactions on Information Theory. 2014;60(8):5040–5053.
[37] Gavish, M., Donoho, D. L. Optimal shrinkage of singular values. IEEE Transactions on Information Theory. 2017;63(4):2137–2152.
[38] Han, Y. Finite rank perturbation of non-Hermitian random matrices: heavy tail and sparse regimes. arXiv preprint arXiv:2407.21543. 2024.
[39] Hafemeister, C., Satija, R. Normalization and variance stabilization of single-cell RNA-seq data using regularized negative binomial regression. Genome Biology. 2019;20(1):296.
[40] He, Y., Knowles, A. Mesoscopic eigenvalue statistics of Wigner matrices. The Annals of Applied Probability, 27(3), 1510–1550, 2017.
[41] Helton, J. W., Far, R. R., Speicher, R. (2007). Operator-valued semicircular elements: solving a quadratic matrix equation with positivity constraints. International Mathematics Research Notices, 2007(9), rnm086.
[42] Jung, J. H., Chung, H. W., Lee, J. O. Detection of signal in the spiked rectangular models. In International Conference on Machine Learning. 2021 (pp. 5158–5167). PMLR.
[43] Johnstone, I. M., Nadler, B. Roy's largest root test under rank-one alternatives. Biometrika. 2017;104(1):181–193.
[44] Khorunzhy, A. M., Khoruzhenko, B. A., Pastur, L. A. Asymptotic properties of large random matrices with independent entries. Journal of Mathematical Physics. 1996;37(10):5033–5060.
[45] Knowles, A., Yin, J. The isotropic semicircle law and deformation of Wigner matrices. Comm. Pure Appl. Math., 66(11): 1663–1750, 2013.
[46] Knowles, A., Yin, J. The outliers of a deformed Wigner matrix. The Annals of Probability, 42(5): 1980–2031, 2014.
[47] Landa, B., Kluger, Y.
The Dyson equalizer: Adaptive noise stabilization for low-rank signal detection and recovery. Information and Inference: A Journal of the IMA. 2025;14(1):iaae036.
[48] Lee, J. O., Schnelli, K. Local law and Tracy–Widom limit for sparse random matrices. Probability Theory and Related Fields. 2018;171:543–616.
[49] Lin, Z., Pan, G., Zhao, P., Zhou, J. Asymptotic distribution of spiked eigenvalues in the large signal-plus-noise models. arXiv preprint arXiv:2401.11672. 2024.
[50] Paul, D. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Statistica Sinica. 2007:1617–1642.
[51] Rajagopalan, A. B. (2015). Outlier eigenvalue fluctuations of perturbed iid matrices. University of California, Los Angeles.
[52] Salmon, J., Harmany, Z., Deledalle, C.-A., Willett, R. Poisson noise reduction with non-local PCA. Journal of Mathematical Imaging and Vision. 2014;48:279–294.
[53] Soshnikov, A. Poisson statistics for the largest eigenvalues of Wigner random matrices with heavy tails. Electron. Commun. Probab. 9: 82–91 (2004).
[54] Tao, T. (2013). Outliers in the spectrum of iid matrices with bounded rank perturbations. Probability Theory and Related Fields, 155(1), 231–263.
[55] Wallach, H. M. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning. 2006 (pp. 977–984).
STOCHASTIC SUBSPACE VIA PROBABILISTIC PRINCIPAL COMPONENT ANALYSIS FOR CHARACTERIZING MODEL ERROR

Akash Yadav, University of Houston, ayadav4@uh.edu
Ruda Zhang, University of Houston, rudaz@uh.edu

ABSTRACT
This paper proposes a probabilistic model of subspaces based on the probabilistic principal component analysis (PCA). Given a sample of vectors in the embedding space, commonly known as a snapshot matrix, this method uses quantities derived from the probabilistic PCA to construct distributions of the sample matrix, as well as the principal subspaces. It is applicable to projection-based reduced-order modeling methods, such as proper orthogonal decomposition and related model reduction methods. The stochastic subspace thus constructed can be used, for example, to characterize model-form uncertainty in computational mechanics. The proposed method has multiple desirable properties: (1) it is naturally justified by the probabilistic PCA and has analytic forms for the induced random matrix models; (2) it satisfies linear constraints, such as boundary conditions of all kinds, by default; (3) it has only one hyperparameter, which significantly simplifies training; and (4) its algorithm is very easy to implement. We demonstrate the performance of the proposed method via several numerical examples in computational mechanics and structural dynamics.

Keywords: Model error · Model-form uncertainty · Stochastic reduced-order modeling · Probabilistic principal component analysis · Stochastic proper orthogonal decomposition

1 Introduction
Engineering systems can often be described by a set of partial differential equations, commonly referred to as governing equations. These governing equations are typically derived by making assumptions and simplifying the underlying physical process, which inevitably introduces errors.
The errors are further manifested via numerical approximations, imprecise initial and boundary conditions, unknown model parameters, and errors in the differential equations themselves. Model error (also known as model discrepancy, model inadequacy, structural uncertainty, or model-form error in various contexts) is ubiquitous in computational engineering, but its probabilistic analysis has been a notoriously difficult challenge. Unlike parametric uncertainties that are associated with model parameters, model-form uncertainties concern the variability of the model itself, which is inherently nonparametric. Pernot [1] argues that model-form uncertainty should not be absorbed in parametric uncertainty, as doing so leads to prediction bias. The analysis of model-form uncertainty can be divided into two sub-problems: 1) characterization and 2) correction. Determining the predictive error of an inadequate model is categorized as characterization. Reducing the predictive error of an inadequate model via some form of adjustment is called correction, which is much more complex than characterization. The central focus of our current work is to characterize model error.
There are various studies in the literature that address model error. The approaches can be divided into two categories: 1) direct representation and 2) indirect representation. The methods based on direct representation are further subdivided into two categories, depending on where model errors are addressed: 1) external and 2) internal [2]. In external direct representation methods, model error is accounted for by adding a correction term to the model output, which is usually calibrated using a Gaussian process to match the observed data. One of the earliest studies to address model error was carried out by Kennedy and O'Hagan [3] in 2001, in which
https://arxiv.org/abs/2504.19963v2
the authors presented a Bayesian calibration technique as an improvement on the traditional techniques. Since then, the Bayesian inferential and modeling framework has been adopted and further developed by many studies [4–8]. KOH Bayesian calibration corrects the model output but has limited extrapolative capability to unobserved quantities [9]. Farrell et al. [10] present an adaptive modeling paradigm for Bayesian calibration and model selection, in which the authors developed an Occam-Plausibility algorithm to select the best model with calibration. However, if the initial set of models is invalid, then another set of models is required, which demands substantial prior information and modeling experience. In general, methods using external direct representation tend to violate physical constraints [11, 12], rely heavily on prior information [13], fail to separate model error from data error, and struggle with extrapolation and prediction of unobserved quantities [1, 6, 14]. To alleviate some of these drawbacks, Brynjarsdóttir and O'Hagan [13] and Plumlee [15] show that the prior does play an important role in improving performance. He and Xiu [2] proposed a general physics-constrained model correction framework, ensuring that the corrected model adheres to physical constraints. Despite all these developments, methods based on external direct representation have the limitations listed above, mainly because model error is inherently nonparametric.
Internal direct representation methods improve a model through intrusive modifications, rather than by adding an extra term to the output. Embedded model enrichment [14] provides a framework for extrapolative predictions using composite models. Morrison et al. [16] address model inadequacy with stochastic operators introduced as a source in the governing equations while preserving underlying physical constraints.
Similarly, Portone and Moser [17] propose a stochastic operator for an unclosed dispersion term in an averaged advection–diffusion equation for model closure. However, modifying the governing equations of the model brings challenges in computational cost and implementation. Additionally, significant prior knowledge of the system is required to alter the governing equations and achieve improvements effectively. Bhat et al. [18] address model-form error by embedding dynamic discrepancies within small-scale models, using the Bayesian smoothing splines (BS-ANOVA) framework. Strong and Oakley [11] introduce a novel way to quantify model inadequacies by decomposing the model into sub-functions and addressing discrepancies in each sub-function. The authors show the application of their method in health economics studies. Sargsyan et al. [19] improve uncertainty representation and quantification in computational models by introducing a Bayesian framework with embedded model error representation using polynomial chaos expansion. Other studies have used internal model error correction for various application areas, e.g., Reynolds-averaged Navier–Stokes (RANS) applications [20], large eddy simulation (LES) computations [21], molecular simulations [22], particle-laden flows [23], and chemical ignition modeling [24]. In general, internal direct representation methods require a deep understanding of the system to implement the changes necessary for improvement. This knowledge is not always available and often demands an extensive study of the system.
An approach based on indirect representation is presented by Sargsyan et al. [12], where the authors aim to address model error at the source by hyperparameterizing
the parameters of assumptions. However, it is not always easy to pinpoint the source of the errors, especially when errors arise from multiple sources. Furthermore, this approach does not correct for model error. Another indirect representation approach to address model error is based on the concept of stochastic reduced-order modeling, originally introduced by Soize in the early 2000s [25–27] and reviewed in [28]. In their earlier approaches, the authors build the reduced-order model (ROM) via projection onto a deterministic subspace and construct stochastic ROMs by randomizing the coefficient matrices using the principle of maximum entropy. The term "stochastic reduced order model" is also used by a method developed around 2010 in a different context [29–31], which is essentially a discrete approximation of a probability distribution and can be constructed non-intrusively from a computational model. A recent work by Soize and Farhat [32] on model-form uncertainty merges ideas from projection-based reduced-order modeling [33, 34] and random matrix theory [35, 36]. In the new approach, instead of randomizing coefficient matrices, the reduced-order basis (ROB) is randomized. The key observation is that since the ROB determines the ROM, randomizing the ROB also randomizes the ROM, which can be used for efficient probabilistic analysis of model error. Using a computationally inexpensive ROM instead of a full-order model makes the approach computationally tractable for uncertainty quantification tasks. The probabilistic model and estimation procedure proposed in [32] have been applied to various engineering problems [37]. Because this method can face challenges due to a large number of hyperparameters and complex implementation, later developments have aimed at reducing the number of hyperparameters and simplifying hyperparameter training [38].
In settings with multiple candidate models, Zhang and Guilleminot [39] proposed a different probabilistic model for ROBs, which has been used for various problems in computational mechanics [40, 41].
This paper proposes a novel framework for probabilistic modeling of principal subspaces. The stochastic ROMs constructed from such stochastic subspaces enable improved characterization of model uncertainty in computational mechanics. The main contributions of our work are as follows:
1. We introduce a new class of probabilistic models of subspaces, which is simple, has nice analytical properties, and can be sampled efficiently. This is of general interest, and can be seen as a stochastic version of the proper orthogonal decomposition.
2. We use this stochastic subspace model to construct stochastic ROMs, and reduce the number of hyperparameters to a single scalar. This contrasts with existing stochastic ROM approaches, which use stochastic bases and typically involve more hyperparameters.
3. The optimization of the hyperparameter is fully automated and done efficiently by exploiting one-dimensional optimization algorithms.
4. We demonstrate in a series of numerical experiments that our method provides consistent and sharp uncertainty estimates, with low computational costs.
This paper is organized as follows: A brief introduction to stochastic reduced-order modeling is presented in Section 2. The proposed stochastic subspace model, along with the algorithm, is described in Section 3. Critical information regarding the hyperparameter optimization is given in Section 4. Related works in the literature are reviewed in Section 5.
The accuracy and efficiency of the proposed method are validated using numerical examples in Section 6. Finally, Section 7 concludes the paper with a brief summary and potential future work.

2 Stochastic reduced-order modeling
This section presents the concept of stochastic reduced-order modeling. The high-dimensional model (also known as the full-order model) is described in Section 2.1. Section 2.2 describes the reduced-order model and its formulation from a high-dimensional model. Finally, Section 2.3 presents the stochastic reduced-order model and its formulation from a reduced-order model. The presentation in this section is very general and can easily be applied to any engineering system.

2.1 High-dimensional model
In general, we consider a parametric nonlinear system given by a set of ordinary differential equations (ODEs):
$$\dot{x} = f(x, t; \mu), \quad (1)$$
with $x \in \mathbb{R}^n$ and $t \in [0, \infty)$, subject to initial conditions $x(0) = x_0$ and linear constraints (e.g., boundary conditions) $B^\top x = 0$, where $B \in \mathbb{R}^{n \times n_{CD}}$. These equations often come from a spatial discretization of a set of partial differential equations that govern a given physical system, with dimension $n \gg 1$. We call this the high-dimensional model (HDM). If the system is time-independent, Eq. (1) reduces to a set of algebraic equations: $f(x; \mu) = 0$.

2.2 Reduced-order model
The Galerkin projection of the HDM onto a $k$-dim subspace $\mathcal{V}$ of the state space $\mathbb{R}^n$ gives a reduced-order model (ROM), which can be solved faster than the HDM. Let $V \in \mathrm{St}(n, k)$ be an orthonormal basis of the subspace $\mathcal{V}$; then the ROM can be written as:
$$x = Vq, \quad \dot{q} = V^\top f(Vq, t; \mu), \quad (2)$$
with $q \in \mathbb{R}^k$ and initial conditions $q(0) = V^\top x_0$. To satisfy the linear constraints, we must have $B^\top V = 0$. For time-independent systems, Eq. (2) reduces to a set of algebraic equations: $V^\top f(Vq; \mu) = 0$.
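To make Eq. (2) concrete, here is a minimal NumPy sketch of a Galerkin ROM integrated with forward Euler. The linear right-hand side A, the problem sizes, and the basis V are illustrative assumptions; in practice V would come from POD (Section 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt = 50, 3, 0.01

# Hypothetical stable linear HDM: f(x, t; mu) = A x.
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
def f(x, t, mu):
    return A @ x

# Hypothetical orthonormal basis V of a k-dim subspace of R^n.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

x0 = rng.standard_normal(n)
q = V.T @ x0                        # reduced initial condition q(0) = V^T x0
for step in range(100):             # forward-Euler integration of Eq. (2)
    q = q + dt * (V.T @ f(V @ q, step * dt, None))
x_rom = V @ q                       # lift back to the state space: x = V q
```

Only the $k$-dimensional system for $q$ is integrated, instead of the $n$-dimensional one for $x$, which is where the computational savings of the ROM come from.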
2.3 Stochastic reduced-order model
The stochastic reduced-order model (SROM) builds upon the structure of the ROM, with the difference that the deterministic ROB $V$ is replaced by its stochastic counterpart $W$. The change from a deterministic basis to a stochastic basis introduces randomness into the model and allows it to capture model error. The SROM can be written as:
$$x = Wq, \quad \dot{q} = W^\top f(Wq, t; \mu), \quad W \sim \mu_V, \quad (3)$$
where the stochastic basis $W$ follows a probability distribution $\mu_V$. However, it is more natural to impose a probability distribution $\mu_{\mathcal{V}}$ on the subspace rather than the basis, because the Galerkin projection is uniquely determined by the subspace and is invariant under changes of basis. In this work, we use the stochastic subspace model introduced in Section 3 to sample the stochastic basis $W$ and construct the SROM, which is then used to characterize model error.

3 Stochastic subspace model
This section presents the formulation of the stochastic subspace model using probabilistic principal component analysis. The derivation of a deterministic principal subspace via proper orthogonal decomposition is outlined in Section 3.1. The procedure for constructing random matrices is described in Section 3.2. The stochastic subspace model and the sampling algorithm are discussed in Section 3.3.

3.1 Subspace from the PCA: proper orthogonal decomposition
Principal component analysis (PCA) is a common technique for dimension reduction. Closely related to PCA, a common way to find the ROB $V$ and the associated subspace $\mathcal{V}$ is the proper orthogonal decomposition (POD). Following the notation of Section 2, let $X =$
$[x_1 \cdots x_m] \in \mathbb{R}^{n \times m}$ be a sample of the state, with $x_i = x(t_i; \mu_i)$, sample mean $\bar{x} = \frac{1}{m}\sum_{i=1}^m x_i$, centered sample $X_0 = X - \bar{x}\mathbf{1}_m^\top$, and sample covariance $S = \frac{1}{m}X_0 X_0^\top$. Let $X_0 = V_r \,\mathrm{diag}(\sigma_r)\, W_r^\top$ be a compact singular value decomposition (SVD), where $\sigma_r \in \mathbb{R}^r_{>0\downarrow}$ is in non-increasing order. The sample covariance matrix $S$ can be written as $S = V_r \,\mathrm{diag}(\sigma_r/\sqrt{m})^2\, V_r^\top$. POD takes the principal basis $V_k$, the leading $k$ eigenvectors of $S$, as the deterministic ROB. The corresponding subspace is the principal subspace $\mathcal{V}_k := \mathrm{range}(V_k)$. Since all state samples satisfy the linear constraints, $V_k$ satisfies the constraints automatically. POD has several other desirable properties: it can extract coherent structures, provide error estimates, and is computationally efficient and straightforward to implement [42].

3.2 Random matrices from the PPCA
Probabilistic PCA (PPCA) [43] reformulates PCA as the maximum likelihood solution of a probabilistic latent variable model. Here, we briefly overview PPCA and introduce some related random matrix models.
Assume that the data-generating process can be modeled as $x = \mu + Uz + \epsilon$, where $\mu \in \mathbb{R}^n$, $U \in \mathbb{R}^{n \times k}$, $z \sim N_k(0, I_k)$, $\epsilon \sim N_n(0, \varepsilon^2 I_n)$, $n \in \mathbb{Z}_{>0}$ is the ambient dimension, and $k \in \{1, \cdots, n\}$ is the latent dimension. Then the observed data follows a Gaussian distribution $x \sim N_n(\mu, C)$, with covariance matrix $C = UU^\top + \varepsilon^2 I_n$. Assume that we have a data sample $X = [x_1 \cdots x_m] \in \mathbb{R}^{n \times m}$ of size $m$. Let $S = V \,\mathrm{diag}(\lambda)\, V^\top$ be an eigenvalue decomposition (EVD) of the sample covariance matrix, where $V \in O(n)$ is an orthogonal matrix and $\lambda \in \mathbb{R}^n_{\geq 0\downarrow}$ is in non-increasing order. The maximum likelihood estimators of the model parameters $(\mu, U, \varepsilon^2)$ are: $\tilde{\mu} = \bar{x}$, $\tilde{U} = V_k [\mathrm{diag}(\lambda_i - \tilde{\varepsilon}^2)_{i=1}^k]^{1/2} Q$, and $\tilde{\varepsilon}^2 = \frac{1}{n-k}\sum_{i=k+1}^n \lambda_i$, where $V_k = [v_1 \cdots v_k]$ consists of the first $k$ columns of $V$, and $Q \in O(k)$ is any order-$k$ orthogonal matrix. The estimated covariance matrix can thus be written as $\tilde{C} = V_k \,\mathrm{diag}(\lambda_i - \tilde{\varepsilon}^2)_{i=1}^k\, V_k^\top + \tilde{\varepsilon}^2 I_n$.
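These closed-form estimators are easy to check numerically. The sketch below, with hypothetical sizes and synthetic data, computes the PPCA maximum likelihood quantities and the estimated covariance matrix, taking $Q = I_k$ for concreteness:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 8, 200, 2                 # hypothetical ambient / sample / latent sizes

X = rng.standard_normal((n, m))     # synthetic data sample; columns are observations
xbar = X.mean(axis=1, keepdims=True)
X0 = X - xbar                       # centered sample X0 = X - xbar 1^T
S = (X0 @ X0.T) / m                 # sample covariance S = X0 X0^T / m

lam, V = np.linalg.eigh(S)          # EVD; reorder eigenvalues to non-increasing
lam, V = lam[::-1], V[:, ::-1]

eps2 = lam[k:].sum() / (n - k)      # noise variance: mean of the discarded eigenvalues
U = V[:, :k] * np.sqrt(lam[:k] - eps2)   # loading matrix estimate, with Q = I_k
C = U @ U.T + eps2 * np.eye(n)      # estimated covariance C = U U^T + eps^2 I_n
```

By construction, the top $k$ eigenvalues of the estimated covariance equal the top $k$ sample eigenvalues, and the remaining $n - k$ all equal the noise variance estimate.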
We call the columns of $V_k$ the principal components of the data sample and $\lambda_k = (\lambda_i)_{i=1}^k$ the corresponding principal variances. The PPCA assumes that the latent dimension $k$ is known, and estimates the unknown data distribution $x \sim N_n(\mu, C)$ by maximum likelihood as $\tilde{x} \sim N_n(\tilde{\mu}, \tilde{C})$. Without assuming $k$ as known, the data distribution can simply be modeled as $\tilde{x} \sim N_n(\bar{x}, S)$, the multivariate Gaussian distribution parameterized by the sample mean and the sample covariance. From this empirical data distribution, we can derive distributions of some related matrices, all in analytical form.
First, the data sample matrix has a matrix-variate Gaussian distribution $\tilde{X} = [\tilde{x}_1 \cdots \tilde{x}_m] \sim N_{n,m}(\bar{x}; S, I_m)$, where the columns are independent and identically distributed as the empirical data distribution $\tilde{x}_i \overset{iid}{\sim} N_n(\bar{x}, S)$, $i = 1, \cdots, m$. The orientation of the data sample matrix of size $k$ has a matrix angular central Gaussian (MACG) distribution [44]:
$$\tilde{V}_k := \pi(\tilde{X}_{0,k}) \sim \mathrm{MACG}_{n,k}(S), \quad (4)$$
where $\tilde{X}_{0,k} \sim N_{n,k}(0; S, I_k)$ is a centered $k$-sample matrix and $\pi(M) := M(M^\top M)^{-1/2}$ is orthonormalization by polar decomposition. As a mapping, $\pi: \mathbb{R}^{n \times k}_* \to \mathrm{St}(n, k)$ is uniquely defined for all full column-rank $n$-by-$k$ matrices, $k \leq n$, and takes values in the Stiefel manifold $\mathrm{St}(n, k) := \{V \in \mathbb{R}^{n \times k} : V^\top V = I_k\}$. The range (i.e., column space) of the data sample matrix of size $k$ also has an MACG distribution:
$$\tilde{\mathcal{V}}_k := \mathrm{range}(\tilde{V}_k) \sim \mathrm{MACG}_{n,k}(S), \quad (5)$$
which is supported on the Grassmann manifold $\mathrm{Gr}(n, k)$ of $k$-dim subspaces of the Euclidean space $\mathbb{R}^n$. Its probability density function (PDF) has a unique mode, and therefore a unique global maximal point, at the principal subspace $\mathcal{V}_k$; see Appendix A. This distribution is useful, for example, for modeling subspaces for parametric ROM [34].

3.3 Stochastic subspace via the PPCA
Here we propose a new class
of stochastic subspace models $\mathrm{MACG}_{n,k,\beta}(S)$ with $\beta \in [k, \infty)$. The classical $\mathrm{MACG}_{n,k}(S)$ distribution is a special case with $\beta = k$. Define the principal subspace map $\pi_k(X) := \mathrm{range}(U_k)$, where $U_k$ consists of the singular vectors associated with the $k$ largest singular values of $X$. As a mapping, $\pi_k: \mathbb{R}^{n \times m}_{k>} \to \mathrm{Gr}(n, k)$ is uniquely defined for all $n$-by-$m$ matrices whose $k$-th largest singular value is larger than the $(k+1)$-th, with $k \leq \min(n, m)$. Using this map, the POD subspace can be written as $\mathcal{V}_k = \pi_k(X_0)$. We define a stochastic subspace model:
$$W := \pi_k(\tilde{X}_{0,\beta}) \overset{\mathrm{def}}{\sim} \mathrm{MACG}_{n,k,\beta}(S), \quad (6)$$
where $\tilde{X}_{0,\beta} \sim N_{n,\beta}(0; S, I_\beta)$ is a centered $\beta$-sample matrix and $\beta \in \{k, k+1, \cdots\}$ is the resample size. Considering its close connection with the POD, we may call this model a stochastic POD. Similar to $\mathrm{MACG}_{n,k}$, the unique mode and global maximum of $\mathrm{MACG}_{n,k,\beta}$ is the principal subspace $\mathcal{V}_k$. The hyperparameter $\beta$ controls the concentration of the distribution: a larger value means less variation around $\mathcal{V}_k$. As with POD, the stochastic basis $W$ sampled from the stochastic subspace model defined in Eq. (6) satisfies the linear constraints $B^\top W = 0$ automatically.
Sampling from this model can be done very efficiently for $n \gg 1$ if $k$ and the rank of $S$ are small, which is often the case in practical applications. Let $S = V_r \,\mathrm{diag}(\lambda_r)\, V_r^\top$ be a compact EVD, where $r = \mathrm{rank}(S)$. We can show that:
$$W = V_r\, \pi_k\big(\mathrm{diag}(\lambda_r)^{1/2} Z_{r \times \beta}\big), \quad (7)$$
where $Z_{r \times \beta}$ is an $r$-by-$\beta$ standard Gaussian matrix. Algorithm 1 gives a procedure for sampling $\mathrm{MACG}_{r,k,\beta}(S)$ with $S = \mathrm{diag}(s)^2$, which in effect implements the map $\pi_k(\mathrm{diag}(s) Z_{r \times \beta})$. For a general covariance matrix $S$, an orthonormal basis of a random subspace sampled from $\mathrm{MACG}_{n,k,\beta}(S)$ can be obtained by left-multiplying the output of Algorithm 1 with $V_r$. Using Eq. (7) instead of Eq. (6) reduces the computational cost of the truncated SVD from $O(nk\beta)$ to $O(rk\beta)$.

Algorithm 1 SS-PPCA: Stochastic subspace via probabilistic principal component analysis.
Input: scale vector $s \in \mathbb{R}^r_{>0}$; subspace dimension $k \in \{1, \cdots, r\}$; resample size $\beta \in \{k, k+1, \cdots\}$.
1: Generate $Z_{r \times \beta} \in \mathbb{R}^{r \times \beta}$ with entries $z_{ij} \overset{iid}{\sim} N(0, 1)$
2: $M \leftarrow \mathrm{diag}(s)\, Z_{r \times \beta}$
3: Truncated SVD: $[U_k, d_k, V_k] \leftarrow \mathrm{svd}(M, k)$
Output: $W = U_k$, an orthonormal basis of a random subspace sampled from $\mathrm{MACG}_{r,k,\beta}(S)$ with $S = \mathrm{diag}(s)^2$.

We can generalize the concentration hyperparameter $\beta$ to real numbers in $[k, \infty)$, which leads to a continuous family of distributions $\mathrm{MACG}_{n,k,\beta}(S)$ and can be useful when the optimal value of $\beta$ is not much greater than one. In this case, we sample $Z \in \mathbb{R}^{r \times \lceil \beta \rceil}$, where $\lceil \beta \rceil$ denotes the smallest integer greater than or equal to $\beta$. To account for the effect of real-valued $\beta$, the weight of the final column of $Z$ is set to $\beta - \lfloor \beta \rfloor$, where $\lfloor \beta \rfloor$ is the largest integer less than or equal to $\beta$. The rest of the steps are the same as in Algorithm 1.

4 Hyperparameter training
To find the optimal hyperparameter $\beta \in [k, \infty)$, we minimize the following objective function:
$$f(\beta) := \mathbb{E}\big[\,|d_o(u_L) - d_o(u_E)|^2 \mid \beta\,\big], \quad (8)$$
where $u_E$ is the experimental or ground-truth observation of the output, $u_L$ is the low-fidelity prediction of the SROM, and $d_o(u) := \|u - u_L^o\|_{L^2}$ is the $L^2$ distance to the low-fidelity prediction $u_L^o$ of a reference model. Given the value of the hyperparameter $\beta$, the stochastic subspace model determines the stochastic ROM, which produces stochastic predictions $u_L$ that are then summarized into the random variable $d_o(u_L)$. Given the experimental or ground-truth observation
$u_E$, the objective function measures the mean squared error between $d_o(u_L)$ and $d_o(u_E)$, which is a statistical measure of how closely the SROM resembles the ground truth. Overall, this optimization problem aims to improve the consistency of the SROM in characterizing the error of the reference model.
We optimize the objective function $f(\beta)$ efficiently using a one-dimensional optimization algorithm, such as golden section search and successive parabolic interpolation. Implementations of such algorithms are readily available in many programming languages, e.g., fminbnd() in Matlab, scipy.optimize.minimize_scalar() in Python, and optimize() in R. The objective function, however, is not directly accessible due to the expectation over the SROM. To reduce computation, we approximate $f(\beta)$ at integer $\beta$ values by Monte Carlo sampling, using 1,000 random samples of the SROM, and linearly interpolate $f(\beta)$ between consecutive integer points. In practice, when the optimization algorithm queries $f(\beta)$ at a real-valued $\beta$, its values at the two closest integer points are computed using Monte Carlo sampling and stored. If future queries need the value of $f(\beta)$ at a previously evaluated integer point, the stored value is used to avoid re-evaluation. The optimization scheme employed in this study is very easy to implement, although it may not be the most efficient due to the Monte Carlo approximation of the expectation operator. We leave improvements to the hyperparameter optimization procedure for future work.

5 Related works
This section reviews in detail the two main works that use SROMs for model error characterization. First, the work by Soize and Farhat [32] is discussed in Section 5.1. In Section 5.2, the work by Zhang and Guilleminot [39] for characterizing model-form uncertainty across multiple models is presented. In Section 5.3, we distinguish the three methods based on their operational mechanisms and their dependence on available information.
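For reference in the comparison below, the core SS-PPCA sampler (Algorithm 1, including the real-valued β extension of Section 3.3) fits in a few lines of NumPy; the scale vector s and the sizes shown are hypothetical:

```python
import numpy as np

def ss_ppca_sample(s, k, beta, rng):
    """Draw an orthonormal basis W of a random subspace ~ MACG_{r,k,beta}(diag(s)^2).

    Implements Algorithm 1; for non-integer beta, the final column of Z is
    weighted by beta - floor(beta), as described in Section 3.3.
    """
    r = len(s)
    ncol = int(np.ceil(beta))
    Z = rng.standard_normal((r, ncol))      # step 1: standard Gaussian matrix
    if beta < ncol:                          # real-valued beta extension
        Z[:, -1] *= beta - np.floor(beta)
    M = np.diag(s) @ Z                       # step 2: M = diag(s) Z
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k]                          # step 3: leading k left singular vectors

rng = np.random.default_rng(2)
s = np.array([3.0, 1.0, 0.5, 0.1])           # hypothetical scale vector (r = 4)
W = ss_ppca_sample(s, k=2, beta=10.5, rng=rng)
```

For a general covariance $S = V_r\,\mathrm{diag}(\lambda_r)V_r^\top$, one would set $s = \lambda_r^{1/2}$ and left-multiply the returned basis by $V_r$, per Eq. (7).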
5.1 Non-parametric model (NPM)
The seminal work on the NPM [32] introduced the idea of using SROMs to analyze model-form uncertainty. The stochastic basis in the NPM can be written as:
$$W = \pi\big(V + P^{\mathrm{St}}_V(P_{B^\perp} U)\big), \quad U = sGR, \quad G \sim \mathcal{GP}(\gamma). \quad (9)$$
The random matrix $G$ requires knowledge of the discretization $D$ of the spatial fields, and is sampled from a Gaussian process with a hyperparameter $\gamma$ for the correlation length-scale. The column dependence matrix $R$ is an order-$k$ upper triangular matrix with positive diagonal entries. The scale $s \in [0, \infty)$ is a non-negative number. Once $U$ is constructed, the orthogonal projection $P_{B^\perp}$ enforces the linear constraints, the tangential projection $P^{\mathrm{St}}_V$ projects a matrix onto the tangent space of the Stiefel manifold at a reference ROB $V$, and the orthonormalization $\pi$ as in Eq. (4) gives an orthonormal basis. The hyperparameters $(s, \gamma, R)$ have a dimension of $k(k+1)/2 + 2$, and are trained by minimizing a weighted average of $J_{\mathrm{mean}}$ and $J_{\mathrm{std}}$: the former aims at matching the SROM mean prediction to ground-truth observations $O$; the latter aims at matching the SROM standard deviation to a scaled difference between $O$ and $u_L^o$.

5.2 Riemannian stochastic model (RSM)
For problems where there exist a number of physically plausible candidate models, the RSM [39] characterizes model-form uncertainty using SROMs constructed with the following stochastic basis:
$$W = \exp^{\mathrm{St}}_V\Big\{c \sum_{i=1}^q p_i \log^{\mathrm{St}}_V(V^{(i)})\Big\}, \quad p \sim \mathrm{Dirichlet}(\alpha), \quad (10)$$
where $c \in [0, \infty)$ is a scale parameter, $q$ is the number of
https://arxiv.org/abs/2504.19963v2
models, and p = (p_i)_{i=1}^q is a probability vector following the Dirichlet distribution with concentration parameters α ∈ R^q_{>0}. The Riemannian logarithm log^St_V maps a model-specific ROB V^(i) to the tangent space of the Stiefel manifold at a reference ROB V. The Riemannian exponential exp^St_V maps a tangent vector at V back to the Stiefel manifold, giving the orthonormal basis W. The scale parameter c is usually set to one or omitted in later works [40, 41]. The hyperparameters α are trained by minimizing ‖E[log^St_V W]‖²_F, to match the center of mass of the distribution to V. This is a quadratic program and can be solved efficiently. We note that the RSM cannot be applied outside multi-model settings.

5.3 Comparison of model structures

To deepen understanding of the three methods, we outline their structures in Figure 1 and highlight a few key differences below. Both the NPM and the RSM construct probabilistic models of reduced-order bases, whereas SS-PPCA models subspaces directly. Another distinction lies in the reliance on discretization information: the NPM requires discretization and boundary condition data for hyperparameter optimization and SROM sampling, while SS-PPCA and the RSM do not. Finally, while the RSM optimizes its hyperparameters using only the reduced bases, SS-PPCA and the NPM incorporate observed quantities of interest into their objective functions.

Figure 1: Dependence diagrams for (from top to bottom) the NPM and the RSM models of stochastic basis and the SS-PPCA model of stochastic subspace. The dashed lines indicate that the connection only exists when characterizing the ROM-to-HDM error.

6 Numerical experiments

In this section, we evaluate the proposed method, SS-PPCA, in characterizing model error using three numerical examples.
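Referring back to the NPM construction in Eq. (9), the basis sampling can be sketched as follows. This is a simplified illustration under stated assumptions, not the reference implementation: the Gaussian-process draw G is replaced by i.i.d. standard normal entries (a real implementation would use the spatial discretization D and length-scale γ), and the orthonormalization π is taken to be a QR factorization.

```python
import numpy as np

def npm_basis(V, B, s, R, rng):
    # Sketch of Eq. (9): W = pi(V + P^St_V(P_{B-perp}(s G R))).
    n, k = V.shape
    G = rng.standard_normal((n, k))            # surrogate for G ~ GP(gamma)
    U = s * G @ R                              # scale and column dependence
    # P_{B-perp}: orthogonal projection onto the complement of range(B)
    Qb, _ = np.linalg.qr(B)
    U = U - Qb @ (Qb.T @ U)
    # P^St_V: projection onto the tangent space of the Stiefel manifold at V
    U = U - V @ ((V.T @ U + U.T @ V) / 2)
    # pi: orthonormalize the perturbed basis
    W, _ = np.linalg.qr(V + U)
    return W

# Usage sketch with a random reference basis and constraint matrix
# (constraints at the first and last DoF, as in the static examples):
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((50, 4)))
B = np.eye(50)[:, [0, 49]]
R = np.triu(np.abs(rng.standard_normal((4, 4))))  # upper triangular, nonneg diag
W = npm_basis(V, B, 0.1, R, rng)
```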
In Section 6.1, we examine a parametric linear static problem and characterize the model error at unseen parameters. In Section 6.2, we demonstrate how our approach can be used to characterize the HDM error, assuming that experimental data is available. Finally, in Section 6.3, we discuss a linear dynamics problem involving a space structure. All data and code for these examples are available at https://github.com/UQUH/SS_PPCA.

The performance of the SROMs is assessed using two general metrics for stochastic models: consistency and sharpness. By consistent UQ, we mean that the statistics on prediction uncertainty, such as standard deviation and predictive intervals (PI), derived from the model match the prediction error on average. Without consistency, the model either under- or over-estimates its prediction error, which risks being over-confident or unnecessarily conservative. By sharp UQ, we mean the model's PIs are narrower than those of alternative methods. This is essentially the probabilistic version of model accuracy and is desirable as we want to minimize error.

Figure 2: HDM vs ROM displacement at the test parameter.

6.1 Parametric linear static problem

We consider a one-dimensional parametric linear static problem with n = 1,000 degrees of freedom (DoFs), governed by the system of equations:

K x(µ) = f(µ), (11)

where x(µ) ∈ R^n is the displacement vector, and µ ∈ [0, 1]^5 is a parameter vector which controls the loading conditions. The system is subject to homogeneous Dirichlet boundary conditions
at the first and the last node, i.e., x_1(µ) = x_n(µ) = 0. These constraints can be compactly represented as B^⊺ x(µ) = 0 with B = [e_1 e_n], where e_i is the i-th standard basis vector of R^n. The stiffness matrix K ∈ R^{n×n} is constructed as K = Φ Λ Φ^⊺, with Λ = diag(4π²j²)_{j=1}^{n−2} and Φ = [0 S 0]^⊺. The matrix S = √(2/(n−1)) [sin(jkπ/(n−1))]_{j=1,…,n−2; k=1,…,n−2} is the order-(n−2) type-I discrete sine transform (DST-I) matrix, scaled to be orthogonal. We can see that Λ ∈ R^{(n−2)×(n−2)} and Φ ∈ R^{n×(n−2)}. Therefore K has eigenpairs (λ_j, φ_j) with λ_j = 4π²j² and φ_j the j-th column of Φ. The force is given by f(µ) = ‖g(µ)‖_∞^{−1} g(µ) with g(µ) = Σ_{i=2}^6 µ_i φ_i and µ_i ∈ [0, 1]. This setup provides a controlled framework for studying parameter-dependent structural responses, especially in the context of model error.

In this example, K does not depend on µ. Hence, it can be factorized once and stored for efficient inversion. However, in a general case, the parametric HDM can be computationally expensive, especially if it needs to be evaluated multiple times. This justifies the use of a ROM. We construct a ROM by evaluating the HDM at 100 parameter samples, µ_j ∈ [0, 1]^5, drawn via Latin hypercube sampling (LHS). For each sample, the solution is computed as x_j = x(µ_j) = K^{−1} f(µ_j), j = 1, · · · , 100. These solutions are concatenated to form a snapshot matrix X, which is used to construct a deterministic ROB V = π_k(X) of dimension k = 4 via POD. The governing equation of the ROM is given by:

x_R = V q, K_k q = V^⊺ f(µ), K_k = V^⊺ K V. (12)

To evaluate the accuracy of this ROM, we define a test parameter µ_test = [1/2, 1/2, 1/2, 1/2, 1], which lies within the parameter domain but was not included in the training set. The goal is to characterize the ROM error at this previously unseen parameter point, providing insight into the model's generalization capabilities.

Figure 3: Linear static problem: prediction by SS-PPCA.
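The setup above can be sketched numerically as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: a smaller size (n = 200 instead of 1,000) is used for speed, the snapshot parameters are drawn with plain Monte Carlo instead of Latin hypercube sampling, and the parameter-to-force indexing (µ weighting φ_2, …, φ_6) is our reading of the text.

```python
import numpy as np

n, k = 200, 4
j = np.arange(1, n - 1)
S = np.sqrt(2.0 / (n - 1)) * np.sin(np.outer(j, j) * np.pi / (n - 1))  # DST-I
Phi = np.vstack([np.zeros((1, n - 2)), S, np.zeros((1, n - 2))])  # Phi = [0 S 0]^T
lam = 4.0 * np.pi**2 * j**2                 # eigenvalues of K
K = Phi @ (lam[:, None] * Phi.T)            # K = Phi Lambda Phi^T (singular at BC rows)

def force(mu):
    g = Phi[:, 1:6] @ mu                    # g = sum_i mu_i phi_i, i = 2..6
    return g / np.abs(g).max()              # f = g / ||g||_inf

def hdm_solve(mu):
    # K is singular on the constrained DoFs; since f lies in range(Phi),
    # the pseudo-inverse solve is exact and enforces the BCs automatically.
    f = force(mu)
    return Phi @ ((Phi.T @ f) / lam)

rng = np.random.default_rng(0)
X = np.column_stack([hdm_solve(m) for m in rng.random((100, 5))])  # snapshots
V = np.linalg.svd(X, full_matrices=False)[0][:, :k]                # POD ROB

def rom_solve(mu):
    f = force(mu)
    q = np.linalg.solve(V.T @ K @ V, V.T @ f)   # K_k q = V^T f, Eq. (12)
    return V @ q

mu_test = np.array([0.5, 0.5, 0.5, 0.5, 1.0])
x_h, x_r = hdm_solve(mu_test), rom_solve(mu_test)
rel_err = np.linalg.norm(x_r - x_h) / np.linalg.norm(x_h)
```

Because λ_j grows like j², the displacement is dominated by the low modes, which is why a rank-4 POD basis of the 5-mode forcing already yields a small ROM error at the test parameter.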
Figure 2 shows the error between the HDM and ROM displacements at the test parameter. To characterize the error induced by the ROM, we construct SROMs by replacing the deterministic ROB V with SROBs W. The SROBs W are sampled using the stochastic subspace model described in Section 3.3 and Algorithm 1. To achieve consistent UQ, the hyperparameter β is optimized by minimizing the objective function given in Eq. (8). For this example, u_E represents the HDM displacement at the training parameters, and u_L^o represents the ROM displacement at the same parameters. The optimal value of the hyperparameter β is 20, obtained efficiently using the strategy given in Section 4. The hyperparameter training time for this example is 9.1 seconds.

Figure 3 shows the ROM error characterization for the linear parametric system at the test parameter. It can be observed that the SROM mean displacement is very close to the ROM displacement, and the SROM 95% PI captures the ROM error consistently and sharply. The coverage for this example is 99%; that is, the SROM 95% PI captures the ROM error over most of the domain. It can also be observed that this coverage is achieved while maintaining tight bounds around the HDM displacement.

6.2 Static problem: characterizing HDM error

In the previous example, we demonstrated the use of an SROM to characterize the model truncation error induced by a ROM in a
linear parametric system. However, the utility of SROMs is not limited to characterizing ROM error. They can also be employed to characterize discrepancies between the HDM and experimental data, a crucial task in model validation. Here we characterize the HDM error via the SROM approach in a linear static problem. The synthetic experimental data is generated by solving the system:

K_E x_E = f, (13)

with model dimension n = 1,000, f = Σ_{i=2}^5 0.5 φ_i + φ_6, and homogeneous Dirichlet boundary conditions B^⊺ x_E = 0 where B = [e_1 e_n]. The stiffness matrix is defined as K_E = K + K_ε, where K is formulated the same as in the previous example and K_ε is a perturbation matrix designed to induce a 5% change in the Euclidean norm of K_E. In addition to the model error, we simulate measurement error by adding 5% noise to the experimental displacement x_E. Hence, the observed experimental data follow x_obs = x_E + η, where η ∼ N(0, σ²I) with the variance chosen to achieve the specified noise level.

Figure 4: Linear static problem with experimental data: SS-PPCA prediction.

Experimental data can be very expensive to acquire. Therefore, we limit ourselves to experimental data from a sparse subset of spatial locations. These sparse measurements are used to characterize the HDM error via the SROM approach. The computational HDM is defined as:

K x_H = f, (14)

with the same K and f as above. However, the HDM produces biased predictions because the true system differs from this model. To capture this error, we perturb the HDM so that the perturbed HDM responses can sufficiently cover the experimental data. The HDM is perturbed 100 times, and the responses are collected in a snapshot matrix. A ROM of dimension k = 4 is then constructed using POD, which further induces model truncation error. The ROM derived from the HDM is defined as:

x_R = V q, V^⊺ K V q = V^⊺ f.
(15)

To characterize both the ROM and HDM errors, we construct SROMs by substituting the deterministic ROB V with SROBs W. The SROBs W are sampled using the stochastic subspace model described in Section 3.3 and Algorithm 1. To achieve consistent UQ, the hyperparameter β is optimized by minimizing the objective function given in Eq. (8). For this example, u_E represents the sparsely observed experimental displacement, and u_L^o denotes the ROM displacement at the same locations. The optimal value of the hyperparameter β is 8, obtained efficiently using the strategy given in Section 4. The hyperparameter training time for this example is 1.9 seconds.

Figure 4 shows the HDM error characterization by the SS-PPCA method using the sparse experimental observations. It can be observed that our method successfully captures the model discrepancy, yielding accurate uncertainty quantification for the HDM predictions. The SROM 95% PI captures the noiseless ground-truth data well, and the bounds are very sharp. The trained SROM model via SS-PPCA enables fast prediction with error estimates and eliminates the dependence on the computationally expensive HDM.

Figure 5: Dynamics problem: (a) space structure; (b) loading.

6.3 Dynamics problem: space structure

In Sections 6.1 and 6.2, we demonstrated that the SS-PPCA method performs well in characterizing uncertainty
with low computational cost. However, those cases involved relatively simple linear static problems. To evaluate the robustness and scalability of the method, we apply it to a much more complex linear dynamics problem involving a space structure. Because of the higher model complexity and more realistic structural dynamics, this example may better assess the efficiency and effectiveness of the method.

We consider a major component of a space structure [45] given an impulse load in the z-direction; see Figure 5. The space structure consists of two major parts: an open upper part and a lower part. A large mass (approximately 100 times heavier than the remainder of the model) is placed at the center of the upper part. This mass is connected to the sidewalls via rigid links. The lower part of the structure consists of the outer cylindrical shell and an inner shock absorption block with a hollow space to hold the essential components. The impulse load, as shown in Figure 5b, is applied at the center of the mass in the z-direction, and is transmitted to the upper part through the sidewalls and then to the lower part via the mounting pedestal. The impulse load can compromise the functionality of the essential components located in the lower part. Hence, monitoring the acceleration and velocity at the critical points of the essential parts is important for their safe operation. We take the quantity of interest (QoI) to be the x-velocity of a critical point at one of the essential components. The space structure has no boundary conditions; it can be assumed to be floating in outer space.

The governing equation of the HDM of the space structure is given by:

M_H ẍ + C_H ẋ + K_H x = f, (16)

with the initial conditions x(0) = 0 and ẋ(0) = 0. The system matrices M_H, K_H, and force f are exported from the finite element model in LS-DYNA, and C_H = β_H K_H with Rayleigh damping coefficient β_H = 6.366 × 10⁻⁶. We denote the solution of the HDM as x_H, which is obtained by numerically integrating Eq.
(16) using the Newmark-β method with a time step of 5 × 10⁻² ms. A single HDM simulation takes approximately 38 minutes, making it computationally expensive, especially when multiple runs are needed. To address this challenge, we construct a ROM of dimension k = 10 via POD. The ROM derived from the HDM is defined as

x_R = V q, M_{H,V} q̈ + C_{H,V} q̇ + K_{H,V} q = V^⊺ f, (17)

where reduced matrices are denoted with the pattern A_V := V^⊺ A V, so that, for example, M_{H,V} = V^⊺ M_H V. The initial conditions of the ROM are given by q(0) = 0 and q̇(0) = 0. In contrast to the high simulation time of the HDM, the ROM takes 0.2 seconds, roughly 11,200 times faster. However, this computational efficiency comes at the expense of reduced accuracy. Hence, error characterization of the ROM is essential, which is done by SROMs throughout this study.

Figure 6: Dynamic prediction of the SS-PPCA model: velocity.

To construct the SROMs, we substitute the deterministic ROB V with the SROBs W. For the SS-PPCA method, the SROBs W are sampled using the stochastic
subspace model described in Section 3.3 and Algorithm 1. For this example, u_E corresponds to the velocity from the HDM at a critical structural node, and u_L^o corresponds to the velocity predicted by the ROM at the same location. The optimal value of the hyperparameter β comes out to be 39, which is obtained using the strategy given in Section 4. The hyperparameter training time for this example is 785 seconds, which is low compared to the computational cost of the HDM. This shows that our method is scalable and can provide fast and reliable predictions even for complex systems.

The HDM velocity exhibits high-frequency behavior, especially during the initial 10 milliseconds. The ROM with just 10 modes fails to capture this behavior. Figure 6 illustrates that the 95% PI of the SS-PPCA method is notably consistent and significantly sharp, even in the initial 10-millisecond period (see the Figure 6 inset for [0, 10] ms). The SS-PPCA method not only captures the high-frequency behavior but also maintains tight bounds around it. Additionally, the SROM mean velocity of SS-PPCA closely matches the deterministic ROM velocity, as it should. The coverage for velocity via the SS-PPCA method is 94.83%.

Figure 7 shows the x-direction acceleration at the critical node predicted by SS-PPCA. The acceleration also exhibits high-frequency behavior, due to the impulse loading. Despite this, our SS-PPCA method is able to produce consistent UQ with sharp bounds.

Figure 7: Dynamic prediction of the SS-PPCA model: acceleration.

Figure 8: Dynamic prediction of the SS-PPCA model: displacement.

It is important to note that even though velocity (the QoI for this example) is used for training the hyperparameters and
displacement is used for obtaining the deterministic ROB V, our approach enables predictions of unobserved quantities such as acceleration. The fact that acceleration has not been observed makes this result particularly notable. Furthermore, our approach enables efficient predictions of any unobserved QoIs of the whole system, a capability that many studies in the literature fail to achieve. This highlights the effectiveness and practicality of our approach in engineering applications. The coverage for acceleration via the SS-PPCA method is 85.49%, which means that the error characterization still leaves some room for improvement. However, given that the acceleration is an unobserved quantity and exhibits high-frequency behavior, the result is quite good.

Figure 8 compares the x-direction displacement at the critical node predicted by SS-PPCA. It can be observed that the SS-PPCA displacement prediction is highly accurate: the SROM, ROM, and HDM show similar behavior, and the 95% PI is very tight. The coverage for displacement via the SS-PPCA method is 96.78%, which means that the displacement prediction is consistent and highly accurate.

Figure 9: Dynamic prediction of the SS-PPCA model: velocity at a random DoF.

All
the quantities compared in Figures 6 to 8, whether observed or unobserved, are from the same node of the space structure. However, we may need to make a prediction at a different node of the system, or we may be interested in the general behavior of the system. Figure 9 shows the prediction of the SS-PPCA method for the y-direction velocity at a random node; the result is still consistent and very sharp. This shows that the SS-PPCA method allows us to make predictions about the entire structure by training on the observation data of a single DoF.

In summary, the low computational cost of hyperparameter optimization in SS-PPCA, combined with the low computational cost of the ROM, can accelerate the prediction of complex engineering phenomena. Furthermore, SS-PPCA offers a fully automated, efficient, and principled training workflow with minimal user input. This makes it well suited for practical applications. One issue with methods based on SROMs is that they cannot fully eliminate model error, as they rely on a ROM. This issue can be addressed by increasing the number of reduced basis vectors or by incorporating model closure terms. A study on model closure for ROMs is presented by Ahmed et al. [46].

7 Conclusion

We introduced a novel stochastic subspace model, which is used to characterize model error in the framework of stochastic reduced-order models. It is simple, efficient, easy to implement, and well supported by analytical results. Our SS-PPCA model has only one hyperparameter, which is optimized systematically to improve consistency. Through various numerical examples, we reveal the characteristics of this model, show its flexibility across single- and multiple-model cases, establish its consistency, and quantify its remarkable sharpness. It opens up a promising path to the challenging problem of model-form uncertainty.
Across all examples, the stochastic subspace model effectively captures various forms of model error, including parametric uncertainty, ROM error, and HDM error informed by noisy experimental data. Although the examples used in this study are linear systems, our method is generally applicable for characterizing model error in nonlinear systems. However, in scenarios with large errors, mere characterization may not suffice. In such cases, future work will focus on developing practical and efficient techniques for model error correction.

Acknowledgments

The authors thank Johann Guilleminot and Christian Soize for invaluable discussions on this topic. This work was supported by the University of Houston.

A Mode of MACG_{n,k}(Σ)

In this section, we establish the mode of the MACG probability distribution on Grassmann manifolds. To the best of our knowledge, this is the first appearance of such results. The matrix angular central Gaussian distribution MACG_{n,k}(Σ) on the Grassmann manifold Gr(n, k) [44, 47] is the probability distribution with the probability density function (PDF) p(XX^⊺; Σ) = z^{−1} |X^⊺ Σ^{−1} X|^{−n/2} with respect to the normalized invariant measure of Gr(n, k). Here Σ > 0 is an order-n symmetric positive definite matrix, a k-dimensional subspace X = range(X) is represented by the orthogonal projection matrix XX^⊺, where X ∈ St(n, k) is an orthonormal basis, and the normalizing constant is z = |Σ|^{k/2}, where |·| denotes the matrix determinant. In the following, let Σ have eigenvalues λ_1 ≥ · · · ≥ λ_n > 0 and
corresponding eigenvectors v_1, · · · , v_n that form an orthonormal basis of R^n.

Theorem 1. The PDF of the MACG_{n,k}(Σ) distribution on Gr(n, k) is maximized at any k-dimensional principal subspace of Σ. If λ_k > λ_{k+1}, then the principal subspace V_k = range(V_k) with V_k = [v_1 · · · v_k] is the unique mode of the distribution.

Proof. Let us first consider the special case where Σ is the diagonal matrix Λ = diag(λ_1, · · · , λ_n). Let the subspace X ∼ MACG_{n,k}(Λ) have an orthonormal basis X ∈ St(n, k). The PDF can be written as:

p(X) = |Λ|^{−k/2} |X^⊺ Λ^{−1} X|^{−n/2} = (∏_{i=1}^n λ_i)^{−k/2} |X^⊺ Λ^{−1} X|^{−n/2}. (18)

We see that the PDF is a smooth function of X, and we first characterize its critical points. Taking the logarithm of the above equation, we get:

log p(X) = −(n/2) log|X^⊺ Λ^{−1} X| − (k/2) Σ_{i=1}^n log λ_i. (19)

The second term (k/2) Σ_{i=1}^n log λ_i is a constant, so the critical points of p(X) are the critical points of log|X^⊺ Λ^{−1} X|, which can be computed by setting tr(S^{−1} ∂S) = 0 where S = X^⊺ Λ^{−1} X. The partial derivative of X^⊺ Λ^{−1} X can be written as ∂(X^⊺ Λ^{−1} X) = (∂X)^⊺ Λ^{−1} X + X^⊺ Λ^{−1} (∂X). The differential ∂X ∈ T_X St(n, k), meaning that it lies in the tangent space of the Stiefel manifold St(n, k) at the point X, must satisfy (∂X)^⊺ X + X^⊺ (∂X) = 0. In other words, (∂X)^⊺ X must be antisymmetric, which allows us to write ∂X in a projective form: ∂X = M − X sym(X^⊺ M), where M ∈ R^{n×k} and sym(A) = (A + A^⊺)/2. Now we have:

tr(S^{−1} ∂S) = tr{(X^⊺ Λ^{−1} X)^{−1} [(∂X)^⊺ Λ^{−1} X + X^⊺ Λ^{−1} (∂X)]}
= tr{(∂X)^⊺ Λ^{−1} X (X^⊺ Λ^{−1} X)^{−1} + (X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} (∂X)}
= 2 tr{(X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} (∂X)}
= 2 tr{(X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} [M − X sym(X^⊺ M)]}
= 2 tr{(X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} M − sym(X^⊺ M)}
= 2 tr{(X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} M − X^⊺ M}
= 2 sum{[(X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} − X^⊺] ∘ M}. (20)

With tr(S^{−1} ∂S) = 0 for all M ∈ R^{n×k}, we have (X^⊺ Λ^{−1} X)^{−1} X^⊺ Λ^{−1} − X^⊺ = 0, that is, X = Λ^{−1} X (X^⊺ Λ^{−1} X)^{−1}.
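As a quick numerical illustration of the critical-point condition just derived (not part of the proof), the following sketch checks that a principal coordinate subspace satisfies X = Λ^{−1} X (X^⊺ Λ^{−1} X)^{−1} and attains the largest MACG log-density among random subspaces, for an arbitrary choice of eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
lam = np.array([6.0, 5.0, 3.0, 2.0, 1.5, 1.0])   # eigenvalues of Lambda
Lam_inv = np.diag(1.0 / lam)

def log_density(X):
    # log p(X) up to the additive constant -(k/2) * sum(log lam)
    sign, logdet = np.linalg.slogdet(X.T @ Lam_inv @ X)
    return -0.5 * n * logdet

Ek = np.eye(n)[:, :k]                             # principal subspace basis
# Critical-point condition: X - Lambda^{-1} X (X^T Lambda^{-1} X)^{-1} = 0
residual = Ek - Lam_inv @ Ek @ np.linalg.inv(Ek.T @ Lam_inv @ Ek)

# Random k-dim subspaces never exceed the principal subspace's log-density
worse = [log_density(np.linalg.qr(rng.standard_normal((n, k)))[0])
         for _ in range(200)]
```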
Further simplification gives us (XX^⊺)(Λ^{−1} X) = Λ^{−1} X. Because XX^⊺ is the orthogonal projection onto range(X), we have range(Λ^{−1} X) ⊆ range(X); since both sides of the equation are k-dimensional subspaces, we have range(Λ^{−1} X) = range(X). Therefore, X = range(X) is an invariant k-subspace of Λ^{−1}, or equivalently, an invariant k-subspace of Λ. If the eigenvalues (λ_i)_{i=1}^n are distinct, then X must be spanned by a k-combination of the standard basis {e_i}_{i=1}^n.

So far we have proved that the critical points of p(X) are the invariant k-subspaces of Λ. Since the Grassmann manifold Gr(n, k) is compact and without boundary, the smooth PDF p(X) has a global minimum and a global maximum, both of which are critical points. We define two k-dimensional subspaces E_k and Ē_k as the ranges of the bases E_k := [e_1 · · · e_k] and Ē_k := [e_{n−k+1} · · · e_n], respectively. We have:

log p(E_k) = −(n/2) log|E_k^⊺ Λ^{−1} E_k| − (k/2) Σ_{i=1}^n log λ_i
= −(n/2) log|diag(λ_1^{−1}, · · · , λ_k^{−1})| − (k/2) Σ_{i=1}^n log λ_i
= −(n/2) Σ_{i=1}^k log λ_i^{−1} − (k/2) Σ_{i=1}^n log λ_i
= (n/2) Σ_{i=1}^k log λ_i − (k/2) Σ_{i=1}^n log λ_i. (21)

Similarly, we have log p(Ē_k) = (n/2) Σ_{i=n−k+1}^n log λ_i − (k/2) Σ_{i=1}^n log λ_i. In general, let λ_{a_1} ≥ · · · ≥ λ_{a_k} be the eigenvalues associated with an invariant k-subspace X_0; then:

log p(X_0) = (n/2) Σ_{j=1}^k log λ_{a_j} − (k/2) Σ_{i=1}^n log λ_i. (22)

Since λ_1 ≥ · · · ≥ λ_n, we have p(Ē_k) ≤ p(X_0) ≤ p(E_k) for all critical points X_0. This means that Ē_k and E_k are the global minimum point and the global maximum point of p(X), respectively; that is, p(Ē_k) ≤ p(X) ≤ p(E_k).

Now we show that a critical point X_0 cannot be a local maximum unless {λ_{a_1}, · · · , λ_{a_k}} = {λ_1, · · · , λ_k}. Let v_{a_1}, · · · , v_{a_k} be eigenvectors associated with the eigenvalues λ_{a_1}, · · · , λ_{a_k}, respectively, such that they form an orthonormal basis of X_0. Assume that an eigenvalue λ′ ∉ {λ_{a_1}, · · · , λ_{a_k}} satisfies λ′ > λ_{a_k}. Let v′ be an eigenvector associated with the
eigenvalue λ′. Consider the trajectory X(θ) = range(X(θ)), where X(θ) = [v_{a_1} · · · v_{a_{k−1}} v(θ)], v(θ) = cos(θ) v_{a_k} + sin(θ) v′, and θ ∈ [0, π/2]. It is a constant-speed rotation that starts from X_0. We can show that:

log p(X(θ)) = (n/2) ( Σ_{j=1}^{k−1} log λ_{a_j} − log( λ_{a_k}^{−1} cos²θ + λ′^{−1} sin²θ ) ) − (k/2) Σ_{i=1}^n log λ_i, (23)

which increases monotonically. Therefore, X_0 is not a local maximum. In summary, MACG_{n,k}(Λ) attains its global maximum at E_k, or any k-dimensional principal subspace of Λ. These subspaces are also the modes of the distribution. If λ_k > λ_{k+1}, there is a unique k-dimensional principal subspace E_k, which is the unique mode of the distribution. For a general Σ, it has an eigenvalue decomposition Σ = V Λ V^⊺ where V = [v_1 · · · v_n]. Given a change of basis x = Vz, the previous arguments still hold, and the theorem follows immediately.

References

[1] P. Pernot, The parameter uncertainty inflation fallacy, The Journal of Chemical Physics 147 (2017). doi:10.1063/1.4994654.

[2] Y. He, D. Xiu, Numerical strategy for model correction using physical constraints, Journal of Computational Physics 313 (2016) 617–634. doi:10.1016/j.jcp.2016.02.054.

[3] M. C. Kennedy, A. O'Hagan, Bayesian calibration of computer models, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (2001) 425–464. doi:10.1111/1467-9868.00294.

[4] D. Higdon, M. Kennedy, J. C. Cavendish, J. A. Cafeo, R. D. Ryne, Combining field data and computer simulations for calibration and prediction, SIAM Journal on Scientific Computing 26 (2004) 448–466. doi:10.1137/S1064827503426693.

[5] D. Higdon, J. Gattiker, B. Williams, M. Rightley, Computer model calibration using high-dimensional output, Journal of the American Statistical Association 103 (2008) 570–583. doi:10.1198/016214507000000888.

[6] M. J. Bayarri, J. O. Berger, R. Paulo, J. Sacks, J. A. Cafeo, J. Cavendish, C.-H. Lin, J. Tu, A framework for validation of computer models, Technometrics 49 (2007) 138–154.
doi:10.1198/004017007000000092.

[7] P. Z. G. Qian, C. F. J. Wu, Bayesian hierarchical modeling for integrating low-accuracy and high-accuracy experiments, Technometrics 50 (2008) 192–204. doi:10.1198/004017008000000082.

[8] R. B. Gramacy, H. K. H. Lee, Bayesian treed Gaussian process models with an application to computer modeling, Journal of the American Statistical Association 103 (2008) 1119–1130. doi:10.1198/016214508000000689.

[9] K. A. Maupin, L. P. Swiler, Model discrepancy calibration across experimental settings, Reliability Engineering and System Safety 200 (2020) 106818. doi:10.1016/j.ress.2020.106818.

[10] K. Farrell, J. T. Oden, D. Faghihi, A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems, Journal of Computational Physics 295 (2015) 189–208. doi:10.1016/j.jcp.2015.03.071.

[11] M. Strong, J. E. Oakley, When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies, SIAM/ASA Journal on Uncertainty Quantification 2 (2014) 106–125. doi:10.1137/120889563.

[12] K. Sargsyan, H. N. Najm, R. Ghanem, On the statistical calibration of physical models, International Journal of Chemical Kinetics 47 (2015) 246–276. doi:10.1002/kin.20906.

[13] J. Brynjarsdóttir, A. O'Hagan, Learning about physical parameters: the importance of model discrepancy, Inverse Problems 30 (2014) 114007. doi:10.1088/0266-5611/30/11/114007.

[14] T. A. Oliver, G. Terejanu, C. S. Simmons, R. D. Moser, Validating predictions of unobserved quantities, Computer Methods in Applied Mechanics and Engineering 283 (2015) 1310–1335. doi:10.1016/j.cma.2014.08.023.

[15] M. Plumlee, Bayesian calibration of inexact computer models, Journal of the American Statistical Association 112 (2017) 1274–1285. doi:10.1080/01621459.2016.1211016.

[16] R. E. Morrison,
T. A. Oliver, R. D. Moser, Representing model inadequacy: A stochastic operator approach, SIAM/ASA Journal on Uncertainty Quantification 6 (2018) 457–496. doi:10.1137/16M1106419.

[17] T. Portone, R. D. Moser, Bayesian inference of an uncertain generalized diffusion operator, SIAM/ASA Journal on Uncertainty Quantification 10 (2022) 151–178. doi:10.1137/21M141659X.

[18] K. S. Bhat, D. S. Mebane, P. Mahapatra, C. B. Storlie, Upscaling uncertainty with dynamic discrepancy for a multi-scale carbon capture system, Journal of the American Statistical Association 112 (2017) 1453–1467. doi:10.1080/01621459.2017.1295863.

[19] K. Sargsyan, X. Huan, H. N. Najm, Embedded model error representation for Bayesian model calibration, International Journal for Uncertainty Quantification 9 (2019) 365–394.

[20] T. A. Oliver, R. D. Moser, Bayesian uncertainty quantification applied to RANS turbulence models, Journal of Physics: Conference Series 318 (2011) 042032. doi:10.1088/1742-6596/318/4/042032.

[21] X. Huan, C. Safta, K. Sargsyan, G. Geraci, M. S. Eldred, Z. Vane, G. Lacaze, J. C. Oefelein, H. N. Najm, Global sensitivity analysis and quantification of model error for large eddy simulation in scramjet design, in: 19th AIAA Non-Deterministic Approaches Conference, American Institute of Aeronautics and Astronautics, 2017. doi:10.2514/6.2017-1089.

[22] P. Pernot, F. Cailliez, A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy, AIChE Journal 63 (2017) 4642–4665. doi:10.1002/aic.15781.

[23] S. Zio, H. F. da Costa, G. M. Guerra, P. L. Paraizo, J. J. Camata, R. N. Elias, A. L. Coutinho, F. A. Rochinha, Bayesian assessment of uncertainty in viscosity closure models for turbidity currents computations, Computer Methods in Applied Mechanics and Engineering 342 (2018) 653–673. doi:10.1016/j.cma.2018.08.023.

[24] L. Hakim, G. Lacaze, M. Khalil, K. Sargsyan, H. Najm, J.
Oefelein, Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition, Combustion Theory and Modelling 22 (2018) 446–466. doi:10.1080/13647830.2017.1403653.

[25] C. Soize, A nonparametric model of random uncertainties for reduced matrix models in structural dynamics, Probabilistic Engineering Mechanics 15 (2000) 277–294. doi:10.1016/S0266-8920(99)00028-4.

[26] C. Soize, Maximum entropy approach for modeling random uncertainties in transient elastodynamics, Journal of the Acoustical Society of America 109 (2001) 1979–1996. doi:10.1121/1.1360716.

[27] C. Soize, Random matrix theory for modeling uncertainties in computational mechanics, Computer Methods in Applied Mechanics and Engineering 194 (2005) 1333–1366. doi:10.1016/j.cma.2004.06.038.

[28] C. Soize, A comprehensive overview of a non-parametric probabilistic approach of model uncertainties for predictive models in structural dynamics, Journal of Sound and Vibration 288 (2005) 623–652. doi:10.1016/j.jsv.2005.07.009.

[29] M. Grigoriu, Reduced order models for random functions. Application to stochastic problems, Applied Mathematical Modelling 33 (2009) 161–175. doi:10.1016/j.apm.2007.10.023.

[30] M. Grigoriu, A method for solving stochastic equations by reduced order models and local approximations, Journal of Computational Physics 231 (2012) 6495–6513. doi:10.1016/j.jcp.2012.06.013.

[31] J. E. Warner, M. Grigoriu, W. Aquino, Stochastic reduced order models for random vectors: Application to random eigenvalue problems, Probabilistic Engineering Mechanics 31 (2013) 1–11. doi:10.1016/j.probengmech.2012.07.001.

[32] C. Soize, C. Farhat, A nonparametric probabilistic approach for quantifying uncertainties in low-dimensional and high-dimensional nonlinear models, International Journal for Numerical Methods in Engineering 109 (2017) 837–888. doi:10.1002/nme.5312.

[33] P. Benner, S. Gugercin, K.
Willcox, A survey of projection-based model reduction methods for parametric dynamical systems, SIAM Review 57 (2015) 483โ€“531.
https://arxiv.org/abs/2504.19963v2
doi:10.1137/130932715. [34] R. Zhang, S. Mak, D. Dunson, Gaussian process subspace prediction for model reduction, SIAM Journal on Scientific Computing 44 (2022) A1428โ€“A1449. doi:10.1137/21M1432739. [35] M. L. Mehta, Random matrices, number 142 in Pure and applied mathematics (Academic Press), 3rd ed. ed., Elsevier, Amsterdam, 2004. [36] A. Edelman, N. R. Rao, Random matrix theory, Acta Numerica 14 (2005) 233โ€“297. doi:10.1017/s0962492904000236. [37] H. Wang, J. Guilleminot, C. Soize, Modeling uncertainties in molecular dynamics simula- tions using a stochastic reduced-order basis, Computer Methods in Applied Mechanics and Engineering 354 (2019) 37โ€“55. doi:10.1016/j.cma.2019.05.020. [38] M.-J. Azzi, C. Ghnatios, P. Avery, C. Farhat, Acceleration of a physics-based machine learning approach for modeling and quantifying model-form uncertainties and performing model updating, Journal of Computing and Information Science in Engineering 23 (2022). doi:10.1115/1.4055546. [39] H. Zhang, J. Guilleminot, A riemannian stochastic representation for quantifying model uncertainties in molecular dynamics simulations, Computer Methods in Applied Mechanics and Engineering 403 (2023) 115702. doi:10.1016/j.cma.2022.115702. [40] H. Zhang, J. E. Dolbow, J. Guilleminot, Representing model uncertainties in brittle fracture simulations, Computer Methods in Applied Mechanics and Engineering 418 (2024) 116575. doi:10.1016/j.cma.2023.116575. [41] A. Quek, N. Ouyang, H.-M. Lin, O. Delaire, J. Guilleminot, Enhancing robustness in machine-learning-accelerated molecular dynamics: A multi-model nonparametric probabilistic approach, Mechanics of Materials 202 (2025) 105237. doi:10.1016/j.mechmat.2024.105237. [42] L. Sirovich, Turbulence and the dynamics of coherent structures part i: Coherent structures, Quarterly of Applied Mathematics 45 (1987) 561โ€“571. doi:10.1090/qam/910462. [43] M. E. Tipping, C. M. 
Bishop, Probabilistic principal component analysis, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61 (1999) 611โ€“622. doi:10.1111/1467- 9868.00196. [44] Y . Chikuse, Statistics on Special Manifolds, volume 174 of Lecture Notes in Statistics , 1st ed., Springer-Verlag, New York, 2003. doi:10.1007/978-0-387-21540-2. [45] X. Zeng, R. Ghanem, Projection pursuit adaptation on polynomial chaos expan- sions, Computer Methods in Applied Mechanics and Engineering 405 (2023) 115845. doi:10.1016/j.cma.2022.115845. 22 SS-PPCA [46] S. E. Ahmed, S. Pawar, O. San, A. Rasheed, T. Iliescu, B. R. Noack, On closures for reduced order modelsโ€“a spectrum of first-principle to machine-learned avenues, Physics of Fluids 33 (2021) 091301. doi:10.1063/5.0061577. [47] Y . Chikuse, The matrix angular central gaussian distribution, Journal of Multivariate Analysis 33 (1990) 265โ€“274. doi:10.1016/0047-259X(90)90050-R. 23
https://arxiv.org/abs/2504.19963v2
arXiv:2504.20539v1 [math.PR] 29 Apr 2025

Randomstrasse101: Open Problems of 2024

Afonso S. Bandeira* Anastasia Kireeva§ Antoine Maillard♯ Almut Rödder¶

May 21, 2025

Abstract

Randomstrasse101 is a blog dedicated to Open Problems in Mathematics, with a focus on Probability Theory, Computation, Combinatorics, Statistics, and related topics. This manuscript serves as a stable record of the Open Problems posted in 2024, with the goal of easing academic referencing. The blog can currently be accessed at randomstrasse101.math.ethz.ch .

Introduction

In this manuscript we include the blog entries in Randomstrasse101 of 2024, containing a total of fifteen Open Problems. Randomstrasse101 is a blog created and maintained by our group at the Department of Mathematics at ETH Zürich¹. The focus is on mathematical open problems in Probability Theory, Computation, Combinatorics, Statistics, and related topics. The goal is not necessarily to write about the most important open questions in these fields, but simply to discuss open questions and conjectures² that each of us finds particularly interesting or intriguing, somewhat in the same style as the first author's (now almost a decade old) list of forty-two open problems [Ban16]. The blog was created and is currently maintained by Afonso S. Bandeira, Daniil Dmitriev, Anastasia Kireeva, Antoine Maillard, Chiara Meroni, Petar Nizic-Nikolac, Kevin Lucca, and Almut Rödder at ETH Zürich. Each blog entry is generally written by one author, and the author list of this manuscript is the union of the entry authors in it. Given the nature of this material, the writing of this manuscript is more informal than a typical academic text³. Nevertheless, we hope it is useful and inspires thought on these questions. Happy solving!

Contents

Entry 1 Welcome & Matrix Discrepancy (ASB)
Entry 2 Global Synchronization (ASB)

*ASB: Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zurich, Switzerland.
bandeira@math.ethz.ch
§AK: Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zurich, Switzerland. anastasia.kireeva@math.ethz.ch
♯AM: ARGO Team, Inria Paris, 48 Rue Barrault, 75013 Paris, France. antoine.maillard@inria.fr . Blog entry written while at the Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zurich, Switzerland.
¶AR: Department of Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zurich, Switzerland. almutmagdalena.roedder@ifor.math.ethz.ch

¹The inspiration for the name of the blog will be clear to the reader after looking up the department's address and recalling the Probability Theory focus of the blog.
²Conjectures here should be interpreted as mathematical statements that we do not know to be false, and for which a proof or a refutation would be interesting progress. We do not necessarily have very high confidence that the conjectures here are true.
³If you would like to refer to an open question or blog entry, we encourage you to refer to this manuscript instead, since it is a more stable reference.

Entry 3 Fitting ellipsoids to random points (AM)
Entry 4 Detection in Multi-Frequency synchronization (AK)
Entry 5 Tensor PCA and the Kikuchi algorithm (ASB)
Entry 6 Did just a couple of deviations suffice all along? (ASB)
Entry 7 Sampling from the Sherrington-Kirkpatrick Model (AR)
Updates

Entry 1 Welcome & Matrix
Discrepancy (ASB)

Let me start with one of my favourite open problems.

Conjecture 1 (Matrix Spencer). There exists a positive universal constant $C$ such that, for all positive integers $n$ and all choices of $n$ self-adjoint $n\times n$ real matrices $A_1,\dots,A_n$ satisfying $\|A_i\|\le 1$ for all $i\in[n]$ (where $\|\cdot\|$ denotes the spectral norm), the following holds:
$$\min_{\varepsilon\in\{\pm1\}^n}\Big\|\sum_{i=1}^n \varepsilon_i A_i\Big\| \le C\sqrt{n}. \qquad (1)$$

I first learned about this question from Nikhil Srivastava at ICM 2014, who pointed me to this very nice blog post of Meka [Mek14]. To the best of my knowledge, the question first appeared in a paper of Zouzias [Zou12]. I have also written about it in [Ban16] and more recently in [Ban24]. The non-commutative Khintchine inequality (or a matrix concentration inequality) gives the bound up to a $\sqrt{\log n}$ factor. In the particular case of commuting matrices, one can simultaneously diagonalize the matrices, and the question reduces to a vector problem (representing the diagonals of the respective matrices) in which the spectral norm is replaced by the $\ell_\infty$ norm. This is precisely the setting of Joel Spencer's celebrated "Six Standard Deviations Suffice" theorem [Spe85b], which establishes the conjecture in that setting, for $C=6$. It is noteworthy that in the commutative setting a random choice of signs does not give (1); the argument is of entropic nature. In a sense, in the other extreme situation, in which the matrices behave "very non-commutatively" (or, more precisely, "freely"), we know that a random choice of signs works, from our (myself, Boedihardjo, and van Handel) recent work on matrix concentration [BBvH23]. The fact that the reasons for the conjecture to be true in these two extreme situations appear to be so different (one based on entropy, the other on concentration), together with the fact that we still do not know whether it is true in general, makes this (in my opinion) a fascinating question!
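For tiny $n$, the quantity in (1) can be brute-forced over all $2^n$ sign choices. A minimal numerical sketch (assuming numpy is available; all names here are illustrative and not from the references):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6  # small enough to enumerate all 2^n sign patterns

# Random self-adjoint matrices, normalized to spectral norm <= 1.
mats = []
for _ in range(n):
    G = rng.standard_normal((n, n))
    A = (G + G.T) / 2
    mats.append(A / np.linalg.norm(A, 2))

def signed_sum_norm(signs, mats):
    """Spectral norm of sum_i eps_i * A_i for a given sign pattern."""
    S = sum(e * A for e, A in zip(signs, mats))
    return np.linalg.norm(S, 2)

# Matrix Spencer asks whether the best signs achieve O(sqrt(n));
# a uniformly random choice only guarantees O(sqrt(n log n)).
best = min(signed_sum_norm(eps, mats)
           for eps in itertools.product((1, -1), repeat=n))
print(best, np.sqrt(n))  # best discrepancy vs. sqrt(n)
```

Of course, a computation at a single small $n$ proves nothing about the conjecture; it is only a way to build intuition for how the best signing compares to a random one.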
In recent work, Hopkins, Raghavendra, and Shetty [HRS22] established connections between this problem and quantum information, and Dadush, Jiang, and Reis [DJR22] established a connection with covering estimates for a certain notion of relative entropy. More recently, Bansal, Jiang, and Meka [BJM23] proved Conjecture 1 for low-rank matrices (whose rank is $n$ divided by a polylog factor), using the matrix concentration inequalities in [BBvH23] within a very nice decomposition argument. Motivated by Conjecture 1 and the suspicion that the "amount of commutativity" may play an important role in this question, we (myself, Kunisky, Mixon, and Zeng) proposed a group-theoretic special case of this conjecture in [BKMZ22].

Conjecture 2 (Group Spencer). Let $G$ be a finite group of size $n$. Conjecture 1 holds in the particular case in which $A_1,\dots,A_n$ are the $n\times n$ matrices corresponding to the regular representation of $G$.

In [BKMZ22] we prove Conjecture 2 for simple groups, but the general case is still open. While the commutative case follows from Spencer's theorem, giving an explicit construction is not trivial (see this Mathoverflow post: https://mathoverflow.net/q/441860 ).

Entry 2 Global Synchronization (ASB)

In the 17th century, Christiaan Huygens (inventor of the pendulum clock) observed that pendulum clocks have a tendency to
spontaneously synchronize when hung on the same board. This phenomenon of spontaneous synchronization of coupled oscillators has since become a central object of study in dynamical systems. We will focus here on spontaneous synchronization of $n$ coupled oscillators with pairwise connections given by a graph with adjacency matrix $A\in\mathbb{R}^{n\times n}$. The celebrated Kuramoto model [Kur75] models the oscillators as gradient flow on
$$E(\theta)=\frac12\sum_{i,j=1}^n A_{ij}\big(1-\cos(\theta_i-\theta_j)\big),$$
where the parameterization $\theta\in[0,2\pi)^n$ represents each oscillator's angle in the moving frame of its natural frequency (for the sake of the sequel the derivation of this potential is not crucial; see, e.g., [ABKS23] for more details). This motivates the following definition, which is central to what follows.

Definition 2.1. We say an $n\times n$ matrix $A$ is globally synchronizing if the only local minima of $E:(S^1)^n\to\mathbb{R}$, parameterized by $\theta\in[0,2\pi)^n$ and given by
$$E(\theta)=\frac12\sum_{i,j=1}^n A_{ij}\big(1-\cos(\theta_i-\theta_j)\big),$$
are the global minima corresponding to $\theta_i=c$ for all $i$. A graph $G$ is said to be globally synchronizing if its adjacency matrix is globally synchronizing.

Abdalla, Bandeira, Kassabov, Souza, Strogatz, and Townsend [ABKS23] recently showed that a random Erdős-Rényi graph is globally synchronizing above the connectivity threshold with high probability (you can also see the Quanta article covering this at https://www.quantamagazine.org/new-proof-shows-that-expander-graphs-synchronize-20230724/ ). The same paper also shows that a uniform $d$-regular graph is globally synchronizing with high probability for $d\ge600$, but it leaves open the following question.

Conjecture 3 (Globally Synchronizing Regular Graphs). A uniform random 3-regular graph is globally synchronizing with high probability (probability going to 1 as $n\to\infty$).

The following question is motivated by trying to understand the Burer-Monteiro algorithmic approach to community detection in the stochastic block model [Kur16].
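As a brief numerical aside, Definition 2.1 can be probed directly: run gradient descent on $E$ from a random initialization and check whether it reaches a synchronized state. A minimal sketch for the complete graph, which is globally synchronizing (numpy assumed; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = np.ones((n, n)) - np.eye(n)  # complete graph: globally synchronizing

def energy(theta, A):
    # E(theta) = (1/2) * sum_ij A_ij * (1 - cos(theta_i - theta_j))
    diff = theta[:, None] - theta[None, :]
    return 0.5 * np.sum(A * (1 - np.cos(diff)))

theta = rng.uniform(0, 2 * np.pi, size=n)
eta = 0.02  # step size
for _ in range(3000):
    diff = theta[:, None] - theta[None, :]
    grad = np.sum(A * np.sin(diff), axis=1)  # dE/dtheta_i
    theta -= eta * grad

# The order parameter |(1/n) sum_j e^{i theta_j}| equals 1 exactly
# when all angles coincide, i.e., at a synchronized state.
order = abs(np.exp(1j * theta).mean())
print(order)
```

For a globally synchronizing graph, a generic initialization should converge to $\theta_i=c$; for a graph that is not globally synchronizing, some initializations get trapped in spurious local minima (e.g., twisted states on cycles).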
The most recent progress is in [MABB24], which answers a high-rank version of this question.

Conjecture 4 (Global Synchrony with negative edges). Given any $\varepsilon>0$, the $n\times n$ random matrix $A$ with zero diagonal and off-diagonal entries given by
$$A_{ij}=\begin{cases}\;\;\,1 & \text{with probability } \tfrac12+\delta,\\ -1 & \text{with probability } \tfrac12-\delta,\end{cases}$$
with $\delta\ge(1+\varepsilon)\sqrt{\tfrac{\log n}{2n}}$, is globally synchronizing with high probability.

Another elusive question about global synchrony is the "extremal combinatorics" version of the question.

Conjecture 5 (Density threshold for Global Synchrony). For any $\varepsilon>0$, there exist $n>0$ and a graph $G$ on $n$ nodes such that the minimum degree of $G$ is at least $\left(\tfrac34-\varepsilon\right)n$ and $G$ is not globally synchronizing.

Kassabov, Strogatz, and Townsend [KST21] showed an upper bound on the $\tfrac34$ threshold in Conjecture 5, and the same paper contains evidence that $\tfrac34$ is the correct threshold (by arguing that an upper bound below $\tfrac34$ would need to go beyond linear analysis of the dynamical system).

Entry 3 Fitting ellipsoids to random points (AM)

In this entry I will discuss a seemingly simple question of high-dimensional stochastic geometry, which originated (as far as I know) in the series of works [Sau11, SCPW12, SPW13]: for $n,d\gg1$, when can a sequence of $n$ i.i.d. random points in $\mathbb{R}^d$, drawn from $\mathcal{N}(0,\mathrm{I}_d/d)$, be fitted by the boundary of a centered ellipsoid?

Figure 1: Fitting Gaussian random points $x_i\sim\mathcal{N}(0,\mathrm{I}_d/d)$ to an ellipsoid.

Notice that the unit sphere itself is close to being a fit, by simple concentration of measure: a random $x\sim\mathcal{N}(0,\mathrm{I}_d/d)$ has
(with high probability) distance $O(1/\sqrt d)$ to it. The covariance being $\mathrm{I}_d/d$ is a convention which ensures $\mathbb{E}[\|x_i\|^2]=1$: it is clear that assuming a generic positive-definite covariance $\Sigma\succ0$ does not change this question, so we do not lose any generality with this assumption. The motivation of [Sau11, SCPW12, SPW13] for this question came from statistics: they showed that the probability of existence of an ellipsoid fit equals the probability of success of a canonical convex relaxation (called Minimum-Trace Factor Analysis) in a problem of low-rank matrix decomposition. Several other motivations were uncovered later on, and I recommend reading the introduction of [PTVW23] to learn more about them. Interestingly, it was soon conjectured that a sharp transition occurs in the regime $n/d^2\to\alpha>0$, exactly at $\alpha=1/4$; see, e.g., Conjecture 2 of [SPW13].

Conjecture 6 (Ellipsoid fitting). Let $n,d\ge1$ and $x_1,\dots,x_n\sim\mathcal{N}(0,\mathrm{I}_d/d)$. For any $\varepsilon>0$:
$$\limsup_{d\to\infty}\frac{n}{d^2}\le\frac{1-\varepsilon}{4}\;\Rightarrow\;\lim_{d\to\infty}\mathbb{P}[\exists\,\mathcal{E}\text{ an ellipsoid fit to }(x_i)_{i=1}^n]=1,\qquad(2)$$
$$\liminf_{d\to\infty}\frac{n}{d^2}\ge\frac{1+\varepsilon}{4}\;\Rightarrow\;\lim_{d\to\infty}\mathbb{P}[\exists\,\mathcal{E}\text{ an ellipsoid fit to }(x_i)_{i=1}^n]=0.\qquad(3)$$

Upper bounds – Ellipsoid fitting can be written as a semidefinite program (SDP), since any origin-centered ellipsoid satisfies $\mathcal{E}=\{x\in\mathbb{R}^d:x^TSx=1\}$ for some positive semidefinite symmetric matrix $S$, so:
$$\mathbb{P}[\exists\,\mathcal{E}\text{ an ellipsoid fit to }(x_i)_{i=1}^n]=\mathbb{P}[\exists\,S\in\mathbb{R}^{d\times d}:S\succeq0\text{ and }x_i^TSx_i=1\text{ for all }i\in[n]].\qquad(4)$$
It's then not hard to convince oneself that $\{x_i^TSx_i=1\}_{i=1}^n$ is an independent system of $n$ linear equations in $S$: this already gives an upper bound of $\binom{d+1}{2}\simeq d^2/2$ on the number of points that can admit an ellipsoid fit (with high probability). As of now, this "silly" argument (which does not take into account the constraint that $S\succeq0$) is the best upper bound established for Conjecture 6.

Lower bounds – On the other hand, there has been an abundance of works establishing lower bounds [Sau11, SCPW12, SPW13, KD23, PTVW23, HKPX23, TW23, BMMP24].
Without diving into details, they essentially all rely on carefully picking a candidate solution $S^\star$ to the linear system $\{x_i^TSx_i=1\}_{i=1}^n$ (e.g. the least Frobenius norm solution), and establishing that $S^\star\succeq0$ for small enough $n$. The best results in this vein are due to [TW23, HKPX23, BMMP24], which independently proved the following.

Theorem 3.1 ([TW23, HKPX23, BMMP24]). Let $n,d\ge1$ and $x_1,\dots,x_n\sim\mathcal{N}(0,\mathrm{I}_d/d)$. There is $\delta>0$ such that:
$$\limsup_{d\to\infty}\frac{n}{d^2}\le\delta\;\Rightarrow\;\lim_{d\to\infty}\mathbb{P}[\exists\,\mathcal{E}\text{ an ellipsoid fit to }(x_i)_{i=1}^n]=1.$$

While it is not known whether these methods can be pushed all the way to $\delta=1/4$, it is conjectured that picking $S^\star$ as the minimal nuclear norm solution to the linear system $\{x_i^TSx_i=1\}_{i=1}^n$ gives this optimal value, and could be a path towards proving eq. (2) [MK24].

The transition at $d^2/4$ – Notice that the $d^2/4$ threshold can be uncovered both by numerical simulations (solving an SDP can be done efficiently) and by a heuristic argument: if the linear equations $\{\mathrm{Tr}[Sx_ix_i^T]=1\}_{i=1}^n$ were replaced by $\{\mathrm{Tr}[SG_i]=1\}_{i=1}^n$ with $G_i$ i.i.d. Gaussian matrices, the existence of a sharp transition would be a direct consequence of Gordon's theorem [Gor88], and the value $d^2/4$ can even be recovered
in this case as the squared Gaussian width of the cone of positive semidefinite matrices! While this heuristic was well known, we formalized it in [MB23], essentially by showing that the volume of the space of solutions is universal in both problems. We established from it the following sharp transition result.

Theorem 3.2 ([MB23]). For any $\varepsilon,M>0$, we define the event
$$\mathrm{EFP}_{\varepsilon,M}:\quad\exists\,S\in\mathbb{R}^{d\times d}:\ \mathrm{Sp}(S)\subseteq[0,M]\ \text{ and }\ \frac1n\sum_{i=1}^n\big|x_i^TSx_i-1\big|\le\frac{\varepsilon}{\sqrt d}.$$
Notice that the original ellipsoid fitting problem of Conjecture 6 is $\mathrm{EFP}_{0,\infty}$. We prove:
$$\limsup_{d\to\infty}\frac{n}{d^2}=\alpha<\frac14\;\Rightarrow\;\exists\,M_\alpha>0:\ \forall\varepsilon>0,\ \lim_{d\to\infty}\mathbb{P}[\mathrm{EFP}_{\varepsilon,M_\alpha}]=1,\qquad(5)$$
$$\liminf_{d\to\infty}\frac{n}{d^2}=\alpha>\frac14\;\Rightarrow\;\exists\,\varepsilon_\alpha>0:\ \forall M>0,\ \lim_{d\to\infty}\mathbb{P}[\mathrm{EFP}_{\varepsilon_\alpha,M}]=0.\qquad(6)$$

A few remarks are in order to clarify the conclusion of Theorem 3.2.
• The setting $\varepsilon\ll1$ (but not going to 0 with $d$) is precisely the regime where the problem is not trivially solved by the unit sphere (i.e. $S=\mathrm{I}_d$), since $(1/n)\sum_{i=1}^n|x_i^Tx_i-1|=O(1/\sqrt d)$ (cf. also Fig. 1).
• In the regime $n/d^2<1/4$, Theorem 3.2 shows that there exist ellipsoids which are: (i) well behaved (the spectral norm of $S$ is bounded), and (ii) fit the points $(x_i)$ up to an arbitrarily small error $\varepsilon$ (but $\varepsilon$ not going to 0 with $d$).
• In the regime $n/d^2>1/4$, Theorem 3.2 rules out any well-behaved ellipsoid (i.e. with bounded spectral norm) as a possible fit, even allowing a small fitting error.

Theorem 3.2 provides the first mathematical result identifying a satisfiability transition in the ellipsoid fitting problem at the conjectured $d^2/4$ threshold. While we have not yet been able to close Conjecture 6 from it, some parts seem tantalizingly close: e.g. the non-existence of ellipsoid fits for $n/d^2>1/4$ would now follow from showing that there is no ill-behaved ellipsoid fit (or that if there is an ill-behaved ellipsoid fit, there must also be a well-behaved one).
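The least Frobenius norm candidate $S^\star$ mentioned in the lower-bound discussion is straightforward to compute: the constraints $x_i^TSx_i=\mathrm{Tr}[Sx_ix_i^T]=1$ are linear in $\mathrm{vec}(S)$, so the minimum-norm least-squares solution gives $S^\star$, whose least eigenvalue one can then inspect. A minimal sketch (numpy assumed; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 30, 100  # n/d^2 ~ 0.11, well below the conjectured 1/4 threshold

X = rng.standard_normal((n, d)) / np.sqrt(d)  # rows x_i ~ N(0, I_d/d)

# Constraint Tr[S x_i x_i^T] = 1 is linear in vec(S): one row per point.
B = np.stack([np.outer(x, x).ravel() for x in X])  # n x d^2 design matrix
# Least Frobenius norm solution of B vec(S) = 1 (lstsq is min-norm here).
s = np.linalg.lstsq(B, np.ones(n), rcond=None)[0]
S = s.reshape(d, d)
S = (S + S.T) / 2  # numerical symmetrization (solution lies in span of x_i x_i^T)

residual = np.max(np.abs(B @ S.ravel() - 1))  # how well constraints hold
lam_min = np.linalg.eigvalsh(S).min()  # S defines an ellipsoid iff lam_min >= 0
print(residual, lam_min)
```

For $n/d^2$ sufficiently small one typically observes $\lambda_{\min}(S^\star)\ge0$, i.e., an ellipsoid fit; the code makes no assertion about positive semidefiniteness, since at which ratio this candidate stops working is exactly the delicate point discussed above.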
Other works – To keep this post at a reasonable length, I have not mentioned some other recent works which are in some way related to this problem, and to which I contributed [MK24, MTM+24, ETB+24]. In [MK24] we use non-rigorous tools that originate in statistical physics to extend the conjecture to other classes of random points beyond Gaussian distributions (the conjectured satisfiability threshold can then be very different from $d^2/4$!), but also to study the typical geometric shape of ellipsoid fits, and even to predict analytically the performance of the methods used to prove the lower bounds discussed above (cf. the conjecture on the minimal nuclear norm solution mentioned below Theorem 3.1). Finally, in [MK24] we realized that the ideas behind Theorem 3.2 can be adapted to problems in statistical learning: precisely, we characterize optimal learning from data in a model of so-called "extensive-width" neural network, a regime which is both particularly relevant in practical applications and which had so far largely resisted theoretical analysis. In another recent preprint [ETB+24], we build upon these ideas to study a toy model of a transformer architecture.

Entry 4 Detection in Multi-Frequency synchronization (AK)

In the study of average-case complexity, one
is often interested in the critical threshold of the signal-to-noise ratio at which a problem becomes tractable. In this entry, we consider a multi-frequency synchronization problem, where one obtains noisy measurements of the relative alignments of the signal elements through multiple frequency channels. We are interested in whether it becomes possible to detect the signal using a time-efficient algorithm at a lower signal-to-noise ratio compared to the single-frequency model.

Let us define the synchronization problem more formally. In a general synchronization problem over a group $G$, one aims to recover a group-valued signal $g=(g_1,\dots,g_n)\in G^n$ from noisy pairwise information on $g_kg_j^{-1}$. One way to model such observations is through receiving a function of $g_kg_j^{-1}$ corrupted with additive Gaussian noise:
$$Y_{kj}=f(g_kg_j^{-1})+W_{kj},\qquad W_{kj}\sim\mathcal{N}(0,1).$$
In this entry, we focus on a setting where measurements are available for all pairs $(j,k)$. Motivated by the Fourier decomposition of a non-linear objective of the non-unique games problem [BCLS20], we consider the model as receiving measurements through several frequency channels. For example, for $G=U(1)=\{e^{i\varphi},\varphi\in[0,2\pi)\}$, we consider measurements of the following form:
$$Y_1=\frac{\lambda}{n}xx^*+\frac{1}{\sqrt n}W^{(1)},\qquad Y_2=\frac{\lambda}{n}x^{(2)}x^{(2)*}+\frac{1}{\sqrt n}W^{(2)},\qquad\dots,\qquad Y_L=\frac{\lambda}{n}x^{(L)}x^{(L)*}+\frac{1}{\sqrt n}W^{(L)},$$
where $W^{(1)},\dots,W^{(L)}$ are independent matrices, and $x^{(k)}=(x_1^k,\dots,x_n^k)$ denotes the entrywise power. This case corresponds to angular synchronization, where the objective is to determine phases $\varphi_1,\dots,\varphi_n\in[0,2\pi)$ from noisy observations of their relative angles $\varphi_k-\varphi_j \bmod 2\pi$, and is equivalent to synchronization over $SO(2)$, as $SO(2)\cong U(1)$. In the general case of compact groups, we consider the Peter-Weyl decomposition instead.
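The $U(1)$ measurement model above is easy to simulate. The following sketch (numpy assumed; names illustrative) generates the $L$ channels for a cyclic-group signal and computes the top eigenvalue of the first channel, which is the spectral (PCA) test statistic discussed below:

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, lam = 300, 6, 2.0  # lam > 1: above the spectral threshold

# Signal: uniform over the L-th roots of unity (cyclic group Z/LZ).
x = np.exp(2j * np.pi * rng.integers(0, L, size=n) / L)

def channel(k):
    """Y_k = (lam/n) x^(k) x^(k)* + W^(k)/sqrt(n), with Hermitian noise."""
    xk = x ** k  # entrywise k-th power
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    W = (G + G.conj().T) / np.sqrt(2)  # Hermitian, unit-variance entries
    return (lam / n) * np.outer(xk, xk.conj()) + W / np.sqrt(n)

Y = [channel(k) for k in range(1, L + 1)]

# BBP transition: for lam > 1, the top eigenvalue of Y_1 detaches from
# the bulk edge 2 and sits near lam + 1/lam.
top = np.linalg.eigvalsh(Y[0]).max()
print(top)
```

This only illustrates the single-channel spectral test; the open problems below concern how much the remaining $L-1$ channels can help an efficient algorithm below $\lambda^*=1$.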
Informally, in this case, the Fourier modes correspond to irreducible representations, and each pairwise measurement corresponding to an irreducible representation $\rho$ is a block matrix with blocks given by
$$Y^{\rho}_{kj}=\frac{\lambda}{n}\rho(g_k)\rho\big(g_j^{-1}\big)+\frac{1}{\sqrt n}W^{(\rho)}_{kj}\qquad\text{for each }\rho\in\Psi.$$
The single-frequency model (i.e., when $L=1$) reduces to the Wigner spiked matrix model. In this model, the celebrated BBP transition [BAP05] postulates that above a critical value $\lambda\ge\lambda^*=1$, detection is possible based on the top eigenvalue, while below this threshold the top eigenvalue does not provide reliable information on the presence of the signal as $n$ grows to infinity. Moreover, for a variety of dense priors on the signal $x$, this procedure is optimal in the sense that no algorithm can detect the signal reliably below the spectral threshold $\lambda^*=1$, including algorithms with no constraints on the runtime [PWBM16].

Does the situation change for the multi-frequency model? For simplicity, let us assume that the signal $x$ is sampled uniformly from the $L$-th roots of unity, $x\sim\mathrm{Unif}(\{e^{2\pi ik/L}\}_{k=0}^{L-1})$. This corresponds to synchronization over the cyclic group $\mathbb{Z}/L\mathbb{Z}$. In this case, [PWBM16] showed that the statistical threshold is $\lambda_{\mathrm{stat}}=\Theta(\log L/L)$; i.e., below this threshold it is impossible to detect the signal, while above it there exists an (inefficient) algorithm for this task. The authors give upper and lower bounds on $\lambda_{\mathrm{stat}}$ with exact constants, and the upper bound is lower than the spectral threshold for a sufficiently large number of
frequencies, namely, $L\ge11$. From the computational point of view, it was shown in [KBK24] that, assuming the low-degree conjecture, no polynomial-time algorithm can detect the signal below the spectral threshold $\lambda^*=1$, regardless of receiving additional information through additional channels. This result applies to the setting where the number of frequencies $L$ is constant compared to the dimension $n$, and where the signal is sampled uniformly from $SO(2)$ or any finite group of constant size. Combining this result with the optimality of PCA for the single-frequency model suggests that receiving only a constant number of additional frequencies does not lower the computational threshold. This opens up an intriguing question: how many more frequencies does one require so that detection becomes possible using an efficient algorithm below the spectral threshold? We can formulate the following conjecture.

Open Problem 7. Consider the synchronization model over $SO(2)$, or the synchronization model over a finite group, where the signal $x$ is sampled uniformly over group elements. Find a scaling $L=L_n$ of the number of frequencies such that there exists a polynomial-time algorithm that can detect the signal reliably for all $\lambda>\lambda_{\mathrm{comp},L}$, for some $\lambda_{\mathrm{comp},L}<1$ as $n\to\infty$.

There is empirical evidence supporting the conjecture that, as $L$ diverges, the computational threshold becomes lower than the spectral threshold: numerical simulations in [GZ19] suggest that a variant of AMP with a carefully performed initialization can surpass the threshold when the dimension $n$ and the number of frequencies $L$ are comparable. In the computationally hard regime, we may also hope that there exists a sub-exponential algorithm whose runtime possibly depends on the signal strength. Such a tradeoff was observed in the sparse PCA problem [DKWB23]. For the synchronization model over finite groups, [PWBM16] proposed an algorithm for detecting the signal that works in the entire computationally hard regime.
This algorithm involves maximizing a certain test function over all possible solutions $g\in G^n$ and thus has exponential runtime.

Open Problem 8. Consider the synchronization model over $SO(2)$, or the synchronization model over a finite group, where the signal $x$ is sampled uniformly over group elements. Fix a number of frequencies $L$ that does not depend on $n$. In the computationally hard regime, i.e., when $\lambda\in(\lambda_{\mathrm{stat}},1)$, does there exist a sub-exponential algorithm, i.e., an algorithm running in time $\exp(n^\delta)$ for some $\delta<1$, that can detect the signal reliably as $n\to\infty$?

Entry 5 Tensor PCA and the Kikuchi algorithm (ASB)

In 2014, Andrea Montanari and Emile Richard [MR14] proposed a statistical model for understanding a variant of Principal Component Analysis on tensor data. This is currently referred to as the Tensor PCA problem. We will consider the symmetric version of the problem, in which the signal of interest is a point on the hypercube. Given $n$, $r$, and $\lambda$, the goal is to estimate (or detect) an unknown "signal" $x\in\{\pm1\}^n$ (drawn uniformly from the hypercube) from "measurements" as follows: for $i_1<i_2<\dots<i_r$,
$$Y_{i_1,i_2,\dots,i_r}=\lambda\,\big(x^{\otimes r}\big)_{i_1,i_2,\dots,i_r}+Z_{i_1,i_2,\dots,i_r},$$
where the $Z_{i_1,i_2,\dots,i_r}$ are i.i.d. $\mathcal{N}(0,1)$ random variables (and independent from $x$). We will consider $r$ (and $\ell$, to be introduced below) fixed and $n\to\infty$; all big-O notation will be in terms of $n$. Tensor PCA is believed to undergo a
statistical-to-computational gap: without regard for computational efficiency, it is possible to estimate $x$ for $\lambda=\Omega\big(n^{-\frac r2+\frac12}\big)$. Efficient algorithms, such as the Sum-of-Squares hierarchy, are able to solve the problem at $\lambda=\tilde\Omega\big(n^{-\frac r4}\big)$, where $\tilde\Omega$ hides logarithmic factors. Local methods, such as gradient descent and approximate message passing, succeed at $\lambda=\tilde\Omega\big(n^{-\frac12}\big)$. For $r=2$, the problem simply involves matrices, and indeed all these thresholds coincide. We point the reader to [WAM19] and references therein for more on each of these thresholds.

In 2019, Alex Wein, Ahmed El Alaoui, and Cris Moore [WAM19] proposed an algorithm for this problem based on the so-called Kikuchi free energy; it roughly corresponds to a hierarchy of message passing algorithms. They showed that this approach achieves (up to logarithmic factors) the threshold of the sum-of-squares approach, shedding light on the gap between the message passing and sum-of-squares frameworks.

We briefly describe the Kikuchi approach. We will focus on even $r$. There is a design parameter $\ell$ (with $n\gg\ell\ge\frac r2$) which we will consider fixed. The Kikuchi matrix $M$ is then the $\binom n\ell\times\binom n\ell$ matrix (with rows and columns indexed by subsets $I\subset[n]$ of size $\ell$) given by
$$M(\lambda)_{I,J}=\begin{cases}Y_{I\Delta J}&\text{if }|I\Delta J|=r,\\0&\text{otherwise},\end{cases}$$
where $I\Delta J=(I\cup J)\setminus(I\cap J)$ denotes the symmetric difference. The goal is to understand when the top of the spectrum of $M$ reveals the spike $x$. It is shown in [WAM19] that this happens for $\lambda=\tilde\Omega\big(n^{-\frac r4}\big)$. Since $Z$ is rotation invariant, we can assume WLOG that $x=\mathbf{1}$, the all-ones vector. The following conjecture also appears in [Ban24] and is a reformulation of a conjecture in [WAM19]:

Conjecture 9 (Kikuchi Spectral Threshold). Given positive integers $r,\ell,n$ satisfying $n\gg\ell\ge\frac r2$ and $r$ even ($r$ and $\ell$ will be fixed and $n\to\infty$). For each $S\subset[n]$ with $|S|=r$, let $Z_S\sim\mathcal{N}(0,1)$, all independent.
Let $\lambda\ge0$ and let $M$ be the $\binom n\ell\times\binom n\ell$ matrix (with rows and columns indexed by subsets $I\subset[n]$ of size $\ell$) given by
$$M(\lambda)_{I,J}=\begin{cases}\lambda+Z_{I\Delta J}&\text{if }|I\Delta J|=r,\\0&\text{otherwise},\end{cases}$$
where $I\Delta J=(I\cup J)\setminus(I\cap J)$ denotes the symmetric difference. Let $\lambda^\natural_{r,\ell}$ denote the threshold at which eigenvalues "pop out" of the spectrum of $M(\lambda)$: in other words, $\lambda^\natural_{r,\ell}$ is the real number such that, for all $\lambda>\lambda^\natural_{r,\ell}$, there exists $\varepsilon>0$ such that
$$\mathbb{E}\,\lambda_{\max}M(\lambda)>(1+\varepsilon+o(1))\,\mathbb{E}\,\lambda_{\max}M(0),$$
where $o(1)$ is a term that goes to zero as $n\to\infty$. For fixed $r$, we have $n^{\frac r4}\lambda^\natural_{r,\ell}\to0$ as $\ell\to\infty$ (note that this is after one has taken $n\to\infty$).

This would establish the very interesting phenomenon that there is no sharp threshold for polynomial-time algorithms in Tensor PCA, in the following sense: the conjecture would imply that by increasing $\ell$ (while remaining $\Theta(1)$, corresponding to increasing the computational cost of the algorithm while keeping it polynomial time), one would be able to decrease the critical signal-to-noise ratio, in the sense of $n^{\frac r4}\lambda^\natural_{r,\ell}$ getting arbitrarily close to zero. Since the bound in [WAM19] contains logarithmic factors in $n$ (present in $\tilde\Omega(\cdot)$), it does not allow one to see this phenomenon for $\ell=\Theta(1)$. For even $r$ and $\frac r2\le\ell<\frac{3r}4$, the threshold has been characterized in my work with Giorgio Cipolloni, Dominik Schröder,
and Ramon van Handel [BCSvH24], which is a follow-up to the matrix concentration inequalities (leveraging intrinsic freeness) in my work with March Boedihardjo and Ramon van Handel [BBvH23].

Entry 6 Did just a couple of deviations suffice all along? (ASB)

In September 2024, at a conference on Mathematical Aspects of Learning Theory in Barcelona ( https://www.crm.cat/mathematical-aspects-of-learning-theory/ ), Dan Spielman gave a beautiful talk on discrepancy theory and some of its applications in clinical trials. Even though this was not the precise focus of the talk, it sparked a discussion that day about lower bounds for Joel Spencer's Six Deviations Suffice theorem. I will describe some of the unanswered questions from this discussion. Many thanks to Dan Spielman, Tselil Schramm, Amir Yehudayoff, Petar Nizic-Nikolac, and Anastasia Kireeva, with whom discussing this problem contributed to making the workshop a memorable week!

Given a positive integer $n$ and an $n\times n$ matrix $A$, we define the discrepancy of $A$ as
$$\mathrm{disc}(A)=\inf_{x\in\{\pm1\}^n}\|Ax\|_\infty.$$
One of the motivations of this question is to understand how much smaller $\inf_{x\in\{\pm1\}^n}\|Ax\|_\infty$ is compared to its value, e.g. in expectation, when $x$ is drawn uniformly from the hypercube. We point to the references in this post for a more thorough discussion of the context, motivations, and state of the art. In 1985, Joel Spencer [Spe85a] proved the celebrated "Six Deviations Suffice" theorem, which states that for any positive integer $n$ and any $A\in\{\pm1\}^{n\times n}$, $\mathrm{disc}(A)\le6\sqrt n$. There have since been improvements on this constant, but the question we will pursue here concerns lower bounds. When the condition of $\pm1$ entries is replaced by an $\ell_2$ condition on the columns, the corresponding question is the famous Komlós Conjecture (which coincidentally is the first open problem in my lecture notes from around a decade ago [Ban16]).
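For tiny $n$, $\mathrm{disc}(A)$ can be computed exactly by enumerating all sign vectors. A minimal sketch (pure Python; names illustrative):

```python
import itertools

def disc(A):
    """disc(A) = min over x in {+-1}^n of ||A x||_inf, by brute force."""
    n = len(A[0])
    best = float("inf")
    for x in itertools.product((1, -1), repeat=n):
        val = max(abs(sum(a * xi for a, xi in zip(row, x))) for row in A)
        best = min(best, val)
    return best

# The 2x2 Hadamard-type matrix: every signing leaves one row of size 2.
H = [[1, 1], [1, -1]]
print(disc(H))  # -> 2

# Spencer's theorem guarantees disc(A) <= 6*sqrt(n) for any +-1 matrix.
A = [[1, -1, 1, 1], [-1, -1, 1, -1], [1, 1, 1, 1], [-1, 1, -1, 1]]
print(disc(A), 6 * 4 ** 0.5)
```

The enumeration costs $2^n$ spectral-free evaluations, so it is only a sanity check; the open problems below ask about the worst case over all $n$ and all $\pm1$ matrices.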
Conjecture 10 (Komlós Conjecture). There exists a universal constant $K$ such that, for all square matrices $A$ whose columns have unit $\ell_2$ norm, $\mathrm{disc}(A)\le K$.

It is easy to see that the statement of this conjecture implies Spencer's theorem, since the columns of an $n\times n$, $\pm1$ matrix have $\ell_2$ norm $\sqrt n$. The best current lower bound for Komlós is a recent construction by Tim Kunisky [Kun23], which shows a lower bound on $K$ of $1+\sqrt2$. Unfortunately, the resulting matrix is not a $\pm1$ matrix (even after scaling), and so it does not provide a lower bound for the Spencer setting.

Open Problem 11 (How many deviations are needed after all?). What is the value of the following quantity?
$$\sup_{n\in\mathbb{N}}\ \sup_{A\in\{\pm1\}^{n\times n}}\ \frac1{\sqrt n}\,\mathrm{disc}(A)$$

It is easy to see that the matrix $\begin{pmatrix}1&1\\1&-1\end{pmatrix}$ has discrepancy $2=\sqrt2\cdot\sqrt2$, giving a lower bound of $\sqrt2$ to this quantity. Also, Dan Spielman mentioned a numerical construction (of a specific size) that achieved a slightly larger discrepancy value. To the best of my knowledge, there is no known lower bound above 2, motivating the question: "Did just a couple of deviations suffice all along?". Also, to the best of my knowledge, there is no known construction (or proof of existence) of an infinite family achieving a lower bound strictly larger than one. More explicitly, we formulate